Friday, July 20, 2012

Back to it - Building my own remote-access CCIE Lab Rack

Wow, it has been over five months since I last managed a blog post here.  A new job, gym training and driving back to Toowoomba from Brisbane every weekend to do the lawns and such have meant that my CCIE study has been pretty much on hold.  However, it is time to get back into it, and that means building a new lab.

Previously, I was using equipment borrowed from my last employer.  I felt pretty uncomfortable having several grand worth of someone else's gear in my house, especially as I don't work for them any more.  So I gave that gear back, and I started building the new lab.

As I am splitting my time between Brisbane and Toowoomba, I wanted a lab I could control and access remotely.  That meant not using my laptop for running GNS3 in a VM as I had been doing previously.  So, the first task was to build a new PC.  I had sworn to myself a couple of years ago that I was giving up the whole "build your own" game and moving to off-the-shelf, but here I was again, making up a shopping list of bits to put together myself.

I chose an MSI Z77A-G43 motherboard, as it has 3 PCI slots (initially I was going to go the route of multiple NICs directly connected, rather than a breakout switch, but as I'll explain later, that didn't work out).  An i5-3550 processor, 8 GB of RAM, a cheap 500 GB hard drive and a cheap case, and I was under way.

Next, I was off to eBay to look for some quad NICs and switches.  The switches were no problem; I got two 3750-24TSs and two 3550-24s.  The NICs were a different question - these seem to be harder to find than in the past, and I had to dig around and source them from all over.  This scarcity may have been part of the downfall of the multi-NIC approach I was going for.  I purchased a couple of old D-Link DFE-570TX quad cards, an Intel PRO/1000 VT quad NIC and an Intel PRO/1000 PT dual NIC.  I was looking to build a full IPexpert topology, which has 20 connections, but cheap cards were proving hard to come by, so I got down to business while I kept looking.

I installed Ubuntu and GNS3 and fired everything up, and bingo, I was working.  However, after a few minutes, the whole machine would lock up solid - no keyboard, no mouse, nothing.  I spent some time trawling the logs and forums.  I played around with IRQs, tried booting with irqpoll, all sorts.  I tried Ubuntu Server, Fedora, swapping out the cards.  Still no luck.  Finally I tried Windows 7, but then found there are no drivers for the D-Link or the PRO/1000 VT.

Given that the point of this whole exercise is to get my lab working, and not to debug IRQs, NICs and OSes (plus this is way outside my skill set these days), I returned to the tried-and-true breakout switch method.  So it was back to eBay to buy a 3560-24TS with the Advanced Services image (which makes l2protocol-tunnel for CDP work properly in both directions).

After that false start, I got down to business again.

I racked up the switches in an old 12-RU wall frame I recovered from one of my current work sites and cabled up to the breakout switch.


For a long time I have had a Keyspan USA-49WG 4-port USB-to-serial adapter, so I was set for console access to the four lab switches (I have to SSH to the breakout switch).

Next, I got GNS3 going.  I decided to use Windows 7 as the operating system for reasons that will become clearer shortly.  I built the network as per the IPexpert topology, with 11 routers, a frame switch and the connection to the breakout switch.

GNS Breakout Topology

IPexpert Topology

I kept the Intel PRO/1000 PT card as my interface facing the breakout switch, while the on-board interface serves as my Internet-facing port.  The reason is that I needed to make sure the VLAN tags were not automatically stripped from packets arriving on that interface.  For those interested, the instructions for Intel cards are here.
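To illustrate why the tag stripping matters: the breakout switch trunks each lab connection up to the PC in its own VLAN, so if the NIC driver removes the 802.1Q header, GNS3 can no longer tell which lab link a frame belongs to.  A minimal stdlib-Python sketch of where the tag sits in a frame (the MAC addresses and VLAN number here are made up for illustration):

```python
import struct

def parse_vlan(frame: bytes):
    """Return (vlan_id, ethertype) for an 802.1Q-tagged Ethernet frame,
    or (None, ethertype) if the frame is untagged."""
    ethertype, = struct.unpack_from("!H", frame, 12)  # bytes 12-13, after dst+src MACs
    if ethertype == 0x8100:                           # TPID marking an 802.1Q tag
        tci, inner = struct.unpack_from("!HH", frame, 14)
        return tci & 0x0FFF, inner                    # low 12 bits of TCI = VLAN ID
    return None, ethertype

# A made-up frame: broadcast dst MAC, dummy src MAC, tag for VLAN 103, IPv4 payload.
frame = (bytes.fromhex("ffffffffffff") + bytes.fromhex("020000000001")
         + struct.pack("!HHH", 0x8100, 103, 0x0800))
print(parse_vlan(frame))   # -> (103, 2048)
```

If the driver strips the tag before handing the frame up, `parse_vlan` sees only the inner EtherType and the VLAN ID (here 103, one of the per-link VLANs) is lost.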

So far, so good.  Now I needed to be able to access the system remotely.  First I got myself a DynDNS name.  Next, I installed WinSSHD (free for personal use), meaning I only need to open one port on my router.  Using PuTTY SSH tunnels, I can start an RDP session to kick off GNS3, and thereafter telnet over the SSH tunnel to the GNS3 console ports.  Getting to the console ports of the switches remotely without using RDP was a bit trickier.  On Ubuntu I would have used ser2net, but the free options on Windows were thin on the ground.  Eventually I found a free serial-to-network utility that did the job.  While not perfect (I get some odd echo effects sometimes, and the tab and arrow keys are a bit flaky), I found it quite serviceable.  Again, I assigned ports and can access them over the SSH tunnel.

PuTTY Tunnels - Can now telnet or RDP to localhost ports which are then forwarded over the SSH tunnel
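For anyone curious what a ser2net-style serial server does under the hood, it simply shuttles bytes between a TCP connection and the serial port, one relay per direction.  A minimal sketch in Python - the `serve` helper and the pyserial-style `serial_dev` object are my own illustrative assumptions, not the actual Windows tool I ended up using:

```python
import socket
import threading

def pump(src, dst):
    """Copy bytes from one file-like object to another until EOF.
    A serial server runs two of these, one per direction, between
    the TCP connection and the serial port."""
    while True:
        chunk = src.read(1)          # byte-at-a-time keeps console keystrokes snappy
        if not chunk:
            break
        dst.write(chunk)
        dst.flush()

def serve(port, serial_dev):
    """Bridge one TCP client to serial_dev (any file-like object opened
    read/write, e.g. a pyserial Serial instance - an assumption here)."""
    with socket.create_server(("0.0.0.0", port)) as srv:
        conn, _ = srv.accept()
        net = conn.makefile("rwb", buffering=0)
        # network -> serial in a background thread, serial -> network here
        threading.Thread(target=pump, args=(net, serial_dev), daemon=True).start()
        pump(serial_dev, net)
```

With four of these listening (one per console port), each can then be reached through its own PuTTY tunnel, which matches the port-per-switch arrangement described above.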

So I had everything set.  But I wanted to be able to power cycle the switches if required, and to turn the gear off when not in use.  There are some nice IP-aware power rails available, but they were all quite expensive.  Eventually I took a punt and bought a USB-controllable single-outlet power unit, on special from a distributor for $49.  The interface is Windows-only, which is the reason I went for a Windows OS in the end (yes, I know I could have run Linux with a Windows VM, etc, etc).  The unit's interface is pretty clunky, but it works.  Being a single outlet, a power cycle drops all the switches at once, but that is OK for what I am doing.  In addition, it works essentially as a USB-to-serial device, and when you run the software, it locks open ALL the COM ports.  Urgh.  But this can be circumvented by opening my console sessions to the switches first, then firing up the power control.  A bit clunky, but a simple script at startup sorts it all out.

Finally, I set up the Wake-on-LAN feature, so I can also power down the PC remotely or wake it up after a power failure.
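Wake-on-LAN itself is simple enough to script if you ever want to skip the vendor tools: the NIC wakes when it sees a "magic packet", which is just six 0xFF bytes followed by the machine's MAC address repeated 16 times, normally sent as a UDP broadcast.  A stdlib-Python sketch (the MAC address in the example is made up):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times (102 bytes total)."""
    hw = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(hw) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + hw * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
    """Broadcast the magic packet on the local network (UDP port 9 is
    the conventional discard port used for WoL)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# Example with a made-up MAC:
# wake("00:11:22:33:44:55")
```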

So there it is: GNS3, a breakout switch and real switches for the IPexpert topology.  Now down to business doing some study so I can have another crack at the Lab later this year.

The full setup

Parts for the GNS3 PC:

  • MSI Z77A-G43 motherboard
  • i5-3550 Processor
  • Corsair Vengeance DDR3 (2 x 4 GB)
  • 500 GB SATA3 Hard Drive
  • Intel PRO/1000 PT dual port NIC

Software:

  • Windows 7 Pro OS
  • WinSSHD
  • PuTTY
  • GNS3

Other Hardware:

  • Keyspan USA-49WG 4-port USB-to-serial
  • Aviosys USB Power 8800
  • 12U Rack
  • 6-outlet powerboard

Switches:

  • 1 x 3560-24TS (breakout switch)
  • 2 x 3750-24TS
  • 2 x 3550-24


  1. This comment has been removed by the author.

  2. Sounds like a pain to get it all working. I guess you've got a few network cards you can re-sell on eBay if you like. I also like the little rack you've set up, but do you have any overheating problems? It doesn't look like it's got great airflow...?

    1. Not a real problem. They are non-PoE and have good fans. The rack is open at the back and has side vents.

  3. Hello there, I'm trying to build my lab too. I was told that the DFE-580TX were the ones to get, but I'm not able to find them where I live, so I found some DFE-570TX. Could you let me know if they have independent IRQs, or are they shared among the ports? Thanks

    1. Hi Eduardo. I think it shares them. I think my problem was related to the sheer number of ports I was trying to run. There is a good write-up of using the DFE-570TX cards here:

  4. Hi,

    Could you please state exactly which image version you used for the 3550 that supports CDP tunneling both ways? I have read that the 3560 doesn't support it, but I guess that must be old info on the GNS3 sites.



    1. Fingers slipped. I meant the 3560.

    2. Hi Martin.

      I am using IP Services 12.2(55)SE5. I definitely have both-direction CDP working fine with that image.

      Each port on the switch is configured:

      interface FastEthernet0/3
      description R4 F0-0 - C1 F0-4
      switchport access vlan 103
      switchport mode dot1q-tunnel
      l2protocol-tunnel cdp
      l2protocol-tunnel stp
      l2protocol-tunnel vtp
      no cdp enable
      spanning-tree portfast

      and the uplink to the GNS3 PC is configured:

      interface FastEthernet0/24
      description PC Uplink
      switchport trunk encapsulation dot1q
      switchport trunk allowed vlan 1,101-121
      switchport mode trunk
      l2protocol-tunnel cdp
      l2protocol-tunnel stp
      l2protocol-tunnel vtp
      no cdp enable
      spanning-tree portfast

  5. Hi, is it mandatory to have a quad NIC? Is it possible to just have a single USB-Ethernet adapter with 802.1Q trunking, connect it to a breakout switch and then to the real switches?