Monday, February 16, 2015

Moving on up!

As you may have noticed, recently I have been blogging about a few different vendors. Because of this, I felt it was time to give the blog a new name - and, while I was at it, a new home.

So, without further ado, here it is!


New domain name, new blogging software (WordPress), same great content :)

I'm currently in the process of migrating all of my content over to its new home. Once I'm done, I'll begin putting up new posts as usual.

As always, if you have any questions or have a topic that you would like me to discuss, please feel free to post a comment at the bottom of this blog entry, e-mail me at will@oznetnerd.com, or drop me a message on Twitter (@OzNetNerd).

Note: This website is my personal blog. The opinions expressed in this blog are my own and not those of my employer.

Thursday, February 5, 2015

NetApp From the Ground Up - A Beginner's Guide Part 12

Volume and Aggregate Reallocation

Summary

  • Volume Reallocation: Spreads a volume across all disks in an aggregate
  • Aggregate Reallocation: Optimises free space in the aggregate by ensuring free space is contiguous.

Details

One of the most misunderstood topics I have seen with NetApp FAS systems is reallocation. There are two types of reallocation that can be run on these systems: one for files and volumes, and another for aggregates. Both processes run in the background, and although the goal of each is to optimize the placement of data blocks, they serve different purposes. Below is a picture of a 4-disk aggregate containing 2 volumes, one orange and one yellow.


If we add a new disk to this aggregate and don't run a volume-level reallocation, all new writes will land on the area of the aggregate that has the most contiguous free space. As we can see from the picture below, this area is the new disk. Since new data is usually the most frequently accessed data, this single disk ends up servicing most of your reads and writes. This creates a “hot disk” and, with it, performance issues.


Now, if we run a volume reallocation on the yellow volume, its data will be spread out across all the disks in the aggregate. The orange volume is still unoptimized and will suffer from the hot-disk syndrome until we run a reallocation on it as well.


This is why, when adding only a few new disks to an aggregate, you must run a volume reallocation against every volume in that aggregate. If you are adding many disks at once (16, 32, etc.) it may not be necessary to run the reallocation. Imagine you add 32 disks to a 16-disk aggregate: new writes will go to the 32 new disks instead of the 16 you had before, so performance will improve without any intervention. As the new disks begin to fill up, writes will eventually hit all 48 disks in your aggregate. You can, of course, speed this process up by running a manual reallocation against all volumes in the aggregate.

The other big area of confusion is what an aggregate reallocation actually does. Aggregate reallocation (“reallocate -A”) only optimizes free space in the aggregate. This helps the system with writes: the easier it is to find contiguous free space, the more efficient those operations will be. Take the diagram below as an example of an aggregate that could benefit from reallocation.


This is our expanded aggregate in which we reallocated only the yellow volume. We can see free space in the aggregate where the yellow blocks were redistributed across the other disks. We can also see how new writes for the orange volume stacked up on the new disk, as that is where the most contiguous free space was. I wonder if the application owner has been complaining about performance issues with his orange data? The picture below shows what happens after the aggregate reallocation.


We still have the unoptimized data from the volume we did not reallocate. The only thing the aggregate reallocation did was make the aggregate's free space more contiguous for writing new data. It is easy to see how one could be confused by these similar but different processes, and I hope this helps explain how and why you would use each type of reallocation.
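For reference, on a 7-Mode controller the commands involved look something like the sketch below. The aggregate name, volume names and disk count are placeholders, and the exact options vary between ONTAP releases, so treat this as illustrative:

filer> aggr add aggr1 2
filer> reallocate start -f /vol/vol1
filer> reallocate start -f /vol/vol2
filer> reallocate start -A aggr1
filer> reallocate status -v

Here “aggr add” adds the new disks, “reallocate start -f” forces a one-off full reallocation of each volume across all of the aggregate's disks, “reallocate start -A” optimizes only the aggregate's free space, and “reallocate status” lets you monitor the jobs.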

Zeroing & Sanitising Disks

Information

If a disk has been moved around, or previously had data on it, you’ll need to zero it before it can be re-used. The “disk zero spares” command does the job. How long it takes depends on the size of the disk, but it is usually no more than 4 hours even for the largest of disks (at the time of writing, 1TB disks).
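A minimal 7-Mode sketch of checking for and zeroing spares might look like this (the “filer>” prompt is a placeholder):

filer> aggr status -s
filer> disk zero spares

The first command lists the spare disks and shows which of them have not yet been zeroed; the second zeroes all non-zeroed spares in the background.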

More Information

When drives in a NetApp are being obsoleted and replaced we need to make sure we securely erase all data that used to be on them. Unless you’re just going to crush your disks.

In this example we’ve got an aggregate of 14 disks (aggr0) that need to be wiped and removed from our NetApp so they can be replaced with new, much larger disks.

There are two methods that you can use to wipe disks using your NetApp. The first is to simply delete the aggregate they are a member of, turning them into spares, and then run “disk zero spares” from the command line on your NetApp. This only does a single pass and only zeroes the disks. I’ve seen arguments where some people say this is enough. I honestly don’t know, and we have a requirement to do a 7-pass wipe in our enterprise. You could run the zero command 7 times, but I don’t imagine that would be as effective as option number two. The second option is to run the ‘disk sanitize’ command, which allows you to specify which disks you want to erase and how many passes to perform. This is what we’re going to use.

The first thing you’ll need to do is get a license for your NetApp to enable the ‘disk sanitize’ command. It’s a free license (so I’ve been told) and you can contact your sales rep to get one. We got ours for free and I’ve seen forum posts from other NetApp owners saying the same thing.

There is a downside to installing the disk sanitization license: once it’s installed on a NetApp it cannot be removed. It also restricts the use of three commands:
  • dd (to copy blocks of data)
  • dumpblock (to print dumps of disk blocks)
  • setflag wafl_metadata_visible (to allow access to internal WAFL files)

There are also a few limitations regarding disk sanitization you should know about:
  • It is not supported in takeover mode for systems in an HA configuration. (If a storage system is disabled, it remains disabled during the disk sanitization process.)
  • It cannot be carried out on disks that were failed due to readability or writability problems.
  • It does not perform its formatting phase on ATA drives.
  • If you are using the random pattern, it cannot be performed on more than 100 disks at one time.
  • It is not supported on array LUNs.
  • It is not supported on SSDs.
  • If you sanitize both SES disks in the same ESH shelf at the same time, you see errors on the console about access to that shelf, and shelf warnings are not reported for the duration of the sanitization. However, data access to that shelf is not interrupted.
I’ve also read that you shouldn’t sanitize more than 6 disks at once. I’m going to sanitize our disks in batches of 5, 5 and 4 (14 total). I’ve also read that you do not want to sanitize disks across shelves at the same time.
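Putting that together, a rough 7-Mode sketch of the second method for the first batch of disks might look like the following. The license code and disk names are placeholders, and the exact syntax and pass behaviour vary by ONTAP release, so check the documentation for your version:

filer> license add XXXXXXX
filer> aggr offline aggr0
filer> aggr destroy aggr0
filer> disk sanitize start -c 7 0a.16 0a.17 0a.18 0a.19 0a.20
filer> disk sanitize status
filer> disk sanitize release 0a.16 0a.17 0a.18 0a.19 0a.20

Destroying the aggregate turns its disks into spares, “-c 7” requests 7 cycles of the overwrite pattern, and the release command returns the sanitized disks to the spare pool once the run completes.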

Fractional Reserve

Information on Fractional Reserve:

As per the following links, Fractional Reserve should be disabled for LUNs:
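For reference, in 7-Mode fractional reserve is a per-volume option; a minimal sketch (the volume name is a placeholder):

filer> vol options lun_vol fractional_reserve 0
filer> vol options lun_vol

The second command simply lists the volume's options so you can confirm the change took effect.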

Other Posts in this Series:

As always, if you have any questions or have a topic that you would like me to discuss, please feel free to post a comment at the bottom of this blog entry, e-mail me at myciscolabsblog@gmail.com, or drop me a message on Twitter (@OzNetNerd).

Note: This website is my personal blog. The opinions expressed in this blog are my own and not those of my employer.

Wednesday, February 4, 2015

NetApp From the Ground Up - A Beginner's Guide Part 11

Capacity

Right-Sizing

Disk drives from different manufacturers may differ slightly in size even though they belong to the same size category. Right sizing ensures that disks are compatible regardless of manufacturer. Data ONTAP right sizes disks to compensate for different manufacturers producing different raw-sized disks.

More Information

Much has been said about usable disk storage capacity and, unfortunately, many of us take the marketing capacity number given by the manufacturer verbatim. For example, 1TB does not really equate to 1TB in usable terms, and that is something you engineers out there should be explaining to your customers.

NetApp, ever since the beginning, has been subjected to scrutiny from customers and competitors alike about its usable capacity, and I intend to correct this misconception. The key to clearing it up is to understand the difference between capacity before rightsizing (BR) and after rightsizing (AR).

(Note: Rightsizing in the NetApp world is well documented, though views on it differ. It is part of how WAFL uses the disks, but be aware that not many other storage vendors publish their rightsizing process, if they have one at all.)

Before Rightsizing (BR)

First of all, we have to know that there are 2 systems of unit prefixes. These 2 systems can be summarised as:
  • Base-10 (decimal) – fit for human understanding
  • Base-2 (binary) – fit for computer understanding
According to the International System of Units, the SI prefixes for Base-10 are:


In the computing context, where the binary (Base-2) system is relevant, the corresponding prefixes are:


And we must know that the storage capacity is in Base-2 rather than in Base-10. Computers are not humans.

With that in mind, the next issue is the disk manufacturers. We should have an axe to grind with them for misrepresenting the actual capacity. When they say their HDD is 1TB, they are using the Base-10 system, i.e. 1TB = 1,000,000,000,000 bytes. THIS IS WRONG!

Let’s see how that 1TB works out to be in Gigabytes in the Base-2 system:

1,000,000,000,000 / 1,073,741,824 = 931.3225746154785 Gigabytes

Note: 2^30 = 1,073,741,824

That result of 1TB, when rounded, is only about 931GB! So, the disk manufacturers aren’t exactly giving you what they have advertised.

Thirdly, and most importantly for the BR (Before Rightsizing) phase, is how WAFL handles the actual capacity before the disk is presented to WAFL/ONTAP operations. Note that this is all done before any of the logical structures of aggregates, volumes and LUNs are created.

In this third point, WAFL formats the actual disks (just like NTFS formats new disks) and this reduces the usable capacity even further. As a starting point, WAFL uses 4K (4,096 bytes) per block.

Note: It appears that the 4K block size is not the issue, it's the checksum that is the problem.

For Fibre Channel disks, WAFL then formats these blocks as 520 bytes per sector. Therefore, for each block, 8 sectors (520 x 8 = 4160 bytes) fill 1 x 4K block, leaving a remainder of 64 bytes (4,160 – 4,096 = 64 bytes). This additional 64 bytes per block is used as a checksum and is not displayed by WAFL or ONTAP and not accounted for in its usable capacity, therefore the capacity seen by users is further reduced.

For SATA/SAS disks, 512 bytes per sector are used, and each 4K block consumes 9 sectors (9 x 512 = 4,608 bytes). 8 sectors hold WAFL’s 4K block (4,096/512 = 8 sectors), while the 9th sector (512 bytes) is partially used for the 64-byte checksum. As with the Fibre Channel disks, the unused 448 bytes (512 – 64 = 448 bytes) in the 9th sector are not displayed and are not part of the usable capacity of WAFL and ONTAP.

WAFL also compensates for the ever-so-slight irregularities of hard disk drives, even though they are labelled with similar marketing capacities. That is to say, 1TB from Seagate and 1TB from Hitachi will differ in actual capacity. In fact, a 1TB Seagate HDD with firmware 1.0a (for ease of clarification) and a 1TB Seagate HDD with firmware 1.0b (note ‘a’ and ‘b’) could differ in actual capacity even when both ship with a 1.0TB marketing capacity label.

So, with all these things in mind, WAFL does what it needs to do – Right Size – to ensure that nothing gets screwed up when it uses the HDDs in its aggregates and volumes. All for the right reason – Data Integrity – but it is often criticized for this “wrongdoing”. Think of WAFL as your vigilante superhero, wanted by the law for doing good for the people.

In the end, what you are likely to get Before Rightsizing (BR) from NetApp for each particular disk capacity would be:


* The size of 34.5GB was for the Fibre Channel Zone Checksum mechanism of 512 bytes per sector, employed prior to ONTAP version 6.5. From ONTAP 6.5 onwards, a block checksum of 520 bytes per sector was employed for greater data integrity protection and resiliency.

The table shows the percentage of “lost” capacity, and to the uninformed this could look significant. But since the percentage is relative to the manufacturer’s marketing capacity, it is highly misleading. Competitors should not use these figures as FUD, and NetApp should use them as a way to properly inform their customers.
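If you want to see right-sizing on a live 7-Mode system, the following commands are a useful starting point (a minimal sketch; the output layout varies slightly between ONTAP releases):

filer> sysconfig -r
filer> df -A

“sysconfig -r” lists each disk with both its “Phys” (raw) and “Used” (right-sized) capacity, while “df -A” shows the usable capacity left in each aggregate after right-sizing and the WAFL reserve.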

NetApp Figures




RAID & Right Sizing

See Part 3 for information on RAID & Right Sizing.

4K Blocks


Other Posts in this Series:

As always, if you have any questions or have a topic that you would like me to discuss, please feel free to post a comment at the bottom of this blog entry, e-mail me at myciscolabsblog@gmail.com, or drop me a message on Twitter (@OzNetNerd).

Note: This website is my personal blog. The opinions expressed in this blog are my own and not those of my employer.

Monday, February 2, 2015

NetApp From the Ground Up - A Beginner's Guide Part 10

OnCommand Overview

  • Reference: NetApp Training - Fast Track 101: NetApp Portfolio
OnCommand management software helps your customers to monitor and manage their NetApp storage as well as multi-vendor storage environments, offering cost-effective and efficient solutions for their clustered, virtualized and cloud environments. With OnCommand, our customers are able to optimize utilization and performance, automate and integrate processes, minimize risk and meet their SLAs. Our objective is to simplify the complexity of managing today’s IT infrastructure, and improve the efficiency of storage and service delivery.

Multiple Clustered NetApp Systems

  • Reference: NetApp Training - Fast Track 101: NetApp Portfolio
Manage and automate your NetApp storage at scale. For your customers who are growing and require a solution to manage multiple clustered NetApp systems, they can turn to OnCommand Unified Manager, Performance Manager, and Workflow Automation. These three products work together to provide a comprehensive solution for today’s software-defined data center. Also your customers can analyze their complex virtualized environment and cloud infrastructure using NetApp OnCommand Balance.

NetApp Storage Management

  • Reference: NetApp Training - Fast Track 101: NetApp Portfolio

OnCommand Insight (Multi-vendor Storage Management)

  • Reference: NetApp Training - Fast Track 101: NetApp Portfolio

Integration

  • Reference: NetApp Training - Fast Track 101: NetApp Portfolio

System Manager

  • Reference: NetApp Training - Fast Track 101: NetApp Portfolio
Many of our NetApp customers start out using OnCommand System Manager for simple, device–level management of individual or clustered systems. System Manager features a simple, browser-based interface with a dashboard, graphical reports, and automated workflows. It is designed to provide effective storage management for virtualized data centers through a simple user interface. For instance, using OnCommand System Manager, a customer was able to simplify storage management and achieve more than 80% storage utilization, while keeping costs low and using less power and space. It also supports the latest features in clustered Data ONTAP, such as High Availability pairs, Quality of Service, and Infinite Volumes.

Unified Manager

  • Reference: NetApp Training - Fast Track 101: NetApp Portfolio

Performance Manager

  • Reference: NetApp Training - Fast Track 101: NetApp Portfolio

Workflow Automation

  • Reference: NetApp Training - Fast Track 101: NetApp Portfolio

Balance

  • Reference: NetApp Training - Fast Track 101: NetApp Portfolio

Insight

  • Reference: NetApp Training - Fast Track 101: NetApp Portfolio
OnCommand Insight provides your customers with capabilities such as capacity planning, configuration and change management, showback and chargeback reporting, virtual machine optimization, and monitoring to provide added insight into multivendor, multiprotocol shared infrastructures. These analytics help your customer better understand how to optimize the data and storage to help make better decisions, improve efficiency, and reduce costs.

Other Posts in this Series:


As always, if you have any questions or have a topic that you would like me to discuss, please feel free to post a comment at the bottom of this blog entry, e-mail me at myciscolabsblog@gmail.com, or drop me a message on Twitter (@OzNetNerd).

Note: This website is my personal blog. The opinions expressed in this blog are my own and not those of my employer.

Sunday, February 1, 2015

Guide: Building a Self-Contained Virtual Steelhead Lab - Part 4

Note: This post carries on from Part 3.

Testing & Reporting

Now that you’ve set up your lab, you can start working on the VSHs.

Test #1 - TCP Options

Usually when telnetting between two routers (e.g. PC-A-01 and PC-B-01), the negotiated MSS is 536 and no other TCP options are specified. However, after configuring the VSHs to optimise Telnet traffic, you can see the VSHs and Enhanced Auto-Discovery in action:
 

These enhanced settings won’t make much of a difference where Telnet traffic is concerned, but this sort of test can be useful for those who cannot run the Windows XP VMs due to their PC’s performance.
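If you want to check the negotiated values yourself, you can do so from the GNS3 routers; a minimal sketch (the exact output varies by IOS version):

PC-A-01# show tcp brief
PC-A-01# show tcp

The detailed output includes the maximum segment size negotiated for each TCP session terminated on the router, which makes it easy to compare a session that passes through the VSHs with one that does not.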

 Test #2 - HTTP

In this example, VM-PC-2 is running a web server and VM-PC-1 downloads a 10MB file from it. See the results below:

Cold Transfer:
 


The 10MB file was downloaded in 1 minute and 12 seconds. The transfer rate was 142KB/sec.

When the file is downloaded a second time the results are very different:

Warm Transfer:

The 10MB file was downloaded in 3 seconds. The transfer rate was 3.33MB/sec.

Results
As explained in the “Transfer Speeds” section in Part 3, GNS3 slows throughput dramatically, resulting in a speed which replicates a very slow WAN link. We see this in the “Cold Transfer” test above. However, as the “Warm Transfer” test shows, the VSHs’ local caching is able to resolve this issue the second time the data is required.

Test #3 - iperf

In this test iperf is run between VM-PC-1 and VM-PC-2. The test runs for 15 seconds and outputs the results every 1 second.
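The outputs below show only the client side; for completeness, the server on VM-PC-2 would be started with something like this (assuming iperf is installed in c:\iperf on that VM as well):

C:\>c:\iperf\iperf.exe -s

This simply listens on the default TCP port 5001 for the client's connections.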

The first test is run with optimisation disabled and achieves an average speed of 811Kb/sec and transfers a total of 1.62MB.

C:\>c:\iperf\iperf.exe -c 10.3.7.32 -i 1 -t 15
------------------------------------------------------------
Client connecting to 10.3.7.32, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 10.1.4.31 port 1052 connected with 10.3.7.32 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  2.0- 3.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  3.0- 4.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  4.0- 5.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  5.0- 6.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  6.0- 7.0 sec  0.00 Bytes  0.00 bits/sec
[  3]  7.0- 8.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  8.0- 9.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  9.0-10.0 sec   128 KBytes  1.05 Mbits/sec
[  3] 10.0-11.0 sec  0.00 Bytes  0.00 bits/sec
[  3] 11.0-12.0 sec   128 KBytes  1.05 Mbits/sec
[  3] 12.0-13.0 sec   128 KBytes  1.05 Mbits/sec
[  3] 13.0-14.0 sec   128 KBytes  1.05 Mbits/sec
[  3] 14.0-15.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  0.0-16.8 sec  1.62 MBytes   811 Kbits/sec


The second test is run with optimisation enabled and achieves an average speed of 194Mb/sec and transfers a total of 347MB.

C:\>c:\iperf\iperf.exe -c 10.3.7.32 -i 1 -t 15
------------------------------------------------------------
Client connecting to 10.3.7.32, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 10.1.4.31 port 1055 connected with 10.3.7.32 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec  9.38 MBytes  78.6 Mbits/sec
[  3]  1.0- 2.0 sec  12.5 MBytes   105 Mbits/sec
[  3]  2.0- 3.0 sec  18.1 MBytes   152 Mbits/sec
[  3]  3.0- 4.0 sec  30.8 MBytes   258 Mbits/sec
[  3]  4.0- 5.0 sec  12.4 MBytes   104 Mbits/sec
[  3]  5.0- 6.0 sec  47.0 MBytes   394 Mbits/sec
[  3]  6.0- 7.0 sec  12.9 MBytes   108 Mbits/sec
[  3]  7.0- 8.0 sec  23.8 MBytes   199 Mbits/sec
[  3]  8.0- 9.0 sec  43.2 MBytes   363 Mbits/sec
[  3]  9.0-10.0 sec  44.2 MBytes   371 Mbits/sec
[  3] 10.0-11.0 sec  18.5 MBytes   155 Mbits/sec
[  3] 11.0-12.0 sec  15.5 MBytes   130 Mbits/sec
[  3] 12.0-13.0 sec  8.88 MBytes  74.4 Mbits/sec
[  3] 13.0-14.0 sec  22.5 MBytes   189 Mbits/sec
[  3] 14.0-15.0 sec  27.0 MBytes   226 Mbits/sec
[  3]  0.0-15.0 sec   347 MBytes   194 Mbits/sec


Note: The reason for the very high speeds in the second test is that iperf most probably sends the same data over and over again; as a result, most of the transfer happens locally between the PC and its VSH.

Reports

After running the above tests, you can view the various reports the VSHs create. Below are a couple of examples:





Final Notes

Physical & Virtual Labs

Using GNS3’s Cloud nodes and a few physical NICs in your PC, you can connect your VSHs to a real network. There are two major benefits to doing this:

1) You can add IOS switches to your network (which GNS3 does not support).
2) The throughput restrictions caused by GNS3 (as covered in the “Transfer Speeds” section in Part 3) can be avoided completely. You can then add as many routers to your topology as you like, and use shaping on the “WAN” router’s edge ports to emulate a WAN link’s speed (see the sketch below).
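As an illustration of the second point, a shaped edge port on the “WAN” router might look something like the sketch below. The interface name and the 2Mbps rate are placeholders rather than values from this lab:

interface FastEthernet0/1
 description Edge port emulating the WAN link
 traffic-shape rate 2000000

Generic traffic shaping like this caps the outbound rate on the port, so traffic between the physical network and GNS3 behaves as though it is crossing a slow WAN link.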

See my Ultimate Cisco Lab website for a step by step guide on how to achieve this.

Host File & DNS

You can edit your PC’s and VMs’ hosts files so that you can use names instead of IP addresses when accessing devices and performing pings. However, it would be a better idea to set up a small DNS server so that your VSHs’ reports will be more informative.

Adding NetApp Simulators

If you're interested in extending your lab, take a look at my Connecting the NetApp Simulator to your Virtual and Physical Labs and Trunking VLANs between the NetApp Simulator and GNS3 posts.

This will give you a lab which consists of Cisco routing and switching, Riverbed WAN Optimisation and NetApp Storage, all on a single PC/laptop. How great is that?

As always, if you have any questions or have a topic that you would like me to discuss, please feel free to post a comment at the bottom of this blog entry, e-mail me at myciscolabsblog@gmail.com, or drop me a message on Twitter (@OzNetNerd).

Note: This website is my personal blog. The opinions expressed in this blog are my own and not those of my employer.

Guide: Building a Self-Contained Virtual Steelhead Lab - Part 3

Note: This post carries on from Part 2.

Virtual Steelhead

Installation

Now that the vSwitches are set up, the next thing you will need to do is install the VSHs.

1) Through the vSphere client, install the VSH by clicking on “File” > “Deploy OVF Template”.

2) When asked for the name of the VM, type “VSH-A”.

3) When configuring the NICs, match the “Source Networks” to their appropriate “Destination Networks”, as per the image below.
 

4) Power on the VM and open a vSphere client console session to it.

5) Log in to the appliance using “admin” and “password” as the credentials.

6) You will then be presented with the initial configuration wizard. See the responses to the wizard’s questions in the image below:


7) Press enter twice to save the configuration.

Note: You can now access VSH-A through your web browser by browsing to http://10.1.2.252 – and, once you’ve completed Step 8, you can access VSH-B by browsing to http://10.3.5.252

8) Repeat steps 1 through 7 but this time use VSH-B’s details while deploying the VM:
  • Name the VM “VSH-B”
  • Use the “VSH-B” destination networks
Also, use VSH-B’s details while completing the initial configuration wizard:

Step                                Setting
Step 1: Hostname                    VSH-B
Step 3: Primary IP address          10.3.5.252
Step 5: Default gateway             10.3.5.15
Step 14: In-Path IP                 10.3.7.254
Step 16: In-Path Default gateway    10.3.7.14

Note: Refer to the “Final Diagram” diagram in Part 1 for more information about the interfaces and their IP addresses.

Note: If you make a mistake while completing the wizard, you can re-run it by issuing the following commands:

enable
configure terminal
configuration jump-start

Licensing

Once you’ve completed the VSH installs and have logged in to them via a web browser, you will see an alert which states that optimisation is not properly licensed. To resolve this issue, complete the following steps on both VSHs:

1) Obtain a couple of tokens from Riverbed.

2) On your VSHs, navigate to the “Licenses” page and paste one of the tokens into the “License Request Token” field, then click “Generate License Request Key”.

3) The page will reload and you will be given a “license request key”. Copy and paste it into Riverbed’s licences page.

4) After completing the previous step, you will be presented with a list of licences. Click on the “View Keys - text” link which is located near the top of the page, then copy all of the licences which are at the bottom of the page.

5) Go back to your VSH’s licences page and click the “Add a New License” button. Paste all of the licences into the box and then click “Add”.

6) You must now restart the Optimisation Service. You do this by navigating to “Configure” > ”Maintenance” > “Services”, and clicking the “Restart” button.

7) Refresh the page and the licencing alert should no longer appear. The appliance’s status will also change from “Critical” to “Healthy”.

8) Click the “Save” button at the top of the page.

GNS3

To tie everything together, and to add routing to your lab topology, you’ll now need to set up GNS3.

VLAN Configuration

As mentioned earlier, you can configure SW1 and SW2 to allow multiple devices to connect to the VSH’s Primary, Auxiliary and lan0_0 subnets. To do this, use the following mappings:


SW1

Switch Port #   VLAN     Connects to                          Description
1               VLAN 2   VMnet1 (VSH-A Primary interface)     VSH-A Primary Interface VLAN
2               VLAN 2   R3, Fa0/0
3               VLAN 3   VMnet2 (VSH-A Auxiliary interface)   VSH-A Auxiliary Interface VLAN
4               VLAN 3   R3, Fa0/1
5               VLAN 4   VMnet3 (VSH-A lan0_0 interface)      VSH-A lan0_0 Interface VLAN
6               VLAN 4   R3, Fa1/0

SW2

Switch Port #   VLAN     Connects to                          Description
1               VLAN 5   VMnet5 (VSH-B Primary interface)     VSH-B Primary Interface VLAN
2               VLAN 5   R5, Fa0/0
3               VLAN 6   VMnet6 (VSH-B Auxiliary interface)   VSH-B Auxiliary Interface VLAN
4               VLAN 6   R5, Fa0/1
5               VLAN 7   VMnet7 (VSH-B lan0_0 interface)      VSH-B lan0_0 Interface VLAN
6               VLAN 7   R5, Fa1/0

However, before you can map the VSHs’ interfaces (the VMnet interfaces in the above tables) to GNS3, you must first use GNS3 “Cloud” nodes to represent the VSHs and attach the VMnet interfaces to them.

Note: You can use this method to connect any equipment (whether it be physical or virtual) to GNS3, not just Virtual Steelheads.

Cloud Nodes

When you want to connect GNS3 routers to equipment which resides outside of GNS3, you need to use Cloud nodes. These nodes allow you to bind both physical and virtual NICs to your GNS3 topology. You then connect these NICs to your equipment and you’ll then be able to pass traffic between GNS3 and your equipment.

You can then change the cloud’s icon to something that more closely relates to the equipment you’re using. In the GNS3 topology discussed in this guide, the icons for VM-PC-1 and VM-PC-2 (the Windows XP VMs) were changed from clouds to servers, and the VSHs’ icons were changed from clouds to firewalls (seeing as there is no “Steelhead” icon).

Note: The icons for PC-A-01, PC-A-02 and PC-B-01 (which are GNS3 routers, not clouds) were changed to PCs because they’re being used to perform similar jobs to the test PCs.

To connect the VSHs’ interfaces to GNS3, complete the following steps:

1) Drag two GNS3 switches into the topology pane and configure their interfaces and VLAN assignments in accordance with the information provided in the “VLAN Configuration” section above.

2) Drag a Cloud node into the topology pane.

3) Change the icon to something other than a cloud. As mentioned above, I used a Firewall icon.

4) Change the cloud’s name to VSH-A.

5) Double click on the cloud. When the new window opens, click on the cloud’s name in the left pane.

6) From the “Generic Ethernet NIO” dropdown menu, select the VMnet1 interface and then click “Add”.

The image below shows VMnet1 being selected for the VSH-A node.


7) Using the table below, repeat Step 6 until all of the VSH-A VMnet interfaces have been added to the cloud:
 
Cloud Node   VMnet
VSH-A        VMnet1
VSH-A        VMnet2
VSH-A        VMnet3
VSH-A        VMnet4
VSH-B        VMnet5
VSH-B        VMnet6
VSH-B        VMnet7
VSH-B        VMnet8


Important: The order in which you add the VMnet interfaces to each Cloud node matters. When you later link these interfaces to other interfaces in GNS3, the VMnet interfaces will be listed in the same order in which they were added, but they won’t be shown in an easily recognisable format. For example, when connecting VSH-A to SW1 you will be presented with a list similar to this:


By adding the interfaces in the order in which they appear in the table above, you will know that the first interface in the list is VMnet1, the second is VMnet2, and so on. If you add them in a random order it will be difficult to decipher which is which.

8) Repeat Steps 2 through 7 for VSH-B.

9) Once all interfaces are mapped to their corresponding VSH icons, connect them to the destination ports as shown in the “VLAN Configuration” section above.

10) Connect VMnet4 and VMnet8 to R1 and R4’s Fa0/1 interfaces respectively.

Router Configuration

To configure the routers, simply look at the “Final Diagram” in Part 1 and configure the interfaces accordingly, then use your favourite routing protocol(s) to enable end-to-end connectivity, as sketched below.
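As a minimal sketch, one router's configuration might look like the example below. The interface address is a placeholder (use the addresses from the “Final Diagram”), and OSPF is just one possible choice of routing protocol:

R3# configure terminal
R3(config)# interface FastEthernet0/0
R3(config-if)# ip address 10.1.2.1 255.255.255.0
R3(config-if)# no shutdown
R3(config-if)# exit
R3(config)# router ospf 1
R3(config-router)# network 10.0.0.0 0.255.255.255 area 0
R3(config-router)# end

Repeat for each interface on each router; once the OSPF adjacencies form, you should be able to ping end to end between the two “sites”.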

Transfer Speeds

Any traffic that flows through a GNS3 router is slowed dramatically. Generally it is not possible to get throughput of more than 5MB/sec. This is why, as per the “Final Diagram” in Part 1, the two Windows XP VMs do not connect to the VSHs through a GNS3 router. If they did, you would not see any of the throughput improvements the VSHs provide.  

Having said that, it’s not all bad news. Because the two VSHs are separated by GNS3 routers, the traffic that flows between the two "sites" is extremely slow. This makes for the perfect testing environment to demonstrate the advantages of features like SDR. (See the test results in Part 4 for more information.)

If you’d like to connect your VSHs to real lab equipment and avoid the slow speeds GNS3 introduces, see the “Final Notes” section in Part 4.  


As always, if you have any questions or have a topic that you would like me to discuss, please feel free to post a comment at the bottom of this blog entry, e-mail me at myciscolabsblog@gmail.com, or drop me a message on Twitter (@OzNetNerd).

Note: This website is my personal blog. The opinions expressed in this blog are my own and not those of my employer.