
Sunday, July 24, 2011

Cisco Nexus 1000v Gotchas!

Have you deployed the new Cisco N1KV yet, or are you thinking about doing it?

There are many online tutorials, and of course the Cisco documentation is great and very useful, as always, for installing and setting up the Nexus 1000v distributed virtual switch. However, nothing beats first-hand experience of testing, playing with, and implementing a new technology in a production environment. That's why I thought I should share some of the important points and guidelines from my experience installing the N1KV in our VMware environment. It was a hassle to get it right, but once done it turned out to be the next greatest thing in virtual networking for us.

1. VLANs: As you have probably read, you need several new dedicated VLANs (Control, Packet, and Management) for the N1KV, and these have to exist on the system uplink. However, you also need to put the vCenter and vMotion VLANs on the system uplink port-profile as well. To do so, do the following:

N1KV84(config)# port-profile type ethernet system-uplink
N1KV84(config-port-prof)# switchport mode trunk
N1KV84(config-port-prof)# switchport trunk allowed vlan 111,113,249,261-262

Here,

VLAN-111 is the vCenter management VLAN
VLAN-113 is the vMotion VLAN
VLAN-249 is for the Nexus 1000v management IP (this VLAN can be the same as the vCenter VLAN, but we chose a different one since we have a dedicated management VLAN in our environment)
VLAN-261 is the Control VLAN
VLAN-262 is the Packet VLAN
 
2. How many N1KV dvSwitches do I need for my VMware environment?
First of all, you should know that you need two VSM (Virtual Supervisor Module) virtual machines (VMs) per N1KV. The answer will vary from environment to environment, but for ours we created two N1KV switches across two datacenters, with each datacenter hosting 3-4 clusters.
Our datacenters are also separated at physical boundaries, so it made more sense for us to have two dvSwitches. Had we chosen one per cluster, we would have been creating 16 VSM VMs, which I think is overkill.

3. NLB - Network Load Balancing:
If you currently have Windows Network Load Balancing in your VMware environment, you will have to disable IGMP snooping on the Nexus 1000v for the VLANs to which the NLB VIP (Virtual IP) is bound, or to which the vNICs (port groups) of NLB-enabled VMs are connected. Also remember that only multicast and IGMP-multicast NLB modes are supported on the Nexus 1000v distributed virtual switch; unicast mode is not supported.
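As a sketch, assuming the NLB-enabled VMs sit on VLAN 115 (a hypothetical VLAN for illustration), IGMP snooping can be disabled per VLAN from the VSM like this (check the release notes for the exact syntax on your NX-OS version):

 N1KV84(config)# vlan 115
 N1KV84(config-vlan)# no ip igmp snooping

Since this is per-VLAN, IGMP snooping stays enabled on all your other VLANs.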

4. LACP (no static LAGs): You can't create a static LAG (Link Aggregation) between a physical switch (or stack) and the Nexus 1000v to achieve higher bandwidth and port redundancy. To achieve more than 1 Gbps, you must enable the LACP feature in Nexus using the following command:

 N1KV84(config)# feature lacp

Then configure / activate LACP on ethernet port-profile like this:

 N1KV84(config)# port-profile type ethernet system-uplink
 N1KV84(config-port-prof)# vmware port-group
 N1KV84(config-port-prof)# channel-group auto mode active

Make sure LACP is also enabled on the physical switch / stack, in passive mode, on the ports connected to the Nexus 1000v.
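On a Catalyst-style upstream switch, the passive side might look like this (the interface range and channel-group number are assumptions for illustration, not from our setup):

 Switch(config)# interface range GigabitEthernet1/0/1 - 2
 Switch(config-if-range)# switchport mode trunk
 Switch(config-if-range)# channel-group 10 mode passive

With the N1KV side in active mode, passive on the physical side is enough for LACP to negotiate the channel.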
 
5. Persistent Connections across Host (Server) reboots:
To make sure that upstream connectivity stays intact during normal reboots or server failures, you need to define certain VLANs as system VLANs on uplinks configured as trunks. These include:
 
  A. vMotion, vCenter, Control, Packet, and Management for system uplink
 
        N1KV84(config)# port-profile type ethernet system-uplink
        N1KV84(config-port-prof)# vmware port-group
        N1KV84(config-port-prof)# system vlan 111,113,249,261-262

 
 B. Storage VLAN on storage uplink(s) for iSCSI or NFS
 
   N1KV84(config)# port-profile type ethernet storage-uplink-iscsi
   N1KV84(config-port-prof)# vmware port-group
   N1KV84(config-port-prof)# switchport mode access
   N1KV84(config-port-prof)# switchport access vlan 321
   N1KV84(config-port-prof)# mtu 9000
   N1KV84(config-port-prof)# no shutdown
   N1KV84(config-port-prof)# system vlan 321
 
   N1KV84(config)# port-profile type ethernet storage-uplink-nfs
   N1KV84(config-port-prof)# vmware port-group
   N1KV84(config-port-prof)# switchport mode access
   N1KV84(config-port-prof)# switchport access vlan 320
   N1KV84(config-port-prof)# mtu 9000
   N1KV84(config-port-prof)# channel-group auto mode active
   N1KV84(config-port-prof)# no shutdown
   N1KV84(config-port-prof)# system vlan 320
   N1KV84(config-port-prof)# max-ports 32
   N1KV84(config-port-prof)# state enabled
 
  C. Any VLAN(s) used for data uplinks
 
  D. You don't need to define system VLANs for access ports.
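To verify which VLANs ended up as system VLANs on a given profile, you can check from the VSM (the exact output format varies by release):

   N1KV84# show port-profile name system-uplink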
 
6. VSM Management IP Address - Make sure you assign the same management IP address during the installation of both VSMs; they run as an active/standby pair and share a single management IP.
 
7. L2/L3 (Layer 2 or Layer 3) - During setup you will be asked to configure the N1KV for L2 or L3 mode. If the upstream physical switch to which the Nexus 1000v directly connects runs in L2 mode (no routing), configure the N1KV in L2 mode; if the upstream switch runs in Layer 3 mode (switching as well as routing), configure the Nexus in L3 mode. Ours was L2.
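The mode lives in the SVS domain configuration; a minimal sketch for our L2 case looks like this (for L3 mode the command takes a control interface instead, e.g. svs mode L3 interface mgmt0):

 N1KV84(config)# svs-domain
 N1KV84(config-svs-domain)# svs mode L2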

8. Finally, start with the latest version of the Nexus 1000v, because earlier releases are buggy. If your ESX/ESXi version prevents you from installing the latest release, read the release notes and install any available patches.

Hope after reading this post your experience with N1KV won't be as rocky as mine. :-)

4 comments:

  1. Thanks for this configuration for the Cisco Nexus 1000V. This will be put to good use.

  2. Thanks for the heads up.
    Did you apply QoS at all on the N1Kv? If so, what method did you use for setting DSCP or CoS, i.e. applying a class at the virtual port-group level, with each group being an access VLAN (or app)?

  3. Regarding point 3; Microsoft Unicast NLB is now supported on the Nexus 1000V switch. The feature was added in 4.2(1)SV1(5.1)

    best regards
    Atle Ørn Hardarson

  4. Thanks for this post, it's very helpful. I have a question in regards to turning off IGMP snooping on a Cisco Nexus 1000v along with Windows NLB.

    Do you need to turn off IGMP snooping if you are using multicast only, or is it purely if you are using the IGMP multicast NLB option?
