Standard vSwitches have been around in VMware ESX since the very earliest days. They are a robust, well-known technology and, despite the advances in networking brought by Distributed vSwitches, VMware NSX, VXLAN, and vCloud Networking and Security (vCNS), they remain very popular. Standard vSwitches reside on the VMware ESX hosts and can be managed either in a stand-alone way with the vSphere Client, or via the vCenter Server using the web-client. Additionally, there are extensive PowerShell cmdlets available, together with commands on the ESX host itself. These host commands can be very useful in conjunction with scripted installations for laying down the basic networking layer required for other functions and features, such as accessing IP-based storage (NFS and iSCSI) as well as advanced features such as vMotion and Fault Tolerance.
Standard vSwitches are packed with features, but they are not as functional or easy to manage as the Distributed vSwitch. You may find that other technologies available from VMware strongly recommend or require the use of Distributed vSwitches. For example, VMware’s vCloud Director (vCD) leans heavily on the Distributed vSwitch, and although Standard vSwitches are supported, it is substantially more effort to manage vCD without Distributed vSwitches. With that said, Standard vSwitches remain popular in environments geared mainly around core compute virtualization. This is due in no small part to the Standard vSwitch being available in all editions of vSphere, whereas the Distributed vSwitch is currently only available in the Enterprise Plus edition of the platform.
Some organizations prefer to use a combination of both Standard vSwitches and Distributed vSwitches. In this scenario they use Standard vSwitches to manage all “host” based networking, and use Distributed vSwitches to manage all “VM” Networking.
Network Patching of Standard vSwitches
Mode 1: Internal Standard vSwitch
In this mode no physical network card (referred to as a vmnic in VMware terminology) is mapped to the virtual switch. All network communication remains within the walls of the physical server, and only systems connected to the same “internal” Standard vSwitch can communicate with each other. As such, its usage is limited to specific situations such as:
- Creating a test network in which to run VMs isolated from others
- Creating a “firewall-in-a-box” scenario where one VM contains two NICs connected to two different vSwitches, acting as the firewall/NAT/router between the two networks.
- Used by VMware Site Recovery Manager to create a “bubble network” that prevents network conflicts when recovery plans are tested for DR purposes
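As a minimal sketch of how an internal-only vSwitch could be created from the ESX host command line, the switch is simply created and never given an uplink (the names `vSwitchInternal` and `InternalTestNet` are illustrative, not from any particular environment):

```shell
# Create a vSwitch with no physical uplinks -- traffic never leaves the host
esxcli network vswitch standard add --vswitch-name=vSwitchInternal

# Add a portgroup for VMs; because no vmnic is mapped to the vSwitch,
# only VMs patched to this portgroup can communicate with each other
esxcli network vswitch standard portgroup add \
    --portgroup-name=InternalTestNet --vswitch-name=vSwitchInternal
```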
Mode 2: Basic Standard vSwitch
In this mode just one vmnic is mapped to the vSwitch, providing basic communication to the outside world. This could be used for network traffic where redundancy is not a priority. Alternatively, it may be that a path from the host is duplicated elsewhere, so redundancy is already taken care of. For instance, it is not unusual to have two “management” interfaces for VMware High Availability, to provide two separate paths for validating the state of the cluster.
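A basic vSwitch of this kind might be laid down from the host command line as follows (the vSwitch name and choice of `vmnic1` are assumptions for illustration):

```shell
# Create the vSwitch, then map a single physical uplink (vmnic1) to it --
# external connectivity with no redundancy at this vSwitch
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add \
    --uplink-name=vmnic1 --vswitch-name=vSwitch1
```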
Mode 3: Teamed Standard vSwitch
In this mode more than one physical vmnic is mapped to the Standard vSwitch. With this configuration, all communication passing through the vSwitch is load-balanced, and the vSwitch offers network redundancy. This is a common configuration for traffic that needs to be highly available. For instance, IP storage such as NFS and iSCSI could be served by a vSwitch backed with multiple NICs to boost throughput and to ensure that, if a network card or physical switch fails, there is still an available path to the storage.
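A teamed vSwitch can be sketched in the same way: add two uplinks and mark both as active so traffic is balanced across them. The vSwitch name and the vmnic numbers below are illustrative:

```shell
# Create the vSwitch and map two physical uplinks to it
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch2

# Make both uplinks active so the vSwitch load-balances across them
# and survives the loss of either NIC or its physical switch
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch2 --active-uplinks=vmnic2,vmnic3
```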
Types of Communication
Host Communication

Every Standard vSwitch has portgroups as sub-components. Portgroups can be of a “vmkernel” type and hold an IP configuration (IP/subnet mask/gateway). These can be enabled for different traffic types including vMotion, Management and Fault Tolerance, as well as being used to connect to IP storage. Portgroups support a VLAN configuration using the 802.1Q VLAN tagging standard. As Ethernet frames leave the ESX host, additional bytes are added to the header, including a field indicating that the frame is tagged and the VLAN ID itself.
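Putting those pieces together, a vmkernel portgroup with a VLAN and a static IP configuration might be created as below. The portgroup name, VLAN ID, IP addressing, and vmk number are all assumptions for the sketch (the `tag add` step requires ESXi 5.1 or later):

```shell
# Portgroup on an existing vSwitch, tagged with VLAN 42
esxcli network vswitch standard portgroup add \
    --portgroup-name=vMotion-PG --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set \
    --portgroup-name=vMotion-PG --vlan-id=42

# vmkernel interface with a static IP/subnet mask
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-PG
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.42.10 --netmask=255.255.255.0 --type=static

# Enable the interface for vMotion traffic
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
```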
Virtual Machine Communication

Portgroups are also used for virtual machine communication. When a VM is created, it is patched to a portgroup on the vSwitch. Again, VLAN tagging is supported, and the VM and the guest operating system within it benefit from the vSwitch’s inherent load-balancing and redundancy. For this reason there is no need to configure VLAN information within the guest operating system, or to install special vendor drivers to support network teaming.
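A VM portgroup is just a portgroup without a vmkernel interface; tagging it with a VLAN is enough for every VM patched to it to land on that VLAN. The names and VLAN ID here are illustrative:

```shell
# VM portgroup on an existing vSwitch; any VM patched to it
# inherits the vSwitch teaming policy and this VLAN tag
esxcli network vswitch standard portgroup add \
    --portgroup-name=VM-VLAN100 --vswitch-name=vSwitch5
esxcli network vswitch standard portgroup set \
    --portgroup-name=VM-VLAN100 --vlan-id=100
```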
Typical Standard vSwitch Configurations
For a physical server with many network cards, it is feasible to create a Standard vSwitch for each type of traffic. This guarantees that bandwidth is dedicated to that function, and it is very clear where each type of traffic is flowing. This is possible with rack-mounted equipment fitted with quad-port cards, or with blade architectures that allow for I/O virtualization, where the management layer presents the appearance of many network adapters.
- vSwitch0 / vmnic0 – Management
- vSwitch1 / vmnic1/2 – IP Storage
- vSwitch2 / vmnic3/4 – Fault Tolerance
- vSwitch3 / vmnic5 – vMotion
- vSwitch4 / vmnic6 – High Availability Heartbeat
- vSwitch5 / vmnic7/8 – Virtual machine traffic with multiple portgroups for multiple VLANs
Typically, however, a host may be more limited in the number of physical NICs it has available, and several traffic types will need to share the same vSwitch and uplinks.
The next part of the blog takes you through a couple of typical examples of creating Standard vSwitches – including an internal vSwitch, a basic vSwitch, a NIC-teamed vSwitch, and a vSwitch configuration specifically designed to meet the requirements for load-balancing iSCSI traffic with VMware ESX’s own iSCSI Software Initiator.
Creating Standard vSwitch (Web-Client)