The UK VMUG 2015 – VMworld SwagBag Competition…

Well, would you believe where the time has gone? Another year and another UK VMUG beckons, along with a new SwagBag Competition. In case you don’t know – for the last couple of years I’ve ‘bagged’ a VMworld bag, and stuffed it with quality ‘swag’ gathered over the year. Each year this bag is raffled off at the UK VMUG event held in November, and the money raised is donated to a good cause. This year’s good cause is “Code Club” (its goal is to help 9-11 year olds learn the first principles of programming, rather than just being users of Word, Excel and the Internet).

To stand ANY chance of winning the SwagBag you must attend the VMUG vCurry event or the UK VMUG event itself. The UK VMUG event is held on Thursday 19 November 2015 at the National Motorcycle Museum, Solihull. The vCurry event happens in the same venue the night before, and usually incorporates a quiz. The UK event has special guest visitors including Josh Atwell and the legend in his own lifetime, John Troyer.

Full Details for the UK VMUG and vCurry Event are here – Register Today!

Anyway, that’s it from me – let’s have a look at the bag and this year’s award winners… I call it the Oscars for Swag!

UPDATE: Oh, I forgot to mention some other additions to the bag. Firstly, I’ve got one of those “Tile” things to give away. It was a vExpert gift. Basically, you stick a ‘Tile’ on something important that you frequently lose (like your keys, for instance), and your phone will locate it with a special app. Sadly, you cannot use a Tile to find your phone (which for me is more common…). Ravello has also offered a free subscription to their service – in case you don’t know, Ravello allows you to run nested ESXi in the Amazon EC2 cloud, which could be the next incarnation of the homelab. Finally, Pluralsight have included a free subscription to their training. You might know Pluralsight acquired TrainSignal a while back, which was the go-to source for training on VMware technologies.


Posted by on November 2, 2015 in VMUG


EVO:RAIL Under The Covers: How DNS works

How DNS works in EVO:RAIL

One of the big differences in how vSphere works as deployed by EVO:RAIL is with DNS. As you might know, vSphere has many requirements for name resolution, and often various vSphere features will not function or set up correctly without DNS being available. A classic example is simply opening a Remote Console window on a VM. Although that request might be triggered from a vCenter session, it’s actually the VMware ESXi host that handles the redirection of the video, and allows for Keyboard, Mouse and Screen (KMS) functionality. Remote Console sessions require name resolution to the VMware ESXi host to work. I could go on at length with other examples, but you get the picture.

The good news is the EVO:RAIL Appliance takes care of all these requirements. In fact, EVO:RAIL has its own built-in DNS service. This means that there are no service dependencies required to set up the appliance at a green-field location. That’s right, the EVO:RAIL appliance will configure itself – even if there’s no DNS, DHCP or Active Directory.

This does mean that the way name resolution is achieved is different from standard vSphere as deployed manually by customers. With vSphere, the path of name resolution from the VMware ESXi host is via its management network. For example, after installing VMware ESXi the customer assigns a static IP address and configures the VMware ESXi host for its Primary and Secondary DNS, as well as its domain suffix, using something like the Direct Console User Interface.

[Screenshot: configuring DNS via the Direct Console User Interface]

In this case the VMware ESXi host queries the corporate DNS server directly. With EVO:RAIL this behaviour is similar, but different. The EVO:RAIL Configuration Engine will set static IP addresses for the ESXi management network and also set the preferred DNS settings – however, what is queried is the built-in DNS server of the EVO:RAIL appliance.

So in this case the DNS query takes this path:

ESXi Host >> vCenter Server Appliance DNS Service >> if not internally resolved, the query is forwarded on to the corporate DNS server.
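To make that lookup order concrete, here’s a small Python sketch of the behaviour (purely illustrative, not EVO:RAIL code – the host names and IP addresses below are made up for the example):

```python
# Illustrative sketch of a dnsmasq-style resolver: answer from the local
# hosts table first, and only fall back to a forwarder for anything else.

LOCAL_HOSTS = {
    # the kind of entries the built-in DNS service holds (made-up values)
    "esxi-host-01.vsphere.local": "192.168.10.1",
    "vcenter.vsphere.local": "192.168.10.200",
}

def resolve(fqdn, forwarder=None):
    """Return an IP from the local table, else ask the forwarder (if any)."""
    if fqdn in LOCAL_HOSTS:
        return LOCAL_HOSTS[fqdn]          # resolved internally
    if forwarder is not None:
        return forwarder(fqdn)            # forwarded to the corporate DNS
    return None                           # no forwarder configured

# A stand-in for the corporate DNS server (again, made-up data):
corporate_dns = {"intranet.example.com": "10.1.2.3"}.get

print(resolve("vcenter.vsphere.local", corporate_dns))  # answered locally
print(resolve("intranet.example.com", corporate_dns))   # forwarded upstream
```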

You can tell that a DNS service is running on the vCenter Server Appliance using the command “netstat -natlp | grep ':53'”. As you might know, DNS queries are served on port 53 (primarily UDP, with TCP used for larger responses). This will show that there is a “dnsmasq” service running.

[Screenshot: netstat output showing the dnsmasq service listening on port 53]

The dnsmasq service holds hostname records in a text file on the vCenter Server Appliance: /var/lib/vmware-marvin/dnsmasq/hosts. Usually, this will contain at least the four VMware ESXi hosts that make up the EVO:RAIL Appliance, together with the IP address and FQDN of the vCenter Server Appliance. In the new release of EVO:RAIL we will have a dedicated virtual appliance for managing the physical appliance, which we are calling the “EVO:RAIL Orchestration Appliance”. You can see it listed in the screen grab as evo04-evorail.vsphere.local.

[Screenshot: the dnsmasq hosts file, including evo04-evorail.vsphere.local]

If you add a second appliance to double your compute and storage resources, the hosts file is updated to include the FQDNs for the new ESXi hosts. In the example above, no corporate DNS server was specified, so the EVO:RAIL dnsmasq service is the source for all queries. It’s rare to actually need to modify this file, although one situation where you might is if you decide to change the management IP or FQDN of the servers listed here.
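As a sketch, the file follows the familiar hosts-file layout. Everything below is made up for the example – the real file is generated by EVO:RAIL with your own IPs and FQDNs:

```
# /var/lib/vmware-marvin/dnsmasq/hosts -- illustrative layout only;
# the host names and addresses below are placeholders, not real values
192.168.10.1   esxi-host-01.vsphere.local esxi-host-01
192.168.10.2   esxi-host-02.vsphere.local esxi-host-02
192.168.10.3   esxi-host-03.vsphere.local esxi-host-03
192.168.10.4   esxi-host-04.vsphere.local esxi-host-04
192.168.10.200 vcenter.vsphere.local vcenter
```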

As for the forwarding of queries for systems not listed in the hosts file – that’s held in a file dedicated to the dnsmasq configuration. So it’s not the usual /etc/resolv.conf file that normally holds the Primary/Secondary DNS IPs on Linux. The file used is /etc/dnsmasq.conf, and the forwarder is held in the server= setting. We do have KB Article 2107249, which describes the file and how to edit it. For instance, you may wish to change the corporate DNS server entry if the IP address for the DNS service has changed, or if you have fat-fingered the setting.
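The relevant line looks like this (the IP address is a placeholder for your corporate DNS server; server= is standard dnsmasq syntax):

```
# /etc/dnsmasq.conf (excerpt) -- forward unresolved queries upstream.
# 10.1.2.53 is a placeholder; substitute your corporate DNS server's IP.
server=10.1.2.53
```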

So to summarize: EVO:RAIL has its own DNS service that allows us to meet vSphere’s requirements for DNS. That’s ideal for greenfield deployments because we have no dependency on DNS, DHCP or Microsoft Active Directory. You can, of course, point the EVO:RAIL DNS service to an ‘external’ or corporate DNS server for all other queries.


Posted by on October 6, 2015 in EVO:RAIL


The Ultimate Deployment Appliance adds VMware ESXi 6 Support

Actually, this happened last week – but I was so flattened by work leading up to VMworld that the joint announcement planned by myself and Carl fell flat on its face! That’s completely my mistake, as I totally dropped the ball on this one.

In case you don’t know, the Ultimate Deployment Appliance (UDA) is a community project that I have promoted and used for some years – it’s an all-in-one PXE/DHCP/TFTP appliance that massively simplifies the deployment of many operating systems, and I primarily use it for deploying VMware ESXi.

In my tests I found that merely selecting ESXi 5 Installable in the UDA menus and then selecting the ESXi 6 .iso worked right out of the box. So it was a piece of cake for Carl to produce a patch bundle that allows you to select ESXi 6 from the menus, to keep things both neat and logical.

[Screenshot: the UDA menu showing the ESXi 6 option]

The patch bundle can be downloaded either from my site or Carl’s:

From uda-2.0.26.tgz

From uda-2.0.26.tgz


Posted by on August 27, 2015 in vSphere


Scripted VMware ESXi 5.5 Installs – Error: Read-only file system during write on

I’ve recently been doing some scripting work with the Ultimate Deployment Appliance (UDA), which was developed by Carl Thijsen of the Netherlands. The reason for this work is to make it easy for me to switch between different versions of EVO:RAIL using my Supermicro systems. I want to be able to easily flip between different builds, and it seemed like the easiest way to do this remotely was using my old faithful, the UDA. This means I can run EVO:RAIL 1.2.1, which is based on vSphere 5.5, and then rebuild the physical systems around our newer builds, which incidentally use vSphere 6.0.

Anyway, I encountered an odd error when scripting the install of VMware ESXi 5.5 – one I hadn’t seen with VMware ESXi 6.0. The error read: “Error: Read-only file system during write on /dev/disks/naa.blah.blah.blah”.

[Screenshot: the “Read-only file system during write” error]

Normally, the lines:

clearpart --alldrives --overwritevmfs
install --firstdisk=ST300MM0026,local --overwritevmfs

Would be enough to wipe any existing installation and VMFS volume. But the installer wasn’t happy. Incidentally, “ST300MM0026” is the boot disk, a Seagate drive. I had to modify the ‘clearpart’ line like so:

clearpart --firstdisk=ST300MM0026 --overwritevmfs
install --firstdisk=ST300MM0026,local --overwritevmfs

I think what was happening was that clearpart wasn’t seeing the drive properly, and specifying it by model number allowed the VMFS partition to be properly cleared.
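For context, the two lines sit inside a larger kickstart script. A minimal sketch along these lines would look as follows – treat it as illustrative rather than a drop-in script; the root password and network lines are placeholders, and the ST300MM0026 model string matches my lab’s boot disk only:

```
# Minimal ESXi kickstart sketch (placeholder values throughout)
vmaccepteula
rootpw VMware123!
clearpart --firstdisk=ST300MM0026 --overwritevmfs
install --firstdisk=ST300MM0026,local --overwritevmfs
network --bootproto=dhcp --device=vmnic0
reboot
```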

Anyway, I doubt this will matter to most people, but I thought I would share in case someone else sees this…

UPDATE: Well, after automating the install of VMware ESXi 5.5, I decided to flip back to VMware ESXi 6.0 and encountered the exact same error. So now both my 5.5 and 6.0 scripts include the change to clearpart.


Posted by on August 4, 2015 in vSphere


VMUG WebCast: Overview of EVO:RAIL and Deep Dive into Version 1.2 Features


EVO:RAIL is the first 100% VMware-powered Hyper-Converged Infrastructure Appliance (HCIA). EVO:RAIL delivers compute, network, storage and management resources integrated onto an optimized 2U/4N hardware platform; all available via our 8 Qualified EVO:RAIL Partners and backed by a single point of contact for both hardware and software. EVO:RAIL has gained a lot of momentum in a very short timeframe, and the EVO:RAIL team continuously brings new capabilities to improve performance, scale and automation.

Join this session to get an overview of EVO:RAIL, a deep dive into the new EVO:RAIL 1.2, and a product demonstration from the EVO:RAIL Product Marketing Manager and Product Manager.

Presented by: Michael Gandy and Justin Smith, VMware

Registration: Click Here!



Posted by on July 31, 2015 in EVO:RAIL


EVO:RAIL Under The Covers: What is “Link and Launch”, and how does it work?

Around the end of 2014 the EVO:RAIL team released an update to their core software in the shape of the 1.0.1 release. One of the key features the release introduced was something we call “Link and Launch”, an optional feature used by our partners. As you might know, from a hardware perspective most EVO:RAIL appliances present pretty much the same amount of CPU/memory/disk and network throughput – and that’s all set to change with the announcement of more “flexible configs”. Some of our “Qualified EVO:RAIL Partners” (QEPs) differentiate themselves in the marketplace with their various software add-ons. EVO:RAIL’s “Link and Launch” feature gives our QEPs an engine to automate the deployment of these add-ons, which often take the form of virtual appliances, as well as offering links to these appliances. Sometimes these virtual appliances merely extend the functionality of the vSphere Web Client; at other times they offer a dedicated UI for managing the add-on.

The process begins at the factory. As you might know from reading this series of blog posts, node01 acts as a “bootstrap”, for want of a better word, for getting the EVO:RAIL appliance up and running. On node01 you will find the VMware “System VMs” in the shape of the vCenter Server Appliance and vRealize Log Insight appliance. If the QEP is adding value with additional appliances, they will be listed alongside the VMware “System VMs”, and we often refer to these as QEP “System VMs”. In the screen grab below you can see the vCSA and Log Insight alongside two ‘sample’ QEP VMs that I use to test this feature, called “Test VM Number 1” and “Test VM Number 2”. These VMs would normally carry a product name and a reference to the vendor. Notice also how neither Log Insight nor these QEP VMs are powered on. They are only powered on if needed (as is the case with Log Insight) or when the configuration of the EVO:RAIL completes (as is the case with QEP System VMs). We often refer to QEP System VMs that come in two components as the “Primary” and “Secondary” VMs.


Alongside the QEP VMs, we also get our partners to configure a small “manifest” file. This manifest file is a text file which contains friendly labels for populating the UI, together with references to company logos such as the Dell or EMC logo. It’s this “manifest” file that populates the “QEP” section of the EVO:RAIL Configuration UI. In my case I used the generic “ACME” as the name of the vendor and QEP. In a production environment you would be more likely to see the vendor’s name, such as HDS (Hitachi Data Systems) or SMC (Supermicro).
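To give a flavour of the kind of data such a file holds, here’s a hypothetical sketch – the real file format is partner-facing and not reproduced in this post, so every key name and value below is invented for illustration:

```
# Hypothetical manifest sketch -- key names and values are invented;
# the real format is supplied to partners and is not documented here
vendor_name=ACME
vendor_logo=acme_logo.png
vm1_label=ACME Test VM No.1
vm2_label=ACME Test VM No.2
```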


Since the 1.0.1 release, when “Link and Launch” was made available to our partners, we have supported a new attribute in the JSON file. As you might remember from my other posts on EVO:RAIL, it’s possible to have all the settings required by the EVO:RAIL Configuration engine stored in a text file with a JSON extension. EVO:RAIL supports the configuration of a single QEP System VM or two System VMs. In the screen grab below you can see the JSON file that I use in the hands-on-lab. If you look to the bottom, you can see two additional, optional entries under the category of “vendor”.


It starts with the “vendor” attribute, which can be used to configure the two QEP VMs that have been imported into the system. Remember, this is all done at the factory, so as a customer you merely need to provide your preferred IPs for the QEP System VMs – and the EVO:RAIL engine will take care of deploying them for you.
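As a rough sketch, the vendor section might look something like this – note that only the “vendor” attribute name comes from the post; the nested attribute names and IPs are invented placeholders, not the real schema:

```
// Hypothetical sketch -- nested key names and IPs are invented
// for illustration; only the "vendor" attribute is from the post
{
  "vendor": {
    "vm1_ip": "192.168.10.211",
    "vm2_ip": "192.168.10.212"
  }
}
```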

Once the EVO:RAIL Configuration engine has deployed the appliance, at the very end it powers on the QEP System VMs and applies the IP configuration supplied. Once you login to the EVO:RAIL Management UI, you should see a “QEP” node in the left-hand sidebar.


In my case I just used a generic “ACME” style logo, and when you click to launch “ACME Test VM No.1” it just connects to a web-service.

This isn’t yet available to demo in our hands-on-lab, although I’m toying with the idea of including it in this year’s VMworld labs. Our partners have already made great use of “Link and Launch”, not least EMC, who have produced their own VSPEX Blue management UI which has the look and feel of the core VMware EVO:RAIL Management UI.



Posted by on July 21, 2015 in Uncategorized


Under The Covers – What happens when…EVO:RAIL Replaces a node (Part 3)

In my previous blog post I walked you through what happens when adding an additional EVO:RAIL appliance to an existing deployment or cluster. Now I want to look at the next important workflow. You could relate this to the issue of serviceability. There are a number of scenarios that need to be considered from a hardware perspective, including:

  • Replacing an entire node within the EVO:RAIL appliance
  • Replacing a boot disk
  • Replacing an HDD or SSD used by VSAN
  • Replacing a failed NIC

There are surprisingly few circumstances that would trigger the replacement of an entire node. They usually fall into the category of a failed CPU, memory or motherboard. It’s perhaps worth stating that our different Qualified EVO:RAIL Partners (QEPs) have customized the procedure for how they handle these sorts of failures, relative to how they handle these issues for other hardware offerings. For instance, one partner might prefer to replace the motherboard if it fails, whereas another will see this as easier to address by shipping a replacement node altogether. That’s the subject of this blog post – the scenario where an entire node is replaced by the QEP.

As you might know from my previous post, every EVO:RAIL has its own unique appliance ID, say MAR12345604, and every node within that appliance has its own node ID expressed with a dash and number, for instance -01, -02, -03 and -04. When the appliance ID and node ID are combined, they create a globally unique identifier that represents that node on the network. These values are stored in the “AssetTag” part of each node’s system BIOS settings, and are generated and assigned at the factory.
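The composition of those identifiers can be sketched in a couple of lines of Python (the helper itself is hypothetical, not EVO:RAIL code; the sample value is the one from the text):

```python
# Illustrative sketch: splitting a node identity of the form
# "<applianceID>-<nodeID>" described in the post.

def split_node_identity(identity):
    """Split e.g. 'MAR12345604-03' into (appliance_id, node_id)."""
    appliance_id, _, node_id = identity.rpartition("-")
    return appliance_id, node_id

print(split_node_identity("MAR12345604-03"))  # -> ('MAR12345604', '03')
```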

So if, for instance, node03 died and had an identity of MAR12345604-03, then a replacement node would be built at the factory and shipped to the customer with the same ID. The old node would be removed and dumped in the trash, the new node slotted into its place and powered on for the first time. At this point a little EVO:RAIL magic takes place. When the replacement node is powered on for the first time, it advertises itself on the network using the “VMware Loudmouth” daemon. This advertisement is picked up by the existing EVO:RAIL appliance, which recognizes firstly that this node should be part of the same appliance, because it has a matching appliance ID, and secondly that it is specifically intended to replace a failed node.

In the EVO:RAIL UI this appears as an “Add EVO:RAIL Node” pop-up message – indicating that a node was “serviced” and can be “replaced”.

[Screenshot: the “Add EVO:RAIL Node” pop-up]

The steps taken by this workflow are similar to, but different from, those for adding additional appliances to an existing cluster:

  1. Check Settings
  2. Unregister conflicting ESXi host from vCenter Server
  3. Delete System VMs from replacement server
  4. Place ESXi hosts into maintenance mode
  5. Set up management network on hosts
  6. Configure NTP Settings
  7. Configure Syslog Settings
  8. Delete Default port groups on ESXi host
  9. Disable Virtual SAN on ESXi host
  10. Register ESXi hosts to vCenter
  11. Setup NIC Bonding on ESXi host
  12. Setup FQDN on ESXi host
  13. Setup Virtual SAN, vSphere vMotion and VM Networks on ESXi host
  14. Setup DNS
  15. Restart Loudmouth on ESXi host
  16. Setup clustering for ESXi host
  17. Configure root password on ESXi host
  18. Exit maintenance mode on the ESXi host

Once again, you’ll notice I’ve highlighted a key step in bold – that’s Step 2. One process the “Add EVO:RAIL Node” workflow automates (amongst many others!) is clearing out dead, stale and orphaned references in vCenter to ESXi hosts that have shuffled off this mortal coil.

That might leave one question begging. Given that the ‘replacement node’ has the same appliance ID, how does the EVO:RAIL engine “know” that this is a replacement node? The answer is that before the “Add EVO:RAIL Node” pop-up appears, the node reports its configuration to the core EVO:RAIL engine running inside the vCenter Server Appliance (vCSA). The EVO:RAIL engine inspects the node to check that it is blank and has just a generic factory specification.

If you want to experience this process of adding a replacement EVO:RAIL node at first hand, don’t forget our hands-on-lab now showcases this process. Check out the HOL at:

HOL-SDC-1428 – Introduction to EVO:RAIL


Posted by on July 13, 2015 in EVO:RAIL


Under The Covers – What happens when…EVO:RAIL Adds an additional appliance (Part 2)

In my previous blog post I walked you through what happens when EVO:RAIL is being configured for the very first time. Now I want to look at the next important workflow. As you might know from reading this series of posts, EVO:RAIL has auto-discovery and auto-scale-out functionality. A daemon called “VMware Loudmouth”, which is available on each of the four nodes that make up an EVO:RAIL as well as on the vCenter Server Appliance, is used to “advertise” additional EVO:RAIL appliances on the network. The idea is a simple one – to make adding additional EVO:RAIL appliances, to increase capacity and resources, as easy as typing a password for ESXi and a password for vCenter.

When a new EVO:RAIL appliance is brought up on the same network as an existing EVO:RAIL deployment, the management UI should pick up on its presence using “VMware Loudmouth”. Once the administrator clicks to add the 2nd appliance, this workflow should appear.



So long as there are sufficient IP addresses in the original IP pools defined when the first appliance was deployed, then it’s merely a matter of providing passwords. So what happens after doing that and clicking the “Add EVO:RAIL Appliance” button?

In contrast to the core 30 steps that the EVO:RAIL Configuration completes, adding additional EVO:RAIL appliances to an existing cluster requires significantly fewer – in total just 17 steps. They are as follows:

  1. Check settings
  2. Delete System VMs from hosts
  3. Place ESXi hosts into maintenance mode
  4. Set up management network on hosts
  5. Configure NTP Settings
  6. Configure Syslog Settings
  7. Delete Default port groups on ESXi hosts
  8. Disable Virtual SAN on ESXi hosts
  9. Register ESXi hosts to vCenter
  10. Setup NIC Bonding on ESXi hosts
  11. Setup FQDN on ESXi hosts
  12. Setup Virtual SAN, vSphere vMotion and VM Networks on ESXi hosts
  13. Setup DNS
  14. Restart Loudmouth on ESXi hosts
  15. Setup clustering for ESXi hosts
  16. Configure root password on ESXi hosts
  17. Exit maintenance mode on the ESXi hosts

One reason why adding subsequent appliances takes less than 7 minutes, compared with the initial configuration’s 15 minutes, is that components such as setting up vCenter and SSO aren’t needed because they are already present in the environment. So pretty much all the EVO:RAIL engine has to do is set up the ESXi hosts so that they are in a valid state to be added to the existing cluster.

You’ll notice that I’ve chosen to highlight Step 2 in my numbered list. Every EVO:RAIL that leaves a Qualified EVO:RAIL Partner (QEP) factory floor is built in the same way. It can be used to carry out a net-new deployment at a new location or network, or it can be used to auto-scale-out an existing environment. If it is a net-new deployment, the customer connects to the EVO:RAIL Configuration UI ( by default). If on the other hand the customer wants to add capacity, they would complete the “Add New EVO:RAIL Appliance” workflow. In this second scenario, the built-in instances of vCenter Server Appliance and vRealize Log Insight are no longer needed on node01, and so they are removed.

If you want to experience this process of adding a second EVO:RAIL appliance at first hand, don’t forget our hands-on-lab now showcases this process. Check out the HOL at:

HOL-SDC-1428 – Introduction to EVO:RAIL


Posted by on July 6, 2015 in EVO:RAIL


EVO:RAIL – Under The Covers – What happens when EVO:RAIL Configures (Part 1)

One of the things EVO:RAIL excels at is automating stuff. As you might know, EVO:RAIL automates the deployment of VMware ESXi, vCenter and Log Insight, as well as carrying out countless configuration steps that allow a VMware High Availability (HA), Distributed Resource Scheduler (DRS) and VSAN cluster to work – all this and more in around 15 minutes flat. However, these individual steps are not widely talked about, and where things get “interesting” is when the various big tasks are carried out. Understanding these steps helps demystify what EVO:RAIL is doing, and also helps explain some of the messages you see. For me there are three main big tasks the EVO:RAIL engine carries out:

  1. Configuration of the very first EVO:RAIL Appliance
  2. Adding an additional appliance to expand available compute and storage resources (commonly referred to in marketing-speak as auto-discovery and auto scale-out).
  3. Replacing a failed node with a new node – commonly caused by failure of a motherboard or CPU socket.

This blog post and its two companion posts (Parts 2/3) attempt to describe in a bit more detail what’s happening in these processes, and why there are subtle differences between them. So let’s start with the first – the steps taken during the configuration of the very first EVO:RAIL appliance.

You can see these steps listed in the EVO:RAIL UI during the Initialize/Build/Configure/Finalize process. It’s perhaps not so thrilling to sit and watch that. Believe it or not, I recorded the 15-minute process so I could rewind slowly and document each step below. I guess I could have asked someone in my team internally for this information, but I suspect they have better things to do!

[Screenshot: the EVO:RAIL configuration progress screen]

So here’s the list….

  1. Set password on vCenter
  2. Install private DNS on vCenter
  3. Configure private DNS on vCenter
  4. Configure vCenter to use private DNS
  5. Perform mDNS ESXi host discovery
  6. Setup management network on ESXi hosts
  7. Configure NTP on ESXi Hosts
  8. Configure Syslog on ESXi hosts
  9. Configure vCenter’s FQDN
  10. Configure NTP on the vCenter
  11. Configure Syslog on the vCenter
  12. Restart Loudmouth on the vCenter
  13. Accept EULA on vCenter
  14. Create vCenter Database
  15. Initialize SSO
  16. Start vCenter (vpxd)
  17. Create management account on vCenter
  18. Register ESXi hosts with vCenter
  19. Configure FQDN on ESXi hosts
  20. Rename vCenter Server management network on ESXi hosts
  21. Configure NIC Team
  22. Setup Virtual SAN, vSphere vMotion, VM Networks on ESXi hosts
  23. Setup DNS on ESXi hosts
  24. Restart Loudmouth on ESXi hosts
  25. Enable vSphere HA/DRS
  26. Create a Storage Policy
  27. Configuring Enhanced vMotion Compatibility
  28. Set vCenter Log Insight to auto-start after Power Cycle events
  29. Configure root password on ESXi hosts
  30. Register EVO:RAIL Service with mDNS

I don’t have much to say about these steps except to make a couple of remarks. Firstly, they form the bedrock of Parts 2/3 of this blog post series. Most of the EVO:RAIL “big tasks” will do some (but critically not ALL) of these steps. For example, there is no point in deploying vCenter when adding an additional appliance if it has already been done when building the first. It is for this reason that adding an additional appliance only takes about 7 mins, whereas building the first appliance takes around 15 mins.

Secondly, knowing the process can help in troubleshooting. For example, notice how the vCenter ‘root’ (and administrator@vsphere.local) password is changed at Step 1, whereas the ‘root’ password on the ESXi hosts is not changed until Step 29. If there was a problem during the configuration process between these two steps, the password to log into vCenter would be different from the password to log into the ESXi host(s). Incidentally, this separation of the password changes is deliberate. We don’t change the root password of the VMware ESXi hosts until the very end, when we can guarantee the appliance build process has been successful.


In the best of all possible worlds (to quote Voltaire’s Candide for a moment) you shouldn’t have to know about these steps. But a little knowledge is a dangerous thing (to quote Alexander Pope). I could go on showing off my literary credentials, but if I carry on like this you’ll think I’m just a snooty Brit who thinks he knows it all (incidentally, you’d be right!). And let’s face it, no one likes a clever Richard, do they?

Tune into the next thrilling episode for Parts 2/3 where it gets infinitely more interesting.


Posted by on June 26, 2015 in EVO:RAIL


EVO:RAIL Introduces support for Enhanced vMotion Compatibility

It’s my pleasure to say that EVO:RAIL 1.2 has been released with a number of enhancements. In case you don’t know, there have been a number of maintenance releases (1.0.1, 1.0.2 and 1.1) that shipped at the end of last year and the beginning of this. The 1.2 release rolls up all the changes previously introduced, and critically adds a new step to the configuration of the EVO:RAIL appliance – support for “Enhanced vMotion Compatibility”, or EVC.

The support for EVC is an important step because it allows both Intel Ivy Bridge and Intel Haswell based EVO:RAIL systems to co-exist in the same VMware HA and DRS cluster. The new EVC support enables the “Ivy Bridge” EVC mode, which means a new Haswell system can join an Ivy Bridge based VMware cluster and still allow vMotion events triggered by DRS or maintenance mode to occur.

[Screenshot: the VMware EVC settings in the vSphere Web Client]

Prior to this release of EVO:RAIL, Enhanced vMotion Compatibility was not enabled by default. You might ask: is it possible to enable EVC on systems prior to 1.2? The answer is yes, so long as you follow this order of steps:

  • Upgrade your EVO:RAIL appliance to EVO:RAIL 1.1+.
  • Connect to the vSphere Web Client and login with administrator privileges.
  • From Home, click Hosts and Clusters.
  • Click the EVO:RAIL cluster, Marvin-Virtual-SAN-Cluster-<uuid>.
  • Click Manage > Settings > Configuration > VMware EVC, and click Edit.
  • Select the “Enable EVC for Intel Hosts” radio button.
  • From the VMware EVC Mode dropdown, select “Intel Ivy Bridge Generation”. Click OK.
  • The VMware EVC settings will now show that EVC is enabled with mode set to “Intel Ivy Bridge Generation”.

This simple step-by-step process can be carried out without shutting down any existing VMs, and allows a VM running on an Ivy Bridge system to be vMotioned to/from a Haswell system without a problem.

These steps are reproduced from this KB article here:


Posted by on June 22, 2015 in EVO:RAIL
