The Ultimate Deployment Appliance adds VMware ESXi 6 Support

Actually, this happened last week – but I was so flattened by work leading up to VMworld that the joint announcement planned by Carl and me fell flat on its face! That's completely my mistake, as I totally dropped the ball on this one.

In case you don't know, the Ultimate Deployment Appliance (UDA) is a community project that I have promoted and used for some years – it's an all-in-one PXE/DHCP/TFTP appliance that massively simplifies the deployment of many operating systems – and I primarily use it for deploying VMware ESXi.

In my tests I found that merely selecting ESXi 5 Installable in the UDA menus and then selecting the ESXi 6 .iso worked right out of the box. So it was a piece of cake for Carl to produce a patch bundle that lets you select ESXi 6 from the menus, to keep things both neat and logical.


The patch bundle can be downloaded either from my site or Carl's:

From mikelaverick.com: uda-2.0.26.tgz

From UltimateDeployment.org: uda-2.0.26.tgz

 

Posted by on August 27, 2015 in vSphere

Comments Off on The Ultimate Deployment Appliance adds VMware ESXi 6 Support

Scripted VMware ESXi 5.5 Installs – Error: Read-only file system during write on

I've recently been doing some scripting work with the Ultimate Deployment Appliance (UDA), which was developed by Carl Thijsen of the Netherlands. The reason for this work is to make it easy for me to switch between different versions of EVO:RAIL on my SuperMicro systems. I want to be able to easily flip between different builds, and it seemed like the easiest way to do this remotely was to use my old faithful, the UDA. This means I can run EVO:RAIL 1.2.1, which is based on vSphere 5.5, and then rebuild the physical systems around our newer builds, which incidentally use vSphere 6.0.

Anyway, I encountered an odd error when scripting the install of VMware ESXi 5.5 – one I hadn't seen with VMware ESXi 6.0. The error read: Error: Read-only file system during write on /dev/disks/naa.blah.blah.blah.


Normally, the lines:

clearpart --alldrives --overwritevmfs
install --firstdisk=ST300MM0026,local --overwritevmfs

would be enough to wipe any existing installation and VMFS volume. But the installer wasn't happy, and that approach simply didn't work. (Incidentally, "ST300MM0026" is the boot disk, a Seagate drive.) I had to modify the 'clearpart' line like so:

clearpart --firstdisk=ST300MM0026 --overwritevmfs
install --firstdisk=ST300MM0026,local --overwritevmfs

I think what was happening was that clearpart wasn't seeing the drive properly, and specifying it by model number allowed the VMFS partition to be properly cleared.
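For context, here's a minimal sketch of the sort of kickstart script I ended up with. The root password and network lines below are just placeholders for illustration, so adjust them for your own environment – the important part is the clearpart/install pair:

accepteula
# placeholder root password – change this
rootpw VMware123!
# wipe the VMFS partition on the boot disk (by model number), then install to it
clearpart --firstdisk=ST300MM0026 --overwritevmfs
install --firstdisk=ST300MM0026,local --overwritevmfs
# placeholder networking – DHCP on the first NIC
network --bootproto=dhcp --device=vmnic0
reboot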

Anyway, I doubt this will matter to most people, but I thought I would share in case someone else sees this…

UPDATE: Well, after automating the install of VMware ESXi 5.5, I decided to flip back to VMware ESXi 6.0 – and I encountered the exact same error. So now both my 5.5 and 6.0 scripts include the change to clearpart.

 

Posted by on August 4, 2015 in vSphere

Comments Off on Scripted VMware ESXi 5.5 Installs – Error: Read-only file system during write on

VMUG WebCast: Overview of EVO:RAIL and Deep Dive into Version 1.2 Features

Abstract:

EVO:RAIL is the first 100% VMware-powered Hyper-Converged Infrastructure Appliance (HCIA). EVO:RAIL delivers compute, network, storage and management resources integrated onto an optimized 2U/4N hardware platform, all available via our 8 Qualified EVO:RAIL Partners and backed by a single point of contact for both hardware and software. EVO:RAIL has gained a lot of momentum in a very short timeframe, and the EVO:RAIL team continuously brings new capabilities to improve performance, scale and automation.

Join this session to get an overview of EVO:RAIL, a deep dive into the new EVO:RAIL 1.2, and a product demonstration from the EVO:RAIL Product Marketing Manager and Product Manager.

Presented by: Michael Gandy and Justin Smith, VMware

Registration: Click Here!

 

 

Posted by on July 31, 2015 in EVO:RAIL

Comments Off on VMUG WebCast: Overview of EVO:RAIL and Deep Dive into Version 1.2 Features

EVO:RAIL Under The Covers: What is “Link and Launch”, and how does it work?

Around the end of 2014 the EVO:RAIL team released an update to their core software in the shape of the 1.0.1 release. One of the key features the release introduced was something we call "Link and Launch", an optional feature used by our partners. As you might know, from a hardware perspective most EVO:RAIL appliances present pretty much the same amount of CPU/memory/disk and network throughput – and that's all set to change with the announcement of more "flexible configs". Some of our Qualified EVO:RAIL Partners (QEPs) differentiate themselves in the marketplace with their various software add-ons. EVO:RAIL's "Link and Launch" feature gives our QEPs an engine to automate the deployment of these add-ons, which often take the form of virtual appliances, as well as offering links to these appliances. Sometimes these virtual appliances merely extend the functionality of the vSphere Web Client; at other times they offer a dedicated UI for managing the add-on.

The process begins at the factory. As you might know from reading this series of blog posts, node01 acts as a "bootstrap", for want of a better word, for getting the EVO:RAIL appliance up and running. On node01 you will find the VMware "System VMs" in the shape of the vCenter Server Appliance and the vRealize Log Insight appliance. If the QEP is adding value with additional appliances, they will be listed alongside the VMware "System VMs", and we often refer to these as QEP "System VMs". In the screen grab below you can see the vCSA and Log Insight alongside two 'sample' QEP VMs that I use to test this feature, called "Test VM Number 1" and "Test VM Number 2". These VMs would normally carry a product name and a reference to the vendor. Notice also how neither Log Insight nor these QEP VMs are powered on. They are only powered on if needed (as is the case with Log Insight) or when the configuration of the EVO:RAIL completes (as is the case with QEP System VMs). We often refer to QEP System VMs that come in two components as the "Primary" and "Secondary" VMs.

[Screenshot: linkandlaunchvms]

Alongside the QEP VMs we also get our partners to configure a small "manifest" file. This manifest file is a text file which contains friendly labels for populating the UI, together with references to company logos such as the Dell or EMC logo. It's this "manifest" file that populates the "QEP" section of the EVO:RAIL Configuration UI. In my case I used the generic "ACME" as the name of the vendor and QEP. In a production environment you would be more likely to see the vendor's name, such as HDS (Hitachi Data Systems) or SMC (Supermicro).

[Screenshot: linkandlaunch-config]

Since the 1.0.1 release, when "Link and Launch" was made available to our partners, we have supported a new attribute in the JSON file. As you might remember from my other posts on EVO:RAIL, it's possible to have all the settings required by the EVO:RAIL Configuration engine stored in a text file with a JSON extension. EVO:RAIL supports the configuration of a single QEP System VM or two System VMs. In the screen grab below you can see the JSON file that I use in the hands-on-lab. If you look at the bottom you can see two additional, optional entries under the category of "vendor".

[Screenshot: linkandlaunch-json]

It starts with the "vendor" attribute, which can be used to configure the two QEP VMs that have been imported into the system. Remember, this is all done at the factory, so as a customer you merely need to provide your preferred IPs for the QEP System VMs – and the EVO:RAIL engine will take care of deploying them for you.

Once the EVO:RAIL Configuration engine has deployed the appliance, at the very end it powers on the QEP System VMs and applies the IP configuration supplied. Once you log in to the EVO:RAIL Management UI, you should see a "QEP" node in the left-hand sidebar.

[Screenshot: linkandlaunch-launch]

In my case I just used a generic "ACME" style logo, and when you click to launch "ACME Test VM No.1" it just connects to a web service.

This isn't yet available to demo in our hands-on-lab, although I'm toying with the idea of including it in this year's VMworld labs. Our partners have already made great use of "Link and Launch" – not least EMC, who have produced their own VSPEX BLUE management UI, which has the look and feel of the core VMware EVO:RAIL Management UI.

[Screenshot: vspex-blue]

 

Posted by on July 21, 2015 in Uncategorized

Comments Off on EVO:RAIL Under The Covers: What is “Link and Launch”, and how does it work?

Under The Covers – What happens when…EVO:RAIL Replaces a node (Part 3)

In my previous blog post I walked you through what happens when adding an additional EVO:RAIL appliance to an existing deployment or cluster. Now I want to look at the next important workflow, which relates to serviceability. There are a number of scenarios that need to be considered from a hardware perspective, including:

  • Replacing an entire node within the EVO:RAIL appliance
  • Replacing a boot disk
  • Replacing an HDD or SSD used by VSAN
  • Replacing a failed NIC

There are surprisingly few circumstances that would trigger the replacement of an entire node. They usually fall into the category of a failed CPU, memory or motherboard. It's perhaps worth stating that our different Qualified EVO:RAIL Partners (QEPs) have customized the procedure for how they handle these sorts of failures, relative to how they handle the same issues for their other hardware offerings. For instance, one partner might prefer to replace the motherboard if it fails, whereas another will find it easier to ship a replacement node altogether. That's the subject of this blog post – the scenario where an entire node is replaced by the QEP.

As you might know from my previous post, every EVO:RAIL has its own unique appliance ID, say MAR12345604, and every node within that appliance has its own node ID expressed as a dash and a number, for instance -01, -02, -03 and -04. When the appliance ID and node ID are combined, they create a globally unique identifier for that node on the network. These values are stored in the "AssetTag" field of each node's system BIOS settings, and are generated and assigned at the factory.
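As an aside, if you're curious to see those values on a running node, the SMBIOS data can be dumped from the ESXi shell. This is just a quick check I've used rather than an official procedure, and the exact field the vendor populates may vary:

# dump the SMBIOS tables and look for the asset tag fields
smbiosDump | grep -i asset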

So if, for instance, node03 died and had an identity of MAR12345604-03, then a replacement node would be built at the factory and shipped to the customer with the same ID. The old node would be removed and dumped in the trash, and the new node would be slotted into its place and powered on for the first time. At this point a little EVO:RAIL magic takes place. When the replacement node is powered on for the first time, it advertises itself on the network using the "VMware Loudmouth" daemon. This advertisement is picked up by the existing EVO:RAIL appliance, which recognizes firstly that this node should be part of the same appliance, because it has a matching appliance ID, and secondly that it is specifically intended to replace a failed node.

In the EVO:RAIL UI this appears as an “Add EVO:RAIL Node” pop-up message – indicating that a node was “serviced” and can be “replaced”.


The steps taken by this workflow are similar to – but not quite the same as – those for adding additional appliances to an existing cluster:

  1. Check Settings
  2. Unregister conflicting ESXi host from vCenter Server
  3. Delete System VMs from replacement server
  4. Place ESXi hosts into maintenance mode
  5. Set up management network on hosts
  6. Configure NTP Settings
  7. Configure Syslog Settings
  8. Delete Default port groups on ESXi host
  9. Disable Virtual SAN on ESXi host
  10. Register ESXi hosts to vCenter
  11. Setup NIC Bonding on ESXi host
  12. Setup FQDN on ESXi host
  13. Setup Virtual SAN, vSphere vMotion and VM Networks on ESXi host
  14. Setup DNS
  15. Restart Loudmouth on ESXi host
  16. Setup clustering for ESXi host
  17. Configure root password on ESXi host
  18. Exit maintenance mode on the ESXi host

Once again, you'll notice I've highlighted a key step – that's Step 2. One process that the "Add EVO:RAIL Node" workflow automates (amongst many others!) is clearing out dead, stale and orphaned references in vCenter to ESXi hosts that have shuffled off this mortal coil.

That might leave one question begging. Given that the 'replacement node' has the same appliance ID, how does the EVO:RAIL engine "know" that this is a replacement node? The answer is that before the "Add EVO:RAIL Node" pop-up appears, the node reports its configuration to the core EVO:RAIL engine running inside the vCenter Server Appliance (vCSA). The EVO:RAIL engine inspects the node to check that it is blank and has just a generic factory specification.

If you want to experience this process of adding a replacement EVO:RAIL node at first hand, don’t forget our hands-on-lab now showcases this process. Check out the HOL at:

HOL-SDC-1428 – Introduction to EVO:RAIL

 

Posted by on July 13, 2015 in EVO:RAIL

Comments Off on Under The Covers – What happens when…EVO:RAIL Replaces a node (Part 3)

Under The Covers – What happens when…EVO:RAIL Adds an additional appliance (Part 2)

In my previous blog post I walked you through what happens when EVO:RAIL is being configured for the very first time. Now I want to look at the next important workflow. As you might know from reading this series of posts, EVO:RAIL has auto-discovery and auto-scale-out functionality. A daemon called "VMware Loudmouth", which runs on each of the four nodes that make up an EVO:RAIL as well as on the vCenter Server Appliance, is used to "advertise" additional EVO:RAIL appliances on the network. The idea is a simple one – to make adding additional EVO:RAIL appliances, to increase capacity and resources, as easy as typing a password for ESXi and a password for vCenter.
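Incidentally, if you ever want to confirm that the discovery daemon is actually running on a node, it registers as an ordinary service on the ESXi host. This is just a sanity check I use from the ESXi shell, and the init script name may differ between releases:

# check the status of the Loudmouth discovery daemon (and restart it if needed)
/etc/init.d/loudmouth status
/etc/init.d/loudmouth restart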

When a new EVO:RAIL is brought up on the same network as an existing EVO:RAIL deployment, the management UI should pick up on its presence using "VMware Loudmouth". Once the administrator clicks to add the 2nd appliance, this workflow should appear.

[Screenshot: newappliance]

[Screenshot: newappliance02]

So long as there are sufficient IP addresses in the original IP pools defined when the first appliance was deployed, then it’s merely a matter of providing passwords. So what happens after doing that and clicking the “Add EVO:RAIL Appliance” button?

In contrast to the core 30 steps that the EVO:RAIL Configuration completes, adding additional EVO:RAIL appliances to an existing cluster involves significantly fewer – in total just 17 steps. They are as follows:

  1. Check settings
  2. Delete System VMs from hosts
  3. Place ESXi hosts into maintenance mode
  4. Set up management network on hosts
  5. Configure NTP Settings
  6. Configure Syslog Settings
  7. Delete Default port groups on ESXi hosts
  8. Disable Virtual SAN on ESXi hosts
  9. Register ESXi hosts to vCenter
  10. Setup NIC Bonding on ESXi hosts
  11. Setup FQDN on ESXi hosts
  12. Setup Virtual SAN, vSphere vMotion and VM Networks on ESXi hosts
  13. Setup DNS
  14. Restart Loudmouth on ESXi hosts
  15. Setup clustering for ESXi hosts
  16. Configure root password on ESXi hosts
  17. Exit maintenance mode on the ESXi hosts

One reason why adding subsequent appliances takes less than 7 minutes, compared with around 15 minutes for the initial configuration, is that components such as vCenter and SSO don't need to be set up, because they are already present in the environment. So pretty much all the EVO:RAIL engine has to do is set up the ESXi hosts so that they are in a valid state to be added to the existing cluster.

You'll notice that I've chosen to highlight Step 2 in the list above. Every EVO:RAIL that leaves a Qualified EVO:RAIL Partner (QEP) factory floor is built in the same way. It can be used to carry out a net-new deployment at a new location or network, or it can be used to auto-scale-out an existing environment. If it is a net-new deployment, the customer connects to the EVO:RAIL Configuration UI (https://192.168.10.200:7443 by default). If, on the other hand, the customer wants to add capacity, they would complete the "Add New EVO:RAIL Appliance" workflow. In this second scenario the built-in instances of the vCenter Server Appliance and vRealize Log Insight are no longer needed on node01, and so they are removed.

If you want to experience this process of adding a second EVO:RAIL appliance at first hand, don’t forget our hands-on-lab now showcases this process. Check out the HOL at:

HOL-SDC-1428 – Introduction to EVO:RAIL

 

Posted by on July 6, 2015 in EVO:RAIL

Comments Off on Under The Covers – What happens when…EVO:RAIL Adds an additional appliance (Part 2)

EVO:RAIL – Under The Covers – What happens when EVO:RAIL Configures (Part 1)

One of the things EVO:RAIL excels at is automating stuff. As you might know, EVO:RAIL automates the deployment of VMware ESXi, vCenter and Log Insight, as well as carrying out countless configuration steps that allow a VMware High Availability (HA), Distributed Resource Scheduler (DRS) and VSAN cluster to work – all this and more in around 15 minutes flat. However, these individual steps are not widely talked about, and where it gets "interesting" is when the various big tasks are carried out. Understanding these steps helps demystify what EVO:RAIL is doing, and also helps explain some of the messages you see. For me there are three main big tasks the EVO:RAIL engine carries out:

  1. Configuration of the very first EVO:RAIL Appliance
  2. Adding an Additional Appliance to expand available compute and storage resources (commonly referred to in marketing-speak as auto-discovery and auto scale-out).
  3. Replacing a failed node with a new node – commonly caused by failure of a motherboard or CPU socket.

This blog post and its two companion posts (Parts 2/3) attempt to describe in a bit more detail what’s happening in these processes, and why there are subtle differences between them. So let’s start with the first – the steps taken during the configuration of the very first EVO:RAIL appliance.

You can see these steps listed in the EVO:RAIL UI during the Initialize/Build/Configure/Finalize process. It’s perhaps not so thrilling to sit and watch that. Believe it or not I recorded the 15min process so I could rewind slowly and document each one below. I guess I could have asked someone in my team internally for this information, but I suspect they have better things to do!


So here’s the list….

  1. Set password on vCenter
  2. Install private DNS on vCenter
  3. Configure private DNS on vCenter
  4. Configure vCenter to use private DNS
  5. Perform mDNS ESXi host discovery
  6. Setup management network on ESXi hosts
  7. Configure NTP on ESXi Hosts
  8. Configure Syslog on ESXi hosts
  9. Configure vCenter's FQDN
  10. Configure NTP on the vCenter
  11. Configure Syslog on the vCenter
  12. Restart Loudmouth on the vCenter
  13. Accept EULA on vCenter
  14. Create vCenter Database
  15. Initialize SSO
  16. Start vCenter (vpxd)
  17. Create management account on vCenter
  18. Register ESXi hosts with vCenter
  19. Configure FQDN on ESXi hosts
  20. Rename vCenter Server management network on ESXi hosts
  21. Configure NIC Team
  22. Setup Virtual SAN, vSphere vMotion, VM Networks on ESXi hosts
  23. Setup DNS on ESXi hosts
  24. Restart Loudmouth on ESXi hosts
  25. Enable vSphere HA/DRS
  26. Create a Storage Policy
  27. Configuring Enhanced vMotion Compatibility
  28. Set vCenter Log Insight to auto-start after Power Cycle events
  29. Configure root password on ESXi hosts
  30. Register EVO:RAIL Service with mDNS

I don't have much to say about these steps except to make a couple of remarks. Firstly, they form the bedrock of Parts 2 and 3 of this blog post series. Most of the EVO:RAIL "big tasks" carry out some (but critically not ALL) of these steps. For example, there is no point in deploying vCenter when adding an additional appliance if it has already been done when building the first. It is for this reason that adding an additional appliance only takes about 7 minutes, whereas building the first appliance takes around 15 minutes.

Secondly, knowing the process can help in troubleshooting. For example, notice how the vCenter 'root' (and administrator@vsphere.local) password is changed at Step 1, whereas the 'root' password on the ESXi hosts is not changed until Step 29. If there were a problem during the configuration process between these two steps, it would mean the password to log in to vCenter would be different from the password to log in to the ESXi host(s). Incidentally, this separation of the password changes is deliberate. We don't change the root password of VMware ESXi until the very end, when we can guarantee the appliance build process has been successful.

Conclusions:

In the best of all possible worlds (to quote Voltaire's Candide for a moment) you shouldn't have to know about these steps. But a little knowledge is a dangerous thing (to quote Alexander Pope). I could go on showing off my literary credentials, but if I carry on like this you'll think I'm just a snooty Brit who thinks he knows it all (incidentally, you'd be right!). And let's face it, no one likes a clever Richard, do they?

Tune into the next thrilling episode for Parts 2/3 where it gets infinitely more interesting.

 

Posted by on June 26, 2015 in EVO:RAIL

Comments Off on EVO:RAIL – Under The Covers – What happens when EVO:RAIL Configures (Part 1)

EVO:RAIL Introduces support for Enhanced vMotion Compatibility

It's my pleasure to say that EVO:RAIL 1.2 has been released with a number of enhancements. In case you don't know, there have been a number of maintenance releases (1.0.1, 1.0.2 and 1.1) that shipped at the end of last year and the beginning of this one. The 1.2 release rolls up all the changes previously introduced, and critically adds a new step to the configuration of the EVO:RAIL appliance – support for "Enhanced vMotion Compatibility", or EVC.

Support for EVC is an important step because it allows both Intel Ivy Bridge and Intel Haswell based EVO:RAIL systems to co-exist in the same VMware HA and DRS cluster. The new EVC support enables the "Ivy Bridge" EVC mode, which means a new Haswell system can join an Ivy Bridge based VMware cluster – and still allow the vMotion events triggered by DRS or maintenance mode to occur.


Prior to this release of EVO:RAIL, Enhanced vMotion Compatibility was not enabled by default. You might ask whether it is possible to enable EVC on systems running releases prior to 1.2. The answer is yes, so long as you follow these steps in order:

  • Upgrade your EVO:RAIL appliance to EVO:RAIL 1.1+.
  • Connect to the vSphere Web Client and login with administrator privileges.
  • From Home, click Hosts and Clusters.
  • Click the EVO:RAIL cluster, Marvin-Virtual-SAN-Cluster-<uuid>.
  • Click Manage > Settings > Configuration > VMware EVC, then click Edit.
  • Select the “Enable EVC for Intel Hosts” radio button.
  • From the VMware EVC Mode dropdown, select “Intel Ivy Bridge Generation”. Click OK.
  • The VMware EVC settings will now show that EVC is enabled with mode set to “Intel Ivy Bridge Generation”.

This simple step-by-step process can be carried out without shutting down any existing VMs, and it allows a VM running on an Ivy Bridge system to be vMotioned to/from a Haswell system without a problem.

These steps are reproduced from this KB article: http://kb.vmware.com/kb/2114368

 

Posted by on June 22, 2015 in EVO:RAIL

Comments Off on EVO:RAIL Introduces support for Enhanced vMotion Compatibility

EVO:RAIL – Under The Covers – EVO:RAIL Software Versions and Patch Management

If you have access to the EVO:RAIL Management UI it's very easy to see which versions of vCenter, ESXi and the EVO:RAIL software you are running. Under the "Config" node in the Management UI, the first page you see is the "General" page, which shows the versions of the software currently in use:


There are also ways of finding out what version of the EVO:RAIL software is in use directly from the vCSA and from the ESXi hosts. You can find out the version of EVO:RAIL from the vCSA using the RPM (Red Hat Package Manager) command. I've sometimes used these commands before I do a build of the EVO:RAIL just to confirm what version I'm working with. It also reminds me that I may need to do an update once the build process has completed. Finally, I've used these commands when I've been supporting folks remotely...

rpm -qa | grep "vmware-marvin"

This should print the version of the EVO:RAIL software running inside the vCenter Server Appliance by querying the installed "vmware-marvin" package.


From the ESXi host, the "esxcli" command has a method of listing all the VIBs (vSphere Installation Bundles) installed on the host; again, we can pipe the output through grep to search for a particular string:

esxcli software vib list | grep "marvin"


So for the most part you can retrieve the EVO:RAIL version number using the EVO:RAIL Management UI, but if you prefer you can also retrieve that information from the command-line.
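On a related note, if you also want the underlying ESXi and vCenter build numbers from the same shell sessions, these two commands have worked for me – treat them as a personal habit rather than an official EVO:RAIL procedure:

# on each ESXi host – prints the ESXi version, build number and license level
vmware -vl

# on the vCSA – prints the vCenter Server version and build number
vpxd -v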

The patch management process is a relatively simple affair. Firstly, where do you get patches for EVO:RAIL? The answer is MyVMware. New versions of EVO:RAIL are released to our partners (RTM) some weeks before they go GA on the vmware.com site. Of course, behind the scenes there is a process of BETA, RC, RTM and GA that customers don't have to worry about. Generally, we would recommend checking with your partner before you download the latest bits and upgrade – just to confirm that they have approved it for your system.

The EVO:RAIL patch management system can update the EVO:RAIL engine (both the vCenter and ESXi components), ESXi and vCenter. The patches are distributed as .ZIP files and generally contain bundles of either VIBs (vSphere Installation Bundles) for ESXi or RPM (Red Hat Package) files for the vCSA. These get uploaded to the VSAN datastore on the appliance, and then an install process is triggered. The whole process is seamless and automated – each ESXi host is placed into maintenance mode, the patch is applied, and the host is rebooted; once complete, the host exits maintenance mode and the next host is updated. The process is by its nature serial on a single 4-node appliance, to make sure there are always 3 nodes available in the cluster – and each node must complete before the next node is updated. So there's no need for VMware Update Manager or any Windows instances to handle the process.

 

Posted by on June 3, 2015 in EVO:RAIL

Comments Off on EVO:RAIL – Under The Covers – EVO:RAIL Software Versions and Patch Management

VMUG EVALExperience – Now with vSphere6 and VSAN6


VMware recently announced the general availability of VMware vSphere 6, VMware Integrated OpenStack and VMware Virtual SAN 6 – the industry's first unified platform for the hybrid cloud! EVALExperience will be releasing the new products, and VMUG Advantage subscribers will be able to download the latest versions of:

  • vCenter Server Standard for vSphere 6
  • vSphere with Operations Management Enterprise Plus
  • vCloud Suite Standard
  • Virtual SAN 6
  • *New* Virtual SAN 6 All Flash Add-On

For existing EVALExperience users, the previous product downloads have been replaced in order to upgrade you to the latest versions of these products. You must visit Kivuto and place an order for the updated products. Please note that the old products and keys will no longer be available; you will need to migrate to the new versions.

For further info visit: http://www.vmug.com/p/cm/ld/fid=8792

 

 

Posted by on June 3, 2015 in VMUG

Comments Off on VMUG EVALExperience – Now with vSphere6 and VSAN6