
Category Archives: EVO:RAIL

EVO:RAIL Under The Covers: How DNS works

How DNS works in EVO:RAIL

One of the big differences in how vSphere works as deployed by EVO:RAIL is with DNS. As you might know, vSphere has many requirements for name resolution, and various vSphere features will often not function or set up correctly without DNS being available. A classic example is simply opening a Remote Console window on a VM. Although that request might be triggered from a vCenter session, it’s actually the VMware ESXi host that handles the redirection of the video, and allows for Keyboard, Mouse and Screen (KMS) functionality. Remote Console sessions require name resolution of the VMware ESXi host in order to work. I could go on at length with other examples, but you get the picture.

The good news is the EVO:RAIL Appliance takes care of all these requirements. In fact, EVO:RAIL has its own built-in DNS service. This means that there are no service dependencies required to set up the appliance at a greenfield location. That’s right, the EVO:RAIL appliance will configure itself – even if there’s no DNS, DHCP or Active Directory.

This does mean that the way name resolution is achieved is different from standard vSphere as deployed manually by customers. With vSphere, name resolution from the VMware ESXi host is via its management network. For example, after installing VMware ESXi the customer assigns a static IP address and configures the VMware ESXi host with its primary and secondary DNS servers, as well as its domain suffix, using something like the Direct Console User Interface.

Screen Shot 2015-09-16 at 15.00.38

In this case the VMware ESXi host queries the corporate DNS server directly. With EVO:RAIL this behaviour is similar, but different. The EVO:RAIL Configuration Engine sets static IP addresses for the ESXi management network and also sets the preferred DNS settings – however, the server that the hosts query is the built-in DNS server of the EVO:RAIL appliance.
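
If you want to see this for yourself, esxcli can show which DNS server(s) a host’s management network has been pointed at. It’s just a read-only check from an SSH session on one of the ESXi nodes – what you should expect back is the IP address of the vCenter Server Appliance:

  # list the DNS servers this ESXi host has been configured to query
  esxcli network ip dns server list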

So in this case the DNS query takes this path:

ESXi host >> vCenter Server Appliance DNS service >> if not resolved internally, the query is forwarded on to the corporate DNS server.

You can tell that a DNS service is running on the vCenter Server Appliance using the command netstat -natlp | grep ':53'. As you might know, DNS servers answer queries on port 53. This will show that there is a "dnsmasq" service listening.
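
For reference, here is that check spelled out – dnsmasq answers on both TCP and UDP, so it is worth looking at both (run on the vCenter Server Appliance):

  # TCP listeners on port 53
  netstat -natlp | grep ':53'
  # UDP sockets on port 53
  netstat -naulp | grep ':53'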

Screen Shot 2015-09-16 at 15.05.23

The dnsmasq service holds hostname records in a text file on the vCenter Server Appliance, /var/lib/vmware-marvin/dnsmasq/hosts. Usually, this will contain at least the four VMware ESXi hosts that make up the EVO:RAIL Appliance, together with the IP address and FQDN of the vCenter Server Appliance. In the new release of EVO:RAIL we will have a dedicated virtual appliance for managing the physical appliance that we are calling the “EVO:RAIL Orchestration Appliance”. You can see it listed in the screen grab as evo04-evorail.vsphere.local.
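
To give a feel for the format – the entries below are purely illustrative, not taken from a real system – the file follows the standard hosts-file layout of an IP address followed by the name it resolves to:

  # /var/lib/vmware-marvin/dnsmasq/hosts (illustrative entries only)
  192.168.10.200   vcserver.vsphere.local
  192.168.10.1     esxi-node01.vsphere.local
  192.168.10.2     esxi-node02.vsphere.local
  192.168.10.3     esxi-node03.vsphere.local
  192.168.10.4     esxi-node04.vsphere.local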

Screen Shot 2015-09-16 at 15.27.44

If you add a second appliance to double your compute and storage resources, the hosts file is updated to include the FQDNs of the new ESXi hosts. In the example above, no corporate DNS server was specified, so the EVO:RAIL dnsmasq service is the source for all queries. It’s rare to actually need to modify this file, although one situation where you might is if you decide to change the management IP or FQDN of the servers listed here.

As for the forwarding of queries for systems not listed in the hosts file – that’s held in a file dedicated to the dnsmasq configuration. So it’s not the usual /etc/resolv.conf file that normally holds the primary/secondary DNS IPs on Linux; instead, the forwarder is defined by the server= setting in /etc/dnsmasq.conf. We have KB article 2107249 (http://kb.vmware.com/kb/2107249) which describes the file and how to edit it. For instance, you may wish to change the corporate DNS server entry if the IP address of the DNS service has changed, or if you fat-fingered the setting.
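
A minimal way to check the forwarder from an SSH session on the vCenter Server Appliance (the address shown is just a placeholder):

  # show the upstream/corporate DNS forwarder that dnsmasq will use
  grep '^server=' /etc/dnsmasq.conf
  # e.g. server=10.20.30.40

If you do edit the entry, remember that dnsmasq only reads its configuration at startup, so the service needs to be restarted for the change to take effect – KB 2107249 covers the supported procedure.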

So, to summarize: EVO:RAIL has its own DNS service that allows it to meet vSphere’s requirements for name resolution. That’s ideal for greenfield deployments because there is no dependency on DNS, DHCP or Microsoft Active Directory. You can, of course, point the EVO:RAIL DNS service at an ‘external’ or corporate DNS server for all other queries.

 

Posted by on October 6, 2015 in EVO:RAIL


VMUG WebCast: Overview of EVO:RAIL and Deep Dive into Version 1.2 Features

Abstract:

EVO:RAIL is the first 100% VMware-powered Hyper-Converged Infrastructure Appliance (HCIA). EVO:RAIL delivers compute, network, storage and management resources integrated onto an optimized 2U/4N hardware platform; all available via our 8 Qualified EVO:RAIL Partners and backed by a single point of contact for both hardware and software. EVO:RAIL has gained a lot of momentum in a very short timeframe, and the EVO:RAIL team continuously brings new capabilities to improve performance, scale and automation.

Join this session to get an overview of EVO:RAIL, a deep dive into the new EVO:RAIL 1.2 and a product demonstration from the EVO:RAIL Product Marketing Manager and Product Manager.

Presented by: Michael Gandy and Justin Smith, VMware

Registration: Click Here!

 

 

Posted by on July 31, 2015 in EVO:RAIL


Under The Covers – What happens when…EVO:RAIL Replaces a node (Part 3)

In my previous blog post I walked you through what happens when adding an additional EVO:RAIL appliance to an existing deployment or cluster. Now I want to look at the next important workflow. You could relate this to the issue of serviceability. There are a number of scenarios that need to be considered from a hardware perspective, including:

  • Replacing an entire node within the EVO:RAIL appliance
  • Replacing a boot disk
  • Replacing an HDD or SSD used by VSAN
  • Replacing a failed NIC

There are surprisingly few circumstances that would trigger the replacement of an entire node. They usually fall into the category of a failed CPU, memory or motherboard. It’s perhaps worth stating that our different Qualified EVO:RAIL Partners (QEPs) have each customized how they handle these sorts of failures, relative to how they handle them for their other hardware offerings. For instance, one partner might prefer to replace the motherboard if it fails, whereas another will see it as easier to ship a replacement node altogether. That’s the subject of this blog post – the scenario where an entire node is replaced by the QEP.

As you might know from my previous post, every EVO:RAIL has its own unique appliance ID, say MAR12345604, and every node within that appliance has its own node ID expressed with a dash and number, for instance -01, -02, -03 and -04. When the appliance ID and node ID are combined, they create a globally unique identifier for that node on the network. These values are stored in the “AssetTag” part of each node’s system BIOS settings, and are generated and assigned at the factory.
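
As a side note, you can usually read this value back from the node itself; on recent ESXi builds the platform information reported by esxcli includes the BIOS asset tag (this is just an informal check, not part of any official procedure):

  # show this node's platform details (vendor, serial number and, on recent builds, the asset tag)
  esxcli hardware platform get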

So if, for instance, node03 died and had an identity of MAR12345604-03, a replacement node would be built at the factory and shipped to the customer with the same ID. The old node would be removed and dumped in the trash, and the new node would be slotted into its place and powered on for the first time. At this point a little EVO:RAIL magic takes place. When the replacement node is powered on for the first time, it advertises itself on the network using the “VMware Loudmouth” daemon. This advertisement is picked up by the existing EVO:RAIL appliance, which recognizes, firstly, that this node should be part of the same appliance because it has a matching appliance ID, and secondly, that it is there specifically to replace a failed node.

In the EVO:RAIL UI this appears as an “Add EVO:RAIL Node” pop-up message – indicating that a node was “serviced” and can be “replaced”.

Screen Shot 2015-04-14 at 16.16.27

The steps taken by this workflow are similar but different to adding additional appliances to an existing cluster:

  1. Check Settings
  2. Unregister conflicting ESXi host from vCenter Server
  3. Delete System VMs from replacement server
  4. Place ESXi hosts into maintenance mode
  5. Set up management network on hosts
  6. Configure NTP Settings
  7. Configure Syslog Settings
  8. Delete Default port groups on ESXi host
  9. Disable Virtual SAN on ESXi host
  10. Register ESXi hosts to vCenter
  11. Setup NIC Bonding on ESXi host
  12. Setup FQDN on ESXi host
  13. Setup Virtual SAN, vSphere vMotion and VM Networks on ESXi host
  14. Setup DNS
  15. Restart Loudmouth on ESXi host
  16. Setup clustering for ESXi host
  17. Configure root password on ESXi host
  18. Exit maintenance mode on the ESXi host

Once again, you’ll notice I’ve highlighted a key step in bold – that’s Step 2. One process that the “Add EVO:RAIL Node” workflow automates (amongst many others!) is clearing out of vCenter any dead, stale and orphaned references to ESXi hosts that have shuffled off this mortal coil.

That might leave you with one question begging. Given that the ‘replacement node’ has the same appliance ID, how does the EVO:RAIL engine “know” that this is a replacement node? The answer is that before the “Add EVO:RAIL Node” pop-up appears the node reports its configuration to the core EVO:RAIL engine running inside the vCenter Server Appliance (vCSA). The EVO:RAIL engine inspects the node to check it is blank and just has a generic factory specification.

If you want to experience this process of adding a replacement EVO:RAIL node at first hand, don’t forget our hands-on-lab now showcases this process. Check out the HOL at:

HOL-SDC-1428 – Introduction to EVO:RAIL

 

Posted by on July 13, 2015 in EVO:RAIL


Under The Covers – What happens when…EVO:RAIL Adds an additional appliance (Part 2)

In my previous blog post I walked you through what happens when EVO:RAIL is being configured for the very first time. Now I want to look at the next important workflow. As you might know from reading this series of posts, EVO:RAIL has auto-discovery and auto-scale-out functionality. A daemon called “VMware Loudmouth”, which runs on each of the four nodes that make up an EVO:RAIL as well as on the vCenter Server Appliance, is used to “advertise” additional EVO:RAIL appliances on the network. The idea is a simple one – to make adding additional EVO:RAIL appliances, to increase capacity and resources, as easy as typing a password for ESXi and a password for vCenter.

When EVO:RAIL is brought up on the same network as an existing EVO:RAIL deployment the management UI should pick up on its presence using “VMware Loudmouth”. Once the administrator clicks to add the 2nd appliance, this workflow should appear.

newappliance

newappliance02

So long as there are sufficient IP addresses in the original IP pools defined when the first appliance was deployed, then it’s merely a matter of providing passwords. So what happens after doing that and clicking the “Add EVO:RAIL Appliance” button?

In contrast to the core 30 steps that the initial EVO:RAIL configuration completes, adding an additional EVO:RAIL appliance to an existing cluster requires significantly fewer – in total just 17 steps. They are as follows:

  1. Check settings
  2. Delete System VMs from hosts
  3. Place ESXi hosts into maintenance mode
  4. Set up management network on hosts
  5. Configure NTP Settings
  6. Configure Syslog Settings
  7. Delete Default port groups on ESXi hosts
  8. Disable Virtual SAN on ESXi hosts
  9. Register ESXi hosts to vCenter
  10. Setup NIC Bonding on ESXi hosts
  11. Setup FQDN on ESXi hosts
  12. Setup Virtual SAN, vSphere vMotion and VM Networks on ESXi hosts
  13. Setup DNS
  14. Restart Loudmouth on ESXi hosts
  15. Setup clustering for ESXi hosts
  16. Configure root password on ESXi hosts
  17. Exit maintenance mode on the ESXi hosts

One reason why adding subsequent appliances takes less than 7 minutes, compared with around 15 minutes for the initial configuration, is that components such as vCenter and SSO don’t need to be set up because they are already present in the environment. So pretty much all the EVO:RAIL engine has to do is set up the ESXi hosts so that they are in a valid state to be added to the existing cluster.

You’ll notice that I’ve chosen to highlight Step 2 in the list above. Every EVO:RAIL that leaves a Qualified EVO:RAIL Partner (QEP) factory floor is built in the same way. It can be used to carry out a net-new deployment at a new location or network, or it can be used to auto-scale-out an existing environment. If it is a net-new deployment the customer connects to the EVO:RAIL Configuration UI (https://192.168.10.200:7443 by default). If, on the other hand, the customer wants to add capacity, they complete the “Add New EVO:RAIL Appliance” workflow. In this second scenario the built-in instances of the vCenter Server Appliance and vRealize Log Insight are no longer needed on node01, and so they are removed.

If you want to experience this process of adding a second EVO:RAIL appliance at first hand, don’t forget our hands-on-lab now showcases this process. Check out the HOL at:

HOL-SDC-1428 – Introduction to EVO:RAIL

 

Posted by on July 6, 2015 in EVO:RAIL


EVO:RAIL – Under The Covers – What happens when EVO:RAIL Configures (Part 1)

One of the things EVO:RAIL excels at is automating stuff. As you might know, EVO:RAIL automates the deployment of VMware ESXi, vCenter and Log Insight, as well as carrying out countless configuration steps that allow a VMware High Availability (HA), Distributed Resource Scheduler (DRS) and VSAN cluster to work – all this and more in around 15 minutes flat. However, these individual steps are not widely talked about, and where things get “interesting” is when the various big tasks are carried out. Understanding these steps helps demystify what EVO:RAIL is doing, and also helps explain some of the messages you see. For me there are three main big tasks the EVO:RAIL engine carries out:

  1. Configuration of the very first EVO:RAIL Appliance
  2. Adding an additional appliance to expand available compute and storage resources (commonly referred to in marketing-speak as auto-discovery and auto scale-out).
  3. Replacing a failed node with a new node – commonly caused by failure of a motherboard or CPU socket.

This blog post and its two companion posts (Parts 2/3) attempt to describe in a bit more detail what’s happening in these processes, and why there are subtle differences between them. So let’s start with the first – the steps taken during the configuration of the very first EVO:RAIL appliance.

You can see these steps listed in the EVO:RAIL UI during the Initialize/Build/Configure/Finalize process. It’s perhaps not so thrilling to sit and watch that. Believe it or not I recorded the 15min process so I could rewind slowly and document each one below. I guess I could have asked someone in my team internally for this information, but I suspect they have better things to do!

Screen Shot 2015-06-26 at 10.53.46

So here’s the list….

  1. Set password on vCenter
  2. Install private DNS on vCenter
  3. Configure private DNS on vCenter
  4. Configure vCenter to use private DNS
  5. Perform mDNS ESXi host discovery
  6. Setup management network on ESXi hosts
  7. Configure NTP on ESXi Hosts
  8. Configure Syslog on ESXi hosts
  9. Configure vCenter’s FQDN
  10. Configure NTP on the vCenter
  11. Configure Syslog on the vCenter
  12. Restart Loudmouth on the vCenter
  13. Accept EULA on vCenter
  14. Create vCenter Database
  15. Initialize SSO
  16. Start vCenter (vpxd)
  17. Create management account on vCenter
  18. Register ESXi hosts with vCenter
  19. Configure FQDN on ESXi hosts
  20. Rename vCenter Server management network on ESXi hosts
  21. Configure NIC Team
  22. Setup Virtual SAN, vSphere vMotion, VM Networks on ESXi hosts
  23. Setup DNS on ESXi hosts
  24. Restart Loudmouth on ESXi hosts
  25. Enable vSphere HA/DRS
  26. Create a Storage Policy
  27. Configuring Enhanced vMotion Compatibility
  28. Set vCenter Log Insight to auto-start after Power Cycle events
  29. Configure root password on ESXi hosts
  30. Register EVO:RAIL Service with mDNS

I don’t have much to say about these steps except to make a couple of remarks. Firstly, they form the bedrock of Parts 2/3 of this blog post series. Most of the EVO:RAIL “big tasks” will do some (but critically not ALL) of these steps. For example, there is no point in deploying vCenter when adding an additional appliance – it has already been done building the first. It is for this reason that adding an additional appliance only takes about 7 minutes – whereas building the first appliance takes around 15 minutes.

Secondly, knowing the process can help in troubleshooting. For example, notice how the vCenter ‘root’ (and administrator@vsphere.local) password is changed at Step 1, whereas the ‘root’ password on the ESXi hosts is not changed until Step 29. If there were a problem during the configuration process between these two steps, it would mean the password to log in to vCenter would be different from the password to log in to the ESXi host(s). Incidentally, this separation of the password changes is deliberate. We don’t change the root password of the VMware ESXi hosts until the very end, when we can be sure the appliance build process has been successful.

Conclusions:

In the best of all possible worlds (to quote Voltaire’s Candide for a moment) you shouldn’t have to know about these steps. But a little knowledge is a dangerous thing (to quote Alexander Pope). I could go on flashing off my literary credentials but if I carry on like this you’ll think I’m just a snooty Brit who thinks he knows it all (incidentally, you’d be right!). And let’s face it, no one likes a clever Richard, do they?

Tune into the next thrilling episode for Parts 2/3 where it gets infinitely more interesting.

 

Posted by on June 26, 2015 in EVO:RAIL


EVO:RAIL Introduces support for Enhanced vMotion Compatibility

It’s my pleasure to say that EVO:RAIL 1.2 has been released with a number of enhancements. In case you don’t know, there have been a number of maintenance releases (1.0.1, 1.0.2 and 1.1) that shipped at the end of last year and the beginning of this. The 1.1 release rolls up all the changes previously introduced, and critically adds a new step to the configuration of the EVO:RAIL appliance – support for “Enhanced vMotion Compatibility”, or EVC.

The support for EVC is an important step because it allows both Intel Ivy Bridge and Intel Haswell based EVO:RAIL systems to co-exist in the same VMware HA and DRS cluster. The new EVC support enables the “Ivy Bridge” EVC mode, which means a new Haswell system can join an Ivy Bridge based VMware cluster – and still allow vMotion events triggered by DRS or maintenance mode to occur.

Screen Shot 2015-06-19 at 5.02.00 AM

Prior to this release of EVO:RAIL, Enhanced vMotion Compatibility was not enabled by default. You might ask whether it is possible to enable EVC on systems running a release prior to 1.2. The answer is yes, so long as you follow these steps in order:

  • Upgrade your EVO:RAIL appliance to EVO:RAIL 1.1+.
  • Connect to the vSphere Web Client and login with administrator privileges.
  • From Home, click Hosts and Clusters.
  • Click the EVO:RAIL cluster, Marvin-Virtual-SAN-Cluster-<uuid>.
  • Click Manage > Settings > Configuration > VMware EVC, and click Edit.
  • Select the “Enable EVC for Intel Hosts” radio button.
  • From the VMware EVC Mode dropdown, select “Intel Ivy Bridge Generation”. Click OK.
  • The VMware EVC settings will now show that EVC is enabled with mode set to “Intel Ivy Bridge Generation”.

This simple step-by-step process can be carried out without shutting down any existing VMs, and allows a VM running on the Ivy Bridge system to be vMotioned to/from the Haswell system without a problem.

These steps are reproduced from this KB article here: http://kb.vmware.com/kb/2114368

 

Posted by on June 22, 2015 in EVO:RAIL


EVO:RAIL – Under The Covers – EVO:RAIL Software Versions and Patch Management

If you have access to the EVO:RAIL Management UI it’s very easy to see what version of vCenter, ESX and EVO:RAIL software you are running. Under the “Config” node in the Management UI the first page you see is the “General” page that will show you versions of the software currently in use:

Screen Shot 2015-06-03 at 15.56.10

There are ways of finding out what version of the EVO:RAIL software is in use directly from the vCSA and from the ESXi hosts. You can find out the version of EVO:RAIL from the vCSA using the RPM (Red Hat Package Manager) command. I’ve sometimes used these commands before I do a build of the EVO:RAIL just to confirm what version I’m working with. It also reminds me that I may need to do an update once the build process has completed. Finally, I’ve used these commands when I’ve been supporting folks remotely…

rpm -qa | grep "vmware-marvin"

This should print the version of the EVO:RAIL software running inside the vCenter Server Appliance by querying the status of the “vmware-marvin” software.

Screen Shot 2015-06-03 at 15.58.39

From the ESXi host, the “esxcli” command has a method of listing all the VIBs (vSphere Installation Bundles) installed on the host; again we can pipe the output through grep to search for a particular string:

esxcli software vib list | grep "marvin"

Screen Shot 2015-06-03 at 15.57.59

So for the most part you can retrieve the EVO:RAIL version number using the EVO:RAIL Management UI, but if you prefer you can also retrieve that information from the command-line.

The patch management process is a relatively simple affair. Firstly, where do you get patches for EVO:RAIL? The answer is MyVMware. New versions of EVO:RAIL are released to manufacturing (RTM) for our partners some weeks before general availability (GA) on the vmware.com site. Of course, behind the scenes there is the usual process of beta, RC, RTM and GA that customers don’t have to worry about. Generally, we would recommend checking with your partner before you download the latest bits and upgrade – just to confirm that they have approved it for your system.

The EVO:RAIL patch management system can update the EVO:RAIL engine (both its vCenter and ESXi components), ESXi and vCenter. The patches are distributed as .ZIP files and generally contain bundles of either VIBs (vSphere Installation Bundles) for ESXi or RPM (Red Hat Package Manager) files. These get uploaded to the VSAN datastore on the appliance, and then an install process is triggered. The whole process is seamless and automated – putting each ESXi host into maintenance mode, applying the patch and then rebooting; once completed, the host exits maintenance mode and the next host is updated. The process is by its nature serial on a single four-node appliance, to make sure there are always three nodes available in the cluster – each node must complete before the next node is updated. So there’s no need for VMware Update Manager or any Windows instances to handle the process.

 

Posted by on June 3, 2015 in EVO:RAIL


EVO:RAIL – The vSphere Environment – The Physical Resources (Part2)

In my previous blog post I focused on the vSphere ‘metadata’ that makes up each and every configuration of vSphere, and for that matter, EVO:RAIL. Of course what matters is how we carve up and present the all-important physical resources. These can be segmented into compute, memory, storage and networking.

Compute:

The way compute resources are handled is pretty straightforward. EVO:RAIL creates a single VMware HA and DRS cluster without modifying any of the default settings. DRS is set to be fully automated with the “Migration Threshold” left at the center point. We do not enable VMware Distributed Power Management (DPM) because in a single EVO:RAIL appliance with four nodes this would create issues for VSAN and patch management – so all four nodes are on at all times. This remains true even if you created a fully populated 8-appliance system that would contain 32 ESXi hosts. To be fair, this is pretty much a configuration dictated by VSAN. You don’t normally make your storage go to sleep to save on power, after all…

Screen Shot 2015-04-15 at 17.29.21

Similarly VMware HA does not deviate from any of the standard defaults. The main thing to mention here is that “datastore heartbeats” are pretty much irrelevant to EVO:RAIL, considering one single VSAN datastore is presented to the entire cluster.

Screen Shot 2015-04-15 at 17.31.32

Memory:

The EVO:RAIL Appliance ships with four complete nodes, each with 192GB of memory. A fully populated EVO:RAIL environment with 8 appliances would present 32 individual ESXi hosts in a single VMware HA/DRS/VSAN cluster. That’s a massive 384 cores, 6TB of memory and 128TB of raw storage capacity. We let VMware DRS use its algorithms to decide on the placement of VMs at power-on, relative to the amount of CPU and memory available across the cluster, and we let VMware DRS control whether a VM should be moved to improve its performance. No special memory reservations are made for the system VMs – vCenter, Log Insight or our partners’ VMs.
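
For what it’s worth, the arithmetic behind those headline numbers is simply the per-node specification multiplied out across 32 nodes: 32 x 192GB gives roughly 6TB of memory, and 32 x 4TB of raw disk per node (one 400GB SSD plus three 1.2TB drives, as described below) gives 128TB. The 384 cores likewise works out at 12 cores per node.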

Storage:

Every EVO:RAIL node ships with 1 x 400GB SSD and 3 x 1.2TB 10k SAS drives. When the EVO:RAIL configures itself it enrolls all of this storage into a single disk group per node. You can see these disk groups in the vSphere Web Client by navigating to the cluster and selecting Manage > Settings > Virtual SAN > Disk Management. Here you can see that each of the four EVO:RAIL nodes has a single disk group, with all disks (apart from the boot disk, of course) added into the group.
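
The same information is available from the command line on each node if you prefer – again, just a read-only check:

  # run on an ESXi node: lists every disk claimed by VSAN, whether it is the SSD, and the disk group it belongs to
  esxcli vsan storage list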

Screen Shot 2015-04-16 at 15.55.37

As for the Storage Policies that control how VMs consume the VSAN datastore, a custom storage policy called “MARVIN-STORAGE-PROFILE” is generated during the configuration of the EVO:RAIL.

Screen Shot 2015-04-16 at 17.05.04

With that said, this custom policy merely has the same settings as VSAN’s default; that is, one rule is set making “Number of Failures to Tolerate” equal to 1. The effect of this policy is that for every VM created, a copy of its data is kept on a different node elsewhere in the VSAN datastore. This means that should a node or disk become unavailable, there is a copy held elsewhere in the vSphere cluster that can be used. Think of it as being like a per-VM RAID 1 policy.

It’s perhaps worth mentioning that there are slight differences between some QEPs’ EVO:RAILs and others. These differences have NO impact on performance, but they are worth mentioning. There are two main types. In a Type 1 system the enclosure has 24 drive bays at the front. That’s six slots per node – and each node receives a boot drive, 1 x SSD and 3 x HDDs, leaving one slot free. In a Type 2 system there is an internal SATADOM drive from which the EVO:RAIL boots – and at the front of the enclosure there are 16 drive bays. Each node uses four of those slots – for 1 x SSD and 3 x HDDs. As you can tell, both Type 1 and Type 2 systems end up presenting the same amount of storage to VSAN, so at the end of the day it makes little difference. But it’s a subtle difference few have publicly picked up on. I think in the long run it’s likely all our partners will wind up using a 24-drive-bay system with an internal SATADOM device. That would free up all six drive bays for each node, and would allow for more spindles or more SSD.

Networking:

I’ve blogged previously, and at some length about networking in these posts:

EVO:RAIL – Getting the pre-RTFM in place
EVO:RAIL – Under The Covers – Networking (Part1)
EVO:RAIL – Under The Covers – Networking (Part2)
EVO:RAIL – Under The Covers – Networking (Part3)

So I don’t want to repeat myself excessively here, except to say that EVO:RAIL 1.x uses a single Standard Switch (vSwitch0), and patches in both vmnic0 and vmnic2 for network redundancy. The vmnic1 interface is dedicated to VSAN, whereas all other traffic traverses vmnic0. Traffic shaping is enabled on the vSphere vMotion portgroup to make sure that vMotion events do not negatively impact management or virtual machine traffic.
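
If you want to confirm the uplink and portgroup layout on any given node, the standard switch configuration can be dumped with esxcli – once more, purely a read-only check:

  # show vSwitch0, its attached uplinks (vmnics) and its portgroups
  esxcli network vswitch standard list
  # and the portgroup-to-VLAN mappings
  esxcli network vswitch standard portgroup list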

Summary

Well, that wraps up this two part series that covered the different aspects of vSphere environment once EVO:RAIL has done its special magic. Stay tuned for the next thrilling installment.

 

Posted by on May 21, 2015 in EVO:RAIL


EVO:RAIL – The vSphere Environment – The Metadata (Part1)

One of the most common questions I get is what the vSphere environment looks like after EVO:RAIL has done its configuration magic. I think this is quite understandable. After all many of us, including myself, have spent many years perfecting the build of the vSphere environment and naturally want to know what configuration the EVO:RAIL Team decided upon. I like to think about this from a “design” perspective. There’s no such thing as a definitive and single method of deploying vSphere (or any software for that matter). All configurations are based on designs that must take the hardware and software into consideration, and that’s precisely what the EVO:RAIL Team has done with VMware’s hyperconverged appliance.

When I’m asked this question by customers I often start from the hardware resources that vSphere and EVO:RAIL present, although this can overlook certain object types that reside outside of CPU, memory, disk and network – I’m thinking of logical objects such as vCenter datacenter and cluster names. There are other objects created at the network layer which shouldn’t be overlooked either, such as portgroups. If you like, you could refer to this as the “metadata” that makes up the vSphere environment.

So before I forget, let me cover them right away.

Logical Objects or “Metadata”

By default a net-new deployment of an EVO:RAIL appliance instantiates a clean copy of the vCenter Server Appliance. EVO:RAIL creates a vCenter datacenter object currently called “MARVIN-Datacenter” and a cluster called “MARVIN-Virtual-SAN-Cluster” followed by a very long UUID. Incidentally, this is the same UUID that you will see on the VSAN datastore, and it is generated by the Qualified EVO:RAIL Partner (QEP) during the factory build of the EVO:RAIL Appliance.

datastores

A common question is whether this datacenter object and cluster object can be renamed to be something more meaningful. The answer is yes. The EVO:RAIL Configuration and Management engine does not use these text labels in any of its processes. Instead “Managed Object Reference” or MOREF values are used. These are system-generated values that remain the same even when objects like this are renamed. As for other vCenter objects such as datastore folders or VM folders, the EVO:RAIL engine does not create these. The System VMs that make up EVO:RAIL such as vCenter, Log Insight and our partner’s VMs are merely placed in the default “Discovered Virtual Machine” folder like so:

vmfolders

Similarly, the datastore that is created by VSAN can be renamed as well. And although technically renaming the “service-datastore” is possible, there’s really little point, as it cannot be used as permanent storage for virtual machines. It’s perhaps worth mentioning that while the EVO:RAIL UI does not let you select which datastore VMs will use, there is nothing to stop that happening if you use the vSphere Web Client or vSphere Desktop Client.

actuallythisisdatastores

EVO:RAIL uses Standard Switches – as you might know, their portgroup names have always been case-sensitive and need to be consistent from one ESXi host to another. Now, of course, EVO:RAIL ensures this consistency of configuration by gathering all your variables and applying them programmatically and consistently. The portgroup names themselves are trickier to change after the fact.

portgroups

It would be relatively trivial to rename the virtual machine portgroups above to, say, Staging, Development and Production. However, it would have to be done consistently across every ESXi host. Given how EVO:RAIL 1.1 now allows for up to 8 appliances with 32 nodes per cluster, that would not be a small amount of administration. If I were forced to make a change like that, I would probably use PowerCLI with a for-each loop to rename the portgroups for me. If you want an example of that – I have some on the VMUG Wiki page here:

http://wiki.vmug.com/index.php/Configuring_Standard_Switches#Adding_a_new_VLAN_Portgroup_to_an_existing_Standard_vSwitch

There are two examples there – one that connects to vCenter and then creates a VLAN16 portgroup on every host in the vCenter, and another that creates a range of VLAN portgroups (VLAN20-25) on every host in the cluster.
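
If PowerCLI isn’t to hand, the same kind of change can also be scripted host-by-host with esxcli. Here’s a minimal sketch – the portgroup name, vSwitch name and VLAN ID are purely illustrative:

  # run on each ESXi host: create a VM portgroup on the standard switch and tag it with a VLAN
  esxcli network vswitch standard portgroup add --portgroup-name=Staging --vswitch-name=vSwitch0
  esxcli network vswitch standard portgroup set --portgroup-name=Staging --vlan-id=16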

As for the EVO:RAIL generated portgroups, such as “vCenter Server Network”, and the vmkernel ports – I would recommend leaving these alone unless you have an utterly compelling reason to change them. They aren’t exposed to those who create VMs and consume the resources of the EVO:RAIL.

vCenter Server Appliance (vCSA) Configuration

Finally, I want to draw your attention to the configuration of the vCenter Server Appliance (vCSA). What you might not notice is that, at the factory, the vCSA is modified and we allocate more resources to it than the default values contained in the .OVF or .OVA file. In EVO:RAIL the allocation is changed to 4 vCPUs and 16GB of memory. This is essentially a doubling of resources compared to the default vCSA, which would normally receive 2 vCPUs and 8GB of memory.

Well, that wraps up this post about vSphere inventory metadata – on my next post I will be looking at the resources of Compute, Datastores and Networking in more detail….

 

Posted by on May 14, 2015 in EVO:RAIL


Bonjour, je m’appelle EVO:RAIL

One of my favorite gags at UK VMUGs when I’m asked to present on EVO:RAIL is to start a demo off in a different language.

bonjour

It’s largely to put the wind up my fellow Brits, as we are somewhat notorious for not being able to speak other European languages. Generally, the British response to being confronted by someone who cannot (or in the case of the French WILL not!) speak English – is TO SPEAK MORE SLOWLY AND LOUDLY AND USE WILD SWEEPING GESTURES!!! My joke is usually to say that EVO:RAIL is soooo easy to configure you could do it in a language which you don’t understand. Look I said it was a joke, I didn’t promise that it would be funny, alright?

Anyway…. The main thing to say is that apparently a bit of cash was burned in order to provide multi-language support for both the EVO:RAIL Configuration UI and the Management UI. By default we use the browser’s default language settings to display the page. Sadly, most people don’t bother with those web-browser language settings – so all they see is the U.S. English version. [Notice how I say U.S. English, as distinct from British English, Australian English and Canadian English.]

A number of translations were made including:

  • French = FR
  • German = DE
  • Japanese = JA
  • Korean = KO
  • Simplified Chinese = zh-Hans
  • Traditional Chinese = zh-Hant

It is possible to dial up these translations by appending the ISO language code to the URL with the /?lang=CODE syntax – for example, French would be:

https://192.168.10.200:7443/?lang=FR

Web-browsers have their own places for setting language preference. This varies between Windows, Linux and the Mac – and from browser to browser. Don’t cha just love the consistency that web-based platforms deliver? 😉

FireFox on the Mac:

Screen Shot 2015-01-04 at 09.59.04

Google Chrome on the Mac:

Screen Shot 2015-01-04 at 10.00.19

Conclusions:

Impress your colleagues, friends and family with your impeccable multi-lingual skills! What I cannot vouch for is whether these translations are any good. To be honest, most U.S.-based software companies do not have a glorious reputation for other languages when it comes to product documentation and the product itself. The less said about special characters in passwords the better. Let’s just gloss over that one, shall we?

I was once in Athens, Greece (just in case you thought I was referring to one in Tennessee!) teaching a Virtual Infrastructure “Install and Configure” (ESX3.x/vCenter 2.x) course when I spied a Greek version of Windows XP. I asked my student what he thought of the translation and he said it was “Total ΒΘζζΔΧς”

 

Posted by on May 5, 2015 in EVO:RAIL
