EVO:RAIL – The vSphere Environment – The Physical Resources (Part2)

In my previous blog post I focused on the vSphere ‘metadata’ that makes up each and every configuration of vSphere, and for that matter, EVO:RAIL. Of course what matters is how we carve up and present the all-important physical resources. These can be segmented into compute, memory, storage and networking.


The way compute resources are handled is pretty straightforward. EVO:RAIL creates a single VMware HA and DRS cluster without modifying any of the default settings. DRS is set to be fully automated with the “Migration Threshold” left at the center point. We do not enable VMware Distributed Power Management (DPM) because in a single EVO:RAIL appliance with four nodes this would create issues for VSAN and patch management – so all four nodes are on at all times. This remains true even in a fully populated 8-appliance system containing 32 ESXi hosts. To be fair, this is pretty much a configuration dictated by VSAN. You don’t normally make your storage go to sleep to save on power, after all…

[Screenshot: vSphere DRS settings – fully automated, default migration threshold]

Similarly, VMware HA does not deviate from any of the standard defaults. The main thing to mention here is that “datastore heartbeats” are pretty much irrelevant to EVO:RAIL, considering a single VSAN datastore is presented to the entire cluster.

[Screenshot: vSphere HA settings – default configuration]


The EVO:RAIL appliance ships with four complete nodes, each with 192GB of memory. A fully populated EVO:RAIL environment with 8 appliances would present 32 individual ESXi hosts in a single VMware HA/DRS/VSAN cluster. That’s a massive 384 cores, 6TB of memory, and 128TB of raw storage capacity. We let VMware DRS use its algorithms to decide on the placement of VMs at power-on, relative to the amount of CPU and memory available across the cluster, and we let VMware DRS control whether a VM should be moved to improve its performance. No special memory reservations are made for the System VMs of vCenter, Log Insight or our partner VMs.
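As a quick sanity check, those headline numbers fall out of simple multiplication. A small sketch (assuming 12 cores per node, which matches the 384-core total, and counting the 400GB SSD plus 3×1.2TB drives per node toward raw capacity):

```python
# Back-of-envelope totals for a fully populated 8-appliance EVO:RAIL cluster.
# Per-node figures are illustrative, taken from the numbers quoted in the post.
NODES_PER_APPLIANCE = 4
APPLIANCES = 8

CORES_PER_NODE = 12                          # 384 cores / 32 nodes
MEMORY_GB_PER_NODE = 192
RAW_STORAGE_TB_PER_NODE = 0.4 + 3 * 1.2      # 1x400GB SSD + 3x1.2TB HDD = 4.0TB

nodes = NODES_PER_APPLIANCE * APPLIANCES
print(nodes)                                          # 32 ESXi hosts
print(nodes * CORES_PER_NODE)                         # 384 cores
print(nodes * MEMORY_GB_PER_NODE / 1024)              # 6.0 TB of memory
print(round(nodes * RAW_STORAGE_TB_PER_NODE, 1))      # 128.0 TB raw
```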


Each EVO:RAIL node ships with one 400GB SSD and 3×1.2TB 10k SAS drives. When the EVO:RAIL configures itself it enrolls all of this storage into a single disk group per node. You can see these disk groups in the vSphere Web Client by navigating to the cluster and selecting >>Manage, >>Settings, >>Virtual SAN and >>Disk Management. Here you can see that each of the four EVO:RAIL nodes has a single disk group, with all disks (apart from the boot disk, of course) added into the group.

[Screenshot: Virtual SAN Disk Management in the vSphere Web Client]

As for the Storage Policies that control how VMs consume the VSAN datastore, a custom storage policy called “MARVIN-STORAGE-PROFILE” is generated during the configuration of the EVO:RAIL.

[Screenshot: the MARVIN-STORAGE-PROFILE storage policy]

With that said, this custom policy merely has the same settings as VSAN’s default; that is, a single rule setting “Number of Failures to Tolerate” equal to 1. The effect of this policy is that for every VM created, a copy of its data is kept on a different node elsewhere in the VSAN datastore. This means that should a node or disk become unavailable, there is a copy held elsewhere in the vSphere cluster that can be used. Think of it as a per-VM RAID1 policy.
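The capacity implication of that rule can be sketched with a little arithmetic. This is a simplification – it ignores witness components and other VSAN overheads – but it captures the idea that FTT=n means n+1 full copies:

```python
def replicas_required(failures_to_tolerate: int) -> int:
    # VSAN keeps FTT+1 full copies of an object, so that FTT
    # hosts/disks can fail and a complete copy still survives.
    return failures_to_tolerate + 1

def raw_capacity_needed_gb(vm_disk_gb: float, ftt: int = 1) -> float:
    # Raw datastore capacity consumed by a VM disk under a given FTT policy.
    return vm_disk_gb * replicas_required(ftt)

print(replicas_required(1))            # 2 copies with the default FTT=1
print(raw_capacity_needed_gb(100, 1))  # a 100GB VM disk consumes 200GB raw
```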

It’s perhaps worth mentioning that there are slight differences between some QEPs’ EVO:RAIL appliances and others. These differences have NO impact on performance, but they are worth mentioning. There are two main types. In Type 1 the enclosure has 24 drive bays at the front. That’s 6 slots per node – and each node receives a boot drive, 1xSSD and 3xHDD drives, leaving one slot free. In a Type 2 system there is an internal SATADOM drive from which the EVO:RAIL boots – and at the front of the enclosure there are 16 drive bays. Each node uses four of those slots – for 1xSSD and 3xHDD drives. As you can tell, both Type 1 and Type 2 systems end up presenting the same amount of storage to VSAN, so at the end of the day it makes little difference. But it’s a subtle difference few people have publicly picked up on. I think in the long run it’s likely all our partners will wind up using a 24-drive-bay system with an internal SATADOM device. That would free up all 6 drive bays for each node, and would allow for more spindles or more SSDs.


I’ve blogged previously, and at some length, about networking in these posts:

EVO:RAIL – Getting the pre-RTFM in place
EVO:RAIL – Under The Covers – Networking (Part1)
EVO:RAIL – Under The Covers – Networking (Part2)
EVO:RAIL – Under The Covers – Networking (Part3)

So I don’t want to repeat myself excessively here, except to say EVO:RAIL 1.x uses a single Standard Switch (vSwitch0), and patches both vmnic0 and vmnic1 to it for network redundancy. The vmnic1 interface is dedicated to VSAN, whereas all other traffic traverses vmnic0. Traffic shaping is enabled for the vSphere vMotion portgroup to make sure that vMotion events do not negatively impact management or virtual machine traffic.


Well, that wraps up this two-part series covering the different aspects of the vSphere environment once EVO:RAIL has done its special magic. Stay tuned for the next thrilling installment.


Posted by on May 21, 2015 in EVO:RAIL




This is a little post to tell you about the stuff I’m doing to support the VMUG. You know I really don’t do enough, you see. :-)

North East England VMUG Meeting – 21st May

I’m speaking at a couple of VMUGs this year, and my first date is with a location that’s close to my heart in the North-East of the UK. I’ve been Up North a couple of times in the last month checking in with my parents and family up there. We’re blessed in the locale by the presence and talent of guys like Lee Dilworth and David Hill, both of whom hold senior positions at VMware. Lee will be there speaking about vCloud Air, and I will be there speaking about EVO:RAIL under the covers.

Register Here:


Central Ohio UserCon – 2nd June

This year I will be at the Central Ohio UserCon(ference) on the 2nd June. It’s to be held at the Hyatt Regency Columbus, 350 N High Street, Columbus, Ohio. It has a pretty packed agenda already, with sessions on EUC, Hybrid Cloud and Emerging Technologies, Storage and Availability, as well as the old favourites of vSphere and virtualization generally.

Register Here:

Charlotte UserCon – 4th June

In the very same week I will be over in Charlotte, North Carolina for their UserCon. I’ve always had a soft spot for the Charlotte VMUG ever since, back in the day (2006? 2007?), I was asked by one of the VMUG leaders to come to their event and speak. I think that was the first time I ever got up on stage in front of a really big audience. Up until then it had been classrooms and smaller VMUG events of 50-80 people. I like to think Charlotte is where my public speaking apprenticeship started!

I will be delivering the morning keynote, which to be honest isn’t very keynotey. It’s more of a breakout session on a big stage, rather than a lofty vision-thing style presentation. Once again I will be talking about the nuts and bolts of EVO:RAIL. Delivering the lunchtime keynote will be none other than Chad Sakac of EMC. That’s quite a coup for Charlotte, as Chad is a busy man and everyone wants a slice of him and his time. His subject is “Technology and Industry Disruptions: What’s going on in Applications, Infrastructure, and Operational/Consumption Models”.

Register Here:

After the VMUG I will be spending some time with my good friend Raymond Overman – the internationally famous wood turner who has made some instruments for me in the past. My wife will be coming over for the Charlotte event, and we intend to spend the week over in Raleigh-Durham with friends of ours (the Atwells and the Lewises) before heading to the Pisgah National Forest, home of the Blue Ridge Mountains and the Blue Ridge Parkway, for some well-earned R&R in the mountains.



Posted by on May 15, 2015 in VMUG


EVO:RAIL – The vSphere Environment – The Metadata (Part1)

One of the most common questions I get is what the vSphere environment looks like after EVO:RAIL has done its configuration magic. I think this is quite understandable. After all many of us, including myself, have spent many years perfecting the build of the vSphere environment and naturally want to know what configuration the EVO:RAIL Team decided upon. I like to think about this from a “design” perspective. There’s no such thing as a definitive and single method of deploying vSphere (or any software for that matter). All configurations are based on designs that must take the hardware and software into consideration, and that’s precisely what the EVO:RAIL Team has done with VMware’s hyperconverged appliance.

When I’m asked this question by customers I often use the hardware resources that vSphere and EVO:RAIL present and start from there, although this can overlook certain object types that reside outside of CPU, memory, disk and network – I’m thinking of logical objects such as vCenter datacenters and cluster names. There are other objects created at the network layer which shouldn’t be overlooked, such as portgroups. If you like, you could refer to this as the “metadata” that makes up the vSphere environment.

So before I forget, let me cover them right away.

Logical Objects or “Metadata”

By default a net-new deployment of an EVO:RAIL appliance instantiates a clean copy of the vCenter Server Appliance. EVO:RAIL creates a vCenter datacenter object currently called “MARVIN-Datacenter” and a cluster called “MARVIN-Virtual-SAN-Cluster” followed by a very long UUID. Incidentally, this is the same UUID that you will see on the VSAN datastore, and it is generated by the Qualified EVO:RAIL Partner (QEP) during the factory build of the EVO:RAIL appliance.


A common question is whether this datacenter object and cluster object can be renamed to be something more meaningful. The answer is yes. The EVO:RAIL Configuration and Management engine does not use these text labels in any of its processes. Instead “Managed Object Reference” or MOREF values are used. These are system-generated values that remain the same even when objects like this are renamed. As for other vCenter objects such as datastore folders or VM folders, the EVO:RAIL engine does not create these. The System VMs that make up EVO:RAIL such as vCenter, Log Insight and our partner’s VMs are merely placed in the default “Discovered Virtual Machine” folder like so:


Similarly, the datastore that is created by VSAN can be renamed as well. And although technically renaming the “service-datastore” is possible, there’s really little point, as it cannot be used as permanent storage for virtual machines. It’s perhaps worth mentioning that while in the EVO:RAIL UI you cannot select which datastore VMs use, there is nothing to stop that happening if you use the vSphere Web Client or vSphere Desktop Client.


EVO:RAIL uses Standard Switches – as you might know, their portgroup names have always been case-sensitive, and need to be consistent from one ESXi host to another. Now, of course, EVO:RAIL ensures this consistency of configuration by gathering all your variables and applying them programmatically and consistently. The portgroup names themselves are trickier to change after the fact.


It would be relatively trivial to rename the virtual machine portgroups above – Staging, Development and Production. However, it would have to be done consistently across every ESXi host. Given how EVO:RAIL 1.1 now allows for up to 8 appliances with 32 nodes per cluster, that would not be a small amount of administration. If I were forced to make a change like that, I would probably use PowerCLI with a for-each loop to rename the portgroups for me. If you want an example of that – I have some on the VMUG Wiki Page here:

There are two examples there – one connecting to vCenter and creating a VLAN16 portgroup on every host in a vCenter, and another creating a range of VLAN portgroups (VLAN20-25) on every host in the cluster.
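I won’t reproduce the PowerCLI here, but purely to illustrate the shape of that for-each logic, here is the naming loop sketched in Python. The host names and VLAN range are made-up values; real automation would call PowerCLI’s New-VirtualPortGroup against each host:

```python
# Illustrative only: generate the (host, portgroup, VLAN ID) tuples that a
# PowerCLI for-each loop would apply to every ESXi host in the cluster.
hosts = [f"esxi-node{n:02d}" for n in range(1, 5)]   # hypothetical host names
vlan_range = range(20, 26)                            # VLAN20-25 as in the post

plan = [(host, f"VLAN{vid}", vid) for host in hosts for vid in vlan_range]

print(len(plan))    # 4 hosts x 6 VLANs = 24 portgroup operations
print(plan[0])      # ('esxi-node01', 'VLAN20', 20)
```

The point of the nested loop is the consistency requirement: the same portgroup name and VLAN ID must land on every host, or vMotion compatibility breaks.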

As for the EVO:RAIL generated portgroups such as “vCenter Server Network” and the vmkernel ports – I would recommend leaving these alone – unless you have an utterly compelling reason to do so. They aren’t exposed to those who create VMs and consume the resources of the EVO:RAIL.

vCenter Server Appliance (vCSA) Configuration

Finally, I want to draw your attention to the configuration of the vCenter Server Appliance (vCSA). What you might not notice is that, at the factory, the vCSA is modified and we allocate more resources to it than the default values contained in the .OVF or .OVA file. In EVO:RAIL the allocations are changed to 4 vCPUs and 16GB of memory. This is essentially a doubling of resources from the default vCSA, which would normally receive 2 vCPUs and 8GB of memory.

Well, that wraps up this post about vSphere inventory metadata – in my next post I will be looking at the resources of compute, datastores and networking in more detail….


Posted by on May 14, 2015 in EVO:RAIL


VMUG Wiki Update and Thank You


This blogpost is really one long thank you to the individual who helped me get the VMUG Wiki off the ground. His name is Jack Collins and I met him via a friend of mine in the town where I live. Jack’s been very helpful to me and the VMUG Wiki project by painstakingly converting the many books and blogposts that have contributed to the seeding of the VMUG Wiki prior to the launch. I pretty much realised I wouldn’t be able to convert my EUC and SRM books on my own, plus my commitments to VMware as part of my day job would prevent me from completing that process on time.

Anyway, I managed to secure from those very friendly and helpful “VMware Press” people a collection of books all about VMware technologies as a thank you. Jack is very much at the beginning of his IT career and keen to learn more about our technologies. So I see him as the next generation of people who are going to move our industry forward.

It’s really thanks to Jack that we have an (almost) completed vSphere 5.5 Wiki, Site Recovery Wiki and VMware View Wiki.

Thank you once again, Jack! :-)



Posted by on May 7, 2015 in VMUG, VMUG Wiki


New Skinny Linux Distro Available – Includes VMware Tools


I have a new skinny Linux distribution available on my site. I’m not the author of this release – a colleague of mine, Doug Baer, originally put it together. It’s used primarily in our hands-on-lab environment where you need some small VMs to run on top of a nested vESX environment. These nested VMs that run within the context of a vESX environment are sometimes referred to as vVMs. This whole configuration, where a physical system (Workstation, Fusion, pESX) runs VMware ESX in a VM, has been dubbed by some in the community as “vInception”.

This skinny Linux distro uses MicroCore Linux and has VMware Tools installed using the “open source” edition, which means it’s redistributable in this manner. The environment is non-persistent except for /home and /opt.

I’ve taken this VM and created a .OVA for you to download – and you might find it useful in your home lab where memory resources are limited.

The VM has 1 vCPU, 64MB RAM and a single 1GB virtual disk that is thinly provisioned – this VM is available in the download section of my site.




Posted by on May 7, 2015 in Announcements


Bonjour, je m’appelle EVO:RAIL

One of my favorite gags at UK VMUGs when I’m asked to present on EVO:RAIL is to start a demo off in a different language.


It’s largely to put the wind up my fellow Brits, as we are somewhat notorious for not being able to speak other European languages. Generally, the British response to being confronted by someone who cannot (or in the case of the French WILL not!) speak English – is TO SPEAK MORE SLOWLY AND LOUDLY AND USE WILD SWEEPING GESTURES!!! My joke is usually to say that EVO:RAIL is soooo easy to configure you could do it in a language which you don’t understand. Look I said it was a joke, I didn’t promise that it would be funny, alright?

Anyway…. The main thing to say is that apparently a bit of cash was burned in order to provide multi-language support for both the EVO:RAIL Configuration UI and the Management UI. By default we use the browser’s default language settings to display the page. Sadly, most people don’t bother with those web-browser language settings – so all they see is the U.S. English version. [Notice how I say U.S. English, as distinct from British English, Australian English and Canadian English.]

A number of translations were made including:

  • French = FR
  • German = DE
  • Japanese = JA
  • Korean = KO
  • Simplified Chinese = zh-Hans
  • Traditional Chinese = zh-Hant

It is possible to dial up these translations by passing the ISO language codes to the web-browser with the /?lang=CODE syntax – for example, French would be /?lang=FR.
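As a sketch of how that override is built – the appliance address below is a made-up placeholder, and the codes come from the table above:

```python
# Build the language-override URL for the EVO:RAIL UI.
# "evorail.example.local" is a hypothetical appliance address.
LANG_CODES = {
    "French": "FR", "German": "DE", "Japanese": "JA", "Korean": "KO",
    "Simplified Chinese": "zh-Hans", "Traditional Chinese": "zh-Hant",
}

def ui_url(host: str, language: str) -> str:
    return f"https://{host}/?lang={LANG_CODES[language]}"

print(ui_url("evorail.example.local", "French"))
# https://evorail.example.local/?lang=FR
```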

Web-browsers have their own places for setting language preference. This varies between Windows, Linux and the Mac – and from browser to browser. Don’t cha just love the consistency that web-based platforms deliver? ;-)

Firefox on the Mac:

[Screenshot: Firefox language preferences on the Mac]

Google Chrome on the Mac:

[Screenshot: Google Chrome language preferences on the Mac]


Impress your colleagues, friends and family with your impeccable multi-lingual skills! What I cannot vouch for is whether these translations are any good. To be honest, most U.S.-based software companies do not have a glorious reputation for other languages when it comes to product documentation and the product itself. The less said about special characters in passwords the better. Let’s just gloss over that one, shall we?

I was once in Athens, Greece (just in case you thought I was referring to the one in Tennessee!) teaching a Virtual Infrastructure “Install and Configure” (ESX3.x/vCenter 2.x) course when I spied a Greek version of Windows XP. I asked my student what he thought of the translation and he said it was “Total ΒΘζζΔΧς”.


Posted by on May 5, 2015 in EVO:RAIL


Chinwag Reloaded with Amy Lewis (@Commsninja)


This week’s chinwaggie is Amy Lewis (@Commsninja). Her day job is with , and she used to work for Cisco. I first met her some years ago when I was speaking at the VMUG at Research Triangle Park (RTP), one of the largest research parks in the world, located near Durham, Raleigh, and Chapel Hill in North Carolina. I was out there at the time visiting NetApp, and getting to know the team out there that was focused on VMware integration. There are a couple of ways you might know Amy. She’s one of the three voices that host the “Geek Whisperers” podcast ( She’s also the lady who spearheaded Cisco’s “Engineers Unplugged” series of videos – where two techies just talk to each other, albeit helped by a whiteboard and unicorns. Amy blogs at – and you’ll find her writing about her passion for food as much as technology.



Posted by on April 27, 2015 in Chinwag


Introducing the VMware EVO:RAIL vSphere Loyalty Program

Today, we are delighted to announce the launch of the VMware EVO:RAIL vSphere Loyalty Program.

We developed the VMware EVO:RAIL vSphere Loyalty Program to allow our VMware vSphere customers to apply their licenses to the purchase of VMware EVO:RAIL appliances from our nine Qualified EVO:RAIL Partners. This program enables our customers to preserve their existing investment in VMware software, while reducing the overall cost of a VMware EVO:RAIL appliance purchase.

Customers with licenses obtained through Enterprise Licensing Agreements (ELAs), OEM partners, distribution, or other resale channels, are eligible for the program. Customers will also require a minimum of 8 CPU vSphere Enterprise Plus licenses to commit to one VMware EVO:RAIL appliance.

Read on….


Posted by on April 23, 2015 in Announcements


EVO:RAIL – Under The Covers – Networking (Part3)

This is my third and final blog post about the networking side of the configuration of EVO:RAIL. In this blog post I want to talk about the current configuration of the networking from a vSphere Virtual Switch perspective. But before I get into that here is a reminder of the physical world. Each EVO:RAIL node (there are four of them per appliance) has two 10Gbps network cards which are presented as vmnic0 and vmnic1.

In the diagram below you can see that vmnic0 of node01/02/03/04 is being patched to one physical switch, and vmnic1 is patched to a second – with both switches linked by ISL interfaces for switch-to-switch traffic. This ensures network availability to the appliance.

So how are these physical interfaces used by vSphere once EVO:RAIL has done its work of configuring the appliance itself? Firstly, EVO:RAIL 1.x uses the vSphere Standard Switch. To be more specific, it uses vSwitch0, which is built into the VMware ESXi hypervisor, and patches both “vmnic0” and “vmnic1” to the Standard Switch. This means any traffic on vSwitch0 has network fault tolerance and redundancy.

You can see this configuration on any of the four nodes that make up an EVO:RAIL system from the Web Client. If you select an ESXi host in the Inventory, select the Manage tab and Networking – you can see the vSwitch0 in the interface and see that vmnic0 and vmnic1 are patched to it.

[Screenshot: vSwitch0 with vmnic0 and vmnic1 in the vSphere Web Client]

The “Virtual Machine” portgroups called “Staging”, “Development” and “Production” were created from names supplied by the customer in the EVO:RAIL Configuration UI. The other “vmkernel” portgroups you see, such as “Virtual SAN” and so on, are system-generated by the EVO:RAIL Configuration Engine. Of course, customers are free to add extra virtual machine portgroups as they define new VLANs on the physical switch. EVO:RAIL does support being configured for external IP storage such as NFS or iSCSI using the standard vSphere clients. But it’s perhaps best not to change the settings of the system-generated “vmkernel” portgroups unless you are very experienced with vSphere and know what you’re doing, as casual changes there could cause problems if they aren’t correctly thought through.

It’s worth mentioning that the configuration of the Standard vSwitch0 doesn’t end there, and that per-portgroup settings are applied as well. Essentially, what happens is that the vCenter Server Network, vSphere vMotion, Management Network and MARVIN Management networks are pegged to use vmnic0 as their “active” adapter, with vmnic1 set to be “standby”. You can view this configuration by selecting one of these portgroups, and clicking the ‘pencil icon’ to access the portgroup settings dialog box. In the screen grab below I opened the settings for the vSphere vMotion vmkernel portgroups, and selected the “Teaming and failover” section. Here you can see that per-portgroup settings have been used to peg the vMotion process to vmnic0. This means when all things are good the traffic prefers to use vmnic0. vMotion traffic would only traverse the vmnic1 interface if the vmnic0 failed, or if the physical switch the vmnic1 was attached to failed.

[Screenshot: vSphere vMotion portgroup – Teaming and failover settings]

In contrast the “Virtual SAN” vmkernel portgroup has the reverse configuration – such that vmnic1 is its preferred/active interface, and vmnic0 is the standby.

[Screenshot: Virtual SAN portgroup – Teaming and failover settings]
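Taken together, the teaming policies amount to a simple mirror-image table. Purely as an illustration (portgroup names as described above), the arrangement can be sanity-checked in a few lines of Python:

```python
# Active/standby uplink assignments on vSwitch0, per the post.
teaming = {
    "vCenter Server Network": ("vmnic0", "vmnic1"),
    "Management Network":     ("vmnic0", "vmnic1"),
    "MARVIN Management":      ("vmnic0", "vmnic1"),
    "vSphere vMotion":        ("vmnic0", "vmnic1"),
    "Virtual SAN":            ("vmnic1", "vmnic0"),  # the reverse: VSAN gets its own NIC
}

# Sanity check: Virtual SAN is the only portgroup active on vmnic1, so in
# normal operation it never shares bandwidth with the other traffic types.
vsan_active = teaming["Virtual SAN"][0]
others = [active for pg, (active, _) in teaming.items() if pg != "Virtual SAN"]
assert vsan_active == "vmnic1" and all(a == "vmnic0" for a in others)
print("Virtual SAN isolated on", vsan_active)
```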

Clearly, there are a couple of expected outcomes from this style of configuration. Firstly, Virtual SAN (or VSAN if you prefer) has a dedicated 10Gbps NIC assigned to it that it does not share with any other process within the vSphere stack. This means it has exclusive access to all the bandwidth that NIC can provide, on a separate physical switch from other traffic. The only time that all the traffic can be on the same NIC is if there is a NIC failure or switch failure.

Secondly, as the vMotion, Management and Virtual Machine traffic would normally reside on vmnic0, some care has to be taken to make sure that vMotion itself doesn’t ‘tread on the toes’ of other traffic types. This is something the EVO:RAIL engine takes care of automatically for you: the EVO:RAIL Configuration engine caps the vMotion process at 4Gbps of bandwidth.

[Screenshot: vSphere vMotion portgroup – traffic shaping settings]
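One small gotcha if you go looking for this in the vSphere Web Client: the Standard Switch traffic-shaping dialog takes its values in Kbit/s, so a 4Gbps cap appears as a much larger number. A quick conversion sketch (assuming 1Gbps = 1,000,000 Kbit/s):

```python
def gbps_to_kbps(gbps: float) -> int:
    # vSphere Standard Switch traffic shaping is configured in Kbit/s.
    return int(gbps * 1_000_000)

print(gbps_to_kbps(4))   # 4000000 Kbit/s average bandwidth for vMotion
```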

Finally, you might be interested to know the role and function of the “MARVIN Management” portgroup. In case you don’t know “MARVIN” was the original project name for EVO:RAIL prior to GA. I suspect that over time we will be replacing this name with the official name of EVO:RAIL. As you can see we have two management networks. The portgroup called “Management Network” holds the customer’s static IP address for each ESXi host in the cluster. This is the primary management port. You could consider the “MARVIN Management” as a type of “Appliance Network”, a network that is used internally by the EVO:RAIL engine for its own internal communications. This means that, should a customer tamper with their own “Management Network”, internal EVO:RAIL processes continue. In an environment where there is no DHCP server residing on the management VLAN, you would expect to see this “MARVIN Management” portgroup default to a 169.254.x.y address. However, if there were a DHCP Server running on the default management network then it would pick up an IPv4 address from it.

Note: In this case as there was no DHCP server running on the network the ESXi host was assigned a ‘link local’ or ‘auto-IP’ IP address. This isn’t a problem. EVO:RAIL uses the VMware Loudmouth service to discover EVO:RAIL nodes on the network. The EVO:RAIL engine will take care of configuring the host and vCenter with the customer IP configuration.

[Screenshot: MARVIN Management portgroup with a link-local address]
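Incidentally, Python’s standard `ipaddress` module can confirm that a 169.254.x.y address falls in the link-local range (the addresses below are just examples):

```python
import ipaddress

# Example auto-IP address of the kind seen on the MARVIN Management portgroup.
addr = ipaddress.IPv4Address("169.254.1.10")
print(addr.is_link_local)    # True: 169.254.0.0/16 is the link-local range

# A customer-assigned management address would not be link-local:
print(ipaddress.IPv4Address("192.168.10.20").is_link_local)   # False
```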


So to conclude – EVO:RAIL 1.x currently uses Standard Switches in vSphere for networking. All traffic is pegged to vmnic0, except for the VSAN network, which gets a dedicated 10Gbps NIC. Customers are free to add additional portgroups for the virtual machine network, but it’s perhaps wise to leave the ‘system generated’ vmkernel ports alone. EVO:RAIL has two management networks – one using the static IP pool provided by the customer, and a second which is used by the appliance itself.


Posted by on April 22, 2015 in EVO:RAIL


Introducing VMUG’s WIKI


Hi there, and thank you for reading this. It’s with great delight that I’m able to write to you today about a big project for VMUG and our great community. It’s called “WIKI” and I want to explain what it is and what inspired its creation.

Some years ago, John Troyer floated the idea of a website based on the WIKI format – all about VMware technologies. The idea was to have an independent encyclopedia of all things VMware – with the content being created, edited and maintained by our community. John’s suggestion stuck with me even whilst I was writing traditional books myself. So the idea of a VMUG WIKI has been at the back of my mind for a long time – and I think now might just be the right time for its launch. So why a WIKI?

Firstly, we have all seen in recent years the rapidity of software changes in the datacenter. Nowadays, by the time a book is written and published, it’s likely to be outdated by a new release of that software within a short time frame. Even more than in the past, conventional books need to be timed like a military campaign to be released at the optimum point to reach the biggest audiences (and let’s face it, sales). My fellow authors will tell you that the challenge of writing a book on a technical subject is quite an undertaking. My peers who have written books will attest to that moment of dread when, within an hour of publication, they realize they have a typo or technical error on the first page! For me this situation cries out for a method of delivering content that is live, dynamic and updatable at any point in time – either by the “Originating Author” or by the keen reader. Does this mean the end of conventional technology books? No, in fact I’m avidly reading Duncan Epping and Cormac Hogan’s VSAN book from VMware Press. I feel that there will always be a demand for classic paperbacks, as well as their digital equivalents. So try to see the VMUG WIKI as complementing these formats, rather than supplanting them – it’s just another educational resource for your virtual bookshelf.

Secondly, in the early days of virtualization and VMware it was possible for one guy or maybe a couple of guys to be able to create a weighty tome about all VMware technologies. That now seems inconceivable. The breadth and depth of VMware’s product portfolio is so vast and expansive, it’s hard to see how anyone could have a handle on the detail of the Big Vision. Sure, the high-level perspective of VMware’s Software-Defined Datacenter (SDDC) is something we can all understand conceptually, but as always the devil resides in the detail in how these technologies are installed, configured and interact with each other. As we have seen that has led to the rise and rise of books that specialize in one given technology. That’s a great development because it really needs someone with laser vision (or should that be tunnel vision?) to create technically dense, but at the same time relevant content.

Thirdly, in recent years I’ve seen our community create content that isn’t really suitable for a blog format. Often these posts form a very long series. Looking back on technical blogging – the original goal was to produce much shorter material that covered discrete experiences – usually some sort of technical problem and how it was resolved – or as a platform for expressing a unique personal opinion about development in our industry. In short, WordPress or a similar blogging platform isn’t always ideal for longer form content that evolves over weeks and months. This seems to suggest to me that a WIKI format might be more useful especially since it automatically creates a Table Of Contents to ease the navigation process. And being web-based it means it’s accessible to all devices from the laptop to the tablet, without the worry about file formats which bedevils e-readers.

Finally, I have always struggled with the idea that a small select bunch of “vRockstars” act as the fount of all knowledge about VMware. For sure there will always be a role for traditional books and experts in their fields – but I personally feel that being dubbed a “vRockstar” is a bit of a double-edged sword. Especially as we all know we learn something new every day, and wonder how on earth we got through life or work without knowing that fact. I know every day I Google for the answers to problems – and nine times out of ten it’s one of you in the community who has the answer. I would love to explode the whole myth of the “vRockstar,” and to recognize that as a community together we have more brainpower, knowledge and experience than a relatively small cabal of individuals.

So how does VMUG WIKI work and what is VMUG’s vision for the future? The VMUG WIKI is powered by the same version of WIKI software that Wikipedia uses. Everyone who is a member of the user group, and has a valid login to, by default will have the rights to both create and edit new and existing content. That’s right, you can now correct all my spelling mistakes and typos live online! Anyone who creates brand new content is given the permanent recognition of being the “Originating Author” – so you will always be recognized for your contribution.

I’ve been seeding out the VMUG WIKI for some months. The content I’ve provided is very much in the “Mike Laverick” style – with tons of screen grabs, practical how-to’s with step-by-step instructions – coupled with warnings and caveats about typical errors and how to resolve them. Often the content I’ve created is supplemented by video material as well – in two formats: “Show Me How”, which is a brief video demonstration of the technology in question, together with a “Discuss The Options” video where I interview a notable person in our community for their advice and opinions on the best practices and approaches. In short, I’ve tried to pull together into a single location online content that is often distributed and fragmented. In recent months I have been able to convert the “Building End-User Computing Solutions with VMware View” book that I co-authored with Barry Coombs into the WIKI format. Additionally, VMware Press have very kindly released my “Administering VMware Site Recovery Manager 5.0” book to be converted into the WIKI format as well. Finally, my vSphere “Back To Basics” series has now been relocated to the VMUG Wiki as well. I’m hoping these contributions will act as an inspiration for others to join me and donate content as well.

However, please don’t get the impression that this is a “Mike Laverick”-only production. When I was discussing the idea with my friends in the Global VMUG community, it was important to me that the VMUG WIKI should be not-for-profit, and that once launched it should be steered and controlled by an independent VMUG WIKI Foundation. My chief concern is to ensure that the VMUG WIKI always remains free to use, and free to contribute to – and isn’t exploited for financial gain. I’m sure you will agree that the Foundation’s members are people who have a long history of contributing to the community and who we can all rely on to make sure its founding principles are respected. They are – Scott Lowe, Edward Haletky, David Davis, Eric Sloof, Jason Boche, and Eric Siebert.

In many respects the inspiration for the VMUG WIKI is the idealism of Wikipedia’s founder, Jimmy Wales. It’s that same idealism that I hope to inspire in my peers and colleagues. By all means carry on creating your excellent blog content, and building your personal brand. But when you have completed that mammoth series of blogposts, please think about donating in a charitable way some of your content to the VMUG WIKI, and by doing so create a testament to your contribution to our community. For existing authors who have an existing book – do you still own the copyright? Even if your content is somewhat dated, it is still useful. You or the community could work together to update that content. As for the wider community – please do contribute. One way to do so is to support this project by promoting its goals to your peers. Another way is to stop by now and then, to review the new content – I hope you will learn something new. But the best way you can contribute is by correcting content that has inaccuracies or errors – and by adding additional material yourself to improve the content that is already there.

As for me, I’m marking my commitment to the VMUG WIKI project by deciding that today marks the end of me writing conventional books. From here on in, my focus will be doing my best to add, extend and maintain the content I create on the VMUG WIKI. Of course, I will still be blogging at, and it’s likely I will continue to serialize my “Back To Basics” series on there. Once each “chapter” is completed, I will be donating the content in full to the VMUG WIKI project. Over the next couple of days, I will be modifying the older “Back To Basics” posts and redirecting them to the relevant location on the VMUG WIKI. I will also copy over my older vCloud Director content as well. From a logistics perspective we actually have two WIKIs – a DEV and a PROD. The DEV is used by major contributors (such as myself) who want a semi-private zone to develop extensive content – and we have a very simple (and I mean very simple!) method of copying content from DEV to PROD.

I hope you will ALL join me in making the VMUG WIKI a long-term success, and that I won’t be left to paint the Golden Gate Bridge on my own!


Posted by on April 21, 2015 in Announcements
