Well, actually I was on real telly just a couple of weeks ago in the BBC TV programme called “Marvellous”. If you squint and look at the back row of the choir you might see me opening my big fat gob (nothing changes there, Mike, I hear you all say!). Last week I had a more close-up opportunity to be interviewed by VMworld TV by the one, the only, the legendary Eric Sloof. Here Eric quizzed me about my move into the EVO:RAIL team, the EVO:RAIL Challenge and my previous life as a freelance VMware Certified Instructor (VCI). Enjoy!
I’m attending three VMUGs this November and at each one I’ll be squawking about EVO:RAIL. I’m hoping to be able to pull together a “VMUG” version of the EVO:RAIL content, one that dispenses with the corporate deck and helps me put across my own viewpoint. That’s very much dependent on what time I have over the coming weeks. I’m super busy at the moment finishing up a new version of the EVO:RAIL HOL, as well as some internal work I have to do to help our partners and our own staff get up to speed.
Here’s my itinerary:
UK National VMUG User Conference
Tuesday 18 November 2014
National Motorcycle Museum
Coventry Road Bickenhill
Solihull, West Midlands B92 0EJ
Agenda & Registration
Once again this event will have a vCurry night with a vQuiz. I’m pleased to say my wife, Carmel, will be at the vCurry night too!
21st VMUGBE+ Meeting (Antwerp)
Friday 21st November 2014
Filip Williotstraat 9
Agenda & Registration
Again, Carmel will be joining me on the trip – although she will be discovering the delights of old Antwerp. After the VMUG is done she and I will be spending the weekend in Bruges – a place we’ve always wanted to visit – and we hope to get to the Menin Gate to pay our respects.
And finally, I will be crossing the border to Scotland for the Edinburgh VMUG too!
Scotland User Group (Edinburgh)
Thursday 27th November 2014
The Carlton Hotel
19 North Bridge
City of Edinburgh EH1 1SD
Agenda (TBA) and Registration
Since joining the EVO:RAIL team eight short and eventful weeks ago, I’ve been kept awake at night thinking about hyper-converged virtualization – because when I’m excited about a technology from VMware, I often can’t locate the off switch for my brain! I’ve spent the last couple of weeks on the developer side of the hands-on-lab, and after attending a local Proof of Concept meeting I’m starting to get a feel for what I think needs to be asked. In addition to this I’m doing a round of VMUGs and podcasts – and I’ve been getting all manner of questions fired at me. Some questions I can answer right now, some I have to find the answers for, and for others I sit back and have a good think about what the right answer would be! This is my own personal view on what I think customers should be asking themselves, and an attempt to relate those questions back to EVO:RAIL. This whole process began with being thrown in at the deep end speaking to my own colleagues at the VMware Tech Summit EXPO (it’s like an internal-only VMworld for SEs/TAMs) and then later on the floor of the Solutions Exchange at VMworld. Incidentally, that was my first bit of booth babe duty proper in my life. I left the event with a tremendous amount of respect for the folks who do these huge EXPO-style shows. It’s incredibly hard work, but for me it was made easy by the sheer volume of interest in EVO:RAIL. I was glad I wasn’t in one of those small booths on the periphery of the event doing 10am-5pm straight for four days!
One of my early jokes about convergence and hyper-convergence was that, despite the name, as an industry no one has ‘converged’ in the same way, either from a technology standpoint or a delivery model. In short, the converged marketplace is ironically a very divergent one – hyper-divergent, even. Geddit?
Q. What’s the architecture model for your vendor’s (hyper)convergence?
If you look at the converged marketplace you will find VCE vBlock, NetApp/Cisco FlexPod, HP Matrix, Dell vStart and so on. Each of those solutions is constructed very differently, and so is the go-to-market strategy. A converged model is basically one that brings together what I like to call the three S’s of Servers/Switches/Storage, each as discrete physical components, albeit made much easier to deploy than buying all the bits separately and rigging them together.
Similarly, on the surface hyper-converged systems all look very similar, but the servers and storage are delivered within the context of a single chassis, where a combination of local HDDs/SSDs is brought together to provide the storage for virtual machines. This model generally benefits from a lower overall entry price point, and allows you to scale out (for compute AND storage) by adding more appliances. Interestingly, most hyper-converged solutions do not bundle a physical switch – that’s something you are supposed to have already. It’s well worth spending time researching the network requirements, both in terms of bandwidth and the features required on that physical switch, before jumping in with both feet. [More about these network requirements in later posts!]
For me the big architectural difference between hyper-converged vendors is that most hyper-converged systems deploy some type of “Controller” VM that resides on each physical appliance – call it a virtual appliance if you like – running on top of the physical box. This “Controller” VM is granted access to the underlying physical storage, and by hook or by crook it then presents the storage back in a loop-back fashion – not just to the host it’s running on, but to the entire cluster. This has to be done using a protocol recognizable by the hypervisor (in my case vSphere), and most commonly this is an NFS export, although there are some vendors who are using iSCSI – and some that support SMB because they support Microsoft Hyper-V (boo, hiss…).
In contrast, EVO:RAIL uses VMware’s Virtual SAN, which is embedded into the vSphere platform and resides in the VMware ESXi kernel. Just to be crystal clear: there’s no “Controller” VM in EVO:RAIL. Once the EVO:RAIL configuration is completed you have precisely the same version of vSphere, vCenter, ESXi and Virtual SAN you would have if you’d taken the longer route of building your own VSAN from the HCL, or if you’d acquired a VSAN Ready Node and manually installed and configured all the software.
Now, I’m NOT saying that one architecture is better than the other – in the current climate that would be incendiary. What I am saying is that they are DIFFERENT. Customers will need to look at these different approaches and decide for themselves which offers the best match for their needs and requirements – balanced against the simplicity of deployment and support. Without beating my chest too much about VMware, I think you’ll know which one I regard as the more elegant approach.
Q. Does your hyper-convergence vendor seek to complement or supplant your existing infrastructure?
I’m uneasy with the idea that hyper-convergence can produce the “Jesus Appliance” that is the panacea for all your problems. I’ve been around the industry long enough to know that every 3 or 4 years there’s a new magic pill to solve all datacenter problems. The reality is that most new game-changing technologies generally fix one set of challenges – only to add new ones for the business to wrestle with. Such is life.
Personally, I think it’s a mistake to paint the converged “Three S” model of Servers/Switches/Storage out of the equation altogether. For a certain set of workloads or customer requirements I think there’s still a truckload of value in the model. I see hyper-convergence as complementing a customer’s existing infrastructure rather than utterly supplanting it (although there will be use cases where it can and does). That includes both building your three-stack model using different vendors, and going down the converged route with something like a FlexPod or vBlock.
I’m pleased to say that there is some healthy skepticism and debate out there around hyper-convergence – a good place to start is with a dose of ‘wake up and smell the bacon’. I think Christian Mohn’s article “Opinion: Is Hyper-converged the be-all, end-all? No.” is just the sort of reality check our community is famous for. Christian correctly points out that with the hyper-converged model, as you add more compute you add more storage, and as you add more storage you add more compute. What about a customer who doesn’t consume these resources in equal measure? What about a customer whose data footprint is increasing faster than their compute needs? In a way that’s the point of hyper-convergence – it’s meant to simplify your consumption. But if your consumption is more nuanced than hyper-convergence allows for, will it always be the best fit in all cases? There’s a danger (as with all solutions) that if all you have is a hammer, every problem looks like a nail.
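Christian’s lockstep point can be sketched with some back-of-envelope arithmetic. The per-appliance figures below are made-up illustrative numbers, not any vendor’s actual spec:

```python
import math

# Hypothetical per-appliance capacity (illustrative numbers only,
# not taken from any vendor's datasheet).
APPLIANCE_CORES = 48   # compute per appliance
APPLIANCE_TB = 14      # usable storage per appliance

def appliances_needed(cores_needed, tb_needed):
    # You must buy enough appliances to satisfy BOTH dimensions,
    # because compute and storage arrive in fixed lockstep.
    return max(math.ceil(cores_needed / APPLIANCE_CORES),
               math.ceil(tb_needed / APPLIANCE_TB))

# A storage-heavy customer: modest compute needs, big data footprint.
n = appliances_needed(cores_needed=60, tb_needed=140)
print(n)                         # 10 appliances, driven entirely by storage
print(n * APPLIANCE_CORES - 60)  # 420 cores bought but never needed
```

For a customer whose consumption is balanced, the same sum comes out neatly; for the storage-heavy one, the spare compute is the price of simplicity.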
I found one of the most well-argued and well-articulated counter-viewpoints on hyper-convergence is Andy Warfield of CoHo Data’s “Hyperconvergence is a lazy ideal”. In fact I’d go so far as to say that Andy’s post is one of the best-written blog posts I’ve read in a long while. And I’m a coffee drinker. If you want a contrasting perspective, then Chuck Hollis’ deconstruction of Storage Swiss, “The Problem with Storage Swiss Analysis on VSAN”, is a good read. If you’re looking for an independent comparison of differing hyper-converged solutions, Trevor Pott’s summary on Spiceworks is both an interesting and amusing read. Just to be clear, I don’t agree with everything these guys say, but they make for interesting reading for precisely that reason. I like people who make me think, and make me laugh. Generally, I’m against the concept of mindless agreement – I think it leads to dangerous tunnel vision.
As for myself, a conversation I had with a customer at VMworld might illustrate my point better. They are a large holding company in the US, with a couple of very densely populated datacenters using the three S’s model – but they have over 300 subsidiaries dotted around the country. Historically, the subsidiaries have been “managed” as separate entities. They’ve even had their own budgets to blow on IT resources, and for legal purposes they’ve had clear blue water from the holding company. Unfortunately, this has led to non-standard configurations at each of the subsidiaries, lots of re-inventing the wheel and wider support issues, as each subsidiary makes its own decisions. The subsidiaries are used to having their own gear on site and they regard that as an important “asset” (a concept I find difficult to understand, but I’ve learned to bend with the wind when it comes to ideologically held beliefs – for me anything that devalues and depreciates over time can hardly be classed as an asset). But it makes support a nightmare, and every other month the gear at one or other subsidiary is expiring – and they keep on asking the holding company for advice about what to do in the future…
Now, one solution would be for the holding company to become a private cloud provider – hosting each subsidiary in a multi-tenancy cloud. However, there are some upfront cost issues to consider here, and it breaks the history of on-premise(s) resources. Additionally, some subsidiaries could choose to ignore this private cloud altogether, and carry on spending their money upgrading local gear. And for the holding company there is a perceived risk of what happens if the subsidiaries don’t buy in… What if you build a cloud and the ‘owner-occupiers’ choose to stay in their own homes, rather than ‘renting’ an apartment in the sky?
So for them a combination of Three-S convergence at the corporate datacenter with hyper-convergence at the subsidiaries is a model that works well. The on-ramp is not too steep. The holding company could offer EVO:RAIL as a solution to the subsidiaries – whilst allowing each subsidiary to select its preferred supplier out of the many Qualified EVO:RAIL Partners (QEPs). Over time, as a subsidiary’s gear goes out of date, the holding company can offer them EVO:RAIL – and that’s how they will get a consistently configured environment, whilst the subsidiary holds on to what it values. Yes, this sounds like I’m promoting EVO:RAIL, but hey, I’m in that team so you would expect me to say that!
The point of this little story is that it demonstrates simplistic “SAN Killer” statements are to be treated with an air of caution. There’s plenty of life in the old three S’s dog yet. It’s like Pat Gelsinger said at VMworld – so far IT has been all about either/or equations, and that’s a model that leads to some unhappy compromises in the datacenter. At VMware we want to allow customers to have their cake and eat it – one size does not fit all.
Q. Does the hyper-converged vendor’s business model resonate with you?
I’m not a big fan of touting the “vendor lock-in” line. It’s generally associated with FUD arguments. Occasionally, I’ve heard a customer raise concerns about vendor lock-in with VMware, only to ignore the other places where they seem totally comfortable with being ‘shackled’ to another vendor. Ah, they say – that’s part of our “strategy”, as if by labeling something a “strategy” you can automagically make it disappear in a puff of logic and verbal gymnastics.
What I do think is interesting is that 99% of hyper-converged vendors are the sole supplier of their appliance. After all, it’s much more challenging to develop a partner-led model than merely signing up channel partners. If you’re a company with the sort of influence and contacts that VMware has, it can be done. It’s not the first time that VMware has helped create multi-vendor programs that bring technology to market – Site Recovery Manager, VAAI and VASA are all great examples. But more importantly I believe that, by not getting into the hardware game directly with EVO:RAIL, VMware has created a competitive marketplace – both between the partners, and with the rest of the hyper-converged industry. I’d go so far as to say that it isn’t VMware who is competing directly in the hyper-converged market, but its partners, and I think this is brilliant for customers. Competition drives innovation and, in the main, makes for more interesting negotiations on price. And it always is a negotiation, isn’t it? I mean, if you’re buying 1,000 hyper-converged appliances you’d expect a negotiation, wouldn’t you? If you are buying just one – well, that’s a different matter…
But putting all that aside, I think the main benefit of the EVO:RAIL business model is being able to deal with truly global hardware providers who have been in the game for decades. For some customers it means they can also leverage their existing relationships with the likes of Dell, EMC, HP, Fujitsu and so on.
Q. Are your hyper-converged appliance and hypervisor licenses included in one single SKU?
You might be surprised to know that some hyper-converged appliances ship with no hypervisor at all. Instead you have to use secondary tools to get the hypervisor onto the unit. To be fair, from what I’ve heard this is a relatively easy and trivial step – but it is an additional step nonetheless. Other vendors install an evaluation copy of VMware ESXi, and leave it to the customer to bring licenses to the table. That’s fine if you have an ELA, or enough CPU socket licenses left in vCenter to just add the host and license it. In contrast, EVO:RAIL is an all-inclusive licensing model. The box ships with vSphere Enterprise Plus 5.5 U2 and includes the licenses needed for vCenter, ESXi, VMware VSAN and Log Insight. License the appliance, and you’ve licensed the entire stack. The setup should take less than 15 mins, if all is in place from a networking perspective. It’s a deployment model that is dead simple, and could potentially redefine how folks acquire the vSphere platform.
[This part actually comes from a previous blog post – but I felt repeating it here works.]
The truth is installing VMware ESXi is a totally trivial event – the fun starts in the post-configuration phases. That’s why I think EVO:RAIL will be successful. Looking back over the years, I’ve personally done a lot of automation. It started with simple “bash” shell scripts in ESX 2.x, and then evolved to using the UDA to install ESXi 3.x from a PXE boot environment with the esxcfg- commands. Around the time of vSphere 4 I moved away from bash shell scripting to building out environments with PowerCLI. It has literally hundreds of cmdlets and can handle not just ESXi but vCenter configuration too. I burned a lot of time building and testing these various deployment methods. Now EVO:RAIL has come along and allows me to do all that in less than 15 mins.
For me that doesn’t mean all that previous hard work has been for naught – after all, I believe there are still legs in other models for delivering infrastructure. I will still support those methods, but what EVO:RAIL has delivered is a much more automated, standardized and simpler method of doing the same thing. As a former independent, it always sort of irked me that VMware didn’t have a pre-packaged, shrink-wrapped method of putting down vSphere, and it was sort of left to the community to develop its own methodology. The trouble with that is everyone has his or her own personal taste on how that should be done. And we all know that leads to things not being standard between organizations, and in some cases within organizations. Despite ITIL and change-management controls, configuration drift from one BU or geo to another is a reality for many organizations. I see EVO:RAIL as offering not just a hyper-converged consumption model, but an opportunity to standardize – especially for companies with lots of branch offices and remote locations.
One of the things I didn’t get across in my previous post about “I Don’t Believe IT” was those capital letters. It’s a bit of a bad pun – “I don’t believe Information Technology”. Basically, this series is a homage to my ever-increasing “Grumpy Old Man” syndrome about technology. One of the slightly depressing things about being in IT is the ludicrous optimism that abounds in the area of technology. It’s as if people think that Technology will always ride into town and save the day. I don’t really see it that way.
Don’t get me wrong, I’m an eternal optimist by predilection – but what aggrieves me is the blind faith people put in technology. It seems people are all too willing to forget that we are monkeys with monkey brains, and human flaws are often revealed in flawed technology and flawed business processes.
So anyway, this week’s “I Don’t Believe IT” concerns our friend (or enemy) Apple’s iPhoto. I’m lazy, you see, and tend to use the default apps that ship with the Mac. Although somewhere between Mountain Lion and Mavericks, iPhoto stopped being free to new users, and now you have to pay for the darn thing. Here’s the thing – when you take a photo in iPhoto and send it to the trash – it doesn’t actually delete it.
I’ve noticed that if you select an “event” and choose File, Reveal in Finder, Original – you’ll find that the files are still cuffing there!
Why? WTF. If I send something to the trash, it should be deleted, or at least be sent to the Trash can. I’ve been remiss in trying to work out WHY this happens, or how to actually remove these orphaned and unwanted image files (some being anywhere from 1MB to 5MB, depending on the format used on my iPhone).
Things came to a head this weekend, when I found my SSD drive was almost full. So I decided to google iPhoto – as I thought that might be a good place to try and free up some precious space. It turns out iPhoto has its own “empty the trash” option – one I’d never heard of before. It’s not surprising, as it’s not in the main File/Edit/Photos menu bar, but under the iPhoto menu itself.
I wasn’t disappointed. I had 4,500 orphaned files. Emptying the very special iPhoto Trash freed up 5GB of space.
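A quick back-of-envelope check on those numbers (rough arithmetic only):

```python
# 4,500 orphaned files adding up to roughly 5GB of reclaimed space
# works out at a little over 1MB per file on average - which squares
# with iPhone photos in the 1MB-5MB range mentioned above.
files = 4500
freed_gb = 5
avg_mb = freed_gb * 1024 / files
print(round(avg_mb, 2))  # ~1.14MB per file
```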
Of course, there will be those who will tell me that iPhoto is a PoS, and I should be using something else. Like Windows, for instance. But blow me, I assumed that when I delete files they are actually deleted. It sounds more like the “Trash” is a “Remove from Inventory”, like you get in the vSphere Client(s), rather than a “Delete from Disk”.
As you might know, vINCEPTION is my term for what many others call “nested” virtualization. It’s the peculiar moment when you realise VMware software can eat itself – by being able to run VMware ESXi in a virtual machine running on top of either VMware Fusion, Workstation or even VMware ESXi itself. I’ve been experimenting with running a nested version of VSAN in my home lab, with the prime reason of wanting to be able to run my own private version of EVO:RAIL in a nested environment.
As you probably/hopefully know by now, EVO:RAIL is a physical 2U appliance housing 4 independent server nodes. EVO:RAIL is delivered by partners in a highly controlled process. So it’s not like I could just slap the binaries that make up EVO:RAIL (that I have private access to from our buildweb) onto my existing homelab servers and expect it all to work. The EVO:RAIL team has worked very hard with our Qualified Partners to ensure consistency of experience – it’s the partnership between a software vendor and hardware vendors that delivers the complete package.
Nonetheless, we can and do have EVO:RAIL running in a nested environment (with some internal tweaks), and it’s a sterling bit of work by one of our developers, Wit Riewrangboonya. I’m now responsible for maintaining, improving and updating our HOL – and, if I’m honest, I do feel very much like I’m standing on the shoulders of giants. If you have not checked out the EVO:RAIL HOL, it’s over here – HOL-SDC-1428 VMware EVO:RAIL Introduction. Anyway, I wanted to go through the process of reproducing that environment on my homelab, mainly so I could absorb and understand what needed to be done to make it all work. And that’s what inspired this blogpost. It turns out the problem I was experiencing had nothing to do with EVO:RAIL. It was a VSAN issue, and specifically a mistake I had made in the configuration of the vESXi nodes…
I managed to get the EVO:RAIL part working beautifully. The trouble was the VSAN component was not working as expected. I kept on getting “Failed to join the host in VSAN Cluster” on my 2nd nested EVO:RAIL appliance. Not being terrifically experienced with EVO:RAIL (I’m in Week 8) or VSAN (I’m into chapter 4 of Duncan & Cormac’s book), I was a bit flummoxed.
I wasn’t initially sure if this was a problem with EVO:RAIL, a VSAN networking issue (multicast and all that), or some special requirement needed in my personal lab to make it work (like some obscure VMX file entry that everyone but me knows about). Looking back, there’s some logic here that would have prevented me barking up the wrong tree. For instance, if the first 4 nodes (01-04) successfully joined and formed a VSAN cluster – then why wouldn’t nodes 05-08? As I was working in a nested environment, I was concerned perhaps I wasn’t meeting the network requirements properly. This blogpost was very useful in convincing me this was NOT the case. But I’m referencing it because it’s a bloody good troubleshooting article for situations where it is indeed the network!
You could kinda understand me thinking it was network related – after all, the status messages on the host would appear to indicate this as fact:
But this was merely a symptom, not the cause. The hosts COULD communicate with each other – but only once osfsd starts. No osfsd, no VSAN communication. That was indicated by the fact that the VSAN service, whilst enabled, had not started.
And after all, the status on the VSAN cluster clearly indicated that networking was not an issue. If it was, the status would state a “misconfiguration” in the network status…
As an experiment I set up the first nested EVO:RAIL appliance – and tried doing the 2nd appliance on my own, as if it was just another bunch of servers – and I got pretty much exactly the same error. That discounted in my mind that this issue had anything to do with the EVO:RAIL configuration engine, and the source of my problem lay elsewhere.
Of course, a resolution had been staring me in the face from way back. Whenever you get errors like this – Google is your friend. In fact (believe it or not) I would go so far as to say I love really cryptic and obtuse error messages. A search on “Failed to start osfsd (return code 1)” is likely to yield more specific results than some useless generic error message like “Error: An error has occurred”. This took me to this community thread, which is quite old. It dates back 6 months or more, and is about some of the changes to VSAN introduced at GA. I must admit I did NOT read it closely enough.
It led me to Cormac Hogan’s VSAN Part 14 – Host Memory Requirements, where I read the following:
At a minimum, it is recommended that a host has at least 6GB of memory. If you configure a host to contain the maximum number of disks (7HDDs x 5 disk groups), then we recommend that the host contains 32GB of memory.
Sure enough, following this link to the online pubs page confirmed the same (not that I EVER doubted the mighty Cormac Hogan for a second!)
A quick check of my vNested lab revealed that nodes 01-04 had only 5GB of RAM assigned to them, and inexplicably I’d configured nodes 05-08 with 4GB of RAM. I’d failed to meet the minimum pre-reqs. Of course, you can imagine my response to this – total facepalm.
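With hindsight, the check I needed was a one-liner. Here’s a minimal sketch (the node names and RAM figures are my lab’s; the 6GB figure is Virtual SAN’s documented minimum from Cormac’s post above):

```python
VSAN_MIN_GB = 6  # documented minimum host memory for Virtual SAN

# RAM actually assigned to my nested ESXi nodes
lab = {f"node{n:02d}": 5 for n in range(1, 5)}        # nodes 01-04: 5GB each
lab.update({f"node{n:02d}": 4 for n in range(5, 9)})  # nodes 05-08: 4GB each

too_small = sorted(name for name, gb in lab.items() if gb < VSAN_MIN_GB)
print(too_small)  # every single node fails the prereq
```

Nodes 01-04 happened to limp along at 5GB, but both sets of nodes were below the documented minimum – hence the facepalm.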
Well, you live and don’t learn – always read the pre-reqs and RTFM before jumping in with both boots, especially if you’re deviating from the normal config.
Well, the good news is finally out – I’m so pleased to hear that HP and HDS have joined the EVO:RAIL program. If you are at the event this week, we have the HP appliance in our booth… I say booth, it’s actually a restaurant that we have taken over.
I’ll be at the booth all this week and occasionally down at the hang space helping out at the challenge…
Pop-up messages. Arghhhh. If you’re anything like me, when you’re using a computer (regardless of OS) the incessant harassment of pop-up messages goes beyond belief. One thing I’ve sometimes thought is how little software vendors think about the real usage of a computer from the end-user’s perspective. It seems entirely reasonable to have helpful pop-up messages. The trouble is you may have 20-30-40-50 programs on your computer, not including the other bits of chatty software such as your AV, and pop-ups from helpful applications like Facebook and Twitter and your email – and once they are all being “helpful” you wind up shouting – **** OFF, and LEAVE ME ALONE!
One word I’ve coined for this sort of intrusion is “Nagware” (it’s actually a term used to describe free software that nags you to pay – http://en.wikipedia.org/wiki/Nagware), but for me the term can be extended to all software that bugs the living **** out of you.
For me a classic example this week was an experience my beautiful wife (she told me to write that), who I adore tremendously (she told me to write that too), had when she was away from her computer – she was only away for 10 mins… Apparently, we need new toner in the HP printer. That’s another one of IT’s great IDBIs – the whole rip-off surrounding printers, cartridges and being told you’re out of ink or toner.
I have an idea for a start-up called “NagAway” which blocks all these pop-up messages. I bet I’d make an absolute fortune!
Yes, I know it sounds like the Pepsi challenge!
Here’s what it is. In the Hangspace at VMworld there will be two EVO:RAILs, and two entrants in the challenge will race against each other to set up an EVO:RAIL appliance. The fastest individual will win a coveted golden ticket to VMworld 2015. Space is VERY limited, as only a group of 30 attendees get to participate in the EVO:RAIL Challenge. If selected, you’ll be given the chance to compete in a race against time to build an EVO:RAIL appliance. The two challenge participants with the overall best times will compete in the Final Challenge, which will take place on Thursday, 16 October at 12:00 in the Hang Space, Hall 7.0.
So if you want to take part, you need to be quick. You enter by completing this survey.
Note: ** Pass valid for VMworld 2015 US or Europe Conference Pass only, all travel, hotel and all other expenses are the responsibility of the winner.
The other week I had a chance to speak to the vSoup guys on their podcast (now up to episode 50!). In case you don’t know, your venerable hosts are Ed Czerwin, Chris Dearden, and Christian Mohn. Incidentally, Chris has the dubious honour of being the very first person on my old podcast – The Chinwag…
Anyway, in this podcast the boys grill me about all things EVO:RAIL:
A couple of weeks ago I announced my new role in the EVO:RAIL team – the fact is I’ve been working with the team since just before VMworld US, but it wasn’t really until last week, when I had finished up my work with my previous team and the HR wheels had turned, that I was able to tell folks publicly. Not to fear, my new role means I’m even more committed to the community – and that includes projects like Feed4ward and speaking at VMUGs generally. I hope to pick up a schedule of events for next year, and that includes coming to the US to speak at the big VMUG User Conferences too.
So I’ve been thinking about the topic of hyper-convergence generally. It’s popular in the US to talk about the ‘journey’ so I want this article, and the articles that will follow it, to be the basis of my own thinking around the subject – call it my own Personal IP. At some stage I need to write my own VMUG style presentation about hyper-convergence. Folks who know me will know I hardly ever use the standard corporate decks when talking about VMware technologies. Right now I am – but going forward I want to put my own personal “Mike Laverick” stamp on that. My hope is that this series of blog-posts will help me develop my own stance and perspective. So here goes…
EVO:RAIL. Already there are innumerable spellings (evo:rail, evo-rail, EVO-rail, Evo:Rail and so on), and it doesn’t help that you can’t actually pronounce “:”. So for the record, it’s meant to be EVO:RAIL in capitals. One of the funny things about the name is that if said too quickly it can sound like EVIL:RAIL, prompting a typical Dr EVIL:RAIL meme:
I’ve been using this to good effect when anyone asks how much EVO:RAIL costs. The reality is VMware doesn’t set the price. The six different Qualified EVO:RAIL Partners (QEPs) do. VMware licenses the EVO:RAIL engine and associated software (vSphere, VSAN, LogInsight) to the partner and the partner sells the final product to the customer – one throat to choke, as my US friends are fond of saying.
The other amusing thing is what happens if you type EVO:RAIL into Google Images. Alongside pictures of servers and logos, you’ll also see pictures like this:
It’s heartening to see that interest in VMware EVO:RAIL is starting to work its way through the Google Images algorithm. A couple of weeks ago there were just photos of semi-automatic weapons! Of course, my lame pun has been – “VMware EVO:RAIL – your weapon of choice for the datacenter”.
So for the record EVO is for evolution – the natural evolution of VMware technologies to be consumed in a hyper-converged fashion. As for RAIL, well if you have ever been into a datacenter hall and racked up some gear… you get the picture…
Hyper-Consolidation – How did we get here?
Although I quite like the term hyper-convergence, I can see a parlour game of “define your terms” is already cranking up. Sometimes that can be dangerous, because if a vendor defines hyper-convergence they are likely to use a definition that presents their product in the best light – a phenomenon that customers will have to be cautious about. Personally, I find this game of defining terms slightly pedantic. But sadly, it’s unavoidable – I think the simplest definition is a server (call it an appliance if you must!) which provides compute, networking and storage in a single bundle. It’s an approach that scales out the environment by adding more servers in a block-by-block method, with each new server joining the existing environment seamlessly.
Phew, definition done.
What interests me more is where we have come from, and how we got here as an industry and community. For me it seems only right and proper for VMware to deliver (with partners!) a hyper-converged solution; after all, VMware was the company that brought server consolidation to the masses and popularized virtualization. That’s partly the pun in my title – from server-consolidation to hyper-consolidation. For me it makes perfect sense – the perfect EVOlution for the company that brought server consolidation to the market to also popularise hyper-consolidation. Incidentally, I’m not trying to coin a new term here. It’s hyper-convergence, not hyper-consolidation, right? I’m just using the term as a metaphor…
Way, way back in 2003, VMware could have taken this route. They could have bought cheap commodity servers from the white-box market, yanked the bezel off, replaced it with a VMware bezel and sold that to customers. Mercifully they didn’t take that route at all. They took the route of being a software vendor, working with partners and the channel to get VMware ESX and VMware VirtualCenter (to use its old name for a moment) into the market as a way of popularizing virtualization. It’s an approach that helped take the company from a handful of employees with a couple of hundred customers – to being a billion-dollar company with 20K+ employees and 450K customers. So what I like about EVO:RAIL is that it delivers on the appliance model for the consumption of VMware technologies, whilst at the same time staying within a tradition and business model that has been tried and tested before.
I remember when ESXi first came out, and it became possible to boot the hypervisor from a USB/SD-Card. There was talk back then of server vendors shipping the hardware with VMware ESXi already pre-installed, in a sort of embedded format. In my mind I imagined that when the server first booted, you’d merely select which hypervisor you wanted (the best one, of course!). As far as I could tell that never really happened. Quite possibly because there wasn’t enough added value for either the server vendors or customers to make it happen. The truth is installing VMware ESXi is a totally trivial event – the fun starts in the post-configuration phases. That’s why I think EVO:RAIL will be successful. Looking back over the years, I’ve personally done a lot of automation. It started with simple “bash” shell scripts in ESX 2.x, and then evolved to using the UDA to install ESXi 3.x from a PXE boot environment. About the time of vSphere 4 I moved away from bash shell scripting to building out environments with PowerCLI. It has literally hundreds of cmdlets and can handle not just ESXi but vCenter configuration too. I burned a lot of time building and testing these various deployment methods. Now EVO:RAIL has come along and allows me to do all of that in less than 15 minutes.
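For anyone who never went down that road, the scripted-install approach I mention above looked something like this – a minimal kickstart-style fragment served up from a PXE environment. This is just a sketch from memory, not a production file: the password and NIC name are placeholders, and the exact directives vary between ESXi releases.

```shell
# ks.cfg – minimal scripted ESXi install (sketch; values are placeholders)
vmaccepteula
# Wipe the first local disk and install ESXi onto it
install --firstdisk --overwritevmfs
# Root password for the freshly built host (placeholder – change it!)
rootpw VMware1!
# Pick up management networking from DHCP during the install
network --bootproto=dhcp --device=vmnic0
reboot

# Post-install tweaks that run on first boot – this is where the
# real work (the "fun" post-configuration phase) used to live
%firstboot --interpreter=busybox
# Enable SSH so the host can be configured remotely afterwards
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
```

Multiply that `%firstboot` section by every vSwitch, datastore and NTP setting a host needs, and you can see why a 15-minute appliance-style build is so appealing.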
For me that doesn’t mean all that previous hard work has been for naught – after all, I believe there are still legs in other models for delivering infrastructure. I will still support those methods, but what EVO:RAIL has delivered is a much more automated, standardized and simpler method of doing the same thing. As an independent, it always sort of irked me that VMware didn’t have a pre-packaged, shrink-wrapped method of putting down vSphere, and it was sort of left to the community to develop its own methodology. The trouble with that is everyone has their own personal taste on how that should be done. And we all know that leads to things not being standard between organizations, and in some cases within organizations. Despite ITIL and change-management controls, configuration drift from one BU or geo to another is a reality for many organizations. I see EVO:RAIL as offering not just a hyper-converged consumption model, but an opportunity to standardize – especially for companies with lots of branch offices and remote locations.
It means that VMware has empowered the partners to enter the marketplace rapidly, and to provide customers with choice, bringing much-needed competition to the hyper-converged space where previously there was a much narrower range of options. It’s also a further step down the road to hyper-convergence becoming as mainstream as virtualization. Once companies such as VMware, Dell, Fujitsu, EMC, and SuperMicro join the party, hyper-convergence will come to the attention of everyone in the industry – from the guy who racks ‘n’ stacks to the management team.