
FUD Wars…

30 Oct

This week I caught up on a new(ish) podcast which is done through the medium of Google+ Hangouts. I enjoyed it immensely, in no small measure because it features many people I know from the community (Josh Atwell, Amy Lewis et al).

I had hoped to tune in live, but I was elsewhere. I forget where I was at the time, but most likely I was at choir, or at rehearsals for a local play I'm in at the end of the week (not a starring role, just a walk-on part – I play the 6th soldier who slopes on embarrassed at the back!).

This week the guys talked about FUD, and the various backbiting and unpleasantness that's circulating online – often generated by folks in the pay of a vendor. This is a bit of a bubble, I guess – populated by people like me who are active on social media. So, like any bubble – The Beltway, Westminster and so on – it might only be of interest to folks working in that field. There are a couple of choice examples of where things have "turned personal" and drifted into elements of "mud slinging". My heart always sinks when I see this – it reminds me of the trolls on Facebook and Twitter who deliberately go out of their way to be unpleasant or cruel to someone. It is a very public display of the worst aspects of humanity. I don't really like to be reminded of how horrible humans can be to one another. I see enough of it on the evening news without wanting to witness it amongst my peers. And no, I'm not going to do a link-o-rama to those posts. Why feed the trolls any more than they feed themselves, right? It's tempting to repost – but it feels like the social-media equivalent of slowing down on the highway to gawp at a car crash.

Anyway, I wanted to add my own penny's worth to the debate generally.

In many ways I feel a bit of an interloper. I spent the 90s as an employee of a UK-based training company. The 00s were spent as an independent freelance trainer (with some minor consultancy gigs). The late 00s saw me do a two-year stint as a tech journalist at TechTarget. And then, for the first time in my life, I joined a vendor – VMware. I ended up in the competition team at VMware, before moving to the EVO:RAIL team about 7 weeks ago. Incidentally, there isn't much relationship between the two roles. It's helpful to have links back to the Competition Team, but this is a net-new role for me in a kind of Tech Marketing position. So the reason I feel a bit of an interloper is that for the vast majority of my career I've not been on the vendor side.

What I think is interesting about this is that I would have thought the majority of people doing my sort of role are former customers, or worked in the partner/channel. For various reasons these folks' personal star took off, and they were picked up by a vendor because they came with a ready-made audience. A couple of years ago an impression built up that all you needed was a blog and a couple of thousand Twitter followers, and you could line yourself up a cushy job with a big company or start-up. I think that's a mistaken perception. It took me a good 18 months to find a role with a vendor – and I wasn't just looking at VMware at the time. My reputation in the community was a door opener – nothing more. If you have a social-media/blogging reputation, it's only going to carry you so far up the corporate ladder before you meet more and more people who go "RTFM, who?". My point here is a simple one – I'm personally uncomfortable with the vRockstar title. It can distort people's sense of reality and perspective, and can turn them into jumped-up "Don't you know who I am?" types. As ever, I'm going off topic.

I think the reason so many of us (who work in this aspect of the industry – outbound, social, tech-marketing types) feel uncomfortable with recent developments is that many of us aren't corporate types who went straight from college into some uber-corporate machine and worked our way up the ladder. So fanboyism, FUD and aggressively competitive activity is something most of us feel is a bit icky. Nonetheless, customers and the audience will always want us to compare our tech with the alternatives. That's understandable, because customers want to know the differences – and there's precious little truly independent analysis (by which I mean non-vendor-affiliated, and free from personal "axe-to-grind" bias). Independents are often attacked by vendors for their lack of hands-on knowledge – whilst those same vendors do nothing to help them get their hands on the products.

So the question is: how do you write about your technology compared to a competitor's technology, whilst avoiding FUD or being accused of FUD? The answer is: with great difficulty, because whatever you do, you can be accused of FUD or fanboyism by others. I think there's only one way to do it. If you can – get hands-on. There were many times in my previous role when I felt the urge to write something about a competitive product. Most of my content was made for internal consumption only, but there were times I wished I could just blog about my findings. In the end I didn't. Why? Well, because my blog content is usually known for being practical and hands-on. Generally, I think I've built my reputation on helping people – heck, I even got comments from Microsoft customers thanking me for helping them with their SCVMM deployments. I'm not sure if that's an outcome my employer was expecting, but in a way it made me smile. Despite being critical of aspects of Microsoft's technology, I wound up writing stuff that helped one of its customers have a better experience. In the great yin-yang, instant-karma measure of life, that seems more valuable to me than putting the boot into Microsoft.

For me the FUD debate really boils down to you as a person. Do you have ethics? Are you a nice person? Can you engage with people who disagree with you, with decorum and politeness? If you can, then you should be able to talk about the advantages of your technology over another without it becoming a slanging match. If not, you will find yourself descending into personal attacks and defamation. That behaviour does not enhance your reputation and standing within the community. It's a huge turn-off. Why would you want to turn off the audience who listens to you?

 

Posted in Announcements

 

EVO:RAIL VDI Scalability Reference

28 Oct

This blog post is really a short advertisement for someone else's blog. When I was last at the Bristol (South-West) VMUG in the UK, my former co-author on the EUC book (Barry Coombs) asked me a very pertinent question. Being EUC-focused, Barry was keen to see whitepapers and performance analysis that could be used to demonstrate the scalability claims made for EVO:RAIL. Of course, Barry is specifically focused in this case on Horizon View as an example. But the demand is one that I would expect to see across the board for general server consolidation, virtual desktops and specific application types. Just to give you an idea of the publicly stated scalability numbers, this chart is a handy reminder:

[Chart: publicly stated EVO:RAIL scalability numbers]

At the time I pointed out that there is plenty of Virtual SAN performance data in the public domain. A really good example is the recent study undertaken to benchmark Microsoft Exchange mailboxes on Virtual SAN, as well as posts about performance for Microsoft SQL Server. I must say that both Wade Holmes and Rawlinson Rivera are doing some stellar work in this area.

http://blogs.vmware.com/vsphere/2014/09/vmware-virtual-san-performance-testing-part.html

http://blogs.vmware.com/vsphere/2014/09/vmware-virtual-san-performance-testing-part-ii.html

http://blogs.vmware.com/vsphere/2014/09/vmware-virtual-san-performance-microsoft-exchange-server.html

http://blogs.vmware.com/vsphere/2014/09/vmware-virtual-san-performance-microsoft-sql-server.html

Great though that is, Barry made what I think is an important point. EVO:RAIL represents quite a prescriptive deployment of Virtual SAN with respect to the hardware used, and the amounts and proportions of HDD to SSD. From his perspective as an architect, he needs to be able to point to and justify the selection of any given platform. He needs to be able to judge how much performance a given deployment will deliver per appliance – and then demonstrate that the system will deliver that performance. It's worth restating what those HDD/SSD amounts and proportions are, just in case you aren't familiar.

Each server in the EVO:RAIL 2U enclosure (4 servers per enclosure) has 192GB RAM allocated to it – two pCPUs with 6 cores each – and 1x 400GB SSD for the read/write cache in Virtual SAN, together with 3 SAS HDDs. For the WHOLE appliance, that works out at 14.4TB of raw HDD capacity and 1.6TB of SSD. It's important to remember that Virtual SAN's "Failures to Tolerate" is set to 1 by default – this means for every VM created, a copy of its data is created elsewhere in the cluster. The result is that 14.4TB of raw storage becomes about 6.5TB usable. If you look at these numbers you will see that around 10% of the storage available is SSD-based, which largely reflects the best practices surrounding Virtual SAN implementations.
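If you want to sanity-check that arithmetic yourself, here's a rough back-of-the-envelope sketch in PowerShell. To be clear, this is my own illustration rather than an official sizing formula, and the ~10% overhead figure is my assumption to cover metadata and slack:

# Back-of-the-envelope EVO:RAIL capacity maths (illustrative only)
$hddPerNodeTB = 3 * 1.2                  # three 1.2TB SAS drives per node
$nodes        = 4                        # four nodes per 2U appliance
$rawTB        = $hddPerNodeTB * $nodes   # 14.4TB raw across the appliance
$ftt          = 1                        # Virtual SAN "Failures to Tolerate" default
$overhead     = 0.10                     # assumed ~10% metadata/slack (my estimate)
$usableTB     = ($rawTB / ($ftt + 1)) * (1 - $overhead)
"{0:N1}TB raw -> ~{1:N1}TB usable with FTT={2}" -f $rawTB, $usableTB, $ftt
# Prints: 14.4TB raw -> ~6.5TB usable with FTT=1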

So it's with great pleasure I can say that the EUC team has been doing some independent validation of the EVO:RAIL platform specifically for Horizon View. The main takeaway: our initial claim of 250 virtual desktop VMs holds true, so long as the appliance isn't housing other VMs at the same time. Basically, the EUC team tested a configuration where the appliance is dedicated to just running the virtual desktops, and the "virtual desktop infrastructure" components (the Connection Server/Security Server) are running elsewhere. The other configuration they tested was a more "VDI-in-a-box" setup, where both the virtual desktops AND the Horizon View server services were contained in the same appliance. As you might suspect, the number of supported virtual desktops comes down to about 200. Remember, however, that additional EVO:RAIL appliances can be added to go beyond this per-appliance figure and support up to 1,000 virtual desktop instances. As the chart above indicates, the assumption is that all the virtual desktops are the same and are configured with 2 vCPUs, 2GB RAM and a 30GB virtual disk.

One question that occasionally comes up is the ratio of VMs per node. Sometimes people think the ratio is a bit small. But it's important to remember that we need to factor resource management into any cluster – and work on the assumption of what resources would be available if a server is in maintenance mode for patch management, OR if a server has physically failed. As ever, it's best to err on the side of caution and build an infrastructure that accommodates at least N+1 availability – rather than being over-optimistic and assuming all things run all the time without a problem…

For further information about the VMware EUC team's work with EVO:RAIL, follow this link:

http://blogs.vmware.com/consulting/2014/10/euc-datacenter-design-series-evorail-vdi-scalability-reference.html

 

Posted in EVO:RAIL

 

Back To Basics: Introduction: vSphere High Availability

27 Oct

HA Overview

The primary role of High Availability (HA) in a vSphere environment is to restart VMs if a vSphere host experiences a catastrophic failure. This could be caused by any number of issues, such as a power outage, or the failure of enough hardware components that the operation of the VM is impacted. VMware HA is part of a family of "clustering" technologies – including the Distributed Resource Scheduler (DRS) and Distributed Power Management (DPM) – that gather the resources of individual physical servers and present them as a logical pool that can be used to run virtual machines. Once the clustering technologies are enabled, administrators are liberated from the constraints of the physical world: the focus is less on the capabilities of an individual physical server, and more on the capacity and utilization of the cluster. HA is not the only availability technology available – once it is enabled, administrators have the option to enable Fault Tolerance (FT) on selected VMs that would benefit from its features. For FT to be enabled, HA must be enabled first.

In recent versions of HA, more focus has been placed on the availability of the VM generally – so it is now possible to inspect the state of the VM itself and restart it, based on monitoring services within the guest operating system. The assumption is that if the core VMware services running inside the guest operating system have stopped, this is likely to be a good indication that the VM has a serious issue, and that end users have already been disconnected.

In terms of configuration, VMware HA shares many of the same prerequisites as VMware vMotion, such as shared storage, access to consistently named networks and so on. Because the VM is restarted, there is no specific requirement for matching CPUs – although in reality, because of vMotion and DRS, this is often the case anyway.

Under the covers, vSphere HA has a master/slave model where the first vSphere host to join the cluster becomes the "master". If the master becomes unavailable, an election process is used to choose a new master. In a simple configuration, vSphere HA uses the concept of the "slot" to calculate the free resources available for new VMs to be created and join the cluster. A slot is calculated by working out VM size in terms of memory and CPU resources. When all the slots have been used, no more VMs can be powered on. The concept is used to stop a cluster becoming over-saturated with VMs, and stops the failure of one or more hosts from degrading overall performance by allowing too many VMs to run on too few servers.
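To make the slot concept concrete, here's a simplified worked example in PowerShell. The slot sizes below are invented for illustration – in reality HA derives them from the largest CPU and memory reservations among the powered-on VMs:

# Simplified HA slot maths (illustrative - real slot sizing is more nuanced)
$slotCpuMHz = 256                 # assumed slot CPU size
$slotMemGB  = 2                   # assumed slot memory size
$hostCpuMHz = 2 * 6 * 2000        # two 6-core pCPUs at ~2GHz per host
$hostMemGB  = 192
$hosts      = 4
# Slots per host are constrained by whichever resource runs out first
$slotsPerHost = [Math]::Min([Math]::Floor($hostCpuMHz / $slotCpuMHz),
                            [Math]::Floor($hostMemGB / $slotMemGB))
"Slots per host: $slotsPerHost; cluster total: $($slotsPerHost * $hosts)"
# With +1 redundancy, admission control holds back one host's worth of slots
"Usable slots with N+1: $($slotsPerHost * ($hosts - 1))"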

HA and Resource Management

If you lose a vSphere host, the cluster simultaneously loses that host's contribution of CPU/memory resources – and, in the case of Virtual SAN, its contribution of storage as well. For this reason, planning needs to be conducted to work out what "reserve" of resources the cluster will need to accommodate failures. In more classical designs this can be expressed as N+1 or N+2 redundancy, where N is the number of hosts required to deliver acceptable performance, and we then factor in additional hosts for either maintenance windows or failures. Related to this is the concept of "Admission Control", which is the logic that either allows or denies power-on events. As you might gather, it makes no sense in a 32-node cluster to attempt to power on VMs when only one vSphere host is running. Admission Control stops failures from generating more failures and decreasing the performance of the cluster, by preventing cascading failures from affecting the whole cluster. For instance, if redundancy was set at +2, VMware HA would allow two vSphere hosts to fail and would restart their VMs on the remaining nodes in the cluster. However, if a third vSphere host failed, the setting of +2 would stop VMs being restarted on the remaining hosts.

VMware HA has a number of ways of expressing this reservation of resources for failover. It is possible to use classical +1, +2 (and so on) redundancy to indicate the tolerated loss of vSphere hosts and the resources they provide. Additionally, it's possible to break free from the constraints of the physical world and express this reservation as a percentage of CPU/memory resources to be reserved for the failover process. Finally, it's possible to designate a dedicated host that is used for failover, in a classical active/standby approach.

Split-Brain and Isolation

Split-brain and isolation are terms that both relate to how clustering systems work out that a failure has occurred. For example, a host could be incommunicable merely because the network used to communicate from host to host in the cluster has failed – typically this is the "Management" network that resolves to the vSphere host's FQDN. For this reason it's really a requirement of HA that the network has maximum redundancy, to prevent split-brain from occurring – a situation where the clustering system loses integrity and it becomes impossible to decide which systems are running acceptably and which are not. There are a couple of different ways of ensuring this, which were covered earlier in the networking segments. For example, a Standard Switch could be configured with two vmnics (vmnic0 and vmnic1) patched into different physical switches. This would guarantee that false failovers don't occur simply because of a switch failure or network card failure. As with all redundancy, an ounce of prevention is worth a pound of cure – it's best to configure an HA cluster with maximum network redundancy to stop unwanted failovers occurring due to simple network outages.

With that said, HA does come with "isolation" settings which allow you to control what happens should network isolation take place. The HA agent checks external network devices, such as routers, to work out whether a failure has taken place or merely network isolation. VMware HA also checks whether access to external storage is still valid. Through these many checks, the HA agent can correctly work out whether a failure or network isolation has occurred. Finally, VMware HA has per-VM settings that control what happens should network isolation take place. By default, network isolation is treated as if the host has physically stopped functioning – and the VMs are restarted. However, using the per-VM controls it's possible to override this behaviour if necessary. For the most part, many customers don't worry about these settings, as they have delivered plenty of network redundancy to the physical host.
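If you do want to override the isolation response or restart priority for a particular VM, PowerCLI exposes both settings. Here's a minimal sketch – the vCenter address and the VM name "web01" are invented examples:

# Per-VM HA overrides via PowerCLI (server and VM names are examples)
Connect-VIServer -Server vcenter.corp.local
# Leave this VM powered on if its host merely becomes network-isolated:
Get-VM -Name "web01" | Set-VM -HAIsolationResponse DoNothing -Confirm:$false
# Give it first claim on capacity when HA restarts VMs after a real failure:
Get-VM -Name "web01" | Set-VM -HARestartPriority High -Confirm:$false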

Managing VM High Availability

Creating a vSphere HA Cluster

Enabling VMware HA starts with creating a “cluster” in the datacenter that contains the vSphere hosts.

1. Right-click the Datacenter, and select New Cluster

2. In the name field, type the name of the cluster. The name can reflect the purpose of the cluster – for instance, a cluster for virtual desktops. Increasingly, sysadmins prefer to classify their clusters by their relative capabilities, such as Gold, Silver, Bronze and so on. Additionally, clusters can be created with the sole purpose of running the vSphere infrastructure – companies often refer to these as "Management Clusters". Those with experience generally turn on all the core vSphere clustering features, including DRS and EVC.

3. Enable the option Turn On next to vSphere HA

[Screenshot: New Cluster dialog with vSphere HA turned on]

Note: This dialog box only shows a subset of the options available once the cluster has been created. For instance, the full cluster settings allow for adjustments associated with the "slot" size of a VM, as well as the optional Active/Passive or Active/Standby configuration.

The option to Enable host monitoring is used to allow vSphere hosts to check each other's state. This determines whether a vSphere host is down or merely isolated from the network. The option can be temporarily turned off if it's felt that network maintenance might contribute to false and unwanted failovers.

Enable Admission Control can be modified from using a simple count of vSphere hosts to achieve +1 or +2 redundancy. Incidentally, this spinner can currently only be increased to a maximum of 31. Alternatively, the administrator can switch Admission Control to use a percentage of CPU/memory resources that is held back as a reserve to accommodate failover. Finally, Admission Control can be turned off entirely. This allows failovers to carry on even when there are insufficient resources to power on the VMs and achieve acceptable performance. This isn't really recommended, but it may be required where a business-critical application must be available even with degraded performance – the business is prepared to accept degraded service levels rather than no service at all. In an ideal world, there should be plenty of resources to accommodate the loss of physical servers.

VM Monitoring can be used to track the state of VMs. It can be turned on at the cluster level with certain VMs excluded as needed, or alternatively it can be enabled on a per-VM basis.
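For the scripting-inclined, the same settings can be driven from PowerCLI. A minimal sketch, assuming a datacenter called "GoldDC" and a cluster called "Gold" (both names invented for illustration):

# Create an HA cluster with host-count-based admission control (+1 redundancy)
# Assumes an existing Connect-VIServer session
New-Cluster -Name "Gold" -Location (Get-Datacenter -Name "GoldDC") `
    -HAEnabled -HAAdmissionControlEnabled -HAFailoverLevel 1
# The settings can be adjusted later, e.g. moving to +2 redundancy:
Set-Cluster -Cluster (Get-Cluster -Name "Gold") -HAFailoverLevel 2 -Confirm:$false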

Adding Multiple vSphere Hosts to an HA-Enabled Cluster

Once the cluster has been created, vSphere hosts can be added to it using drag-and-drop. However, you may prefer to use "Add Host" for new hosts that need to be joined to the cluster, or "Move Hosts" for vSphere hosts that have already been added to vCenter.

[Screenshot: the Move Hosts into Cluster option]

If the Move Hosts option is used, then multiple vSphere hosts can be added to the cluster at once. During this time the HA agent is installed and enabled on each host – this can take some time.

[Screenshot: the HA agent being installed on hosts as they join the cluster]
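The same job can be scripted too – a short PowerCLI sketch, with host names and credentials invented for illustration:

# Join a brand-new host straight into the HA cluster
Add-VMHost -Name "esx05.corp.local" -Location (Get-Cluster -Name "Gold") `
    -User root -Password "VMware1!" -Force
# Or move hosts that vCenter already manages into the cluster
Get-VMHost -Name "esx01.corp.local","esx02.corp.local" |
    Move-VMHost -Destination (Get-Cluster -Name "Gold")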

Once the cluster has been created, the Summary screen will show basic details such as:

  • Number of vSphere hosts
  • Total/Used CPU/Memory
  • Simple HA Configuration
  • Cluster consumers (Resource Pools, vApps and VMs)

[Screenshot: the cluster Summary screen]

 

Posted in BackToBasics

 

I’m on telly…

24 Oct

Well, actually I was on real telly just a couple of weeks ago in the BBC TV programme "Marvellous". If you squint and look at the back row of the choir, you might see me opening my big fat gob (nothing changes there, Mike, I hear you all say!). Last week I had a more close-up opportunity to be interviewed for VMworld TV by the one, the only, the legendary Eric Sloof. Here Eric quizzed me about my move into the EVO:RAIL team, the EVO:RAIL Challenge and my previous life as a freelance VMware Certified Instructor (VCI). Enjoy!

 

Posted in EVO:RAIL

 

My VMUG – November

24 Oct


I'm attending three VMUGs this November, and at each one I'll be squawking about EVO:RAIL. I'm hoping to be able to pull together a "VMUG" version of the EVO:RAIL content – one that dispenses with the corporate deck and helps me put across my own viewpoint. That's very much dependent on what time I have over the coming weeks. I'm super busy at the moment finishing up a new version of the EVO:RAIL HOL, as well as some internal work I have to do to help our partners and our own staff get up to speed.

Here's my itinerary:

UK National VMUG User Conference
Tuesday 18 November 2014
National Motorcycle Museum
Coventry Road Bickenhill
Solihull, West Midlands B92 0EJ
Agenda & Registration

Once again this event will have a vCurry night with a vQuiz. I'm pleased to say my wife, Carmel, will be at the vCurry night too!

21st VMUGBE+ Meeting (Antwerp)
Friday 21st November 2014
ALM
Filip Williotstraat 9
2600 Berchem
Agenda & Registration

Again, Carmel will be joining me on the trip – although she will be discovering the delights of old Antwerp. After the VMUG is done, she and I will be spending the weekend in Bruges – a place we've always wanted to visit – and we hope to get to the Menin Gate to pay our respects.

And finally, I will be crossing the border to Scotland for the Edinburgh VMUG too!

Scotland User Group (Edinburgh)
Thursday 27th November 2014
The Carlton Hotel
19 North Bridge
Old Town
City of Edinburgh EH1 1SD
Agenda (TBA) and Registration

 

Posted in VMUG

 

ThinkPiece: EVO:RAIL and Hyper-Divergence

23 Oct

Since joining the EVO:RAIL team eight short and eventful weeks ago, I've been kept awake at night thinking about hyper-converged virtualization – because when I'm excited about a technology from VMware, I often can't locate the off switch for my brain! Having spent the last couple of weeks on the developer side of the hands-on lab, and attending a local proof-of-concept meeting, I'm starting to get a feel for what I think needs to be asked. In addition to this I'm doing a round of VMUGs and podcasts – and I've been getting all manner of questions fired at me. Some questions I can answer right now, for some I have to find the answers, and for others I need to sit back and have a good think about what the right answer would be! This is my own personal view on what I think customers should be asking themselves, and an attempt to relate those questions back to EVO:RAIL. The whole process began with being thrown in at the deep end speaking to my own colleagues at the VMware Tech Summit EXPO (it's like an internal-only VMworld for SEs/TAMs) and then later on the floor of the Solutions Exchange at VMworld. Incidentally, that was my first bit of booth-babe duty proper in my life. I left the event with a tremendous amount of respect for the folks who do these huge EXPO-style shows. It's incredibly hard work, but for me it was made easy by the sheer volume of interest in EVO:RAIL. I was glad I wasn't in one of those small booths on the periphery of the event doing 10am-5pm straight for four days!

One of my early jokes about convergence and hyper-convergence was that, despite the name, as an industry no one has "converged" the same way, either from a technology standpoint or a delivery model. In short, the converged marketplace is ironically a very divergent one – hyper-divergent, even. Geddit?

Q. What's the architecture model for your vendor's (hyper)convergence?

If you look at the converged marketplace you will find VCE vBlock, NetApp/Cisco FlexPod, HP Matrix, Dell vStart and so on. Each of those solutions is constructed very differently, and so is the go-to-market strategy. A converged model is basically one that brings together what I like to call the three S's of Servers/Switches/Storage, each as discrete physical components – albeit made much easier to deploy than buying all the bits separately and rigging them together.

Similarly, on the surface hyper-converged systems all look very similar, but the servers and storage are delivered within the context of a single chassis, where a combination of local HDDs/SSDs is brought together to provide the storage for virtual machines. This model generally benefits from a lower overall entry price point, and allows you to scale out (for compute AND storage) by adding more appliances. Interestingly, most hyper-converged solutions do not bundle a physical switch – that's something you are supposed to have already. It's well worth spending time researching the network requirements, both in terms of bandwidth and the features required on that physical switch, before jumping in with both feet. [More about these network requirements in later posts!]

For me, the big architectural difference between hyper-converged vendors is that most hyper-converged systems deploy some type of "Controller" VM that resides on each physical appliance – call it a virtual appliance if you like – running on top of the physical box. This "Controller" VM is granted access to the underlying physical storage, and by hook or by crook it then presents the storage back in a loop-back fashion – not just to the host it's running on, but to the entire cluster. This has to be done using a protocol recognizable by the hypervisor (in my case vSphere), and most commonly this is an NFS export – although some vendors use iSCSI, and some support SMB because they support Microsoft Hyper-V (boo, hiss…).

In contrast, EVO:RAIL uses VMware's Virtual SAN, which is embedded into the vSphere platform and resides in the VMware ESXi kernel. Just to be crystal clear: there's no "Controller" VM in EVO:RAIL. Once the EVO:RAIL configuration is completed, you have precisely the same version of vSphere, vCenter, ESXi and Virtual SAN you would have if you'd taken the longer route of building your own VSAN from the HCL, or if you'd acquired a VSAN Ready Node and manually installed and configured all the software.

Now, I'm NOT saying that one architecture is better than the other – in the current climate that would be incendiary. What I am saying is they are DIFFERENT. Customers will need to look at these different approaches and decide for themselves which offers the best match for their needs and requirements – balanced against the simplicity of deployment and support. Without beating my chest too much about VMware, I think you'll know which one I think is the more elegant approach. :-)

Q. Does your hyper-convergence vendor seek to complement or supplant your existing infrastructure?

I'm uneasy with the idea that hyper-convergence can produce the "Jesus Appliance" that is the panacea for all your problems. I've been around in the industry long enough to know that every 3 or 4 years there's a magic pill that will solve all datacenter problems. The reality is that most new game-changing technologies generally fix one set of challenges – only to add new ones for the business to wrestle with. Such is life.

Personally, I think it's a mistake to paint the converged "Three S" model of Servers/Switches/Storage out of the equation altogether. For a certain set of workloads or customer requirements I think there's still a truckload of value in the model. I see hyper-convergence as complementing a customer's existing infrastructure model rather than utterly supplanting it (although there will be use cases where it can and does). That includes both building your three-stack model using different vendors, and going down the converged route with something like a FlexPod or vBlock.

I'm pleased to say that there is some healthy skepticism and debate out there around hyper-convergence – a good place to start is with a dose of "wake up and smell the bacon". I think Christian Mohn's article "Opinion: Is Hyper-converged the be-all, end-all? No." is just the sort of reality check our community is famous for. Christian correctly points out that with the hyper-converged model, as you add more compute you add more storage, and as you add more storage you add more compute. What about a customer who doesn't consume these resources in equal measure? What about a customer whose data footprint is increasing faster than their compute needs? In a way that's the point of hyper-convergence – it's meant to simplify your consumption. But if your consumption is more nuanced than hyper-convergence allows for, will it always be the best fit in all cases? There's a danger (as with all solutions) that if all you have is a hammer, every problem looks like a nail.

One of the most well-argued and well-articulated counter-viewpoints on hyper-convergence I've found is Andy Warfield of CoHo Data's "Hyperconvergence is a lazy ideal". In fact, I'd go so far as to say that Andy's post is one of the best-written blog posts I've read in a long while. And I'm a coffee drinker. :-) If you want a contrasting perspective, then Chuck Hollis' deconstruction of Storage Swiss – "The Problem with Storage Swiss Analysis on VSAN" – is a good read. If you're looking for an independent comparison of differing hyper-converged solutions, Trevor Pott's summary on Spiceworks is both an interesting and an amusing read. Just to be clear, I don't agree with everything these guys say, but they make for interesting reading for precisely that reason. I like people who make me think, and make me laugh. Generally, I'm against the concept of mindless agreement – I think it leads to dangerous tunnel vision.

As for myself, a conversation I had with a customer at VMworld might illustrate my point better. They are a large holding company in the US, with a couple of very densely populated datacenters using the three S's model – but they have over 300 subsidiaries dotted around the country. Historically, the subsidiaries have been "managed" as separate entities. They've even had their own budgets to blow on IT resources, and for legal purposes they've had clear blue water from the holding company. Unfortunately, this has led to non-standard configurations at each of the subsidiaries, lots of reinventing the wheel, and wider support issues, as each subsidiary makes its own decisions. The subsidiaries are used to having their own gear on site, and they regard it as an important "asset" (a concept I find difficult to understand, but I've learned to bend with the wind when it comes to ideologically held beliefs – for me, anything that devalues and depreciates over time can hardly be classed as an asset). But it makes support a nightmare, and every other month the gear at one or other subsidiary is expiring – and they keep asking the holding company for advice about what to do in the future…

Now, one solution would be for the holding company to become a private cloud provider – hosting each subsidiary in a multi-tenancy cloud. However, there are some upfront cost issues to consider here, and it breaks the history of on-premises resources. Additionally, some subsidiaries could choose to ignore this private cloud altogether and carry on spending their money upgrading local gear. For the holding company there is a perceived risk around what happens if the subsidiaries don't buy in… What if you build a cloud and the "owner-occupiers" choose to stay in their own homes, rather than "renting" an apartment in the sky?

So for them, a combination of Three-S convergence at the corporate datacenter with hyper-convergence at the subsidiaries is a model that works well. The on-ramp is not too steep. The holding company could offer EVO:RAIL as a solution to the subsidiaries – whilst allowing each subsidiary to select its preferred supplier from the many Qualified EVO:RAIL Partners (QEPs). Over time, as a subsidiary's gear goes out of date, the holding company can offer them EVO:RAIL – and that's how they will eventually get a consistently configured environment, whilst the subsidiary holds on to what it values. Yes, this sounds like I'm promoting EVO:RAIL, but hey, I'm in that team, so you would expect me to say that! :-)

The point of this little story is that it demonstrates that simplistic "SAN Killer" statements are to be treated with an air of caution. There's plenty of life in the old three-S dog yet. It's like Pat Gelsinger said at VMworld – so far IT has been all about either/or equations, and that's a model that leads to some unhappy compromises in the datacenter. At VMware we want to allow customers to have their cake and eat it – one size does not fit all. :-)

Q. Does the hyper-converged vendor's business model resonate with you?

I'm not a big fan of touting the "vendor lock-in" line. It's generally associated with FUD arguments. Occasionally I've heard a customer raise concerns about vendor lock-in with VMware, only to ignore the other places where they seem totally comfortable being "shackled" to another vendor. Ah, they say – that's part of our "strategy", as if by labeling something a "strategy" you can automagically make it disappear in a puff of logic and verbal gymnastics. :-)

What I do think is interesting is that 99% of hyper-converged vendors are the sole supplier of their appliance. After all, it's much more challenging to develop a partner-led model than merely signing up channel partners. If you're a company with the sort of influence and contacts that VMware has, it can be done. It's not the first time that VMware has helped create multi-vendor programs that bring technology to market – Site Recovery Manager, VAAI and VASA are all great examples. But more importantly, I believe that by not getting into the hardware game directly with EVO:RAIL, VMware has created a competitive marketplace – both between the partners, and with the rest of the hyper-converged industry. I'd go so far as to say that it isn't VMware competing directly in the hyper-converged market, but its partners – and I think this is brilliant for customers. Competition drives innovation and, in the main, makes for more interesting negotiations on price. And it always is a negotiation, isn't it? I mean, if you're buying 1,000 hyper-converged appliances you'd expect a negotiation, wouldn't you? If you are buying just one – well, that's a different matter…

But putting all that aside, I think the main benefit of the EVO:RAIL business model is being able to deal with truly global hardware providers who have been in the game for decades. For some customers it means they can also leverage their existing relationships with the likes of Dell, EMC, HP, Fujitsu and so on.

Q. Are your hyper-converged appliance and hypervisor licenses included in one single SKU?

You might be surprised to know that some hyper-converged appliances ship with no hypervisor at all. Instead you have to use secondary tools to get the hypervisor onto the unit. To be fair, from what I've heard this is a relatively easy and trivial step – but it is an additional step nonetheless. Other vendors install an evaluation copy of VMware ESXi and leave it to the customer to bring licenses to the table. That's fine if you have an ELA, or enough CPU socket licenses left in vCenter to just add the host and license it. In contrast, EVO:RAIL is an all-inclusive licensing model. The box ships with vSphere Enterprise Plus 5.5 U2 and includes the licenses needed for vCenter, ESXi, VMware Virtual SAN and Log Insight. License the appliance, and you've licensed the entire stack. The setup should take less than 15 minutes if everything is in place from a networking perspective. It's a deployment model that is dead simple, and could potentially redefine how folks acquire the vSphere platform.

[This part actually comes from a previous blog post – but I felt repeating it here works.]

The truth is, installing VMware ESXi is a totally trivial event – the fun starts in the post-configuration phases. That's why I think EVO:RAIL will be successful. Looking back over the years, I've personally done a lot of automation. It started with simple "bash" shell scripts in ESX 2.x, and then evolved to using the UDA to install ESXi 3.x from a PXE boot environment with the esxcfg- commands. Around the time of vSphere 4, I moved away from bash shell scripting to building out environments with PowerCLI. It has literally hundreds of cmdlets and can handle not just ESXi but vCenter configuration too. I burned a lot of time building and testing these various deployment methods. Now EVO:RAIL has come along and allows me to do all that in less than 15 minutes.
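To give a flavour of the sort of post-configuration I used to script by hand – and which EVO:RAIL now does for you – here's a minimal PowerCLI sketch. The host, network and IP details are invented examples, not EVO:RAIL's actual internals:

# Typical post-install host configuration (assumes a Connect-VIServer session)
$esx = Get-VMHost -Name "esx01.corp.local"
# Create a Standard Switch with two uplinks for redundancy
$vss = New-VirtualSwitch -VMHost $esx -Name "vSwitch1" -Nic vmnic2,vmnic3
# Add a port group for VM traffic on VLAN 101
New-VirtualPortGroup -VirtualSwitch $vss -Name "VM-Network" -VLanId 101
# Add a VMkernel port with vMotion enabled
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vss -PortGroup "vMotion" `
    -IP 10.0.101.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true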

For me that doesn't mean all that previous hard work has been for naught – after all, I believe there are still legs in other models for delivering infrastructure. I will still support those methods, but what EVO:RAIL delivers is a much more automated, standardized and simpler way of doing the same thing. As a former independent, it always sort of irked me that VMware didn't have a pre-packaged, shrink-wrapped method of putting down vSphere – it was sort of left to the community to develop its own methodology. The trouble with that is that everyone has his or her own personal taste in how it should be done. And we all know that leads to things not being standard between organizations – and in some cases within organizations. Despite ITIL and change-management controls, configuration drift from one BU or geo to another is a reality for many organizations. I see EVO:RAIL as offering not just a hyper-converged consumption model, but an opportunity to standardize – especially for companies with lots of branch offices and remote locations.

 

Posted in EVO:RAIL

 

I Don’t Believe I.T. iPhoto Experience – Because Delete Doesn’t mean Delete…

22 Oct

One of the things I didn't get across in my previous post about "I Don't Believe IT" was those capital letters. It's a bit of a bad pun – "I don't believe Information Technology". Basically, this series is a homage to my ever-increasing "Grumpy Old Man" syndrome about technology. One of the slightly depressing things about being in IT is the ludicrous optimism that abounds in the area of technology. It's as if people think technology will always ride into town and save the day. I don't really see it that way.

Don't get me wrong – I'm an eternal optimist by predilection – but what aggrieves me is the blind faith people put in technology. It seems people are all too willing to forget that we are monkeys with monkey brains, and human flaws are often revealed in flawed technology and flawed business processes.

So anyway, this week's "I Don't Believe IT" concerns our friend (or enemy) Apple's iPhoto. I'm lazy, you see, and tend to use the default apps that ship with the Mac – although somewhere between Mountain Lion and Mavericks, iPhoto stopped being free to new users, and now you have to pay for the darn thing. Here's the thing: when you take a photo in iPhoto and send it to the trash, it doesn't actually delete it.

[Screenshot: photos sent to the iPhoto trash]

I've noticed that if you select an "event" and then select File, Reveal in Finder, Original – you'll find that the files are still cuffing there!

[Screenshot: Reveal in Finder showing the original files still present]

Why? WTF. If I send something to the trash, it should be deleted – or at least be sent to the trash can. I've been remiss in not trying to work out WHY this happens, or how to actually remove these orphaned and unwanted image files (some being anywhere from 1MB to 5MB, depending on the format used on my iPhone).

Things came to a head this weekend, when I found my SSD drive was almost full. So I decided to google for iPhoto, as I thought that might be a good place to try and free up some precious space. It turns out iPhoto has its own "empty the trash" option – one I'd never heard of before. That's not surprising, as it's not in the main File/Edit/Photos menu bar, but under the iPhoto menu itself.

[Screenshot: iPhoto's own Empty Trash option, under the iPhoto menu]

I wasn’t disappointed. I had 4,500 orphaned files. Emptying the very special iPhoto Trash freed up 5GB of space.

Of course, there will be those who will tell me that iPhoto is a PoS and I should be using something else. Like Windows, for instance. But blow me, I assumed that when I deleted files they were actually deleted. It sounds like the "Trash" is more of a "Remove from Inventory", like you get in the vSphere Client(s), rather than a "Delete from Disk".

 

Posted in IDBI

 

VSAN vINCEPTION: Failed to join the host in VSAN Cluster (Return Code 1)

20 Oct

As you might know, vINCEPTION is my term for what many others call "nested" virtualization. It's the peculiar moment when you realise VMware software can eat itself – you can run VMware ESXi in a virtual machine on top of either VMware Fusion, Workstation, or even VMware ESXi itself. I've been experimenting with running a nested version of VSAN in my home lab, with the prime reason of wanting to run my own private version of EVO:RAIL in a nested environment.

As you probably/hopefully know by now, EVO:RAIL is a physical 2U appliance housing 4 independent server nodes. EVO:RAIL is delivered by partners in a highly controlled process. So it's not like I could just slap the binaries that make up EVO:RAIL (which I have private access to from our buildweb) onto my existing home-lab servers and expect it all to work. The EVO:RAIL team has worked very hard with our Qualified Partners to ensure consistency of experience – it's the partnership between a software vendor and hardware vendors that delivers the complete package.

Nonetheless, we can and do have EVO:RAIL running in a nested environment (with some internal tweaks), and it's a sterling bit of work by one of our developers, Wit Riewrangboonya. I'm now responsible for maintaining, improving and updating our HOL – and if I'm honest, I do feel very much like I'm standing on the shoulders of giants. If you have not checked out the EVO:RAIL HOL, it's over here – HOL-SDC-1428 VMware EVO:RAIL Introduction. Anyway, I wanted to go through the process of reproducing that environment in my home lab, mainly so I could absorb and understand what needed to be done to make it all work. And that's what inspired this blog post. It turns out the problem I was experiencing had nothing to do with EVO:RAIL. It was a VSAN issue – specifically, a mistake I had made in the configuration of the vESXi nodes…

I managed to get the EVO:RAIL part working beautifully. The trouble was, the VSAN component was not working as expected. I kept getting "Failed to join the host in VSAN Cluster" on my 2nd nested EVO:RAIL appliance. Not being terrifically experienced with EVO:RAIL (I'm in week 8) or VSAN (I'm on chapter 4 of Duncan & Cormac's book), I was a bit flummoxed.

[Screenshot: "Failed to join the host in VSAN Cluster" error]

I wasn't initially sure if this was a problem with EVO:RAIL, a VSAN networking issue (multicast and all that), or some special requirement needed in my personal lab to make it work (like some obscure VMX file entry that everyone but me knows about). Looking back, there was some logic here that would have prevented me barking up the wrong tree. For instance, if the first 4 nodes (01-04) successfully joined and formed a VSAN cluster, then why wouldn't nodes 05-08? As I was working in a nested environment, I was concerned that perhaps I wasn't meeting the network requirements properly. This blog post was very useful in convincing me this was NOT the case. But I'm referencing it because it's a bloody good troubleshooting article for situations where it is indeed the network!

http://blogs.vmware.com/vsphere/2014/09/virtual-san-networking-guidelines-multicast.html

You could kind of understand me thinking it was network-related – after all, the status messages on the host would appear to indicate this as fact:

[Screenshot: host status message suggesting a network problem]

But this was merely a symptom, not a cause. The hosts COULD communicate with each other – but only if osfsd starts. No osfsd, no VSAN communication. That was indicated by the fact that the VSAN service, whilst enabled, had not started.

[Screenshot: the VSAN service enabled but not started]

And after all, the status of the VSAN cluster clearly indicated that networking was not an issue. If it was, the status would show a "misconfiguration" in the network status…

[Screenshot: VSAN cluster network status showing no misconfiguration]

As an experiment, I set up the first nested EVO:RAIL appliance and then tried building the 2nd appliance on my own, as if it was just another bunch of servers – and I got pretty much exactly the same error. That discounted, in my mind, the idea that this issue had anything to do with the EVO:RAIL configuration engine; the source of my problem lay elsewhere.

Of course, a resolution had been staring me in the face from way back. Whenever you get errors like this, Google is your friend. In fact (believe it or not), I would go so far as to say I love really cryptic and obtuse error messages. Searching on "Failed to start osfsd (return code 1)" is likely to yield more specific results than some useless generic error message like "Error: An error has occurred". This took me to this community thread, which is quite old. It dates back six months or more, and is about some of the changes to VSAN introduced at GA. I must admit I did NOT read it closely enough.

https://communities.vmware.com/thread/473367?start=0&tstart=0

It led me to Cormac Hogan's "VSAN Part 14 – Host Memory Requirements", where I read the following:

At a minimum, it is recommended that a host has at least 6GB of memory. If you configure a host to contain the maximum number of disks (7HDDs x 5 disk groups), then we recommend that the host contains 32GB of memory.

Sure enough, following this link to the online pubs page confirmed the same (not that I EVER doubted the mighty Cormac Hogan for a second!)

A quick check of my nested nodes revealed that nodes 01-04 had only 5GB of RAM assigned to them, and inexplicably I'd configured nodes 05-08 with 4GB of RAM. I'd failed to meet the minimum pre-reqs. Of course, you can imagine my response to this: total facepalm.
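If you're building a nested VSAN lab of your own, a quick PowerCLI check like this would have saved me the facepalm. A sketch assuming the nested ESXi VMs run on a vCenter-managed host and are named "vesxi01" through "vesxi08" (the names are mine):

# Check the RAM assigned to the nested ESXi VMs (names are hypothetical)
Get-VM -Name "vesxi0*" | Select-Object Name, MemoryGB
# Bump anything under the 6GB Virtual SAN minimum (power the VMs off first)
Get-VM -Name "vesxi0*" | Where-Object { $_.MemoryGB -lt 6 } |
    Set-VM -MemoryGB 6 -Confirm:$false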

Well, you live and don't learn – always read the pre-reqs and RTFM before jumping in with both boots, especially if you're deviating from the normal config.

 

Posted in EVO:RAIL

 

VMworld 2014 Europe – HP and HDS join the EVO:RAIL party

14 Oct


 

Well, the good news is finally out – I'm so pleased to say that HP and HDS have joined the EVO:RAIL program. If you are at the event this week, we have the HP appliance in our booth… I say booth; it's actually a restaurant that we've taken over.

I'll be at the booth all this week, and occasionally down at the hang space helping out with the challenge…

 

Posted in EVO:RAIL

 

I Don’t Believe IT: HP Printer is out of toner…

10 Oct

Pop-up messages. Arghhhh. If you're anything like me, when you're using a computer (regardless of OS) the incessant harassment of pop-up messages goes beyond belief. One thing I've often thought is how little software vendors think about the real usage of a computer from the end-user's perspective. It seems entirely reasonable to have helpful pop-up messages. The trouble is, you may have 20, 30, 40, 50 programs on your computer – not including the other bits of chatty software such as your AV, and pop-ups from "helpful" applications like Facebook, Twitter and your email – and once they are all being "helpful", you wind up shouting **** OFF AND LEAVE ME ALONE!

One word I've coined for this sort of intrusion is "Nagware" (it's actually a term used to describe free software that nags you to pay – http://en.wikipedia.org/wiki/Nagware), but for me the term can be extended to all software that bugs the living **** out of you.

For me, a classic example this week was an experience of my beautiful wife (she told me to write that), who I adore tremendously (she told me to write that too), when she was away from her computer – she was only away for 10 minutes…. Apparently, we need new toner in the HP printer. That's another of IT's great IDBIs – the whole rip-off surrounding printers and cartridges, and being told you're out of ink or toner.

I have an idea for a start-up called “NagAway” which blocks all these pop-up messages. I bet I’d make an absolute fortune!

[Screenshot: the HP "out of toner" pop-up]

 

Posted in IDBI