
EVO:RAIL – Under The Covers – What happens when EVO:RAIL Configures (Part 1)

One of the things EVO:RAIL excels at is automation. As you might know, EVO:RAIL automates the deployment of VMware ESXi, vCenter and Log Insight, as well as carrying out countless configuration steps that allow a VMware High Availability (HA), Distributed Resource Scheduler (DRS) and VSAN cluster to work – all this and more in around 15 minutes flat. However, these individual steps are not widely talked about, and where this gets “interesting” is when the various big tasks are carried out. Understanding these steps helps demystify what EVO:RAIL is doing, and also helps explain some of the messages you see. For me there are three main big tasks the EVO:RAIL engine carries out:

  1. Configuration of the very first EVO:RAIL Appliance
  2. Adding an additional appliance to expand available compute and storage resources (commonly referred to in marketing-speak as auto-discovery and auto scale-out).
  3. Replacing a failed node with a new node – commonly caused by failure of a motherboard or CPU socket.

This blog post and its two companion posts (Parts 2/3) attempt to describe in a bit more detail what’s happening in these processes, and why there are subtle differences between them. So let’s start with the first – the steps taken during the configuration of the very first EVO:RAIL appliance.

You can see these steps listed in the EVO:RAIL UI during the Initialize/Build/Configure/Finalize process. It’s perhaps not so thrilling to sit and watch, but believe it or not I recorded the 15-minute process so I could rewind slowly and document each step below. I guess I could have asked someone in my team internally for this information, but I suspect they have better things to do!


So here’s the list….

  1. Set password on vCenter
  2. Install private DNS on vCenter
  3. Configure private DNS on vCenter
  4. Configure vCenter to use private DNS
  5. Perform mDNS ESXi host discovery
  6. Setup management network on ESXi hosts
  7. Configure NTP on ESXi Hosts
  8. Configure Syslog on ESXi hosts
  9. Configure vCenter’s FQDN
  10. Configure NTP on the vCenter
  11. Configure Syslog on the vCenter
  12. Restart Loudmouth on the vCenter
  13. Accept EULA on vCenter
  14. Create vCenter Database
  15. Initialize SSO
  16. Start vCenter (vpxd)
  17. Create management account on vCenter
  18. Register ESXi hosts with vCenter
  19. Configure FQDN on ESXi hosts
  20. Rename vCenter Server management network on ESXi hosts
  21. Configure NIC Team
  22. Setup Virtual SAN, vSphere vMotion, VM Networks on ESXi hosts
  23. Setup DNS on ESXi hosts
  24. Restart Loudmouth on ESXi hosts
  25. Enable vSphere HA/DRS
  26. Create a Storage Policy
  27. Configuring Enhanced vMotion Compatibility
  28. Set vCenter Log Insight to auto-start after Power Cycle events
  29. Configure root password on ESXi hosts
  30. Register EVO:RAIL Service with mDNS

I don’t have much to say about these steps except to make a couple of remarks. Firstly, they form the bedrock of Parts 2/3 of this blog post series. Most of the EVO:RAIL “big tasks” will do some (but critically not ALL) of these steps. For example, there is no point in deploying vCenter when adding an additional appliance – that has already been done when building the first. It is for this reason that adding an additional appliance only takes about 7 minutes, whereas building the first appliance takes around 15 minutes.

Secondly, knowing the process can help in troubleshooting. For example, notice how the vCenter ‘root’ (and administrator@vsphere.local) password is changed at Step 1, whereas the ‘root’ password on the ESXi hosts is not changed until Step 29. If there were a problem during the configuration process between these two steps, the password to log in to vCenter would be different from the password to log in to the ESXi host(s). Incidentally, this separation of the password changes is deliberate. We don’t change the root password of VMware ESXi until the very end, when we can guarantee the appliance build process has been successful.

Conclusions:

In the best of all possible worlds (to quote Voltaire’s Candide for a moment) you shouldn’t have to know about these steps. But a little knowledge is a dangerous thing (to quote Alexander Pope). I could go on showing off my literary credentials, but if I carry on like this you’ll think I’m just a snooty Brit who thinks he knows it all (incidentally, you’d be right!). And let’s face it, no one likes a clever Richard, do they?

Tune into the next thrilling episode for Parts 2/3 where it gets infinitely more interesting.

 

Posted on June 26, 2015 in EVO:RAIL


EVO:RAIL Introduces support for Enhanced vMotion Compatibility

It’s my pleasure to say that EVO:RAIL 1.2 has been released with a number of enhancements. In case you don’t know, there have been a number of maintenance releases (1.0.1, 1.0.2 and 1.1) that shipped at the end of last year and the beginning of this. The 1.2 release rolls up all the changes previously introduced, and critically adds a new step to the configuration of the EVO:RAIL appliance – support for “Enhanced vMotion Compatibility” or EVC.

The support for EVC is an important step because it allows both Intel Ivy Bridge and Intel Haswell-based EVO:RAIL systems to co-exist in the same VMware HA and DRS cluster. The new EVC support enables the “Ivy Bridge” EVC mode, which means a new Haswell system can join an Ivy Bridge-based VMware cluster – and still allow vMotion events triggered by DRS or maintenance mode to occur.


Prior to this release of EVO:RAIL, Enhanced vMotion Compatibility was not enabled by default. You might ask whether it is possible to enable EVC on systems running releases prior to 1.2. The answer is yes, so long as you follow this order of steps:

  • Upgrade your EVO:RAIL appliance to EVO:RAIL 1.1+.
  • Connect to the vSphere Web Client and login with administrator privileges.
  • From Home, click Hosts and Clusters.
  • Click the EVO:RAIL cluster, Marvin-Virtual-SAN-Cluster-<uuid>.
  • Click Manage > Settings > Configuration > VMware EVC, then click Edit.
  • Select the “Enable EVC for Intel Hosts” radio button.
  • From the VMware EVC Mode dropdown, select “Intel Ivy Bridge Generation”. Click OK.
  • The VMware EVC settings will now show that EVC is enabled with mode set to “Intel Ivy Bridge Generation”.

This simple step-by-step process can be carried out without shutting down any existing VMs, and allows a VM running on the Ivy Bridge system to be vMotioned to/from the Haswell system without a problem.

These steps are reproduced from this KB article here: http://kb.vmware.com/kb/2114368
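
For the PowerShell-inclined, the same change can be scripted. Here’s a minimal PowerCLI sketch – assuming PowerCLI 5.5 or later, the default MARVIN cluster name, and a made-up vCenter address:

Connect-VIServer -Server vcsa.lab.local   # hypothetical vCenter address

# Set the cluster EVC mode to Intel "Ivy Bridge"
Get-Cluster -Name "MARVIN-Virtual-SAN-Cluster*" |
    Set-Cluster -EVCMode "intel-ivybridge" -Confirm:$false

# Confirm the change took effect
Get-Cluster -Name "MARVIN-Virtual-SAN-Cluster*" | Select-Object Name, EVCMode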

 

Posted on June 22, 2015 in EVO:RAIL


EVO:RAIL – Under The Covers – EVO:RAIL Software Versions and Patch Management

If you have access to the EVO:RAIL Management UI, it’s very easy to see what version of vCenter, ESXi and EVO:RAIL software you are running. Under the “Config” node in the Management UI, the first page you see is the “General” page, which shows you the versions of the software currently in use.


There are also ways of finding out what version of EVO:RAIL software is in use directly from the vCSA and from the ESXi hosts. You can find out the version of EVO:RAIL from the vCSA using the RPM (Red Hat Package Manager) command. I’ve sometimes used these commands before I do a build of the EVO:RAIL just to confirm what version I’m working with. It also reminds me that I may need to do an update once the build process has completed. Finally, I’ve used these commands when I’ve been supporting folks remotely…

rpm -qa | grep "vmware-marvin"

This should print the version of the EVO:RAIL software running inside the vCenter Server Appliance by querying the status of the “vmware-marvin” software.


From the ESXi host, the “esxcli” command has a method of listing all the VIBs (vSphere Installation Bundles) installed on the host; again we can pipe the output through grep to search for a particular string:

esxcli software vib list | grep "marvin"


So for the most part you can retrieve the EVO:RAIL version number using the EVO:RAIL Management UI, but if you prefer you can also retrieve that information from the command-line.
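
Incidentally, if you manage the appliance with PowerCLI you can run that same esxcli query remotely via Get-EsxCli. A quick sketch – the host name here is made up:

# Get an esxcli interface to one of the EVO:RAIL nodes
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi-node01.lab.local")

# List installed VIBs and filter for the EVO:RAIL (marvin) components
$esxcli.software.vib.list() | Where-Object { $_.Name -match "marvin" } | Select-Object Name, Version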

The patch management process is a relatively simple affair. Firstly, where do you get patches for EVO:RAIL from? Answer: from MyVMware. New versions of EVO:RAIL are RTM’d to our partners some weeks before they GA on the vmware.com site. Of course, behind the scenes there is the usual process of BETA, RC, RTM and GA that customers don’t have to worry about. Generally, we would recommend checking with your partner before you download the latest bits and upgrade – just to check that they approve it for your system.

The EVO:RAIL patch management system can update the EVO:RAIL engine (both the vCenter and ESXi components), ESXi and vCenter. The patches are distributed as .ZIP files and generally contain bundles of either VIBs for ESXi or RPM files for the vCSA. These get uploaded to the VSAN datastore on the appliance, and then an install process is triggered. The whole process is seamless and automated – putting each ESXi host into maintenance mode, applying the patch, and then rebooting; once completed, the host exits maintenance mode and the next host is updated. The process is by its nature serial on a single appliance with 4 nodes, to make sure there are always 3 nodes available in the cluster – and each node must complete before the next node is updated. So there’s no need for VMware Update Manager or any Windows instances to handle the process.
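
To picture what the engine is doing under the covers, here’s an illustrative PowerCLI sketch of that serial, one-node-at-a-time loop. To be clear, this is NOT the actual EVO:RAIL implementation – the cluster name is the default and the patch path is hypothetical:

# Walk the cluster one host at a time, so three of the four nodes stay up
foreach ($vmhost in (Get-Cluster -Name "MARVIN-Virtual-SAN-Cluster*" | Get-VMHost | Sort-Object Name)) {
    # Evacuate VMs and enter maintenance mode
    Set-VMHost -VMHost $vmhost -State Maintenance -Evacuate | Out-Null
    # Apply the patch bundle previously uploaded to the VSAN datastore (path hypothetical)
    Install-VMHostPatch -VMHost $vmhost -HostPath "/vmfs/volumes/MARVIN-Datastore/patches/metadata.zip"
    Restart-VMHost -VMHost $vmhost -Confirm:$false | Out-Null
    # ...in real life you would wait here for the host to reboot and reconnect...
    Set-VMHost -VMHost $vmhost -State Connected | Out-Null
}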

 

Posted on June 3, 2015 in EVO:RAIL


VMUG EVALExperience – Now with vSphere6 and VSAN6


VMware recently announced the general availability of VMware vSphere 6, VMware Integrated OpenStack and VMware Virtual SAN 6 – the industry’s first unified platform for the hybrid cloud! EVALExperience will be releasing the new products and VMUG Advantage subscribers will be able to download the latest versions of:

  • vCenter Server Standard for vSphere 6
  • vSphere with Operations Management Enterprise Plus
  • vCloud Suite Standard
  • Virtual SAN 6
  • *New* Virtual SAN 6 All Flash Add-On

For existing EVALExperience users, the previous product downloads have been replaced in order to upgrade you to the latest versions of these products. They must visit Kivuto and place an order for the updated products. Please note the old products and keys will no longer be available; you will need to migrate to the new versions.

For further info visit: http://www.vmug.com/p/cm/ld/fid=8792

 

 

Posted on June 3, 2015 in VMUG


Chinwag Reloaded with Craig Waters (@cswaters1)


This week’s chinwaggie is Craig Waters (@cswaters1). His day job is with Pure Storage, and he used to work for Nutanix. Prior to switching to the vendor side, he was a Virtualisation Architect and Data Centre Specialist (Compute/Storage/Network) with over 16 years of experience in the ICT industry. Focusing on virtualisation, Craig has enabled his clients to gain business agility by reducing infrastructure complexity through aggressive virtualisation initiatives. Building on this technology foundation has allowed the implementation of business resilience programs incorporating full disaster recovery solutions. Craig leads the Melbourne, Australia VMware User Group (VMUG) and continually contributes to growing its user base.

In this chinwag we talk again about whether convergence/hyper-convergence is leading to a similar convergence in people’s IT skills, as well as how SSD is changing the way we do stuff in the datacenter. In the spirit of the new comms age where anything is possible – Craig called in on Skype on his phone using 4G from down under, with the Sydney Opera House as his backdrop. Beat that!

Linkage:

 

Posted on May 27, 2015 in Chinwag


EVO:RAIL – The vSphere Environment – The Physical Resources (Part2)

In my previous blog post I focused on the vSphere ‘metadata’ that makes up each and every configuration of vSphere, and for that matter, EVO:RAIL. Of course what matters is how we carve up and present the all-important physical resources. These can be segmented into compute, memory, storage and networking.

Compute:

The way compute resources are handled is pretty straightforward. EVO:RAIL creates a single VMware HA and DRS cluster without modifying any of the default settings. DRS is set to be fully automated, with the “Migration Threshold” left at the center point. We do not enable VMware Distributed Power Management (DPM), because in a single EVO:RAIL appliance with four nodes this would create issues for VSAN and patch management – so all four nodes are on at all times. This remains true even if you created a fully populated 8-appliance system containing 32 ESXi hosts. To be fair, this is pretty much a configuration dictated by VSAN. You don’t normally make your storage go to sleep to save on power, after all…


Similarly VMware HA does not deviate from any of the standard defaults. The main thing to mention here is that “datastore heartbeats” are pretty much irrelevant to EVO:RAIL, considering one single VSAN datastore is presented to the entire cluster.
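
If you want to verify these cluster settings for yourself, a quick PowerCLI one-liner will do it (assuming the default MARVIN cluster name):

Get-Cluster -Name "MARVIN-Virtual-SAN-Cluster*" |
    Select-Object Name, HAEnabled, DrsEnabled, DrsAutomationLevel, VsanEnabled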


Memory:

The EVO:RAIL appliance ships with four complete nodes, each with 192GB of memory. A fully populated EVO:RAIL environment with 8 appliances would present 32 individual ESXi hosts in a single VMware HA/DRS/VSAN cluster. That’s a massive 384 cores, 6TB of memory and 128TB of raw storage capacity. We let VMware DRS use its algorithms to decide on the placement of VMs at power-on, relative to the amount of CPU and memory available across the cluster, and we let VMware DRS control whether a VM should be moved to improve its performance. No special memory reservations are made for the system VMs – vCenter, Log Insight or our partners’ VMs.

Storage:

Every EVO:RAIL node ships with 1x400GB SSD and 3×1.2TB 10k SAS drives. When the EVO:RAIL configures, it enrolls all of this storage into a single disk group per node. You can see these disk groups in the vSphere Web Client by navigating to the cluster and selecting >>Manage, >>Settings, >>Virtual SAN and >>Disk Management. Here you can see that each of the four EVO:RAIL nodes has a single disk group, with all disks (apart from the boot disk, of course) added into the group.

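You can pull the same disk group information from PowerCLI if you’d rather not click through the Web Client – a sketch, assuming PowerCLI 5.5 R2 or later for the VSAN cmdlets and the default cluster name:

Get-Cluster -Name "MARVIN-Virtual-SAN-Cluster*" | Get-VMHost | ForEach-Object {
    $vmhost = $_
    # Each node has a single disk group; list its member disks
    Get-VsanDiskGroup -VMHost $vmhost | ForEach-Object {
        Get-VsanDisk -VsanDiskGroup $_ |
            Select-Object @{N="Host";E={$vmhost.Name}}, CanonicalName
    }
}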

As for the Storage Policies that control how VMs consume the VSAN datastore, a custom storage policy called “MARVIN-STORAGE-PROFILE” is generated during the configuration of the EVO:RAIL.


With that said, this custom policy merely has the same settings as VSAN’s default – that is, a single rule setting “Number of Failures to Tolerate” equal to 1. The effect of this policy is that for every VM created, a copy of its data is held on a different node elsewhere in the VSAN datastore. This means should a node or disk become unavailable, there is a copy held elsewhere in the vSphere cluster that can be used. Think of it as being like a per-VM RAID1 policy.
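
You can inspect the policy and its rules from PowerCLI too. The SPBM cmdlets arrived in PowerCLI 6.0, so this sketch assumes that version or later:

Get-SpbmStoragePolicy -Name "MARVIN-STORAGE-PROFILE" |
    Select-Object Name, Description, AnyOfRuleSets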

It’s perhaps worth mentioning that some QEPs’ EVO:RAILs differ slightly from others. These differences have NO impact on performance, but they are worth mentioning. There are two main types. In Type 1, the enclosure has 24 drive bays at the front. That’s 6 slots per node – and each node receives a boot drive, 1xSSD and 3xHDD drives, leaving one slot free. In a Type 2 system there is an internal SATADOM drive from which the EVO:RAIL boots – and at the front of the enclosure there are 16 drive bays. Each node uses four of those slots – for 1xSSD and 3xHDD drives. As you can tell, Type 1 and Type 2 systems both end up presenting the same amount of storage to VSAN. So at the end of the day it makes little difference, but it’s a subtle difference few have publicly picked up on. I think in the long run it’s likely all our partners will wind up using the 24-drive-bay system with an internal SATADOM device. That would free up all 6 drive bays for each node, and would allow for more spindles or more SSD.

Networking:

I’ve blogged previously, and at some length about networking in these posts:

EVO:RAIL – Getting the pre-RTFM in place
EVO:RAIL – Under The Covers – Networking (Part1)
EVO:RAIL – Under The Covers – Networking (Part2)
EVO:RAIL – Under The Covers – Networking (Part3)

So I don’t want to repeat myself excessively here, except to say that EVO:RAIL 1.x uses a single Standard Switch (vSwitch0), and patches both vmnic0 and vmnic2 for network redundancy. The vmnic1 interface is dedicated to VSAN, whereas all other traffic traverses vmnic0. Traffic shaping is enabled for the vSphere vMotion portgroup to make sure that vMotion events do not impact negatively on management or virtual machine traffic.
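
A quick way to eyeball that layout on any node is with PowerCLI – the host name below is made up:

$vmhost = Get-VMHost -Name "esxi-node01.lab.local"

# Which uplinks back vSwitch0?
Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0" | Select-Object Name, Nic

# Standard portgroups and their VLAN IDs
Get-VirtualPortGroup -VMHost $vmhost -Standard | Select-Object Name, VLanId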

Summary

Well, that wraps up this two-part series covering the different aspects of the vSphere environment once EVO:RAIL has done its special magic. Stay tuned for the next thrilling installment.

 

Posted on May 21, 2015 in EVO:RAIL


My VMUG, My VMUG


This is a little post to tell you about the stuff I’m doing to support the VMUG. You know, I really don’t do enough, you see. :-)

North East England VMUG Meeting – 21st May

I’m speaking at a couple of VMUGs this year, and my first date is with a location that’s close to my heart in the North-East of the UK. I’ve been up North a couple of times in the last month, checking in with my parents and family up there. We’re blessed in the locale by the presence and talent of guys like Lee Dilworth and David Hill, who both hold senior positions at VMware. Lee will be there speaking about vCloud Air, and I will be there speaking about EVO:RAIL under the covers.

Register Here: http://www.vmug.com/p/cm/ld/fid=10544


Central Ohio UserCon – 2nd June

This year I will be at the Central Ohio UserCon(ference) on the 2nd June. It’s to be held at the Hyatt Regency Columbus, 350 N High Street, Columbus, Ohio. It has a pretty packed agenda already, with sessions on EUC, Hybrid Cloud and Emerging Technologies, and Storage and Availability, as well as the old favourites of vSphere and virtualization generally.

Register Here: http://www.vmug.com/p/cm/ld/fid=9685

Charlotte UserCon – 4th June

In the very same week I will be over in Charlotte, North Carolina for their UserCon. I’ve always had a soft spot for the Charlotte VMUG ever since, back in the day (2006? 2007?), I was asked by one of the VMUG leaders to come to their event and speak. I think that was the first time I ever got up on stage in front of a really big audience. Up until then it had been classrooms and smaller VMUG events of 50-80 people. I like to think Charlotte is where my public speaking apprenticeship started!

I will be delivering the morning keynote, which to be honest isn’t very keynotey. It’s more of a breakout session on a big stage, rather than a lofty vision-thing style presentation. Once again I will be talking about the nuts and bolts of EVO:RAIL. Delivering the lunchtime keynote will be none other than Chad Sakac of EMC. That’s quite a coup for Charlotte, as Chad is a busy man and everyone wants a slice of him and his time. His subject is “Technology and Industry Disruptions: What’s going on in Applications, Infrastructure, and Operational/Consumption Models”.

Register Here: http://www.vmug.com/p/cm/ld/fid=7448

After the VMUG I will be spending some time with my good friend Raymond Overman – the internationally famous wood turner who has made some instruments for me in the past. My wife will be coming over for the Charlotte event, and we intend to spend the week over in Raleigh-Durham with friends of ours (the Atwells and the Lewises) before heading to the Pisgah National Forest, home of the Blue Ridge Mountains and the Blue Ridge Parkway, for some well-earned R&R in the mountains.


 

Posted on May 15, 2015 in VMUG


EVO:RAIL – The vSphere Environment – The Metadata (Part1)

One of the most common questions I get is what the vSphere environment looks like after EVO:RAIL has done its configuration magic. I think this is quite understandable. After all many of us, including myself, have spent many years perfecting the build of the vSphere environment and naturally want to know what configuration the EVO:RAIL Team decided upon. I like to think about this from a “design” perspective. There’s no such thing as a definitive and single method of deploying vSphere (or any software for that matter). All configurations are based on designs that must take the hardware and software into consideration, and that’s precisely what the EVO:RAIL Team has done with VMware’s hyperconverged appliance.

When I’m asked this question by customers I often start from the hardware resources that vSphere and EVO:RAIL present, although this can overlook certain object types that reside outside of CPU, memory, disk and network – I’m thinking of logical objects such as vCenter datacenter and cluster names. There are other objects created at the network layer which shouldn’t be overlooked, such as portgroups. If you like, you could refer to this as the “metadata” that makes up the vSphere environment.

So before I forget, let me cover them right away.

Logical Objects or “Metadata”

By default a net-new deployment of an EVO:RAIL appliance instantiates a clean copy of the vCenter Server Appliance. EVO:RAIL creates a vCenter datacenter object currently called “MARVIN-Datacenter” and a cluster called “MARVIN-Virtual-SAN-Cluster” followed by a very long UUID. Incidentally, this is the same UUID that you will see on the VSAN datastore, and it is generated by the Qualified EVO:RAIL Partner (QEP) during the factory build of the EVO:RAIL appliance.


A common question is whether this datacenter object and cluster object can be renamed to something more meaningful. The answer is yes. The EVO:RAIL configuration and management engine does not use these text labels in any of its processes. Instead, “Managed Object Reference” or MoRef values are used. These are system-generated values that remain the same even when objects like these are renamed. As for other vCenter objects such as datastore folders or VM folders, the EVO:RAIL engine does not create these. The system VMs that make up EVO:RAIL – vCenter, Log Insight and our partners’ VMs – are merely placed in the default “Discovered Virtual Machine” folder.
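
Incidentally, you can see these MoRef values for yourself in PowerCLI – the Id property is the MoRef, and it survives a rename:

Get-Datacenter | Select-Object Name, Id
Get-Cluster | Select-Object Name, Id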


Similarly, the datastore created by VSAN can be renamed as well. And although technically renaming the “service-datastore” is possible, there’s really little point, as it cannot be used as permanent storage for virtual machines. It’s perhaps worth mentioning that while the EVO:RAIL UI does not let you select which datastore VMs will use, there is nothing to stop that happening if you use the vSphere Web Client or vSphere Desktop Client.


EVO:RAIL uses Standard Switches – as you might know, these have always been case-sensitive and need to be consistent from one ESXi host to another. Now, of course, EVO:RAIL ensures this consistency of configuration by gathering all your variables and applying them programmatically and consistently. The portgroup names themselves are trickier to change after the fact.


It would be relatively trivial to rename the EVO:RAIL virtual machine portgroups to Staging, Development and Production. However, it would have to be done consistently across every ESXi host. Given how EVO:RAIL 1.1 now allows for up to 8 appliances with 32 nodes per cluster, that would not be a small amount of administration. If I were forced to make a change like that, I would probably use PowerCLI with a for-each loop to rename the portgroups for me (there’s a sketch of that below). If you want more examples, I have some on the VMUG Wiki Page here:

http://wiki.vmug.com/index.php/Configuring_Standard_Switches#Adding_a_new_VLAN_Portgroup_to_an_existing_Standard_vSwitch

There are two examples there – one connects to vCenter and then creates a VLAN16 portgroup on every host in the vCenter, and the other creates a range of VLAN portgroups (VLAN20-25) on every host in the cluster.
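
And as a rough illustration of the rename itself, here’s a hedged PowerCLI sketch. The portgroup names are hypothetical, and you should test how attached VMs handle a standard portgroup rename before doing this in anger:

# Rename the same portgroup consistently on every host in the cluster
foreach ($vmhost in (Get-Cluster -Name "MARVIN-Virtual-SAN-Cluster*" | Get-VMHost)) {
    Get-VirtualPortGroup -VMHost $vmhost -Name "VM Network A" -Standard |
        Set-VirtualPortGroup -Name "Staging"
}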

As for the EVO:RAIL generated portgroups such as “vCenter Server Network” and the vmkernel ports – I would recommend leaving these alone – unless you have an utterly compelling reason to do so. They aren’t exposed to those who create VMs and consume the resources of the EVO:RAIL.

vCenter Server Appliance (vCSA) Configuration

Finally, I want to draw your attention to the configuration of the vCenter Server Appliance (vCSA). What you might not notice is that, at the factory, the vCSA is modified and we allocate more resources to it than the default values contained in the .OVF or .OVA file. In EVO:RAIL the allocation is changed to 4 vCPUs and 16GB of memory – essentially a doubling of resources from the default vCSA, which would normally receive 2 vCPUs and 8GB of memory.
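
You can confirm those allocations with a one-liner – the VM name pattern here is a guess, so adjust it to whatever your vCSA is called:

Get-VM -Name "*vCenter Server Appliance*" | Select-Object Name, NumCpu, MemoryGB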

Well, that wraps up this post about vSphere inventory metadata – in my next post I will be looking at the resources of compute, datastores and networking in more detail….

 

Posted on May 14, 2015 in EVO:RAIL


VMUG Wiki Update and Thank You


This blogpost is really one long thank you to the individual who helped me get the VMUG Wiki off the ground. His name is Jack Collins and I met him via a friend of mine in the town where I live. Jack’s been very helpful to me and the VMUG Wiki project by painstakingly converting the many books and blogposts that contributed to the seeding of the VMUG Wiki prior to the launch. I pretty much realised I wouldn’t be able to convert my EUC and SRM books on my own, plus my commitments to VMware as part of my day job would prevent me from completing that process on time.

Anyway, I managed to secure from those very friendly and helpful “VMware Press” people a collection of books all about VMware technologies as a thank you. Jack is very much at the beginning of his IT career and keen to learn more about our technologies. So I see him as the next generation of people who are going to move our industry forward.

It’s really thanks to Jack that we have an (almost) completed vSphere 5.5 Wiki, a Site Recovery Wiki and a VMware View Wiki.

Thank you once again, Jack! :-)

 

 

Posted on May 7, 2015 in VMUG, VMUG Wiki


New Skinny Linux Distro Available – Includes VMware Tools


I have a new skinny Linux distribution available on my site. I’m not the author of this release – a colleague of mine, Doug Baer, originally put it together. It’s used primarily in our hands-on-labs environment, where you need some small VMs to run on top of a nested vESX environment. These nested VMs that run within the context of a vESX environment are sometimes referred to as vVMs. This whole configuration, where a physical system (Workstation, Fusion, pESX) runs VMware ESX in a VM, has been dubbed by some in the community as “vInception”.

This skinny Linux distro uses MicroCore Linux and has VMware Tools installed using the “open source” edition, which means it’s redistributable in this manner. The environment is non-persistent except for /home and /opt.

I’ve taken this VM and created a .OVA for you to download – you might find it useful in your home lab where memory resources are limited.

The VM has 1xvCPU, 64MB RAM and a single 1GB virtual disk that is thinly provisioned – this VM is available in the download section of my site.
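
If you’d rather import it with PowerCLI than the vSphere Client, something like this should do – the file path and host name are made up:

Import-VApp -Source "C:\Downloads\skinny-linux.ova" `
    -VMHost (Get-VMHost -Name "esxi01.lab.local") `
    -Name "vVM01" -DiskStorageFormat Thin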

 

 

 

Posted on May 7, 2015 in Announcements
