VMware still supporting DOS

I just saw the release of the 28 July 2011 patches for ESX(i), and in the list of fixes is this little nugget:

When you use Altiris DOS boot disk and PXE boot a virtual machine running on ESX 4.1 with flexible adapter, the virtual machine might fail to start when it attempts to load MS-DOS LAN Manager NetBind.

VMware is still supporting DOS long after Microsoft kicked it to the curb. I know of several Fortune 1000 companies that still run DOS applications critical to their business processes. What other top-notch hypervisors still support DOS?

John Troyer – A Leader by Example

One of the challenges any leader has is how to motivate and encourage people to contribute their time and energy to a cause. This can be done in a variety of ways, and a company that wants to foster a community needs someone who can use all of them. Often a community forum is set up early on to create a hangout where customers can share successes. These forums start out strong and then peter out after a short time. The initial contributors don't feel thanked for their work, get discouraged, and leave. People stop coming to the forums since their questions don't get answered, and it's a vicious cycle. VMware's community has managed to avoid this downward trend and is the strongest I've ever seen.

John Troyer is one of the great leaders at VMware who has helped foster the vExpert community and keep it alive and well. He encourages, thanks, and helps supply these vExperts with the recognition that a strong community needs. He is a groundbreaker who connects with the community through podcasts, Twitter, forums, and email, showing that our work is not ignored. He is constantly out there responding to questions, posting challenges to start new memes on Twitter, and podcasting about the topics on people's minds. He helps show that our efforts are one of the things that make VMware stronger day by day. He brings to light that VMware appreciates our work and, ultimately, appreciates us.

Thanks, John, for the tons of work you do day to day connecting with the VMware evangelists and die-hards. Thanks for all the work you've done to make the vExpert program reward those who spend their free time making VMware better for everyone.

Thanks.

vSphere 5 Licensing – Enterprise Viewpoint

Ever since the "Raising the Bar" announcement of vSphere 5 and its associated changes, the blogosphere has been running rampant with licensing posts both for and against. Some of the blog postings I see make good points, such as CPU core count no longer being a metric to care about with VMware. There are opinions on every side of the fence on this topic. The one thing I haven't seen is a discussion of companies that have already hit the 80+% virtualized mark. How does this licensing change affect them now that they are going after big hitters and larger systems such as SharePoint 2010, Exchange 2010, and tools like Autonomy or Lync?

I've been running the numbers for my environment, and the initial results look good for now.

Counting physical cpu's and vRAM in your environment....
======
pCpu Count: 364
vRAM (GB):  7104
======
Resulting license options:

Edition         Entitlement           Licenses                                                
-------         -----------           --------                                                
[...]Standard   1 pCpu + 24 GB vRAM   364 with 1632 GB vRAM overhead                           
Enterprise      1 pCpu + 32 GB vRAM   364 with 4544 GB vRAM overhead                          
Enterprise Plus 1 pCpu + 48 GB vRAM   364 with 10368 GB vRAM overhead

Let's cover the assumptions going into this post (a quick sketch of the entitlement math follows the list).

  • Every socket that runs ESXi must have a corresponding license even if you don’t need the vRAM
  • vRAM is only counted as allocated for Powered On VMs
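
For reference, the entitlement math behind the output above is simple to reproduce. Here is a minimal Python sketch under those two assumptions; the EDITIONS mapping and the license_options name are mine, not part of any VMware tool, and the entitlements come straight from the table above.

# Reproduces the license-option table above.
# Inputs: total pCPU count and total vRAM (GB) allocated to powered-on VMs.

EDITIONS = {
    "Standard": 24,          # GB of vRAM entitled per pCPU license
    "Enterprise": 32,
    "Enterprise Plus": 48,
}

def license_options(pcpu_count, vram_gb):
    """One license per pCPU; overhead = entitled vRAM minus allocated vRAM."""
    for edition, entitlement in EDITIONS.items():
        overhead = pcpu_count * entitlement - vram_gb
        print(f"{edition:<15} {pcpu_count} licenses with {overhead} GB vRAM overhead")

license_options(364, 7104)   # matches: 1632 / 4544 / 10368 GB overhead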

Through much work, my environment has hit close to 87% virtualized for x86 workloads. We have eliminated all the low-hanging fruit of 1-2 vCPU, <8 GB RAM machines. The new systems coming in average 4-8 vCPUs and 16-32 GB of RAM at minimum, as they are bigger efforts and larger projects in general. In one case, a team seriously debated putting its project on hold to wait for vSphere 5 and its 12+ vCPU sizing.

That being said, I'll cover the numbers from a high-level discussion point of view. If there's interest, I'll go into each of these unique projects in more detail in later blog posts.

Exchange 2010

This environment is looking at deploying 14 hosts total with 66 VMs spread across multiple campuses.

  • 14 Hosts with 256G of pRAM and 4 sockets each.
  • Each VM is allocated 32G of vRAM.
  • Host Clusters are set up in a 4+1 HA configuration.
  • Dedicated Host Cluster for this application deployment.

That sets up this situation…

14 Hosts with 4 sockets each = 56 sockets of Enterprise+
56 sockets of Enterprise+ * 48G of vRAM per Enterprise+ = 2,688G of vRAM available

66 VMs at 32G of vRAM each = 2,112G of vRAM needed

The result is that the environment has a surplus of 576 GB of vRAM to share out. There is no difference in pricing between deploying on vSphere 4 and vSphere 5.
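
If you want to sanity-check that arithmetic, here is the same math as a small Python helper (the vram_balance name and parameters are my own, not a VMware tool):

def vram_balance(hosts, sockets_per_host, entitlement_gb, vm_count, vram_per_vm_gb):
    """Returns the vRAM balance in GB: positive = surplus, negative = deficit."""
    licenses = hosts * sockets_per_host        # one license per socket
    available = licenses * entitlement_gb      # pooled vRAM entitlement
    needed = vm_count * vram_per_vm_gb         # vRAM allocated to powered-on VMs
    return available - needed

# Exchange 2010: 14 hosts x 4 sockets, Enterprise+ (48 GB), 66 VMs x 32 GB each
print(vram_balance(14, 4, 48, 66, 32))         # 576 (GB surplus)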

Autonomy

This environment is looking at deploying 10 hosts total with 60 VMs.

  • 10 Hosts with 384G of pRAM and 4 sockets each.
  • Each VM is allocated 48G of vRAM.
  • Host Clusters are set up in a 4+1 HA configuration.
  • Dedicated Host Cluster for this application deployment.

That sets up this situation…

10 Hosts with 4 sockets each = 40 sockets of Enterprise+
40 sockets of Enterprise+ * 48G of vRAM per Enterprise+ = 1,920G of vRAM available

60 VMs at 48G of vRAM each = 2,880G of vRAM needed

The result is that the environment has a deficit of 960 GB of vRAM. That is the same as needing to purchase 20 more Enterprise+ licenses to make up the vRAM difference. At the list price of $3,495 (before discount), that adds $69,900 unless the environment can make up the difference elsewhere.
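
The same sketch applies here, extended to count the extra licenses and the list-price impact (again, the variable names are mine, and the only price figure used is the $3,495 list price above):

import math

# Autonomy: 10 hosts x 4 sockets of Enterprise+ (48 GB), 60 VMs x 48 GB each
available = 10 * 4 * 48              # 1,920 GB of pooled vRAM entitlement
needed = 60 * 48                     # 2,880 GB allocated to powered-on VMs
deficit = needed - available         # 960 GB short

extra_licenses = math.ceil(deficit / 48)      # 20 more Enterprise+ licenses
added_cost = extra_licenses * 3495            # $69,900 at list, before discount
print(deficit, extra_licenses, added_cost)    # 960 20 69900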

This project was already a hard sell to virtualize, and the additional $70k would have pushed it to go physical, as that would have ultimately been cheaper in initial capital costs.

Summary

Depending on the project and how much older hardware is available in the environment, vRAM isn't something to fret about today. In the future, though, it will make it harder to justify turning something that should run just fine as a VM into one. OS instances only get larger as we move into Tier 1 applications. VMware needs to weigh getting a tad more revenue against having ideal case studies like this environment, pushing past 90% virtual. They can address this by raising the vRAM entitlement on the high-end licenses to 64 GB or more, letting me buy vRAM in increments, or pricing an ELA reasonably enough to keep the physical-lovers at bay. If VMware is happy with companies stopping at 80% virtualized, then they are already there. If they want the Tier 1, business-stopping, critical applications in the larger corporations, they need to consider how to keep vRAM from being a detriment in internal discussions.

100k vMotions in Production

I logged into the environment to check some things out and found that we recently broke a rather important milestone.

In our production environment, one of our oldest clusters just broke 100,000 vMotions (or, more accurately, since the hosts aren't on ESXi 4.1 U1 yet, VMotions).

[Screenshot: 100k vMotion summary]

This cluster has weathered the ups and downs of a working vSphere cluster, with hosts going into and out of Maintenance Mode, hardware failures, and general maintenance. We had this cluster overallocated on CPU for a couple of months per our standard policy, though we got no real complaints about performance. We have since updated that policy, and I'm sure the pace of vMotions has slowed somewhat.

Just proof that vMotion, DRS, and the entire vSphere solution are valid and solid. Here's to another 100k vMotions.