Tag Archives: VMware

EUC1453 – Managing from the Middle with Horizon

The Post-PC world is not Mac vs Windows.  It is all about multi-device use, data mobility & anywhere access.  As such, applications and the means of accessing them need to change.

Life for end clients has gotten much more complicated due to multiple devices, multiple types of applications and so on.  As a result, managing the environment behind the scenes is very complex.  The requirements for cost management, ease of use & security still exist.  Those basic back-end necessities are crucial to IT.

How did this happen?  It's the piecemeal approach.  IT had to get a tool to manage the Mac.  Then one for iOS.  Then Android, and so forth.  The same goes for web tools, SaaS tools, mobile apps, etc.

Consumerization of IT really means it has to be easy, or the end user will go somewhere else.  To deliver this, VMware’s approach is to manage from the middle: My Apps, My Files, Native Experience.

The Horizon Suite aims to manage from the middle and be the portal that facilitates access and control across the entire end-user computing space, spanning applications and devices.  By doing this, a catalog can be offered, and file sharing can be offered and controlled.  Central management is also required to offer security policy controls.

From Horizon's viewpoint, the three pillars of management are Identity, Policy & Context.  Identity is who you are.  Context is what you are trying to access, from where, and on what OS.  This is different from before, when the PC was a known entity; now there are many more choices and options.  Policy takes Identity and Context and applies rules across both of them.
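
As a rough illustration, the Identity + Context → Policy idea can be sketched in a few lines.  This is my own sketch, not Horizon's actual policy engine; the field names and rules are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Identity:
    user: str
    group: str          # e.g. "employee", "contractor"

@dataclass
class Context:
    device_os: str      # e.g. "windows", "ios", "android"
    network: str        # e.g. "corporate", "public"

def allow_access(identity: Identity, context: Context) -> bool:
    """Policy: combine who you are (Identity) with how you are
    connecting (Context) to decide access.  Rules are illustrative."""
    # Hypothetical rule: contractors only get access on the corporate network
    if identity.group == "contractor" and context.network != "corporate":
        return False
    # Hypothetical rule: block unmanaged Android devices on public networks
    if context.device_os == "android" and context.network == "public":
        return False
    return True
```

The point is simply that the rule sits in the middle: neither the device nor the app decides, the policy layer does.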

Horizon offers a place to manage, secure, track, deliver and make things functional.  The goal is to encourage end users to opt in instead of forcing things down.  End customers will find a way around IT wherever possible.  Getting them to buy in is the future.

Keynote Day 2 – Herrod and Future of End User Computing

Today’s Keynote is all about the End User Computing experience and the Battle of the Platinum partners.

Branch in a box through the View Rapid Deployment Program.  Take an appliance, add some configuration, and deploy as many workstations as you need.

Mirage from Wanova is being presented as the future of secured solutions and centralized management.  ACE, Local Mode are great solutions for a limited set of end use cases.   Mirage is aimed at handling all the various other end devices out there.  It will offer disaster recovery and centralized management.

After an entertaining canned demo of Mirage, they presented some more advanced demos using tablets and existing OS instances.  Project AppShift is R&D work on a more swipe-style interface for Windows 7, using User Interface Virtualization techniques.  It takes several of Windows' basic interface experiences and makes them tablet friendly, with swiping and copy/paste across the system.

Horizon Suite Administration was announced today, along with a quick demo.  One notable point: Horizon can manage XenApp applications.  Horizon Mobile on iOS wraps applications in a secured workspace, separating applications and controlling them with security policies.  It now allows you to manage end devices cleanly and safely, across multiple devices, from a single interface.

VMworld Challenge – what are partners doing to improve the VM space?  Each partner gets 4 minutes to give their presentation.  Then everyone at VMworld votes, using the mobile app, for who gave the best preso (or is doing the most interesting thing).  The winner's charity gets a sizeable donation from VMware.

What are the partners doing?

Cisco – playing for KaBOOM!, which helps build playgrounds for kids in dense city centers.

TechWiseTV is presented by Cisco to make networking more approachable.  The gist: L.I.S.P.  An interestingly cute little preso around VM mobility and the different means of moving VMs between datacenters given IPv4/IPv6.  LISP is a free-to-use technology from Cisco.

Dell – playing for Girl Scouts of America.

Dell vStart 1000 is a stack that gives you the full Dell solution, from storage to networking to compute, in a rack with a simple management interface.  Fully integrated.

EMC – charity is Wounded Warriors

EMC believes that more things should be built in or easily available.  Directly from the Web Client, Chad Sakac of EMC created a backup job in the vSphere environment.  His demo was live as he clicked through it, not pre-recorded.  Chad ran out of time.  He was first and brave.

HP – playing for Big Brothers and Big Sisters

HP showed how they have integrated their HP Matrix infrastructure orchestration suite with vCloud Director.  It organizes and automates the integration and back-end creation of a provider datacenter.

NetApp – playing for Be The Match, which helps match bone marrow donors.

The demo is the Data ONTAP infrastructure.  How do you demo that, per Dave Hitz?  PeakColo is used as an example of how NetApp helps them out.  Every customer at PeakColo gets their own vSAN, since the underlying infrastructure is shared overall.

NetApp won and VMware is donating $10,000 to Be The Match.

VMworld 2011 – It’s on

After much work and challenge, I have managed to get a VMworld pass, a plane ticket and, through the generosity of some wonderful VMware peeps, a room to crash in.  At this point I'm digging in and trying to find sessions I'd like to attend, and getting packed to go.  The late entry will obviously make getting into some of these great sessions nearly impossible, and I'll just have to make do.

After it’s all said and done, this trip will still be worth it.  I look forward to seeing all the vExperts, Wizards, Stalkers and vEverything folks I’ve met before, and to meeting new ones.  See you there.

VMware still supporting DOS

Just saw the release of the 28 July 2011 patches for ESX(i), and in the list of fixes is this little nugget.

When you use Altiris DOS boot disk and PXE boot a
virtual machine running on ESX 4.1 with flexible
adapter, the virtual machine might fail to start when it
attempts to load MS‐DOS LAN Manager NetBind.

VMware is still supporting DOS long after Microsoft kicked it to the curb.  I know several Fortune 1000 companies that still run DOS applications critical to their business processes.  What other top-notch hypervisors still support DOS?

100k vMotions in Production

I logged into the environment, was checking some things out, and noticed that we recently broke a rather important milestone.

In our production environment, one of our oldest clusters just broke 100,000 vMotions (or, more accurately, VMotions, since they aren’t on ESXi 4.1 U1 yet).

100k vMotion Summary

This cluster has weathered the ups and downs of a working vSphere cluster: hosts going into and out of Maintenance Mode, hardware failures and general maintenance.  For a couple of months we had this cluster over-allocated on CPU per our standard policy, though we got no real complaints about performance.  We have since updated that policy, and I’m sure the rate of vMotions has slowed down somewhat.

Just proof that vMotion, DRS & the entire vSphere solution are valid and solid.  Here’s to another 100k vMotions.

I’m on the list of Virtualization Blogs. Woot!

Eric Siebert, of vSphere-land.com fame, has published the Top 25 blog list for 2010.  Look through the list and you'll find some awesome bloggers.  I take great pride in having met some of them at VMworld and VMUG events, and I look forward to more interactions with all these brilliant thinkers and communicators.

Congrats to all the excellent information sharers out there.   You have all earned it.

On my side, I am super ecstatic that 20 people voted for me.  Thank you to all the readers of this blog.

http://vsphere-land.com/news/top-vmware-blogger-results.html

Lab Manager is dead.. Long Live Lab Manager

VMware has announced a new product, VMware vCloud Director (vCD from here on).  I’ve read the early blog posts and been in some conversations, and I know at this point I just can’t do it justice.  The short view: it virtualizes a datacenter into software and then manages at that layer.  After spending close to 2 hours both taking the vCD install lab (which was fantastic for showing the concepts, by the way) and then talking with a brilliant individual from the vCloud team (Paul from the APAC region), I know I need to chew on vCD a bit longer.  Thankfully, Yellow-Bricks has done an excellent write-up to give you a short intro to this new product offering.

VMware vCloud Director (vCD)

So go read it and come back.  Pretty powerful stuff, even at 1.0.

If you are familiar with the concepts of organizations, VM templates, network fencing and self-service as presented in Lab Manager, you will quickly grasp about 75% of vCD.  The other 25% comes from the Chargeback and vOrchestrator capabilities.  The challenge with Lab Manager is being able to run true production out of it; the management is a bit limiting and constrained by size.  vCD takes all those concepts, adds a few more, and pushes scalability up to service-provider size, where you need to deal with limits like 4095 VLANs and petabytes of storage.

Does this mean that an SMB can’t use vCD?  I don’t believe so.  When I look at this, I easily see Lab Manager as dead now.  Why spend any resources on a less functional, less useful, more limited product when you just need to right-size vCD's licensing for someone who needs a “Lab Manager style test/dev” environment?  vCD can do everything we do in Lab Manager today, while production can sit right next to it in the same management interface.

Lab Manager is Dead.   Long Live Lab Manager.

PCoIP painting issues

Finally got PCoIP with View 4.0.1 up and running.  All excited and thrilled to compare it to RDP.  It was looking good until I fired up IE8 and went to a couple of websites.  Some had issues; some didn’t.

IE8 based Painting Issues

I then launched the vSphere Client, only to find I couldn't see any of the objects in the left-hand pane of the client.

vSphere Painting Issues

This VM is running on ESX 3.5U4 with VM Hardware 4.  The quick fix is to upgrade to VM Hardware 7, which entails all the updates for vSphere 4 & a VMware Tools update.

The other fix addresses two bugs in Hardware version 4:

  1. Completely uninstall the View Agent
  2. Reboot
  3. Reinstall the View Agent (Make sure that the Video Driver version is [...].0032)

If this doesn’t do it, then you are probably having an issue with VRAM.  The fix is to adjust the pool inside View for this machine and set the resolution and number of monitors so they come out to a number divisible by 64.  (Kudos to my Support Wizard for finding this one.)

The magic formula is

((#of monitors * Width of Resolution) * (# of monitors * Height of Resolution) * 4 )/1024 == Multiple of 64

Keep in mind that if you have fewer monitors than the pool is set for, PCoIP handles it gracefully and it doesn’t cause issues.
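
As a quick sanity check, the magic formula can be wrapped in a tiny script.  This is just my own sketch of the arithmetic above, not a VMware tool:

```python
def pcoip_vram_kb(monitors: int, width: int, height: int) -> int:
    """VRAM in KB per the formula above:
    ((# monitors * width) * (# monitors * height) * 4 bytes) / 1024."""
    return (monitors * width) * (monitors * height) * 4 // 1024

def pool_setting_ok(monitors: int, width: int, height: int) -> bool:
    # The pool's resolution/monitor combo should yield a multiple of 64 KB
    return pcoip_vram_kb(monitors, width, height) % 64 == 0

# For example, a single 1280x1024 monitor works out to 5120 KB (a multiple
# of 64), while a single 1920x1200 monitor gives 9000 KB, which is not.
```

Handy for checking a candidate pool setting before touching View.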

Scale Up or Scale Out™

Duncan over at Yellow-Bricks.com brings up this great discussion once again.  Every time a brand-new piece of hardware comes out with more RAM or better, faster CPUs, I have the “Scale Up or Scale Out™” discussion with many people, on average every 9-12 months.  We end up covering all sorts of criteria on what to compare and what is and is not acceptable.

Our conversation usually goes something like this:

The hot new badness just came out and we need to order more hardware.

Awesome.   So how much does this puppy have in it?  RAM?  CPUs?   Slots for HBAs & NICs?

Did you know the new motherboard comes with 4 NICs now, so our standard config can go from 4U to 2U, with gobs of RAM and 6-core CPUs?

Awesome!   *pause*  You know with that much RAM I can put 100 Win7 VDI systems on there.   Umm.. What about when it goes down?

Oh.. Hrm.   That wouldn’t be so good.   ….

That being said, we generally end up breaking it down into a few factors.

  1. What is the current capacity configuration we run with today?
  2. What are our current pain points in CPU, memory, network or storage?
  3. Are there any new architecture changes coming that will impact this design?  Is there a new switch fabric that needs to be plugged into?  Are there changes to storage that need to be addressed?
  4. How much does this new hardware configuration cost?
  5. How will this change affect DRS’s “Chaos Theory”?  The more hosts, the more DRS can do for you.
  6. What is our risk level for the number of eggs in a single basket?

The point is that most corporations’ environments aren’t starting from scratch.  In my case, we have a known configuration today to use as a baseline, and we adjust the environment and design with every hardware order to make it better.

In our most recent order we had this discussion all over again.  This time we needed some architectural changes to prevent false-positive HA events that were happening during strange events about twice a year.  So we are going to a 3-switch connectivity solution to enable network beaconing for NIC-teamed connections.  We started with the following information:

  • Baseline:  HP DL585 G5, 4 sockets w/ quad cores, 128G of RAM, 3 Dual 1G NICs, 2 Emulex LPe11000 HBAs
  • Cluster: 10 Host Clusters with ~30 per Host in Servers and ~65 Workstations per Host in View
  • Pain Points:  CPU starvation, Licensing Issues with 10 Host sized clusters
  • Risk Level:  Politically, we are getting pretty touchy about more than 30 servers going down in a single blow, even if HA brings them back up automatically in under 15 minutes.
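
That risk-level factor really reduces to simple arithmetic: assuming DRS spreads VMs roughly evenly, the blast radius of a single host failure is just total VMs divided by host count.  A quick sketch of my own, using round numbers like the cluster figures above:

```python
import math

def vms_lost_on_host_failure(total_vms: int, hosts: int) -> int:
    """Approximate VMs affected when one host fails, assuming DRS
    balances load roughly evenly across the cluster."""
    return math.ceil(total_vms / hosts)

# Scale up: 300 VMs on 10 big hosts -> ~30 VMs down per host failure
scale_up = vms_lost_on_host_failure(300, 10)

# Scale out: the same 300 VMs on 20 smaller hosts -> ~15 down per failure
scale_out = vms_lost_on_host_failure(300, 20)
```

Same total capacity, half the political pain per failure: that is the whole eggs-in-one-basket argument in three lines.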

We compared 3 different models of newer, faster, badder and more wicked hardware from HP, since the DL585 G5 is not really on the manufacturing line anymore.  We looked at the BL495c G6, DL585 G6, DL385 G6 and DL580 G6.

DL585 G6:

  • Pros
    • Proven and comfortable AMD based stable platform with a good price/performance cost.
    • Gain more CPU resources with the additional 2 cores per socket.  6 core systems.
    • Can build 5 Host clusters to address licensing issues.   Issues with HA support for the density involved.
  • Cons
    • Same Risk Level as before.
  • Push
    • Same architectural solution today with maybe another NIC card to enable the NIC Beaconing

DL580 G5:

  • Pros
    • Fastest individual cores out there.  Lots of good press for Intel.
    • Should get better CPU resources with higher performing CPUs.
    • Can build 5 Host clusters to address licensing issues.   Issues with HA support for the density involved.
  • Cons
    • Significant price premium for the speed.  Easily a 25% premium for 10% better performance.
    • Same Risk Level as before.
  • Push
    • Same architectural solution today with maybe another NIC card to enable the NIC Beaconing.

DL385 G6:

  • Pros
    • Lowers the risk level without lowering performance
    • Best price/performance cost for 6 core systems
    • Has enough slots to move to the newer network layout to enable NIC Beaconing
    • Gain more CPU resources with the additional 2 cores per socket.  6 core systems.
    • Put 64G of RAM into them and build 5 host clusters for licensing problematic applications.
  • Cons
    • More physical hosts to deal with (cabling, power, rack space, cooling, management)

BL495c G6:

  • Pros
    • Blades reduce the amount of cabling
    • Gain more CPU resources with the additional 2 cores per socket.  6 core systems
  • Cons
    • Firmware Management is an issue
    • Increases our Risk Level with more eggs in the same basket unless we get multiple chassis to spread the blades across
    • New solution from the ground up running ESX on blades
    • Not ready to support Flex10 and because of this we have limited NIC capabilities to fit our requirements

We decided to go with the DL385 G6 based on these criteria.  We will dedicate a specific 5-host cluster to problem-child applications with licensing issues.  The RAM size of the hosts will limit the number of VMs we can end up putting in a cluster, which addresses the risk level around VMs per host.  We are still way ahead of the game using VMware, so needing a couple more physicals for all these improvements is not an issue.

In your company or solution, something else may be more appropriate.  The key to an ongoing-improvement mentality is having things you can measure, plus criteria on what to change and why.  There is no one-size-fits-all answer, which is why VMware works so well for so many different folks.  We gain a lot of flexibility in the datacenter without changing how we ultimately end up managing these systems.