Hiring a non-tech person as CTO

Citrix has gone and hired a new CTO

From where I sit, this is another business person at Citrix in charge of the technical direction, not someone with a strong grounding in engineering and technology.

Maybe I’m judging MBAs harshly, though they are bred and trained to aim for sales and revenue. Engineering backgrounds aim for better products and solutions. There is nothing wrong with either, and every software business needs both. For me, it just tells you the focus Citrix has at the top: it isn’t about the tech, it’s about the sales first and foremost.

What are your thoughts on a Chief Technical Officer without a scientific or engineering background?

VMworld 2009 Trends and Summary

Things have finally quieted down from VMworld 2009, so here are some of the trends and summaries I saw. Some of the trends are interesting, some not so much.

Twitter
Twitter
Twitter really got going for the first time last year with following @vmworld. This year the hashtag was all the rage: as long as you followed #vmworld you could see everything folks were talking about. Two fun hashtags this time around were:

  • The #vCloud - Take a Drink game.
  • #VMworld3Word – 3 words for folks at VMworld.

I fully expect this to only grow next year.

The Virtual Datacenters @ VMworld
Pretty impressive seeing the big one riding down the main escalator at the Moscone Center. Watch as they build it: 776 VMware ESX servers, 37 terabytes of RAM, 6,208 cores and 348 TB of shared storage. Wow. Then the talk was about how performance wasn’t there initially, as the various engineers worked hard at resolving it; things were running well by Tuesday night/Wednesday morning. The one thing many of us talked about was how the big datacenter just looked lopsided: there were three server-style racks for each storage-style rack. The ratio just looked odd to most of us.
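
Doing the quick math on those numbers (a back-of-the-envelope sketch using only the figures above), the build-out works out to a very uniform host spec:

```python
# Back-of-the-envelope math on the big virtual datacenter's published specs.
servers = 776
ram_tb = 37
cores = 6208

print(f"RAM per server:   {ram_tb * 1024 / servers:.0f} GB")  # ~49 GB per host
print(f"Cores per server: {cores / servers:.0f}")             # exactly 8 cores per host
```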

Next year I fully expect to see one single big datacenter instead of small, medium and large ones. I’m still hoping to get some answers from folks on my initial blog entry.

Booth Babes
A lot more booth babes this year. I’m not terribly excited about this. Sure, the eye candy is nice, but I’m there to talk with engineers, developers and product managers after wading my way through some marketing folks. In general, if a show is all about the marketing/booth babes (and guys), then the vendor floor has next to no value for me.

Keynote Lukewarm
Neither keynote this year seemed to really talk about all the cool stuff coming. Not sure if this is a new leadership approach or just not much going on this year with the financial slowdown. The vCloud Express stuff was nice, though I expected to see more “You gotta check out the PCoIP stuff we are doing” and “This is mega cool”.

Vendor / ISV issues
Lots of chats centered on a general feeling of hostility from VMware in ISV discussions. Some talk was about the rules limiting what Citrix/Microsoft could demo and show at the conference. (Most of the talk was that they deserved it for the stuff pulled last year; some was a letdown that we couldn’t see what they were doing.) Some of the talk was about interactions with ESXi and what was/wasn’t allowed to compete with VMware’s own offerings. The quote I heard that best describes it was “Is VMware turning into the Microsoft machine now?”

Less Swag, Less People
My guess is the aim was 15k+ people and about 13k came, versus last year’s 14k limit in Vegas. There was less swag, which wasn’t surprising given the financial changes of the past year.

iPhone
I have never seen more iPhones in a single place than in San Fran, where it was easily 1 in 10. Then when I went into the Moscone Center for VMworld it was easily 1 in 5. Crazy nuts what people were doing with their iPhones. I was introduced to a good four dozen apps I’d never heard of, and I now have a solid set of reasons to get an iPhone.

The other fun was the iPhone as a conversation starter: “How many bars do you have?” Depending on the day and time you’d have anywhere from 0 to 3. The lucky person was the one who could actually hold a phone conversation in or near the Moscone Center on their iPhone. Service from AT&T was less than ideal.

Better Bag
The VMworld giveaway bag went back to the style of a true backpack instead of the messenger style. Personally I like this, as my VMworld 2006 bag of the same style is getting a little well used by now.

Live Blogging
This is a skill I am not sure I have, though I’ve learned quite a bit by watching and reading how others live blog properly. If I do this again next year I will need to read up on the different successful ways to do this kind of blogging. I tried two different methods with varying success, in my book. For those of you who read through some of my keynote live-blogging posts: I apologize and promise to do better.

Overall, a good conference again, with quality time spent with people from VMware, NetApp, Cisco, HP, newScale and all the other individuals I talked with. I look forward to next year. See you all there.

Business Objects is Virtualization/MultiCore Stupid

Recently I have been involved in internal discussions on what it will take to get Business Objects onto a virtual machine. The main talk has been around potentially retiring an equivalent product and moving entirely over to Business Objects. Then we got pricing for Business Objects.

The standard piece of hardware today is pretty hefty, even a small 1U or 2U system. They come with multiple cores; you have to do a special order to get anything less than dual/quad core today. An enterprise doesn’t order single sockets either. Kinda silly to save $500 when you can have 2x the power and be able to reuse the system for other purposes in the future.

They price, and only price, by physical cores: not just the cores in the system Business Objects runs on, but all the cores in every system their software could potentially run on.

Business Objects is blowing a potential sale, since we only need something like 6-8 cores worth of power today, and making these systems into VMs is ideal. It isn’t like enterprises are out to “screw” vendors. Yes, we all want a deal, but enterprises just want to pay for what they use. If they would just license the use of ~8 CPUs (virtual, physical or core) and let us make these VMs, they win.

Even making these physical is a joke: we have to disable cores and sockets to stay legal.
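
To put hypothetical numbers on it (the per-core price and cluster sizes below are invented purely for illustration, not Business Objects’ actual price list), here is why per-physical-core licensing explodes on a virtualization cluster:

```python
# Hypothetical illustration of per-physical-core licensing on a VMware cluster.
# All prices and counts here are made up for the example, not real BO figures.
price_per_core = 10_000   # invented list price per physical core
cores_needed = 8          # what the workload actually requires
hosts_in_cluster = 10     # hosts the VM could potentially run on
cores_per_host = 8        # dual-socket quad-core, typical in 2009

fair_cost = cores_needed * price_per_core
license_every_core = hosts_in_cluster * cores_per_host * price_per_core

print(f"Pay for what you use:        ${fair_cost:,}")          # $80,000
print(f"Pay for every possible core: ${license_every_core:,}") # $800,000
```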

So.. BO is blowing it. They need to grow up and stop making mainframes look cheap with their licensing policies.

VMworld 2009 – Day 2 Wrapup

Day two at VMworld ended up being quite a bit more exciting than yesterday. The keynote by Steve Herrod was much more what I expect from a keynote: he covered some of the “cool” stuff coming down the pipe in both the short term and the longer term. The PCoIP demo, showing Google Earth zooming up and down while connected to a machine in Portland, OR from the Moscone Center, rocked. I want that for when I’m sitting on the hotel room’s “blazing fast” connection trying to do something useful on one of my machines at home, instead of using RDP over SSL.

I went to the IO DRS Tech Preview and got the same excitement I’ve had in previous years when you know you’re seeing something innovative. Several of the other sessions I hit were really partner-style presentations that did not say much. So a 25/75 day for sessions, which is pretty good.

Now that the Self Service Labs were finally working properly, I gave the vCenter Orchestrator product offering a shot. The lab was responsive and well documented. It was pretty nice and really hinted at the power this system can offer for datacenter automation. The theory is that this is free with vSphere 4, so I’m going to have to look into that and find out.

During my open time during the day I had good meetings with VMware employees to discuss some of the vStorage & vCloud directions, HP folks around OpenView and virtualization tools, AMD & Intel on their functionality futures, and Hitachi around their multipathing technology for VMware (still no roadmap).

The party was fun. Foreigner still knows how to rock, and it turns out I can actually climb rock walls. The nice thing about the party this year is it was right at the Moscone Center.

A very productive and long day.

EA3196 – Virtualizing BlackBerry Enterprise on VMware

Once again.. another session I didn’t sign up for and had zero issues getting into.

To start off, RIM & VMware have been working together for 2 years, and BES is officially supported on VMware. Together RIM & VMware have done numerous successful engagements running BES on VMware. The interesting thing is that RIM has been running its own BES on VMware for over 3 years now.

Today the BES best practice is no more than 1k users per server, and BES is not very multi-core friendly. It is not cluster aware, nor does it have any HA built in. The new 5.0 version of BES is coming with some HA via replication at the application layer. One thing seen in various engagements is that if you put the BES servers on the same VMware hosts as virtualized Exchange, there are noticeable performance improvements.

The support options for BES clearly state that it is supported on VMware ESX.

One of the big reasons to virtualize BES is that, since it cannot use multiple cores effectively, the big 32-core boxes of today can only be fractionally used. By virtualizing, BES can get significant consolidation. Once virtualized, BES also gets all the advantages of running virtual, such as test/dev deployments, server consolidation, HA, etc. Things that are well known and talked about already.

Template use is encouraged for rapid BES deployments and can potentially save quite a bit of time; the gotcha is just what your company policies and rules allow. This presentation is really trying to show how to use VMware/virtualization with BES for change management improvements, server maintenance, HA, component failures and other base vSphere technologies. VMware is looking toward using Fault Tolerance for their own BES servers.

BES is often not considered Tier 1 for DR events, even though email is often the biggest thing needed after a DR event to get communications going again. The reason is generally the complexity and cost of DR.

The performance testing with the Alliance Team from VMware has been done successfully numerous times over the past couple of years, at both RIM & VMware offices. The main goal of these efforts was to generate white papers and reference architectures that are known to work. The testing used Exchange LoadGen and the PERK load driver (the BES testing driver). Part of this is how to scale out with more VMs, as the scale-up limits are known.

The hardware was 8 CPUs (Intel E5450 3GHz), 16 GB RAM and a NetApp FAS3020, on vSphere 4 & BES 4.1.6. In the 2k-user test with 2 Exchange systems, the results were 23% CPU utilization on 2-vCPU BES VMs. Latency numbers were under 10 ms. Nothing majorly wrong was seen in the testing metrics. Going from ESX 3.5 to vSphere 4 gave a 10-15% CPU reduction in the same workload tests. Adding in hardware assist for memory saw what looks like another 3-5% reduction in CPU usage. In their high-load testing, a VMotion causes a small hiccup of about a 10% increase in CPU utilization during the cut-over period. This is well within the capacity available on the host and in the guest OS.

Their recommendation is to do no more than 2k users on a 2-vCPU VM; if you need more, add more VMs. BES scales and performs well in this scale-out architecture. Be sure you give the storage the number of spindles needed. The standard statement when talking about virtualization management.
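
As a quick sizing sketch from that rule of thumb (this is just my arithmetic on the session’s guidance, not an official RIM sizing tool):

```python
import math

# Rough BES-on-VMware sizing from the session's rule of thumb:
# no more than 2,000 users per 2-vCPU VM; scale out with more VMs.
def bes_vms_needed(users, users_per_vm=2000, vcpus_per_vm=2):
    vms = math.ceil(users / users_per_vm)
    return vms, vms * vcpus_per_vm

vms, vcpus = bes_vms_needed(6500)             # e.g. VMware's own 6,500 users
print(f"{vms} BES VMs, {vcpus} vCPUs total")  # 4 VMs, 8 vCPUs
```

Which lines up with the 4 production BES VMs in VMware’s own deployment described below.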

The presenter then went into a couple of reference architecture designs: Small Business & Enterprise, in a couple of different varieties.

BES @ VMware: 3 physical locations, 6,500 Exchange users. 1k of them have 5 GB mailboxes and the default for the rest is 2 GB. BES has become pretty common. They run Exchange 2007 & Windows 2003 for AD and the guest OS. Looks fairly straightforward.

4 production BES VMs, 1 standby BES VM, 1 attachment BES VM and 1 dedicated BES database VM, all on 7 physical servers with 40 additional VM workloads on the cluster.

TA3461 – IO DRS: Tech Preview for VM Performance Isolation

This is a very new area of research at VMware, started only about 2 years ago. Since this is a Tech Preview, it has no roadmap for when it will be available.

The Problem:

Many different workloads hit the same set of disks/arrays/spindles. Low-priority processes that run ad hoc or at odd times will impact higher-priority systems. What you want is for the low-priority VM to get less performance than the higher-priority systems. The question is how you can do this.

A solution:  Resource Controls

Shares are assigned based on disk performance, just like the CPU/memory shares of the original ESX days. A higher share total for a host gets higher priority on that shared VMFS volume.

To configure this you’d go into the VM and set the shares. Fairly straightforward. The setting is shares, and then the limiting factor is IOPS. Interesting idea.
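
Conceptually the share math works like the old CPU/memory shares. Here is a minimal sketch of the idea (my own Python illustration, not VMware’s implementation): each VM gets a slice of the datastore’s IOPS in proportion to its shares, capped by its configured limit.

```python
# Minimal sketch of proportional-share IO allocation (not VMware's actual code).
# Each VM gets datastore IOPS proportional to its shares, capped by its limit.
def allocate_iops(vms, datastore_iops):
    total_shares = sum(vm["shares"] for vm in vms)
    alloc = {}
    for vm in vms:
        fair_share = datastore_iops * vm["shares"] / total_shares
        alloc[vm["name"]] = min(fair_share, vm.get("limit", float("inf")))
    return alloc

vms = [
    {"name": "sql-prod", "shares": 4000},
    {"name": "batch-lowprio", "shares": 1000, "limit": 500},
]
print(allocate_iops(vms, 3000))  # {'sql-prod': 2400.0, 'batch-lowprio': 500}
```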

The first case study covers two separate hosts running the same workload levels, comparing IO DRS on and off, and saw a pretty significant difference in IOPS & latency measures. With it turned off, both VMs ran at 20 ms & 1500 IOPS. With it on, the latency changed to 16 ms and 31 ms, with a similar spread for IOPS. Nice..

Case study two is a more serious one, with SQL Server running. The shares were 4:1, but the performance ratios did not match that. What they are seeing is that load timing matters significantly. Overall throughput works out right, though the loads make a big difference.

The demo showed changing the shares and the IOPS limit on the fly, and the IOMeter machines adjusted immediately. When limiting the IOPS, the other systems picked up the slack and got more performance.

After showing the demo, the presenters asked if anyone in the packed room (and I do mean PACKED) would find value in this. Everyone immediately raised their hands.

The technical approach is first to detect congestion: if latency is above a threshold, trigger IO DRS. If it isn’t borked, don’t fix it. IO DRS works by controlling the IOs issued per host. The sum of the shares of the VMs on a host with IO DRS enabled is compared with the other hosts to determine share priority. So first the host is picked, then the VMs’ shares on that host are prioritized, and then it goes back to the host level. The share control goes across all hosts using that same VMFS volume.

IO slots are filled based on the shares on each host, and there are only so many IO slots per host. This is how the IOs are controlled during congestion.
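
A rough sketch of that two-level mechanism as I understood it (entirely my reconstruction, not VMware code): detect congestion with a latency threshold, then hand out each host’s IO slots in proportion to the total shares of its VMs.

```python
# My reconstruction of the two-level IO DRS scheme from the session (not
# VMware's implementation): a latency threshold triggers throttling, then
# per-host IO slots are assigned in proportion to each host's total shares.
LATENCY_THRESHOLD_MS = 30

def assign_io_slots(hosts, total_slots, observed_latency_ms):
    # "If it isn't borked don't fix it": below the threshold, no throttling.
    if observed_latency_ms < LATENCY_THRESHOLD_MS:
        return {host["name"]: total_slots for host in hosts}
    cluster_shares = sum(sum(vm["shares"] for vm in h["vms"]) for h in hosts)
    return {
        h["name"]: total_slots * sum(vm["shares"] for vm in h["vms"]) // cluster_shares
        for h in hosts
    }

hosts = [
    {"name": "esx01", "vms": [{"shares": 2000}, {"shares": 2000}]},
    {"name": "esx02", "vms": [{"shares": 1000}]},
]
print(assign_io_slots(hosts, 100, observed_latency_ms=45))  # {'esx01': 80, 'esx02': 20}
```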

There are two major performance metrics in the storage industry: bandwidth (MB/s) and throughput (IOPS). Each has its pros and cons. Bandwidth matters for workloads with large IO sizes, and IOPS for workloads with lots of small IOs. IO DRS controls the array queue among the VMs. If a VM issues lots of small IOs, it can keep doing so and get high IOPS; conversely, if it issues large IOs, it will get high bandwidth and low IOPS under the same share control system.
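
The relationship between the two is just bandwidth = IOPS × IO size, which is why the same queue control naturally gives small-IO workloads high IOPS and large-IO workloads high bandwidth:

```python
# Bandwidth (MB/s) = IOPS x IO size. Same IOPS, very different bandwidth.
def bandwidth_mbps(iops, io_size_kb):
    return iops * io_size_kb / 1024

print(bandwidth_mbps(1500, 4))    # 4 KB IOs:   ~5.9 MB/s (IOPS-bound)
print(bandwidth_mbps(1500, 256))  # 256 KB IOs: 375.0 MB/s (bandwidth-bound)
```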

Case studies and test runs have shown that device-level latency stays the same as workloads change. Some tests have shown that with IO DRS, IOPS can actually go up, simply due to the workloads involved. Controlling the IOs lets everything keep working, though depending on the workload a VM can accomplish more.

The key understanding is that IO DRS really helps when there is congestion. When things are good and latency is not high enough to trigger the system, the shares are not used. If a high-share system is not using its IO slots, they are reassigned to other VMs in the cluster.

The overall gain is the ability to do performance isolation among VMs based on disk IO.

In the future they are looking to tie this into more vStorage APIs, VMotion and Storage VMotion, and potentially IOPS reservations.

Rocking cool and can’t wait for this to come out.

VMworld 2009 – Keynote P5

To expand and handle the next layers of virtualization:

vSphere Control:

AppSpeed is the “finger of blame” now. Instead of the network always getting the finger, now AppSpeed can point it at someone else.

vApps are the containers of the future for applications, be they standalone or multi-tier. The idea is that a vApp has a variety of attributes/metadata such as availability, RTOs for DR, max latency, etc. This info travels with the vApp.

VMsafe APIs: These give control of security and compliance. The nice thing is that this too is data tied to a vApp via the attributes/metadata, with the various vendors such as Trend/McAfee/Symantec/RSA plugging in. An example would be a vApp declaring that it needs these firewall rules and capabilities.

vCenter ConfigControl:   The demo showed that ConfigControl really has

vSphere Choice:

LabManager is the token self service portal today.

VMworld today: 37,248 virtual machines.

  • If physical: 25 megawatts and 3 football fields of space.
  • With VMware virtualization: down to 776 physical servers running 540 kilowatts.
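
The consolidation math on those keynote figures, taken at face value:

```python
# Consolidation math from the keynote's numbers.
vms_shown = 37_248
physical_servers = 776
power_physical_kw = 25_000  # 25 megawatts if every machine were physical
power_virtual_kw = 540

print(f"Consolidation ratio: {vms_shown / physical_servers:.0f}:1")           # 48:1
print(f"Power reduction:     {1 - power_virtual_kw / power_physical_kw:.1%}") # 97.8%
```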

vCloud

The priority is the internal cloud. Next is bringing internal datacenter trust and capabilities to the external clouds. The third innovation is what you can do once you have these two pieces: how they interact and connect.

Today, Site Recovery Manager is the first step into the connectivity space: when, how and what needs to take place to fail over from one datacenter to another.

Long Distance VMotion: The challenges are moving the VM’s memory, disk consistency/syncing, and the VM’s network identity/connections. Example use cases:

  • Follow the Sun/Moon approaches: moving compute to wherever it is currently night and power is cheaper.
  • Disaster Datacenter Avoidance: a hurricane is coming, so move the datacenter somewhere out of its path.

Cisco does this by spanning Layer 2 across both campuses, up to 300 km apart.
F5 uses its iSession technology to move things around through a globally based load-balancer system.

Interoperability:  vCloud API

vSphere plugins with your hosting provider maintain the Single Pane of Glass.

Open Standards. The end goal is that it will work regardless of where you go or what hypervisor is used, giving a good ecosystem and selection for end clients.

vApps: Automation for the app stacks. SpringSource helps go down this path. Much discussion around splitting up infrastructure, applications and platform, separating these to create well-defined interaction points.

The SpringSource demo shows some of the process capabilities to control deployment and put some controls around it, with things like CloudFoundry. For those of us here, the contest is on: http://www.code2cloud.com for a backstage pass to see Foreigner. (Oh wait.. maybe I shouldn’t post that)

Till the next time.   I’m off to IO DRS Tech Preview.

VMworld 2009 – Keynote P4

vSphere is the basis of all the improvements and technology over the years. Call it the Software Mainframe (for those of you over 40) or the Cloud (for the under-40 crowd); VMware decided the best name is The Giant Computer. The reason this all works is VMotion. It is the basis of everything that has happened.

The reasons for the success of VMotion are Maturity, Breadth and Automated Use.

Maturity of VMotion – Estimates (fun or not) put it at around 360 million VMotions worldwide since VMotion started, about 2 VMotions a second. VMotion is 6 years old. (Wow, I feel old.)
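
Sanity-checking that estimate with plain arithmetic:

```python
# Does 360 million VMotions over 6 years really work out to ~2 per second?
seconds_in_six_years = 6 * 365.25 * 24 * 3600
print(360e6 / seconds_in_six_years)  # ~1.9 VMotions per second, close enough
```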

Breadth of VMotion – Storage & network VMotioning, across protocols and soon across datacenters. High-performance computing systems are starting to look at using VMware.

Automation of VMotion – DRS is the initial version that made this work. DRS has been shown to average 96% of the performance of a perfectly hand-tuned cluster. The future will include IO DRS shares and configuration based on IOPS. DPM allows for power optimization across the datacenter, or, as has been said, a “server defrag” capability.

vSphere is still driving ahead.. more next post.

VMworld 2009 – Keynote P3

View also includes the mobile technology discussion. Mobile technology is a longer-term effort. Visa Product Development is up on the stage. He sees this space as a huge area of innovation going forward. Current mobile development is significantly complicated; easing development is extremely interesting for Visa.

The Visa demo uses Windows Mobile on a developer version of a phone (kinda big) running an Atom CPU.   The presentation shows some alerting from Visa transactions and finding local ATMs.   The impressive zing is that the Visa demo application is actually an Android app running on the Atom CPU.   Wow.  

Next..