Tag Archives: exchange

BCA1360 – Global Enterprise virtualizing Exchange 2010

Exchange 2010 can be virtualized, and this session covers how one global enterprise did it.

Some of the design points that need to be covered are:

  • DAS vs SAN
  • Scale up or Scale out

There is no single right answer for these choices; they depend on how you manage your datacenter and what you like and don't like.

Their layout is:

  • 4 datacenters, 2 DCs in US & 2 in Europe
  • If they lose a datacenter, they can run at around 25% reduced capacity
  • 3 Hosts per datacenter
  • 2 Hosts are active, 1 failover
  • SAN backend with 1TB 7k rpm SATA disks

How did they do it?

  1. VMs are manually balanced across the hosts by role
  2. DRS set to its most conservative level (level 1), so VMs are not VMotioned automatically
  3. No reservations
  4. Dedicated farm rather than the general farm
    • Exchange, all roles, all support systems, etc.

The Exchange 2010 role layout is defined per OS instance, with minimal role sharing.

CAS Role

  • 4GB RAM, 2 vCPUs
  • VMDK based

Hub Role

  • 4GB RAM, 4 vCPUs
  • VMDK based

MBX Role

  • 2000 mailboxes per server
  • 6 vCPU
  • 36GB of RAM
  • 3 NIC (MAPI, Backup & Replication)
  • VMDK for OS & Pagefile
  • RDM for Log & DB disks
  • For the 1TB LUN sizes, use the 8MB VMFS block size format
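To give a feel for what sits behind a mailbox spec like the one above, here is a rough back-of-the-envelope IOPS sizing sketch. The per-mailbox IOPS figure, read/write split, and per-spindle IOPS are illustrative assumptions, not numbers from the session:

```python
import math

def backend_iops(mailboxes, iops_per_mailbox, read_ratio, raid_write_penalty):
    """Translate front-end IOPS into back-end disk IOPS for a given RAID level."""
    frontend = mailboxes * iops_per_mailbox
    reads = frontend * read_ratio
    writes = frontend * (1 - read_ratio)
    return reads + writes * raid_write_penalty

def disks_needed(total_iops, iops_per_disk):
    """Spindles required to service the back-end IOPS."""
    return math.ceil(total_iops / iops_per_disk)

# 2000 mailboxes per server (from the session), assuming 0.1 IOPS per
# mailbox, a 60/40 read/write split, a RAID 6 write penalty of 6, and
# roughly 80 IOPS per 7.2k rpm SATA spindle.
total = backend_iops(2000, 0.10, 0.60, 6)
print(total, disks_needed(total, 80))
```

The RAID 6 write penalty is what makes the SATA backend interesting here: 40% writes at a penalty of 6 triples the back-end load versus the front-end numbers.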

SAN configuration

  • EMC CLARiiON CX4, 1TB 7200rpm SATA disks
  • RAID 6
  • Datastores formatted with the 8MB block size
  • LUNs presented as 500GB and 1TB
  • OS, pagefiles, & misc storage are VMDKs
  • Logfiles & databases are RDMs
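The 8MB block size choice comes from how VMFS-3 ties block size to maximum file size; the cap is actually 512 bytes short of the round number, which is why a full 1TB VMDK needs the 8MB format. A small sketch with the documented VMFS-3 limits:

```python
GIB = 1024 ** 3
VMFS3_MAX_FILE_BYTES = {  # block size in MB -> max file size in bytes
    1: 256 * GIB - 512,
    2: 512 * GIB - 512,
    4: 1024 * GIB - 512,
    8: 2048 * GIB - 512,
}

def min_block_size_mb(file_bytes):
    """Smallest VMFS-3 block size that can hold a file of the given size."""
    for block_mb, max_bytes in sorted(VMFS3_MAX_FILE_BYTES.items()):
        if file_bytes <= max_bytes:
            return block_mb
    raise ValueError("file exceeds the 2TB VMFS-3 limit")

print(min_block_size_mb(1024 * GIB))  # a full 1TB disk needs the 8MB block size
```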

LoadGen Physical versus Virtual

They ran some LoadGen testing with VMware's assistance, and the performance numbers came in under the thresholds Microsoft states are required, in most cases significantly under.

Lessons Learned:

As load was added and the environment grew, backups and disk contention started to become an issue; the symptom was dropped client connections.  Moving the backups to the passive database copies addressed most of the concern.

When doing the migrations, take breaks between each batch to iron out any issues.  They found that pockets of users had unique problems and needed time to work out the gotchas.

Database sizes introduce issues around backup, replication, etc.  Make sure the sizes you choose are manageable for the demands of your environment.

One interesting discussion: Hyper-Threading is not supported by Microsoft for production Exchange, because it complicates their performance sizing.  VMware can run either way, so be sure to follow the Microsoft standards at the VM level.

Memory is a big question.

Storage: the main point is to make sure you have appropriate IOPS capability behind the scenes.  The other is that if you set up VMDK files, they should be eagerZeroedThick.  If you check the box to enable FT during creation, the VMDK is made eagerZeroedThick automatically.  Otherwise, this should be done while the machine is powered off by running vmkfstools from the command line.

16 months later…

  • Success doing VMotions and DAG failovers
  • Backups are running lights out
  • Will add more hosts to expand the environment
  • Pain Points:
    • Service Desk adoption of new processes
    • Integration with in-house legacy tools

After all is said and done this has done quite a bit for the company.

  1. Datacenter savings
  2. TCO is down and has been passed on to the business
  3. Much greater flexibility
  4. Scale out or Scale up very quickly
  5. Lower Administrative overhead so far
  6. More options for disaster recovery and scenarios

Virtualizing Exchange 2010 is possible.

EA7849 – Exchange Server 2010 on vSphere

Hanging out checking out some information on Exchange and decided to hit this session.   My company is looking at upgrading and since we are going to Exchange 2010, I’d like to get us virtual on vSphere if we can make it happen.   Alex Fontana, our presenter, is a Microsoft Technical Specialist for VMware.

The trend has been that clients are pushing to virtualize Tier-1 apps such as Exchange.   At VMworld 2007, Dell presented an Exchange 2003 performance study.   VMworld 2008 introduced the SVVP program along with Exchange 2007 performance white papers.  Along with the early adopters pushing the envelope, Exchange has improved its approach to disk access in every release: it requires fewer and fewer IOPS each release while still providing acceptable performance.

Over the years ESX has been improving, with less overhead versus native performance in every release: from ESX 2, which could carry anywhere from 30-60% overhead, to ESX 4, which is under 7%.  This, along with better hardware generations every 18 months, has delivered even more performance.

This is backed by some performance tests.   In general the virtual machines have been within 5% of the physical in a scale-up test.   In a private vSphere cloud spread across the US, enabling DRS brought about an 18% overall improvement in system performance versus not enabling DRS on the cluster containing Exchange.

Some of the best practices mentioned:

  • Go with a basic 1:1 ratio of vCPUs to pCPUs to start with.  Scale out after monitoring shows performance is acceptable.  Basically, avoid oversubscription if possible.
  • Don't overcommit memory until steady state is stable and available RAM is well understood
  • Spread the heavy I/O systems across several LUNs
  • Use eagerZeroedThick VMDK files (at creation time, select the option to enable FT in the vSphere 4.x GUI)
  • RDMs are not any better than VMFS; VMFS performance is typically a little faster.  VMFS can't be used for cluster quorum disks, though.
  • Use the VMXNET3 driver: highly optimized, with lower CPU usage and TCP offload support
  • Note:  VMware does not support VMotion/DRS for Microsoft Cluster nodes.   Cold migration does work fine, though.

Exchange has a variety of requirements matrices covering each Exchange 2010 server role.  As long as the requirements matrix is followed for each role, the VMs should be scaled properly.   Part of those requirements is a discussion around megacycles; you need to calculate them to scale properly.  A key note is that Mailbox roles shouldn't go above 70% utilization.   The recommendation is to use the Exchange 2010 Mailbox Server Role Requirements Calculator, especially when database availability groups (DAGs) are in play.
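The megacycle approach above can be sketched in a few lines. The per-mailbox megacycle figure and per-core rating here are illustrative assumptions; the real values come from Microsoft's user-profile tables and your actual hardware:

```python
import math

def required_megacycles(mailboxes, megacycles_per_mailbox, target_utilization=0.70):
    """Megacycles the mailbox server must supply, honoring the 70% ceiling."""
    return mailboxes * megacycles_per_mailbox / target_utilization

def vcpus_needed(total_megacycles, megacycles_per_core):
    """Cores (vCPUs) required to deliver the megacycle budget."""
    return math.ceil(total_megacycles / megacycles_per_core)

# 2000 mailboxes (from the session), assuming 2 megacycles per mailbox on
# cores rated at roughly 3300 megacycles each.
need = required_megacycles(2000, 2.0)
print(round(need), vcpus_needed(need, 3300))
```

The point of dividing by 0.70 is that the megacycle budget is inflated so the server never has to run past the 70% utilization ceiling.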

As you design your Exchange 2010 environment, keep in mind the limitations in the vSphere Configuration Maximums.   One additional guideline: keep DAG databases under 1TB apiece to stay under the 2TB limit of each VMFS volume.  Along with that, be sure to take passive databases into account in DAG setups.
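The passive-copy point is easy to underestimate: every copy of a database consumes its full size on whichever host holds it, so provisioned storage scales with the copy count. A quick sketch with illustrative sizes:

```python
def dag_storage_gb(db_size_gb, databases, copies_per_db):
    """Total storage to provision across the DAG, active plus passive copies."""
    return db_size_gb * databases * copies_per_db

def fits_vmfs_volume(db_size_gb, vmfs_limit_gb=2048):
    """Session guidance: keep each database under the 2TB VMFS volume cap."""
    return db_size_gb < vmfs_limit_gb

# 10 databases of 1TB each with 3 copies (1 active + 2 passive).
print(dag_storage_gb(1024, 10, 3), fits_vmfs_volume(1024))
```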

There is a large set of good slides covering various VMware products and how they work with Microsoft clustering and DAGs: things like SRM functionality, vMotion, and HA.   There are definitely more details in the slides than I will cover here.

Exchange 2010 is nicely VSS friendly, and as such array-based backups can be taken quickly and painlessly.   It can easily have a backup window of around 10 seconds during which there is any impact on the Exchange systems.

At the end of the day we need to answer: what level of availability do we need?   What are the SLAs?   What level of corruption do we need to be concerned about?   Can recovery be manual, or does it need to be automated, and why?   Can we use VMware features, or do we need to use the Exchange features?