EMC Infrastructure for Virtual Desktops Enabled by EMC Celerra Unified Storage (NFS), VMware vSphere 4, and Citrix XenDesktop 4

Proven Solution Guide


Copyright © 2010 EMC Corporation. All rights reserved. Published September 2010.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated.

After saving the file, restart either the DDC or the Citrix Pool Management Service for the change to take effect.

Virtual desktop idle pool settings

The DDC manages the number of idle virtual desktops based on the time of day, and automatically optimizes the idle pool settings in a desktop group based on the number of virtual desktops in the group. These default idle pool settings should be adjusted according to customer requirements so that virtual machines are powered on in advance, avoiding a boot storm scenario. During the validation testing, the idle desktop count was set to match the number of desktops in the group to ensure that all desktops are powered on in a steady state and ready for client connections immediately. To change the idle pool settings after a desktop group is created:
1. Navigate to Start > All Programs > Citrix > Management Consoles > Delivery Services Console on the DDC.
2. In the left pane, navigate to Citrix Resources > Desktop Delivery Controller > [XenDesktopFarmName] > Desktop Groups.
3. Right-click the desktop group name and select Properties.
4. Select Idle Pool Settings in the left pane under the Advanced option.
5. In the Idle Desktop Count section in the right pane, modify the number of desktops to be powered on during Business hours, Peak time, and Out of hours. You can optionally redefine business days and hours per your business requirements.
6. Click OK to save the settings and close the window.


Task 3: Install and configure Provisioning Server

Install Provisioning Server

Unlike the Citrix Desktop Delivery Controller installation, the Provisioning Server installation is identical for the first Provisioning Server in the desktop farm and for any additional Provisioning Servers in the farm. The Provisioning Services Configuration Wizard is run after the Provisioning Services software is installed. The configuration options differ between the first and secondary (or additional) Provisioning Servers. The following steps highlight the configuration wizard options customized for this solution.

Provisioning Server – DHCP services

Since the DHCP services run on a dedicated DHCP server, select The service that runs on another computer for DHCP services when configuring the DHCP services in the configuration wizard.


Provisioning Server – PXE services

The Provisioning Server is not used as a PXE server because DHCP services are hosted elsewhere. Select The service that runs on another computer for PXE services when configuring the PXE services in the configuration wizard.


Provisioning Server – Farm configuration

In the Farm Configuration page of the Configuration Wizard, select Create farm to configure the first Provisioning Server, or Join existing farm to configure additional Provisioning Servers. With either option, the wizard prompts for a SQL Server and its instance. For the first Provisioning Server, these inputs are used to create a database that stores the farm configuration.

3. In the proxy.xml file, locate the entry for the /sdk server namespace:

   <_type>vim.ProxyService.LocalServiceSpec</_type>
   <accessMode>httpAndHttps</accessMode>
   <port>8085</port>
   <serverNamespace>/sdk</serverNamespace>

4. Change accessMode to httpAndHttps. Alternatively, set accessMode to httpOnly to disable HTTPS. Save the file and restart the vmware-hostd process using the following command. You may have to reboot the vCenter Server if the SDK is inaccessible after restarting the process:

   service mgmt-vmware restart
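After the change, it is worth confirming that the SDK namespace actually answers over plain HTTP before running the XenDesktop Setup Wizard. The following is a minimal sketch of such a check; the vCenter hostname is hypothetical, and the WSDL path /sdk/vimService.wsdl is assumed to be published, as is typical for vSphere 4.

```python
# Minimal sketch: verify the vCenter SDK answers over plain HTTP after
# accessMode is set to httpAndHttps. The hostname is hypothetical and
# the WSDL path is an assumption.
import urllib.request

VCENTER = "vcenter.example.local"  # hypothetical hostname
URL = f"http://{VCENTER}/sdk/vimService.wsdl"

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        # Any successful response means the /sdk namespace is served over HTTP.
        print(f"HTTP {resp.getcode()}: SDK reachable over plain HTTP")
except OSError as exc:
    print(f"SDK not reachable over HTTP: {exc}")
```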


XenDesktop Setup Wizard

The XenDesktop Setup Wizard installed on the Provisioning Server simplifies virtual desktop deployment and can rapidly provision a large number of desktops. To run this wizard, complete the following steps:
1. Select Start > All Programs > Citrix > Administration Tools > XenDesktop Setup Wizard on the Provisioning Server. The Welcome to XenDesktop Setup Wizard page appears.

2. Click Next. The Desktop Farm page appears, listing the available farms.
3. Select the relevant farm name from the Desktop farm list.

4. Click Next. Before proceeding to the Hosting Infrastructure page, complete the steps described in Appropriate access to vCenter SDK on page 46.
5. On the Hosting Infrastructure page, select VMware virtualization as the hosting infrastructure. Type the URL of the vCenter Server SDK and click Next.
Note: You will be prompted to specify the user credentials for the VMware vCenter Server.


6. On the Virtual Machine Template page, select the virtual machine template that you want to use as a template for the virtual desktops. These virtual machine templates are retrieved from the vCenter Server.

7. Click Next. The Virtual Disk (vDisk) page appears.
8. Select the vDisk from which the virtual desktops will be created. Only vDisks in standard mode appear. The list of existing device collections contains only the device collections that belong to the same site as the vDisk.


9. Click Next. The Virtual Desktops page appears.
10. Enter the following and click Next:
• The number of desktops to create.
• The common name to use for all the desktops.
• The start number from which to enumerate the newly created desktops. This sequence number is appended to the common name to form each virtual desktop name (see the sketch after this procedure).
11. The Organizational Unit Location page appears. Select the OU to which the desktops will be added and click Next.


12. The Desktop Group page appears. Specify the Desktop Delivery Services group to which to add the desktops and click Next.

13. The Desktop Creation page appears. Ensure that the details are correct and then click Next to create the desktops.


14. The Summary page appears.
Note: Clicking Next starts an irreversible process of creating the desktops, which also includes creating computer objects in Active Directory.
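For planning purposes, the naming convention from step 10 is easy to model: the wizard appends an incrementing sequence number, beginning at the start number, to the common name. The sketch below illustrates the convention; the common name, padding width, and start number are illustrative assumptions rather than values mandated by the wizard.

```python
# Illustrative model of the wizard's naming convention (step 10): an
# incrementing sequence number is appended to the common name. The
# padding width and example values are assumptions.
def desktop_names(common_name, start, count, width=3):
    """Generate desktop names by appending a zero-padded sequence."""
    return [f"{common_name}{n:0{width}d}" for n in range(start, start + count)]

print(desktop_names("VD", 1, 5))
# ['VD001', 'VD002', 'VD003', 'VD004', 'VD005']
```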


Chapter 6: Testing and Validation

Overview

Introduction

This solution explores several configurations that can be used to implement a 1,000-user Citrix XenDesktop 4 environment on EMC Celerra.

Contents

This section contains the following topics:

Topic                                              See Page
Overview                                           52
Testing overview                                   52
Testing tools                                      52
Test results                                       55
Result analysis of Desktop Delivery Controller     56
Result analysis of Provisioning Server             59
Result analysis of the vCenter Server              62
Result analysis of SQL Server                      64
Result analysis of ESX servers                     67
Result analysis of Celerra unified storage         70
Login storm scenario                               78
Test summary                                       80

Testing overview

Introduction

This chapter provides a summary and characterization of the tests performed to validate the solution. The goal of the testing was to characterize the end-to-end solution and component subsystem response under reasonable load for Citrix XenDesktop 4 with Celerra NS-120 over NFS.

Testing tools

Introduction

To apply a reasonable real-world user workload, a third-party benchmarking tool, LoginVSI from Login Consultants, was used. LoginVSI simulates a VDI workload using an AutoIT script within each desktop session to automate the execution of generic applications such as Microsoft Office 2007, Internet Explorer, Acrobat Reader, Notepad, and other third-party software.


LoginVSI – Test methodology

Virtual Session Index (VSI) provides guidance to gauge the maximum number of users a desktop environment can support. The LoginVSI workload can be categorized as light, medium, heavy, and custom. Medium is the only workload available in both the VSI Express (free) edition and the Pro edition. The VSI Pro edition and the medium workload were chosen for testing. The medium workload has the following characteristics:
• Emulates a medium knowledge worker using Office, Internet Explorer, and PDF.
• Once a session is started, the workload repeats every 12 minutes.
• The response time is measured every 2 minutes during each loop.
• Opens up to five applications simultaneously.
• The type rate is 160 ms for each character.
• The medium workload in VSI 2.0 is approximately 35 percent more resource-intensive than in VSI 1.0.
• Approximately 2 minutes of idle time is included to simulate real-world users.

Each loop of the medium workload opens and uses:
• Outlook 2007: Browse 10 messages.
• Internet Explorer: One instance is left open (BBC.co.uk). One instance browses Wired.com, Lonelyplanet.com, and the heavy flash application gettheglass.com (not used with the MediumNoFlash workload).
• Word 2007: One instance to measure response time and one instance to review and edit the document.
• Bullzip PDF Printer and Acrobat Reader: The Word document is printed and the resulting PDF is reviewed.
• Excel 2007: A very large randomized sheet is opened.
• PowerPoint 2007: A presentation is reviewed and edited.
• 7-zip: Using the command line version, the output of the session is zipped.

The current LoginVSI version is 2.1.2. This version has a gating metric called VSImax that measures the response time of five operations:
1. Maximizing Microsoft Word.
2. Starting the File Open dialog box.
3. Starting the Search and Replace dialog box.
4. Starting the Print dialog box.
5. Starting Notepad.

The LoginVSI workload is gradually increased by starting desktop sessions one after another at a specified interval. Although the interval can be customized, the default interval of 1 second was used during the testing. The desktop infrastructure is considered saturated when the average response time of three consecutive users crosses the 2,000 ms threshold. The administrator guide available at www.loginconsultants.com provides more information on the LoginVSI tool.
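The saturation rule described above can be expressed as a short sketch: scan the per-session response times and report the session count at which the average of three consecutive values first exceeds 2,000 ms. This is not LoginVSI's own implementation, only an illustration of the gating logic against hypothetical data.

```python
# Sketch of the VSImax-style gating rule: saturation is declared when the
# average response time of three consecutive sessions exceeds 2,000 ms.
# Not LoginVSI code; the sample data is hypothetical.
THRESHOLD_MS = 2000
WINDOW = 3

def saturation_point(response_times_ms):
    """Return the session count at saturation, or None if never saturated."""
    for i in range(len(response_times_ms) - WINDOW + 1):
        window = response_times_ms[i:i + WINDOW]
        if sum(window) / WINDOW > THRESHOLD_MS:
            return i + WINDOW  # sessions active when the threshold was crossed
    return None

samples = [250, 280, 300, 1900, 2100, 2400, 2600]  # hypothetical values in ms
print(saturation_point(samples))  # -> 6
```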

LoginVSI launcher

A LoginVSI launcher is a Windows system that launches desktop sessions on target virtual desktop machines. There are two types of launchers, master and slave. There is only one master in a given test bed, and there can be as many slave launchers as required. Launchers coordinate the start of the sessions using a common CIFS share. In this validated testing, the share is created on a Celerra file system that resides in the 4+1 RAID 5 group, as shown in Disk layout for 10 building blocks on page 17.


The number of desktop sessions a launcher can run is typically limited by CPU or memory resources. Login Consultants recommends a maximum of 45 sessions per launcher with two CPU cores (or two dedicated vCPUs) and 2 GB of RAM when the GDI limit has not been tuned (the default). With the GDI limit tuned, this limit extends to 60 sessions per two-core machine. In this validated testing, 1,000 desktop sessions were launched from 24 launcher virtual machines, resulting in 41 or 42 sessions per launcher. Each launcher virtual machine was allocated two vCPUs and 4 GB of RAM, and no system bottlenecks were encountered.
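The launcher arithmetic above generalizes readily, as the sketch below shows. The 45- and 60-session limits are the Login Consultants figures quoted above; the helper itself is an illustration, not part of any LoginVSI tooling.

```python
# Sketch of the launcher sizing arithmetic described above. The session
# limits (45 untuned, 60 with the GDI limit tuned) are quoted figures.
import math

def launchers_needed(total_sessions, per_launcher=45):
    """Launchers required at a given per-launcher session limit."""
    return math.ceil(total_sessions / per_launcher)

total = 1000
print(launchers_needed(total))       # 23 at the untuned 45-session limit
print(launchers_needed(total, 60))   # 17 with the GDI limit tuned
print(divmod(total, 24))             # (41, 16): with the 24 launchers used in
                                     # testing, 16 run 42 sessions and 8 run 41
```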


Test results

Result summary

The following graph shows the response time compared to the number of active desktop sessions, as generated by the LoginVSI launchers. The average response time increases only marginally as the user count increases, and stays below 300 ms throughout the test run, leaving plenty of headroom below the 2,000 ms gating metric. The maximum response time increases nominally as the user count increases, with some spikes, but never exceeds 3,000 ms.



Result analysis of Desktop Delivery Controller

Introduction

Since the two DDCs are load balanced to host 1,000 desktops, their performance counters are comparable. As a result, only the statistics for the first DDC are reported in the following sections.

CPU utilization

The average percentage processor time is recorded at 8.42 percent, with occasional spikes that reach as high as 65 percent. The percentage processor time is reported as the average across the two vCPUs.


Memory utilization

Each DDC virtual machine was configured with a RAM of 4 GB. The memory utilization fluctuates between 1 GB and 2.2 GB. The average utilization is around 1.5 GB, consuming less than half of the available memory.

Disk throughput

The Windows operating system and the XenDesktop software were installed on a local drive for each DDC. As seen in the following graph, despite a couple of spikes at the end of the test run, the average disk throughput is about 28 KB/s.


Network throughput

Each DDC virtual machine was configured with a gigabit adapter that uses the vmxnet2 driver to manage the virtual desktops. An average transfer rate of 443 KB/s translates to 3.5 Mb/s. A surge of 758 KB/s (or 6 Mb/s) was measured at the end of the test run due to concurrent users logging off.
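Transfer rates in these sections are reported in KB/s and restated in megabits per second. The conversion is a straight multiply by eight (8 bits per byte, with 1,000 Kb per Mb), as this small sketch confirms for the figures quoted above.

```python
# KB/s to Mb/s conversion used when restating throughput figures
# in this chapter (decimal units assumed).
def kbps_to_mbps(kb_per_s):
    return kb_per_s * 8 / 1000  # 8 bits per byte, 1,000 Kb per Mb

print(round(kbps_to_mbps(443), 1))  # 3.5 Mb/s, the DDC average above
print(round(kbps_to_mbps(758), 1))  # 6.1 Mb/s, matching the ~6 Mb/s surge
```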


Result analysis of Provisioning Server

Introduction

Since the two PVSs are load balanced to host 1,000 desktops, their performance counters are comparable. As a result, this section covers only the statistics for the first PVS.

CPU utilization

Four vCPUs were configured for each PVS server in anticipation of intense network activity when communicating with 1,000 desktops. Based on the following graph, this is clearly more than needed; a two-vCPU virtual machine would suffice.


Memory utilization

Each PVS virtual machine was configured with a RAM of 4 GB. The memory utilization remains steady in the range of 1.3 GB to 1.8 GB.

Disk throughput

The following graph shows the disk throughput measured for the physical disk that stores the master vDisk. Since PVS servers cache the vDisk data blocks in memory, the initial read activity is observed at 4 MB/s. Negligible disk activity is observed thereafter.


Network throughput

Each PVS virtual machine was configured with a gigabit adapter that uses the vmxnet2 driver to stream the vDisk image to the virtual desktops. The average network throughput is recorded at 4 MB/s (or 32 Mb/s). The maximum network throughput remains below 30 MB/s (or 240 Mb/s) toward the end of the run, despite bursts of activity caused by concurrent user logoffs.


Result analysis of the vCenter Server

Introduction

The vCenter Server maintains two clusters of ESX servers. Each cluster contains 500 desktop virtual machines that are hosted on eight ESX servers.

CPU utilization

The vCenter Server virtual machine is configured with two vCPUs. The average CPU utilization is less than 4 percent throughout the test, with periodic surges that peak at 81 percent.


Memory utilization

A RAM of 6 GB was allocated to the vCenter Server virtual machine. Committed bytes never exceeded 2.55 GB, so the allocated memory could have been scaled down to 4 GB.

Disk throughput

Windows operating system and vCenter Server software were installed on a local drive. There is minimal disk I/O activity as seen in the following graph.


Network throughput

The vCenter Server was configured with a gigabit adapter that uses the vmxnet2 driver. The majority of network activity comes from the DDCs that manipulate and detect the state of each virtual desktop. The average network throughput is measured at 17.6 KB/s (or 141 Kbps). Logoff activity towards the end of the run triggers a spike of 782 KB/s (or 6.3 Mbps).

Result analysis of SQL Server

Introduction

Three databases were created on SQL Server, which is the central repository of the DDC, PVS, and vCenter Server configurations. The database size for the vCenter Server grows to 5.3 GB — the largest of the three databases. DDC and PVS databases require merely 10 MB and 5 MB, respectively.

CPU utilization

The SQL Server virtual machine was configured with two vCPUs. The average CPU utilization is less than 2 percent throughout the test, with periodic surges that peak at 65 percent.


Memory utilization

A RAM of 6 GB was allocated to the SQL Server virtual machine. Committed bytes never exceeded 3.5 GB, so the allocated memory could have been scaled down by 1 GB.


Disk throughput

The Windows operating system and the SQL Server software were installed on a local drive. The average disk throughput is below 392 KB/s, while the maximum throughput is recorded at around 45 MB/s.

Network throughput

The SQL server was configured with a gigabit adapter that uses the vmxnet2 driver. The average network throughput is measured at 14.5 KB/s (or 116 Kbps). The maximum throughput is recorded at 458 KB/s (or 3.7 Mbps).


Result analysis of ESX servers

Introduction

One thousand desktop virtual machines are spread across 16 ESX servers. Prior to testing, each ESX server hosts 61 to 62 virtual machines, distributed evenly using VMware Distributed Resource Scheduler (DRS) automation. The DRS automation level is set to manual during the test run to avoid unpredictable workload overhead caused by virtual machine migration. Because each ESX server hosts almost the same number of virtual machines, the esxtop performance counters are sampled from one of the 16 servers.

CPU utilization

Each of the 16 ESX servers has eight Intel Nehalem 2.6 GHz CPU cores. Each ESX server hosts up to 62 desktop virtual machines, yielding a ratio of 7.75 VMs per core. As the workload gradually increases while more desktops become active during the test, CPU utilization grows linearly and reaches a maximum of 100 percent toward the end of the test run, when sessions begin to log off simultaneously and trigger a surge of CPU consumption.


Memory utilization

Each of the 16 ESX servers has 32 GB of memory installed. In theory, 62 virtual machines with 512 MB of RAM each add up to a total of 31 GB. The memory utilization barely exceeds 29 GB (32 GB minus 3 GB of free memory) because the ESX memory deduplication technology (transparent page sharing) is used.
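The per-host figures from the CPU and memory sections reduce to simple arithmetic, reproduced in the sketch below using only values reported in this chapter.

```python
# Per-host arithmetic from the CPU and memory sections above.
cores_per_host = 8        # Nehalem cores per ESX server
vms_per_host = 62         # desktop VMs per ESX server
ram_per_vm_gb = 0.5       # 512 MB of RAM per desktop VM
host_ram_gb = 32

print(vms_per_host / cores_per_host)   # 7.75 VMs per core
print(vms_per_host * ram_per_vm_gb)    # 31.0 GB nominally allocated
# Observed utilization stayed near 29 GB (32 GB minus 3 GB free) because
# ESX page sharing deduplicates identical guest memory pages.
```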

Disk throughput

Each of the 16 ESX servers is configured with one internal hard disk. There is nominal disk I/O to the internal drive, as the majority of I/O is directed to the NFS datastores.


Network throughput

Each of the 16 ESX servers is configured with NIC teaming across two gigabit adapters to provide high availability. The following graph shows that network utilization continues to increase as desktop sessions ramp up. Despite the steady increase, the maximum throughput of 50 Mb/s is well below the physical limit of the aggregated gigabit network bandwidth.


Result analysis of Celerra unified storage

Celerra Data Mover stats

The Celerra command server_stats with the following syntax was used to collect the performance data of the Data Mover every 30 seconds:

$ /nas/bin/server_stats -summary basic,caches -table net,dvol,fsvol -interval 30 -format csv -titles once -terminationsummary yes

The following table provides some of the significant Data Mover statistics that were collected:

Measurement parameter    Average value
Network input            20,694 KB/s (20.2 MB/s)
Network output           1,589 KB/s (1.6 MB/s)
Dvol read                575 KB/s (0.6 MB/s)
Dvol write               22,339 KB/s (21.8 MB/s)
Buffer cache hit rate    98%
CPU utilization          12%
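Because server_stats was run with -format csv and -titles once, the output can be post-processed with any CSV reader. The sketch below averages a single column; the file name and the column header are assumptions, since the exact titles depend on the statistics requested.

```python
# Sketch: average one column of server_stats CSV output. Assumes
# "-format csv -titles once"; the file name and "CPU %" header are
# assumptions, not guaranteed server_stats titles.
import csv

def column_average(path, column):
    values = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                values.append(float(row[column]))
            except (KeyError, TypeError, ValueError):
                continue  # skip summary lines and non-numeric rows
    return sum(values) / len(values) if values else 0.0

print(column_average("server_stats.csv", "CPU %"))  # hypothetical inputs
```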

Data Mover CPU utilization

As the test workload increases, the Data Mover CPU utilization rises gradually but remains below 30 percent until the end of the test run, when the logoff storm invokes a spike of 55 percent.


Data Mover disk throughput

The following graph shows the trend of the disk throughput measured on the Data Mover. Its pattern mimics the CPU utilization trend: the disk throughput gradually increases and reaches a maximum of 67 MB/s.

Storage array CPU utilization

The CLARiiON Analyzer GUI was used to collect performance data about the storage array at 60-second intervals. The following figure shows the CPU utilization at the storage processor (SP) level. LUN ownership for the 10 building blocks that store the virtual desktops is balanced across the two SPs. However, because SP A also owns the LUNs that store the golden vDisk image and the CIFS file system that contains the roaming user profiles and LoginVSI results, additional CPU cycles are incurred on SP A, causing its maximum to reach nearly 60 percent, while SP B utilization reaches a maximum of only 48 percent.


Storage array total bandwidth

The storage array can easily handle the I/O bandwidth that the test workload generates; less than 30 MB/s of I/O bandwidth is observed for each SP.


Storage array total throughput

The maximum aggregated throughput at the SP level is recorded at 6,452 IOPS (4,145 + 2,307) toward the end of the test run. This includes all I/O activity for the storage array. Throughput measured for the virtual desktops alone is reported at the LUN level below.


Storage array response time

The SP response time throughout the test run is less than 1 millisecond — an acceptable response time that suggests that the storage processor is not a bottleneck.


Most active LUN utilization

The following four graphs show the performance statistics for the busiest LUN, measured within the 10 building blocks used to store the virtual desktops. The maximum utilization for the most active LUN never exceeds 50 percent, as shown in the following figure.


Most active LUN bandwidth

The maximum LUN bandwidth is measured at 5 MB/s for the most active LUN during the test. The storage array can easily handle the bandwidth requirement.


Most active LUN throughput

The maximum throughput measured for the most active LUN is slightly above 500 IOPS. Because the storage array write-cache absorbs some of the front-end IOPS before it writes to the physical disks, the LUN throughput can exceed the theoretical limit of what two 15k drives can yield in a building block.
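A back-of-the-envelope check makes the write-cache effect concrete. Assuming a rule-of-thumb figure of roughly 180 IOPS per 15k rpm drive (an assumption, not a value measured in this testing), two drives can service only about 360 IOPS at the back end, yet the LUN sustains more than 500 front-end IOPS because the write cache coalesces writes before destaging them to disk.

```python
# Back-of-the-envelope check for the write-cache effect described above.
# The ~180 IOPS per 15k rpm drive figure is a rule-of-thumb assumption.
iops_per_15k_drive = 180
drives_per_building_block = 2
observed_front_end_iops = 500  # measured at the LUN, per the text above

back_end_capability = iops_per_15k_drive * drives_per_building_block
print(back_end_capability)                            # ~360 IOPS raw
print(observed_front_end_iops > back_end_capability)  # True: the cache absorbs the gap
```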


Most active LUN response time

The response time of the most active LUN is around 1 millisecond throughout the test run, which suggests that there is no bottleneck at the LUN level.

Login storm scenario

Introduction

Login and boot storms are among the biggest concerns in a VDI implementation. Given that the DDC has an option to adjust the idle desktop count, it is recommended to tune this parameter to power on enough virtual desktops ahead of business opening and peak hours, alleviating a boot storm scenario. The impact of a login storm, on the other hand, may be minimized by keeping desktop users logged in as long as possible; however, this is beyond the control of the desktop administrators. The following section prepares for the worst-case scenario, in which logins occur in rapid succession.

Login timing

To simulate a login storm, 500 desktops are powered up initially into steady state by setting the idle desktop count to 500. The login time of each session is then measured by starting a LoginVSI test that establishes the sessions with a custom interval of five seconds. The 500 sessions are logged in within 42 minutes (500 x 5 / 60 = 41.6), a period that models a burst of login activity that takes place in the opening hour of a production environment.


The LoginVSI tool has a built-in login timer that measures from the start of the logon script defined in the Active Directory group policy to the start of the LoginVSI workload for each session. Although it does not measure the total login time from an end user's perspective, the measurement gives a good indication of how sessions are affected in a login storm scenario. The following figure shows the trend of the login time, in seconds, as sessions are started in rapid succession. The average login time for 500 sessions is approximately 5 seconds. The maximum login time is recorded at 29 seconds with a little over 300 concurrent sessions, while the minimum login time is around 2 seconds. It can be concluded that while some desktop users might experience a slightly longer login delay during a login storm, most users should receive their desktop sessions with a reasonable delay.


Test summary

Summary

The following conclusions can be drawn from the tests:
• RAID 10 is preferred over RAID 5 in a XenDesktop 4 deployment due to the write-intensive nature of the PVS write cache area.
• The recommended number of 100 desktops per two-disk RAID 10 building block is based on the medium workload generated by LoginVSI. Individual sizing requirements must be calibrated based on both capacity planning and the workload characteristics of a production environment.
• The ratio of 7.75 virtual machines per CPU core measured in the test should be used only as a guideline in sizing. Recall that ESX CPU utilization approaches nearly 100 percent at this ratio; it would be wise to scale back this ratio, or reduce each user's workload, to reserve headroom for unforeseen load such as a boot storm (a sizing sketch follows this list).
• Boot and login/logoff storms need to be taken into consideration when sizing a VDI implementation. While XenDesktop 4 has an option to cope with boot storms, care should be taken to monitor the environment to minimize the impact of potential login/logoff storms.
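The guidance above can be pulled together into a single sizing sketch. The 100-desktops-per-block and 7.75 VMs-per-core figures come from this testing; the 30 percent CPU headroom derating is an illustrative assumption, not a tested recommendation.

```python
# Sizing sketch combining the conclusions above. The building-block and
# VMs-per-core figures are from this testing; the headroom is an assumption.
import math

def size_environment(users, headroom=0.30):
    desktops_per_block = 100    # per two-disk RAID 10 building block
    vms_per_core_max = 7.75     # measured at ~100% ESX CPU utilization
    cores_per_host = 8          # as tested (Nehalem 2.6 GHz)

    blocks = math.ceil(users / desktops_per_block)
    vms_per_core = vms_per_core_max * (1 - headroom)
    hosts = math.ceil(users / (vms_per_core * cores_per_host))
    return blocks, hosts

print(size_environment(1000))  # (10, 24) with 30 percent headroom
```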
