Introduction to Using EMC® Celerra® with VMware vSphere 4
Applied Best Practices Guide

EMC NAS Product Validation
Corporate Headquarters
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com


Copyright © 2009 EMC Corporation. All rights reserved. Published May 2009.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.


Contents

About this Document

Chapter 1: EMC Celerra Overview
    Introduction
    Data Movers
    Control Station
    Basics of storage on Celerra

Chapter 2: VMware vSphere Overview
    Introduction
    Virtual data center architecture
    vCenter Server
    Network architecture
    Storage architecture
    New vStorage features

Chapter 3: Celerra Storage Provisioning for ESX Hosts
    Introduction
    Configuring a VMkernel port group in ESX
    Naming storage objects
    Configuring Celerra NFS for ESX
    Configuring Celerra iSCSI for ESX
    Creating a VMware vStorage VMFS data store by using a vSphere client
    Volume Grow
    Creating RDM volumes for an ESX host over a Celerra iSCSI LUN
    Storage VMotion
    Celerra Virtual Provisioning
    Celerra virtually provisioned NFS file system
    Celerra virtually provisioned iSCSI LUN
    Storage deduplication for virtual machines' data by using Celerra deduplication for NFS

Chapter 4: Network Multipathing, Failover, and Load Balancing
    Introduction
    Celerra NFS high availability
    Celerra iSCSI high availability
    Multipathing with Celerra iSCSI

Chapter 5: Virtual Machine Operations with Celerra Storage
    Creating virtual machines over Celerra storage
    Cloning virtual machines and templates
    Migrating virtual machines
    Virtual machine snapshots

Chapter 6: Business Continuity and Data Replication
    Introduction
    Celerra replication
    Replicating an NFS data store
    Replicating a VMware vStorage VMFS data store over iSCSI
    Replicating RDM over iSCSI

Chapter 7: Conclusion
    Summary
    Related documents


Figures

Figure 1: Celerra block diagram
Figure 2: Celerra storage topology
Figure 3: VMware vSphere architecture
Figure 4: Virtual data center architecture
Figure 5: VMware vSphere network architecture
Figure 6: Distributed virtual switch
Figure 7: VMware vSphere storage architecture
Figure 8: Storage map
Figure 9: VMkernel configuration
Figure 10: ESX - NFS advanced parameters configuration
Figure 11: Celerra Manager - NFS Export Properties
Figure 12: ESX - NFS data store configuration
Figure 13: ESX - Software iSCSI Initiator Properties dialog box
Figure 14: ESX - Storage Adapters Rescan
Figure 15: Storage link in Configuration tab
Figure 16: Raw Device Mapping option for a VM
Figure 17: Storage VMotion - Select Change Data store
Figure 18: Storage VMotion - Target disk type selection
Figure 19: Celerra Manager - Virtual Provisioning configuration for a file system
Figure 20: Celerra Manager - Enable deduplication for a file system
Figure 21: Celerra NFS high availability configuration
Figure 22: Celerra iSCSI high availability configuration
Figure 23: Celerra iSCSI multipathing
Figure 24: vCenter - Virtual machine type selection
Figure 25: Celerra Manager - Replication Wizard


About this Document

This document provides a detailed overview of using EMC Celerra with VMware vSphere 4. VMware vSphere is an infrastructure virtualization suite that provides virtualization, management, resource optimization, application availability, and operational automation capabilities in an integrated package, and it supports a variety of storage connectivity options. This document covers the use of Celerra protocols and features with VMware vSphere 4.

Executive summary

VMware vSphere 4 supports network file system (NFS), Internet Small Computer System Interface (iSCSI), and Fibre Channel storage area network (FC SAN) storage connectivity. EMC Celerra offers all of these storage protocols in a single package, along with a dynamic set of capabilities for the VMware vSphere enabled virtual data center. EMC Celerra also offers advanced functionality such as storage Virtual Provisioning, deduplication, and replication, which extends the capabilities of the virtual data center. This document describes how to optimally utilize EMC Celerra in a VMware vSphere environment.

Introduction

Virtualization has been a familiar term in technical conversations for many years now. We have grown accustomed to almost all things virtual, from virtual memory to virtual storage. VMware vSphere 4 is the latest enterprise virtualization suite from VMware; it virtualizes and aggregates the underlying physical hardware resources across multiple systems and provides pools of virtual resources to the data center. In addition, VMware vSphere introduces a set of distributed services that enables detailed, policy-driven resource allocation, high availability, and consolidated backup of the entire virtual data center.

EMC Celerra platforms deliver a single-box block and file solution that offers a centralized point of management for distributed environments. This enables you to dynamically grow, share, and cost-effectively manage multi-protocol file systems, and to provide multi-protocol block access. Celerra offers iSCSI LUNs and NFS exported file systems as storage options to create virtual machines and virtual disks or to store shared ISO images. Celerra also provides advanced scalability and reliability with the user friendliness of IP storage.

Audience

System administrators, system architects, and anyone interested in how VMware vSphere 4 can be integrated with NFS, iSCSI, and FC storage will find this document beneficial. Readers familiar with VMware virtualization products and EMC Celerra will understand the concepts more readily.


Terminology




♦ Automatic file system extension — A configurable Celerra file system feature that automatically extends a file system created or extended with AVM when the high water mark (HWM) is reached.

♦ Celerra Replication Service — A service that produces a read-only, point-in-time copy of a source file system. This service periodically updates the copy, making it consistent with the source file system.

♦ Data store — A virtual representation of a combination of underlying physical storage resources in the data center.

♦ Fail-Safe Network (FSN) — A Celerra high-availability feature that extends link failover out into the network by providing switch-level redundancy. An FSN appears as a single link with a single MAC address and potentially multiple IP addresses.

♦ Guest operating system — An operating system that runs within a virtual machine.

♦ Hypervisor — A virtualization platform that allows multiple virtual machines to run in isolation on a physical host at the same time. A hypervisor is also known as a Virtual Machine Monitor.

♦ iSCSI target — An iSCSI endpoint, identified by a unique iSCSI name, that executes commands issued by the iSCSI initiator.

♦ Link aggregation — A high-availability feature based on IEEE 802.3ad. Link aggregation uses multiple Ethernet network cables and ports in parallel to increase the aggregated throughput beyond the limits of any single cable or port, and to increase redundancy for higher availability.

♦ Logical unit number (LUN) — For iSCSI on a Celerra Network Server, a logical unit is an iSCSI software feature that processes SCSI commands, such as reading from and writing to storage media. From an iSCSI host perspective, a logical unit appears as a disk device.

♦ Network File System (NFS) — A distributed file system providing transparent access to remote file systems. NFS allows all network systems to share a single copy of a directory.

♦ SnapSure — On a Celerra system, a feature providing read-only, point-in-time copies of a file system. These copies are also referred to as checkpoints.

♦ Templates — A means to import virtual machines and store them as templates that can be deployed at a later time to create new virtual machines.

♦ Virtual local area network (VLAN) — A group of devices physically residing on different network segments but communicating as if they resided on the same network segment. VLANs are configured by management software and are based on logical rather than physical connections for increased administrative flexibility.

♦ Virtual machine (VM) — A virtualized x86 or x64 PC on which a guest operating system and an associated application run.

♦ Virtual Provisioning — A configurable Celerra file system feature that can be used only in conjunction with automatic file system extension. This option allows allocating storage based on long-term projections, while dedicating only the file system resources that are currently needed. Users (NFS or CIFS clients and applications) see the virtual maximum size of the file system, of which only a portion is physically allocated. When combined, the automatic file system extension and Virtual Provisioning options let you grow the file system gradually as needed.

♦ VMware Consolidated Backup — A feature that provides a centralized facility for agent-free backup of virtual machines. It simplifies backup administration and reduces the load on ESX/ESXi hosts.

♦ VMware Data Recovery — A feature that provides simple, cost-effective, and agentless backup and recovery for virtual machines in smaller environments.

♦ VMware Distributed Resource Scheduler — A feature that allocates and balances computing capacity dynamically across collections of hardware resources for virtual machines. This feature includes distributed power management (DPM) capabilities that enable a data center to significantly reduce its power consumption.

♦ VMware Distributed Virtual Networking — A feature that includes a distributed virtual switch (DVS), which spans many ESX/ESXi hosts, enabling a significant reduction of ongoing network maintenance activities and increasing network capacity. This allows virtual machines to maintain a consistent network configuration as they migrate across multiple hosts.

♦ VMware ESX and ESXi — A virtualization layer that runs on a bare-metal physical server and abstracts processor, memory, storage, and resources into multiple virtual machines.

♦ VMware Fault Tolerance — When Fault Tolerance is enabled for a virtual machine, a secondary copy of the original (or primary) virtual machine is created. All actions completed on the primary virtual machine are applied to the secondary virtual machine using Record/Replay functionality to ensure that the secondary machine is identical to the primary. If the primary virtual machine becomes unavailable, the secondary becomes active, providing continual availability.

♦ VMware High Availability — A feature that provides high availability for applications running in virtual machines. If a server fails, affected virtual machines are restarted on other production servers that have spare capacity.

♦ VMware Hot add — A feature that enables CPU and memory to be added to virtual machines as needed without disruption or downtime.

♦ VMware Hot extend of virtual disks — A feature that allows virtual storage to be added to running virtual machines without disruption or downtime.

♦ VMware Hot plug — A feature that enables virtual storage and network devices to be added to or removed from virtual machines without disruption or downtime.

♦ VMware vCenter Server — The central point for configuring, provisioning, and managing virtualized IT environments.

♦ VMware VMotion and Storage VMotion — VMware VMotion enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity. Storage VMotion enables the migration of virtual machine files from one data store to another without service interruption. A virtual machine and all its disks can be placed in a single location, or in separate locations for the virtual machine configuration file and each virtual disk. The virtual machine does not change execution host during a migration with Storage VMotion.

♦ VMware vShield Zones — A feature that simplifies application security by enforcing corporate security policies at the application level in a shared environment, while still maintaining trust and network segmentation of users and sensitive data.

♦ VMware vSphere Client — An interface that allows users to connect remotely to vCenter Server or ESX/ESXi from any Windows PC.

♦ VMware vStorage VMFS — A high-performance clustered file system for ESX/ESXi virtual machines.


Chapter 1

EMC Celerra Overview

This chapter presents these topics:
♦ Introduction
♦ Data Movers
♦ Control Station
♦ Basics of storage on Celerra


Introduction

EMC® Celerra® platforms cover a broad range of configurations and capabilities that scale from midrange to high-end networked storage. Although differences exist along the product line, there are some common building blocks. These building blocks are combined to fill out a broad, scalable product line with consistent support and configuration options. A Celerra frame provides n+1 power and cooling redundancy and supports a scalable number of physical disks, depending on your model and the needs of your solution.

For the purpose of this document, these two additional building blocks are discussed:
♦ Data Movers
♦ Control Stations

Data Movers move data back and forth between the LAN and back-end storage (disks). The Control Station is the management station for the system. A Celerra is configured and controlled through the Control Station. Figure 1 shows how Celerra works.

Figure 1: Celerra block diagram

Data Movers

A Celerra system has one or more Data Movers installed in its frame. A Data Mover is an independent server running EMC's optimized NAS operating system, Data Access in Real Time (DART). Each Data Mover has multiple network ports, network identities, and connections to back-end storage. In many ways, a Data Mover operates as an independent server, bridging the LAN and the back-end storage disk array. Multiple Data Movers are grouped together as a single system for high availability and user friendliness.

To ensure high availability, Celerra supports a configuration in which one Data Mover acts as a standby for one or more active Data Movers. When an active Data Mover fails, the standby boots quickly and takes over the identity and storage of the failed device. Data Movers in a cabinet are logically grouped together so that they can be managed as a single system by using the Control Station.

Control Station

The Control Station is the single point of management and control of a Celerra frame. Regardless of the number of Data Movers or disk drives in the system, the administration of the system is done through the Control Station. Control Stations not only provide the interface to configure Data Movers and back-end storage, but also provide heartbeat monitoring of the Data Movers. Even if a Control Station is inoperable for any reason, the Data Movers continue to operate normally. The Celerra architecture provides an option for a redundant Control Station to support continuous management for an increased level of availability.

The Control Station runs a version of the Linux OS that EMC has optimized for Celerra and NAS administration. Figure 1 shows a Celerra system with two Data Movers. The Celerra NAS family supports up to eight Data Movers, depending on the product model.

Basics of storage on Celerra

Celerra provides access to block and file data using the iSCSI, Common Internet File System (CIFS), NFS, and Fibre Channel protocols. These storage protocols are provided over standard TCP/IP and Fibre Channel network services. Using these network services, EMC Celerra platforms deliver a complete multi-protocol foundation for a VMware vSphere virtual data center, as depicted in Figure 2.


Figure 2: Celerra storage topology

Chapter 2

VMware vSphere Overview

This chapter presents these topics:
♦ Introduction
♦ Virtual data center architecture
♦ vCenter Server
♦ Network architecture
♦ Storage architecture
♦ New vStorage features


Introduction

The VMware vSphere virtualization suite consists of various components, including ESX/ESXi hosts, vCenter Server, the vSphere client, vSphere web access, and the vSphere SDK. In addition, VMware vSphere offers a set of distributed services such as distributed resource scheduling, high availability, and consolidated backup. The relationship among the various components is shown in Figure 3.

Figure 3: VMware vSphere architecture

Virtual data center architecture

VMware vSphere virtualizes the entire IT infrastructure, including servers, storage, and networks. VMware vSphere aggregates these resources and presents a uniform set of elements in the virtual environment. With VMware vSphere, you can manage IT resources like a shared utility and dynamically provision resources to different business units and projects. Figure 4 shows the key elements in the architecture of a virtual data center.


Figure 4: Virtual data center architecture

vCenter Server

Using vCenter Server, key elements such as hosts, clusters, resource pools, data stores, networks, and virtual machines can be viewed, configured, and managed. vCenter Server aggregates physical resources from multiple ESX/ESXi hosts and presents a central collection of simple and flexible resources for the system administrator to provision to virtual machines in the virtual environment. vCenter Server components are user access control, core services, distributed services, plug-ins, and various interfaces.

Network architecture

The virtual environment provides networking elements similar to those in the physical world: virtual network interface cards (vNICs), virtual switches (vSwitches), and port groups.

Figure 5: VMware vSphere network architecture

Like a physical machine, each virtual machine has its own vNIC. The operating system and applications communicate with the vNIC through a standard device driver or a VMware optimized device driver, in the same way as with a physical NIC. Outside the virtual machine, the vNIC has its own MAC address and one or more IP addresses, and responds to the standard Ethernet protocol in the same way as a physical NIC. An outside agent cannot detect that it is communicating with a virtual machine.

A vSwitch works like a layer 2 physical switch. Each server has its own virtual switch. One side of the vSwitch has port groups that connect to virtual machines. The other side has uplink connections to physical Ethernet adapters on the server where the vSwitch resides. Virtual machines connect to the outside world through the physical Ethernet adapters that are connected to the vSwitch uplinks. A vSwitch can connect its uplinks to more than one physical Ethernet adapter to enable NIC teaming. With NIC teaming, two or more physical adapters can be used to share the traffic load or provide passive failover in the event of a physical adapter hardware failure or a network outage.

The port group is a unique concept in the virtual environment. A port group is a mechanism for setting policies that govern the network connected to it. A vSwitch can have multiple port groups. Instead of connecting to a particular port on the vSwitch, a virtual machine connects its vNIC to a port group. All virtual machines that connect to the same port group belong to the same network inside the virtual environment, even if they are on different physical servers.

The Distributed Virtual Networking feature of vSphere expands this architecture by allowing you to configure a virtual switch that spans many ESX/ESXi hosts rather than being confined to a single ESX host. This provides substantial maintenance and configuration advantages, and streamlines the mobility of virtual machines between ESX hosts. Figure 6 illustrates a distributed virtual switch that spans several ESX hosts.

Figure 6: Distributed virtual switch


Storage architecture

The VMware vSphere storage architecture consists of layers of abstraction that manage the physical storage subsystems.

Figure 7: VMware vSphere storage architecture

The data store is like a storage appliance that allocates storage space for virtual machines across multiple physical storage devices. The data store provides a model for allocating storage space to individual virtual machines without exposing them to the complexity of the underlying physical storage technologies, such as Fibre Channel SAN, iSCSI SAN, direct attached storage, and NAS.

A virtual machine is stored as a set of files in a directory in the data store. A virtual disk, inside each virtual machine, is one or more files in the directory. You can perform operations (such as copy, move, and backup) on a virtual disk just as you can on a file. New virtual disks can be hot-added to a virtual machine without powering it down. In such a case, either a virtual disk file (.vmdk) is created in a data store to provide new storage space for the hot-added virtual disk, or an existing virtual disk file is added to a virtual machine.


For a vStorage VMFS data store, each data store is physically a vStorage VMFS volume on a block storage system. For a NAS data store, it is an NFS volume on a file storage system. VMware vStorage VMFS data stores can span multiple physical storage subsystems. A single VMware vStorage VMFS volume can contain one or more LUNs from a local SCSI disk array on a physical host, a Fibre Channel SAN disk farm, or an iSCSI SAN disk farm. New LUNs added to any of the physical storage subsystems are detected and made available to all existing or new data stores. Storage capacity on a previously created VMware vStorage VMFS volume (data store) can be hot-extended, without powering down physical hosts or storage subsystems, by adding a new physical LUN from any of the storage subsystems that are visible to it. Conversely, if any of the LUNs (except the LUN holding the first extent of the spanned volume) within a VMware vStorage VMFS volume (data store) fails or becomes unavailable, only virtual machines that interact with that LUN are affected. All other virtual machines with virtual disks residing on other LUNs continue to function as normal.

New vStorage features

The new vStorage features are:




♦ Storage VMotion — Storage VMotion can now be administered through vCenter Server and works across all storage protocols, including NFS (in addition to Fibre Channel and iSCSI).

♦ Volume Grow — vCenter Server 4 allows dynamic expansion of a volume partition to add capacity to a running VMware vStorage VMFS.

♦ Data store alarms — Data store alarms track and warn you about potential resource overutilization or event conditions for data stores. You can set alarms to trigger on events and notify you when critical error conditions occur.

♦ Storage reports and maps — Reports help monitor storage information such as data stores, LUNs, virtual machines on a data store, and host access to a data store. Storage maps help you visually represent and understand the relationship between an inventory object and the virtual and physical storage resources available to that object. For example, Figure 8 shows a storage map that includes NFS and iSCSI storage resources from Celerra.


Figure 8: Storage map


Chapter 3

Celerra Storage Provisioning for ESX Hosts

This chapter presents these topics:
♦ Introduction
♦ Configuring a VMkernel port group in ESX
♦ Naming storage objects
♦ Configuring Celerra NFS for ESX
♦ Configuring Celerra iSCSI for ESX
♦ Creating a VMware vStorage VMFS data store by using a vSphere client
♦ Volume Grow
♦ Creating RDM volumes for an ESX host over a Celerra iSCSI LUN
♦ Storage VMotion
♦ Celerra Virtual Provisioning
♦ Celerra virtually provisioned NFS file system
♦ Celerra virtually provisioned iSCSI LUN
♦ Storage deduplication for virtual machines' data by using Celerra deduplication for NFS


Introduction

Celerra supports both the NFS and iSCSI protocols for configuring network storage for ESX hosts. It also supports Fibre Channel. For Celerra Gateway platforms, Fibre Channel support is provided through the CLARiiON® or Symmetrix® back-end array of the Celerra Gateway. For Celerra Unified Storage platforms, Fibre Channel support is provided by the Celerra's embedded FC ports. With multiprotocol support across the entire product line, EMC Celerra is a very attractive platform for a wide range of VMware vSphere deployments. Furthermore, Celerra with CLARiiON or Symmetrix back-end arrays provides ESX hosts with highly available, RAID-protected storage. Celerra supports a range of advanced features such as Virtual Provisioning, advanced VMware-integrated local and remote replication, advanced storage tiering, and mobility. Advanced IP-based technologies such as IPv6 and 10 GbE are also available in Celerra, and these are now supported in VMware vSphere.

ESX supports the use of NFS and iSCSI along with Fibre Channel devices, and is supported when configured with data stores using different classes of shared storage. VMware vCenter Server migrations can be used to rebalance virtual disks across the virtual data center based on the needs of the application. This can be achieved by offline migration, or by online migration using Storage VMotion. The Celerra management interface is used to configure storage for ESX hosts.

Celerra provides mirrored (RAID 1) and parity (RAID 3/RAID 5/RAID 6) options for performance and protection of the devices used to create ESX volumes. Celerra Automatic Volume Manager (AVM) runs an internal algorithm that identifies the optimal location of the disks that make up a file system. Storage administrators are only required to select the storage pool type and the desired capacity to establish a file system that can be presented to ESX as NAS or block storage. The storage and RAID algorithm you choose is largely based on the throughput requirements of your applications or virtual machines. Parity RAID, such as RAID 5 and RAID 6, provides the most efficient use of disk space to satisfy the requirements of your applications. RAID 1 provides the best performance at the cost of additional disk capacity to mirror all the data in the file system. For tests that were performed within EMC labs, parity RAID was chosen for both virtual machine boot disk images and virtual disk storage used for application data. RAID 6 provides added disk protection over RAID 5. An understanding of the application and storage requirements within the computing environment will help you identify the appropriate RAID configuration for your servers when very large pools of disks are used. Celerra uses advanced on-disk parity and proactive soft-error hot-sparing, and is therefore less susceptible to dual disk failures during a RAID 5 rebuild.

ESX provides tools to create a data store from a Celerra NFS file system export or an iSCSI LUN. A user-assigned label is required to identify the data store. Virtual disks are assigned to a virtual machine and are managed by the guest operating system just like a standard SCSI device. To make the device useful, a guest operating system is installed on one of the disks. The format of the virtual disk is determined by the guest OS or install program. One potential configuration would be to present an NFS file system from Celerra to an ESX host. The ESX host would use the NFS file system to create an NFS data store and a VMDK file for a newly defined virtual machine. In the case of a Windows guest, the VMDK would be formatted as an NTFS file system. Additional virtual disks used for applications could be provisioned from one or more Celerra file systems and formatted as NTFS by the Windows guest OS.

Configuring a VMkernel port group in ESX

A VMkernel port group allows the use of iSCSI and NFS storage. When storage is configured on the Celerra system, the ESX host must have a VMkernel port group defined with network access to the Celerra system. At a functional level, the VMkernel manages the IP storage interfaces, including those used for iSCSI and NFS access to Celerra. When configuring ESX for IP storage with Celerra, the VMkernel network interfaces are configured to access one or more Data Mover iSCSI targets or NFS servers.

The Add Networking link on the Configuration tab of the ESX host (Figure 9) provides the path to create a new network object. Configure the VMkernel interface by using the Add Networking link.

Figure 9: VMkernel configuration

Because ESX hosts access the IP storage through the VMkernel interface, it is a recommended practice that the network traffic from Celerra be segmented through a private LAN by using either a virtual LAN or a dedicated IP SAN switch. Based on the throughput requirements for the virtual machines, you may need to configure more interfaces for additional network paths to the Celerra Data Mover. With respect to how the VMkernel interface will be used, it is important to note that NFS will always use one link effectively for a single data store, and iSCSI can be configured to use multiple links for a single iSCSI target.
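The same VMkernel networking can also be configured from the ESX service console. The following is a minimal sketch; the vSwitch name, port group label, uplink NIC, and IP address shown here are illustrative assumptions, not values from this guide:

$ esxcfg-vswitch -a vSwitch1
$ esxcfg-vswitch -L vmnic2 vSwitch1
$ esxcfg-vswitch -A VMkernel-IPStorage vSwitch1
$ esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 VMkernel-IPStorage

The first command creates the vSwitch, the second attaches a physical uplink, the third adds the port group, and the last creates the VMkernel interface on that port group.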

Naming storage objects

Naming storage objects is critical when establishing an ESX environment with Celerra. Providing descriptive names for the file systems, exports, iSCSI targets, and data stores in ESX provides valuable information for ongoing administration and troubleshooting of the environment. Prudent use of labels, including the name of the storage system from which the objects are configured, will help in managing and troubleshooting.

Best practice: To ease future management and configuration tasks, incorporate identifying elements (such as IP addresses or NFS server names) into your data store definition, and annotate it with the name of the Celerra being used.


Configuring Celerra NFS for ESX

vCenter Server is used to configure and mount NFS file systems from Celerra. The VMware vSphere client is also used to assign a data store name to the export. The data store name is the key reference that is used to manage the data store within the ESX environment.

The NFS data store is viewed as a pool of space used to support virtual disks. One or more virtual disks are created within the data store and assigned to virtual machines. Each virtual machine uses the primary virtual disk to install the guest operating system and boot information, while additional disks are used to support application and user data. NFS data stores offer support for ESX virtual disks, virtual machine configuration files, snapshots, disk extension, VMotion, and Disaster Recovery Services. In addition, Celerra provides support for replication, local snapshots, Virtual Provisioning, NDMP backups, and deduplication for the user data.

To access the NFS exported file system, the ESX host must have a VMkernel port defined with network access to the Celerra. By default, the ESX host mounts the file system by using the root user account, so the file system must include root access for the interface that will mount it.

Best Practice: Limit the Celerra NFS exports to only the VMkernel interfaces if they are used as a data store.

Table 1 shows the default and maximum limits for a vSphere NFS configuration.

Table 1: NFS maximum configurations in VMware vSphere 4

Number of NFS data stores — default: 8
Number of NFS data stores — maximum: 64 (requires changes to advanced settings)

ESX supports a maximum of 64 NFS mount points. By default, the NFS client within ESX is limited to eight NFS mount points per ESX server. The NFS.MaxVolumes parameter can be increased to allow the maximum of 64 NFS mount points. Increasing the number of mount points requires additional memory resources on the ESX host to drive multiple TCP sessions, which is the key to using multiple links. Therefore, prior to increasing the number of mount points, the ESX TCP/IP heap settings Net.TcpipHeapSize and Net.TcpipHeapMax should be set to 30 and 120, respectively, by using the Advanced Settings link on the Configuration tab of the ESX host.



VMware KB 2239 (http://kb.vmware.com/kb/2239) provides further details on how to set these parameters from vCenter Server. (Although this KB article was written for ESX 3.5, it also applies to ESX 4.0.)

Best Practice: The parameters NFS.SendBufferSize and NFS.ReceiveBufferSize should be set to a multiple of 32k, per Celerra best practices for connecting an NFS client to Celerra.

To change the NFS mount points and buffer parameters:

1. In the vSphere Client window, select the ESX host that you need to configure.
2. Click Configuration.
3. Click Advanced Settings on the left side of the window. The Advanced Settings dialog box appears.
4. Select NFS on the left side of the window.
5. Modify the parameters NFS.MaxVolumes, NFS.SendBufferSize, and NFS.ReceiveBufferSize on the right side of the window (Figure 10).

Figure 10: ESX - NFS advanced parameters configuration
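These parameters can also be set from the ESX service console with the esxcfg-advcfg command. A minimal sketch, assuming the values recommended above (the buffer sizes shown are example multiples of 32; the heap settings take effect only after a host reboot):

$ esxcfg-advcfg -s 64 /NFS/MaxVolumes
$ esxcfg-advcfg -s 30 /Net/TcpipHeapSize
$ esxcfg-advcfg -s 120 /Net/TcpipHeapMax
$ esxcfg-advcfg -s 64 /NFS/SendBufferSize
$ esxcfg-advcfg -s 64 /NFS/ReceiveBufferSize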

The addition or removal of Celerra file systems to and from ESX involves the following:

1. Make the appropriate changes, such as creating and exporting a file system, on the EMC Celerra storage. Exported file systems are available across the network and can be mounted by remote users.

2. Create a data store on ESX. "Creating a VMware vStorage VMFS data store by using a vSphere client" provides more information on this.

When using NFS, consider the following:

♦ An NFS export should be created for the Celerra file system to permit VMware ESX to access the file system using the NFS network storage protocol. This NFS export should include several access permissions for the VMkernel port that was configured in VMware ESX (Figure 11):
  • Read/Write Hosts — provide the VMkernel port read/write access to the file system.
  • Root Hosts — provide the VMkernel port root access to the file system.
  • Access Hosts — provide the VMkernel port mount access to the file system.

Figure 11: Celerra Manager - NFS Export Properties






♦ It is a recommended practice to use the uncached mechanism to enhance write performance to the Celerra over the NFS protocol. This mechanism allows well-formed writes (such as multiple disk blocks and disk block alignments) to be sent directly to the disk without being cached on the server.


The uncached write mechanism is designed to improve performance for applications with many connections to a large file. It is important to note that when using this mechanism, written data is still cached, but only by the intelligent back-end storage of the Celerra rather than also by the Data Mover. This mechanism can enhance access to large files by at least 30 percent over the NFS protocol. By default, the uncached mechanism is not enabled on Celerra; however, it can be enabled for a specific file system. When using the replication software, the uncached option should be selected on the primary file system. The uncached option should also be selected on the secondary file system so that performance remains improved in case of a failover.

To select the uncached write mechanism for a file system, enter the following command from the Control Station command line or the CLI interface in Celerra Manager:

$ server_mount movername -option options,uncached fs_name mount_point

Where:
♦ movername — name of the specified Data Mover
♦ options — mount options, separated by commas
♦ fs_name — name of the file system
♦ mount_point — path to the mount point for the specified Data Mover

Example:

$ server_mount server_2 -option rw,uncached ufs1 /ufs1

Output:

server_2: done

After exporting the file system on Celerra, a NAS/NFS data store can be configured through the Add Storage wizard by using the vSphere client.


Figure 12: ESX - NFS data store configuration

The NFS data store can be extended by extending the corresponding Celerra file system by using the Celerra Manager GUI or CLI.
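A minimal sketch of extending a file system from the Control Station CLI follows; the file system name, size, and pool shown are illustrative assumptions, so confirm the exact options against your Celerra documentation:

$ nas_fs -xtend ufs1 size=10G pool=clar_r5_performance

Automatic file system extension (described in the Terminology section) can perform this growth for you when the file system's high water mark is reached.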

Configuring Celerra iSCSI for ESX

A dynamic, virtualized environment requires changes to the storage infrastructure. This may include the addition and removal of storage devices presented to an ESX host. You can perform both of these functions while ESX is online. However, the removal of storage from an existing environment poses a high level of risk, so you must be extremely careful when removing storage from an ESX host.

To add or remove Celerra iSCSI devices to and from ESX:

1. Perform the configuration changes on the Celerra storage array.
2. Perform the configuration changes on the ESX host.

You can use Celerra Manager to perform the configuration changes on the Celerra storage array. Subsequently, you must ensure that the VMkernel discovers the new configuration.

Table 2 shows the software iSCSI maximum configurations in VMware vSphere 4.

30

Introduction to Using EMC Celerra with VMware vSphere 4

Applied Best Practices Guide

Celerra storage Provisioning for ESX Hosts

Table 2: Software iSCSI maximum configurations

LUNs per host: 256
LUNs concurrently used per host: 256
Port-bound NICs per server: 8
Paths to a LUN: 8
Total paths per host: 1024
Host bus adapters (HBAs) per host: 2
Total of any combination of static and dynamic targets per host: 256
iSCSI initiator ports per host: 8
Raw device mapping (RDM) size: 2 TB minus 512 B

To configure iSCSI devices, the ESX hosts must have a network connection configured for IP storage and the iSCSI service enabled. To do this:

1. In the vSphere client, enable the iSCSI client in the Security Profile firewall properties dialog box. This enables the client to establish sessions with the iSCSI target on Celerra.
2. Select Storage Adapters, and then select the Properties link in the iSCSI Software Adapter dialog box.
3. In the iSCSI Initiator Properties dialog box, click Configure. The General Properties dialog box appears, as shown in Figure 13. Select Enabled and click OK.


Figure 13: ESX - Software iSCSI Initiator Properties dialog box

4. In the iSCSI Initiator Properties dialog box, click Dynamic Discovery, click Add, and then type the IP address of the Data Mover interfaces with which the iSCSI initiator communicates.
5. Configure the iSCSI LUNs on Celerra and mask them to the iSCSI Qualified Name (IQN) of the software initiator defined for this ESX host.

Note: You can identify the IQN in the iSCSI Initiator Properties dialog box. The default device name for the software initiator is vmhba33. You can also obtain the IQN by issuing the vmkiscsi-iname command from the service console.

The management interface is used to enable the iSCSI service and define the network portal that is used to access the Celerra iSCSI target. You can use the Celerra Management iSCSI Wizard to configure an iSCSI LUN. If you know the IQN defined for the ESX host, you can mask the LUN to the host for further configuration. LUNs are provisioned through Celerra Manager from the Celerra file system and masked to the IQN of the ESX host iSCSI software initiator. Similar to NFS, the VMkernel network interface is used to establish the iSCSI session with the Celerra target.

6. Click Configuration, select Storage Adapters, and click Rescan (Figure 14). This scans the iSCSI bus to identify the LUNs that you have configured for the ESX host.

Figure 14: ESX - Storage Adapters Rescan
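Several of these steps can also be performed from the ESX service console. A minimal sketch, assuming the software initiator's default adapter name vmhba33 noted above (verify command availability on your ESX build):

$ esxcfg-firewall -e swISCSIClient     (open the firewall for the software iSCSI client)
$ esxcfg-swiscsi -e                    (enable the software iSCSI initiator)
$ vmkiscsi-iname                       (print the host IQN to use for LUN masking on Celerra)
$ esxcfg-rescan vmhba33                (rescan the software iSCSI adapter for new LUNs)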

Creating a VMware vStorage VMFS data store by using a vSphere client

The Storage link in the Configuration tab provides the path to create a new data store (Figure 15). Select Storage to display all available data stores on the ESX host. In addition to the current state information, the window also provides options to manage the data store information and create new data stores. To create a new data store:


Figure 15: Storage link in Configuration tab

1. Click Storage on the right side of the window and click Add Storage. The Add Storage wizard appears and presents a summary of the required steps to provision a new data store.
2. Select the Disk/LUN option to provision a data store on an iSCSI-attached Celerra system.
3. Click Next. The viable iSCSI or SCSI-attached devices are displayed. Devices that have existing VMware file systems are not displayed, regardless of whether the device contains free space. However, devices that do not have VMware vStorage VMFS formatted partitions but have free space are visible in the wizard.
4. Select the appropriate device and click Next. Either a summary window or a window with two options appears, based on the configuration of the selected device. If the selected device has no existing partition, the wizard presents a summary window that describes the proposed layout on the selected device. For devices with existing partitions, the system prompts you with two options: delete the existing partition, or create a VMware vStorage VMFS volume on the free space available on the device.
5. Select the appropriate option (if applicable), and click Next.

Note: To format the device with the VMware file system, the wizard automatically selects the appropriate formatting option. The block size of the VMware file system influences the maximum size of a single file on the file system. Do not change the default block size (1 MB) unless a virtual disk larger than 256 GB has to be created on the file system. However, unlike other file systems, VMware vStorage VMFS is a self-tuning file system that changes the allocation unit based on the size of the created file. This approach reduces the wasted space commonly found in file systems where the average file size is smaller than the block size.
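A VMware vStorage VMFS data store can also be created from the service console with vmkfstools. The following is a sketch only; the data store label and device path are illustrative assumptions, and the target partition must already exist (the Add Storage wizard, which creates and aligns the partition for you, is the preferred method):

$ vmkfstools -C vmfs3 -b 1m -S Celerra_iSCSI_DS1 /vmfs/devices/disks/naa.60060160a1b21c00:1

Here -C vmfs3 selects the file system type, -b 1m sets the 1 MB default block size, and -S assigns the data store label.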

34

Introduction to Using EMC Celerra with VMware vSphere 4

Applied Best Practices Guide

Celerra storage Provisioning for ESX Hosts

Best Practice: Align the application data disks at the virtual machine level in a VMware vStorage VMFS data store. The vStorage VMFS data store itself does not need alignment when it is created by using vCenter Server. Recommendations for Aligning VMFS Partitions, available on the VMware website, provides more information about aligning application data disks at the virtual machine level.

Volume Grow

vCenter Server 4 enables dynamic expansion of a volume partition to add capacity to a running VMware vStorage VMFS. First extend the Celerra iSCSI LUN that holds the existing VMware vStorage VMFS partition, and then extend the VMware vStorage VMFS partition through vCenter Server. You do not need to stop the virtual machines to expand the VMware vStorage VMFS volume.

Best Practice: With this new feature that supports nondisruptive extensions, expand the same Celerra iSCSI LUN instead of adding a new iSCSI LUN for the extension.

Creating RDM volumes for an ESX host over a Celerra iSCSI LUN

To create RDM volumes in ESX, present the LUNs to the ESX host and add the raw LUN through the virtual machine's Edit Settings interface. Click Add and select Hard Disk in the Add Hardware wizard to add a raw device mapping to a virtual machine. Figure 16 shows the interface where you select raw device mappings for a virtual machine. Notice that only raw devices are listed; there are no VMware file systems on these LUNs. The VMware file system that hosts the mapping file for the RDM volume is selected as part of the Add Hardware wizard process. Ensure that the RDM volumes are aligned for the application data volumes.

Best Practice: In general, RDM volumes should be used sparingly, in use cases that require direct mapping from the virtual machine to the physical LUN, such as MSCS.


Figure 16 Raw Device Mapping option for a VM
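An RDM mapping file can also be created from the service console with vmkfstools. A sketch, with the device and data store paths as placeholders:

# Create a virtual compatibility mode RDM for a Celerra iSCSI LUN;
# the mapping file lives on a VMFS data store alongside the VM
vmkfstools -r /vmfs/devices/disks/naa.6006016087654321 \
    /vmfs/volumes/vmfs_ds01/myvm/myvm_rdm.vmdk
# Use -z instead of -r for physical compatibility mode (required for MSCS)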

To extend an RDM volume, extend the corresponding Celerra iSCSI LUN through the Celerra Manager GUI or CLI. After the guest OS detects the extended volume, you can format the added space by using the guest OS tools.

Storage VMotion

The Storage VMotion feature enables the migration of virtual machines from one data store to another without service interruption. This enables administrators to offload virtual machines from one storage array to another to perform maintenance, reconfigure LUNs, and upgrade VMware vStorage VMFS volumes. Administrators can optimize the storage environment for improved performance and seamlessly migrate virtual machines. Using Storage VMotion, you can migrate a virtual machine and its disk files from one data store to another while the virtual machine is running. You can place the virtual machine and all its disks in a single location, or select separate locations for the virtual machine configuration file and each virtual disk. The virtual machine does not change execution host during a Storage VMotion migration. You can use Storage VMotion to manually redistribute virtual machines or virtual disks to different storage volumes to balance capacity or improve performance. You administer Storage VMotion through vCenter Server. In vSphere 4, it works across NFS, Fibre Channel, and iSCSI, rather than just Fibre Channel and iSCSI.


To configure Storage VMotion:

1. Display the virtual machine you want to migrate in the inventory.

2. Right-click the virtual machine and select Migrate from the drop-down list.

3. Select Change data store as shown in Figure 17 and click Next.

Figure 17 Storage VMotion - Select Change Data store

4. Select a resource pool and click Next.

5. To select the destination data store, do one of the following:

• To move the virtual machine configuration files and virtual disks to a single destination, select the data store and click Next.

• To select individual destinations for the configuration file and each virtual disk, click Advanced. In the Data store column, select a destination for the configuration file and each virtual disk, and click Next.

6. Select a disk format and click Next. Figure 18 shows the various formats to store the virtual machine's disks.

Figure 18 Storage VMotion - Target disk type selection

7. Select the migration priority level and click Next.

8. Verify the information and click Finish.

Note: A Storage VMotion migration is conducted by the ESX host that holds the affected virtual machines. As such, a large-scale migration can add considerable load to the ESX host. Therefore, Storage VMotion is mainly appropriate for small-scale migrations; consider storage-based migration alternatives for large-scale migrations.
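Storage VMotion can also be driven from the vSphere CLI with the svmotion command. A sketch, in which the server name, credentials, data center, and data store names are all hypothetical:

# Relocate myvm from data store ds1 to ds2 (run from a vSphere CLI host)
svmotion --url=https://vcenter.example.com/sdk --username=administrator \
    --password=secret --datacenter=DC1 \
    --vm="[ds1] myvm/myvm.vmx:ds2"

Alternatively, svmotion --interactive prompts for each value.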

Celerra Virtual Provisioning

Celerra Virtual Provisioning™ is a thin provisioning feature that improves capacity utilization. With Virtual Provisioning, provided through Celerra file systems and iSCSI LUNs, storage is consumed only as it is needed. Virtual Provisioning enables the creation of storage devices that do not pre-allocate storage capacity for virtual disk space until the virtual machine application generates data to be stored on the virtual disk. This model avoids the need to over-provision disks based on expected growth. Storage devices still present and support their upper size limits to the hosts that access them, but in most cases the actual disk usage falls below the apparent allocated size. The benefit is that, like virtual resources in the ESX architecture, storage is presented as a set of virtual devices that share a pool of disk resources. Disk consumption increases based on the needs of the virtual machines in the ESX environment. To address future growth, Celerra monitors the available space and can be configured to automatically extend the file system size as the amount of free space decreases.

Celerra virtually provisioned NFS file system

Virtual Provisioning is a feature that you can enable on a Celerra NFS file system. However, you must use this feature along with the automatic file system extension feature. When you configure these two features, you select values for the maximum size parameter and the high water mark (HWM) parameter. The Celerra Control Station extends the file system, when needed, based on the values of these parameters. The automatic file system extension feature keeps the file system usage (measured by the ratio of used space to allocated space) at least 3 percent below the HWM: when the file system usage reaches the HWM, an automatic extension event notification is sent to the Celerra sys_log and the file system is automatically extended. If Virtual Provisioning is enabled, the maximum size (rather than the amount of storage actually allocated) is presented to the NFS, CIFS, or FTP clients. Figure 19 shows the Virtual Provisioning option in Celerra Manager.

Figure 19 Celerra Manager - Virtual Provisioning configuration for a file system

If there is not enough free storage space to extend the file system to the requested size, the automatic file system extension feature extends the file system to use all the available storage. For example, if automatic file system extension requires 6 GB but only 3 GB is available, the file system is extended by 3 GB. In this case, an error message appears indicating that there was not enough storage space available to perform the full automatic extension. When no storage space is available at all, automatic file system extension fails, and you must manually extend the file system.
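From the Control Station CLI, a sketch of creating a virtually provisioned file system with automatic extension enabled (the names, pool, sizes, and HWM value are illustrative; verify the option spelling against the nas_fs man page for your release):

# 50 GB virtually provisioned file system that can auto-extend to 200 GB,
# extending whenever usage crosses the 90% high water mark
nas_fs -name vm_nfs_fs -create size=50G pool=clar_r5_performance \
    -auto_extend yes -vp yes -hwm 90% -max_size 200G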


Celerra virtually provisioned iSCSI LUN

A Celerra iSCSI LUN is created within a standard Celerra file system and emulates a SCSI disk device. By default, an iSCSI LUN is created as a fully provisioned (also called thick) device, with the entire requested disk size reserved for the LUN (although it is not taken from the reservation pool). This is called a regular iSCSI LUN. However, when a virtually provisioned iSCSI LUN is created by using the Virtual Provisioning storage method, space is not reserved on the disk for the LUN. Additional space is allocated to the LUN only when the user actually requires it. Therefore, it is important to ensure that file system space is available before data is added to the LUN. For this reason, you must enable automatic file system extension on the file system on which the virtually provisioned LUN is created, and it is recommended that you also enable Virtual Provisioning on this file system to optimize storage utilization. When you use the high water mark (HWM) parameter for automatic file system extension, the number of blocks in use determines when the HWM is reached. File system extension can therefore occur even when usage of the production LUN appears to be low from the host's perspective; for example, snapshots consume additional file system space. In addition, deleting data from a LUN does not reduce the number of blocks allocated to the LUN. By default, a snapshot of a virtually provisioned iSCSI LUN is virtually provisioned, and a snapshot of a regular iSCSI LUN is fully provisioned. To also virtually provision a snapshot of a regular iSCSI LUN, you must adjust the sparseTWS parameter on the Celerra Data Mover.
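A sketch of creating a virtually provisioned iSCSI LUN from the Control Station (the target alias, LUN number, file system, and size are illustrative; the -vp option and the size units should be verified against the server_iscsi man page for your DART release):

# 100 GB (size in MB) virtually provisioned LUN on file system fs01,
# presented through iSCSI target target1 on Data Mover server_2
server_iscsi server_2 -lun -number 10 -create target1 -size 102400 -fs fs01 -vp yes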

Storage deduplication for virtual machines' data by using Celerra deduplication for NFS

With VMware Infrastructure, the Celerra Data Deduplication feature works best on Celerra file systems that are mounted or mapped by virtual machines over NFS or CIFS for uses such as home directories and network shared folders. Deduplication eliminates redundant data and improves the storage efficiency of the file systems. Further, because Celerra Data Deduplication only targets non-active files, it has minimal impact on the virtual machines' performance; for the same reason, it does not affect actively used virtual disk files (vmdk files). You can enable Celerra Data Deduplication for a file system through the Celerra Manager GUI or CLI. Figure 20 shows the Celerra Data Deduplication option in the Celerra Manager GUI.


Figure 20 Celerra Manager - Enable deduplication for a file system
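From the CLI, a minimal sketch (the file system name is illustrative; verify the fs_dedupe syntax on your release):

# Turn on Celerra Data Deduplication for an existing file system
fs_dedupe -modify vm_nfs_fs -state on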

Best Practice: Enable Celerra Deduplication for file systems that are used to store user data, operating system images, and ISOs for better storage efficiency.


Chapter 4

Network Multipathing, Failover, and Load Balancing

This chapter presents these topics:
♦ Introduction
♦ Celerra NFS high availability
♦ Celerra iSCSI high availability
♦ Multipathing with Celerra iSCSI


Introduction

An important aspect of an ESX server's network design is to ensure that there is redundancy in the paths used to access the ESX server's storage. This redundancy is available when the storage is accessed by using the NFS or iSCSI protocols. The following sections describe how to configure a highly available network when you use either NFS or iSCSI to access the storage presented by a Celerra Data Mover. To maintain a constant connection between an ESX host and its storage, ESX supports multipathing: a technique that uses more than one physical path to transfer data between the ESX host and the external storage device. If any element in the network path fails, such as an HBA, switch, or cable, ESX can fail over to another physical path. In addition to path failover, multipathing offers load balancing, which redistributes I/O loads across multiple paths, reducing or removing potential bottlenecks.

Celerra NFS high availability

In a VMware and Celerra environment, you can achieve high availability for storage access over the NFS protocol through specific network configurations in ESX and on the Celerra Data Mover. The general rule is to have no single point of failure in the network path between the ESX server and the Celerra storage. Figure 21 broadly describes one way to achieve high availability when configuring an NFS data store on an ESX server accessing Celerra storage; a CLI sketch follows the figure.

♦ Within the virtual switch on the ESX server, NIC teaming is configured between two network devices connected to the same subnet.

♦ Two Ethernet switches configured with the same subnets and connected by an uplink are used. In this configuration, an uplink is required so that there is always a valid connected network path to the active interface of the Fail-Safe Network (FSN) device. In an FSN device configuration, one network interface is always in a standby state, so only one connection is active at a time.

♦ An FSN device, comprised of two GbE ports on the same subnet, is created on the Celerra Data Mover.


Figure 21 Celerra NFS high availability configuration
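A CLI sketch of the two halves of this configuration (the device, switch, and interface names plus the IP addressing are all illustrative; verify the server_sysconfig options against your Celerra release):

# On the ESX host: team two uplinks on the vSwitch that carries NFS traffic
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1

# On the Celerra Data Mover: build an FSN device from two GbE ports and
# put an IP interface on it
server_sysconfig server_2 -virtual -name fsn0 -create fsn \
    -option "primary=cge0 device=cge0,cge1"
server_ifconfig server_2 -create -Device fsn0 -name nfs_if \
    -protocol IP 10.0.0.50 255.255.255.0 10.0.0.255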

Celerra iSCSI high availability

In a VMware and Celerra environment, you can achieve high availability for storage access over the iSCSI protocol by setting up the following network configurations in ESX and on the Celerra Data Mover:

♦ In ESX:
• Two virtual switches are created.
• Two VMkernel ports are created.
• Two different NICs connected to different subnets are used.

♦ Two Ethernet switches are used. The switches must be on separate subnets. Although not required to protect against a single point of failure, an uplink between the switches provides additional protection in case of multiple points of failure.

♦ Two different network interfaces connected to two different subnets are created on the Celerra Data Mover.

The primary design objective is to avoid a single point of failure in the network path between the ESX server and the storage that is accessed through the Celerra Data Mover. Figure 22 broadly describes one way to achieve high availability by using the VMware iSCSI multipathing feature with a Celerra Data Mover; a configuration sketch for the ESX side follows the figure.


Figure 22 Celerra iSCSI high availability configuration
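A sketch of the ESX half of this configuration from the service console (the NIC names, port group names, and addressing are illustrative):

# First path: vSwitch1 with one uplink and a VMkernel port on subnet 10.0.1.0/24
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vmknic -a -i 10.0.1.10 -n 255.255.255.0 iSCSI1

# Second path: vSwitch2 with its own uplink and VMkernel port on 10.0.2.0/24
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A iSCSI2 vSwitch2
esxcfg-vmknic -a -i 10.0.2.10 -n 255.255.255.0 iSCSI2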

Multipathing with Celerra iSCSI

ESX supports host-based multipathing for hardware and software iSCSI initiators. With Celerra iSCSI, ESX can also use the multipathing support built into the IP network, which allows the network to perform routing. Through Dynamic Discovery, the iSCSI initiators obtain a list of target addresses that they can use as multiple paths to iSCSI LUNs for failover purposes. With hardware iSCSI, the host can have two or more hardware iSCSI adapters, and each adapter can be used as a different path to reach the Celerra iSCSI LUN. With software iSCSI, you can use multiple physical Ethernet adapters and associate them with multiple VMkernel ports by using a 1:1 mapping. Each VMkernel port then becomes a different path for storage multipathing. This type of configuration provides network redundancy, failover, and load balancing for iSCSI connections between ESX and the storage system. Figure 23 depicts the Celerra iSCSI multipathing configuration; a port-binding sketch follows the figure.


Figure 23 Celerra iSCSI multipathing
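For the software iSCSI 1:1 mapping described above, ESX 4 binds each VMkernel port to the software iSCSI adapter with esxcli. A sketch (the vmk numbers and adapter name are placeholders; list VMkernel ports with esxcfg-vmknic -l):

# Bind both VMkernel ports to the software iSCSI initiator, then verify
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33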


Chapter 5

Virtual Machine Operations with Celerra Storage

This chapter presents these topics:
♦ Creating virtual machines over Celerra storage
♦ Cloning virtual machines and templates
♦ Migrating virtual machines
♦ Virtual machine snapshots


Creating virtual machines over Celerra storage

You can create virtual machines by using the New Virtual Machine wizard in vCenter Server. During the creation process, you select a previously created Celerra NFS or VMware vStorage VMFS data store. Figure 24 shows the two paths in the wizard, Typical and Custom. vCenter Server supports two virtual machine versions when the Custom path is selected:

♦ Virtual machine version 4: Compatible with ESX version 3.0 hosts and later and VMware Server version 1.0 hosts and later.

♦ Virtual machine version 7: Compatible with ESX version 4.0 hosts and later and VMware Server version 2.0 hosts. This version provides greater virtual machine functionality.

Figure 24 vCenter - Virtual machine type selection

VMware vSphere 4 supports thin provisioning of virtual disks when creating virtual machines on VMware vStorage VMFS data stores over Celerra iSCSI. Thin-provisioned virtual disks grow as the virtual machine uses them. You can also configure virtual machines that use raw device mapping of a Celerra iSCSI LUN through the Custom path of this wizard. To convert a virtual disk from thin to thick, navigate to the virtual machine with the data store browser, right-click the virtual disk, and select Inflate.
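Thin virtual disks can also be created, and later inflated, from the service console with vmkfstools; a sketch with hypothetical paths:

# Create a 20 GB thin-provisioned virtual disk on a VMFS data store
vmkfstools -c 20G -d thin /vmfs/volumes/vmfs_ds01/myvm/data.vmdk
# Inflate the thin disk to its full size (equivalent to the Inflate option)
vmkfstools -j /vmfs/volumes/vmfs_ds01/myvm/data.vmdk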


Cloning virtual machines and templates

A clone is a copy of a virtual machine, optionally with guest customization. You can initiate cloning through vCenter Server by using the cloning wizard, and you can create clones from either a virtual machine or a template. Cloning a virtual machine with thin-provisioned virtual disks on a Celerra NFS data store, or on a VMware vStorage VMFS over Celerra iSCSI data store, results in a clone with thin disks. When you clone a virtual machine to a template, vCenter Server provides the option to keep the target disk format the same as the source or to make it thin or thick.

Migrating virtual machines

vCenter Server supports both online and cold migration. "Storage VMotion" explains online migration, which is also called Storage VMotion. Cold migration migrates virtual machines from one data store to another while the virtual machine is powered off. Migrating a virtual machine with a thin-provisioned disk from one data store to another on Celerra storage results in a thin-provisioned disk after migration. To initiate a cold migration, right-click the virtual machine and select Migrate.

Virtual machine snapshots

A snapshot captures the entire state of the virtual machine at the time the snapshot is taken. This includes:

♦ Memory: The contents of the virtual machine's memory.

♦ Settings: The virtual machine settings.

♦ Disk state: The state of all the virtual machine's virtual disks.

You can take snapshots of virtual machines with raw device mappings.

To take a snapshot, right-click the virtual machine, select Snapshot, and then select Take Snapshot. You can manage snapshots by using the snapshot manager.
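Snapshots can also be scripted from the ESX service console with vmware-cmd; a sketch with a hypothetical .vmx path:

# createsnapshot <name> <description> <quiesce 0|1> <include memory 0|1>
vmware-cmd /vmfs/volumes/vmfs_ds01/myvm/myvm.vmx \
    createsnapshot pre_patch "before OS patching" 0 1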


Chapter 6

Business Continuity and Data Replication

This chapter presents these topics:
♦ Introduction
♦ Celerra replication
♦ Replicating an NFS data store
♦ Replicating a VMware vStorage VMFS data store over iSCSI
♦ Replicating RDM over iSCSI


Introduction

Celerra SnapSure™ and EMC Celerra Replicator™ assist with the ongoing management of the NFS file system. Depending on the state of the guest OS and applications, SnapSure provides a crash-consistent image of the NFS file system, which contains operating system and application virtual disks. Celerra iSCSI snapshots can be leveraged to achieve similar benefits with iSCSI LUNs. Celerra snapshots can be used to create point-in-time images of the file system or the iSCSI LUN that contains the VMDK files for the OS and application data. The snapshot image can then be used for nearline restores of virtual disk images. The snapshot file system can be integrated into NDMP backup processes or copied to a second Celerra for disaster recovery purposes. VMware Site Recovery Manager (SRM) simplifies and automates the key elements of disaster recovery: setting up disaster recovery plans, testing those plans, executing failover when a data center disaster occurs, and failing back to the primary data center. The EMC Celerra Replicator Adapter for VMware SRM is a software package that allows SRM to orchestrate disaster recovery actions for virtual machines hosted on vStorage VMFS data stores that are configured over Celerra iSCSI LUNs. In addition to the iSCSI protocol license, the Celerra storage systems rely on the Celerra Replicator V2 and SnapSure features to provide the replication and snapshot architecture needed for SRM.

Celerra replication

The business continuity solution for a production environment with VMware virtual infrastructure uses Celerra Replicator as the mechanism to replicate data from the production data center to the remote data center. You can present the copy of the data in the remote data center to a VMware ESX cluster group, so that the virtual infrastructure at the remote data center provides a business continuity solution. For disaster recovery (DR) purposes, a remote replica of the production file system or iSCSI LUN that provides the ESX storage is required. Celerra offers advanced data replication technologies to help protect a file system or an iSCSI LUN. In a disaster, you can fail over to the destination side with minimal administrator intervention. You need to maintain the replication session and refresh the snapshots periodically; the update frequency is determined by the available WAN bandwidth and the RPO. You can access Celerra Replicator through the replication wizard in Celerra Manager or through the CLI. Figure 25 shows the replication wizard in Celerra Manager; a CLI sketch follows the figure.


Figure 25 Celerra Manager - Replication Wizard
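From the Control Station CLI, a sketch of creating a file system replication session with nas_replicate (the session, file system, and interconnect names plus the RPO value are illustrative; verify the syntax against the nas_replicate man page for Celerra 5.6):

# Replicate vm_nfs_fs to vm_nfs_fs_dr over an existing Data Mover
# interconnect, allowing the replica to be at most 10 minutes behind
nas_replicate -create nfs_rep1 -source -fs vm_nfs_fs \
    -destination -fs vm_nfs_fs_dr -interconnect NYs2_BOs2 \
    -max_time_out_of_sync 10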

Replicating an NFS data store

You can use Celerra Replicator to replicate the file systems exported to ESX as NFS data stores. This is accomplished by using the Celerra /nas/bin/fs_replicate command, the newer nas_replicate command in EMC Celerra Network Server version 5.6, or Celerra Manager. The replication operates at the data store level: multiple virtual machines are replicated together if they reside in the same data store. If you want finer granularity, at an image level for an individual virtual machine, you can put the virtual machine in its own NFS data store.

After you perform a failover operation to promote the replica, you can mount the destination file system as an NFS data store on the remote ESX server. When you configure the remote ESX server, you must configure the network so that the replicated virtual machines are accessible. Virtual machines that reside in the file system need to be registered with the new ESX server through the VI Client. When you browse the NFS data store, you can right-click a .vmx configuration file and select Add to Inventory to complete the registration. Alternatively, the ESX service console command vmware-cmd can automate the process if you need to register a large number of virtual machines. A shell script along the following lines helps achieve this goal (the <datastore> placeholder stands for the data store name, which was elided in the original):

# Register every virtual machine found on the given data store
# (<datastore> is a placeholder for the NFS data store name)
for vm in `ls /vmfs/volumes/<datastore>`
do
    /usr/bin/vmware-cmd -s register /vmfs/volumes/<datastore>/$vm/*.vmx
done


After the registration is done, you can power on a virtual machine; this may take a while to complete. During the power-on, a message appears displaying msg.uuid.moved. Select the Keep option to complete the power-on procedure.

Replicating a VMware vStorage VMFS data store over iSCSI

You can use Celerra Replicator for iSCSI to replicate the iSCSI LUNs exported to ESX as VMware vStorage VMFS data stores. You can use the cbm_replicate command of the CBMCLI package to create and manage iSCSI replication sessions; alternatively, you can use the nas_replicate command available in Celerra version 5.6. The replication operates at the LUN level: multiple virtual machines are replicated together if they reside on the same iSCSI LUN. If you want finer granularity, at an image level for an individual virtual machine, you can put the virtual machine on its own iSCSI LUN. As in the NFS data store case, you need to register virtual machines with the remote ESX server after a failover. You can register a virtual machine through the data store browser GUI, automate the registration with scripts by using the vmware-cmd service console command, or use PowerCLI.

Replicating RDM over iSCSI

Replication of RDM volumes is similar to the physical backup of RDM volumes. You can use Celerra Replicator for iSCSI to replicate the iSCSI LUNs presented to ESX as RDM volumes, either by using the cbm_replicate command of the CBMCLI package, the nas_replicate command available in Celerra version 5.6, or Replication Manager. You can use Replication Manager only with an RDM volume that is formatted as NTFS and configured in physical compatibility mode.


Chapter 7

Conclusion

This chapter presents these topics:
♦ Summary
♦ Related documents


Summary

Leveraging the full capabilities of VMware vSphere requires shared storage. As virtualization is used for production applications, storage performance and total system availability become critical. The EMC Celerra platform is a high-performance, high-availability unified storage solution that is ideal for supporting multiprotocol ESX host deployments. This document has covered many of the VMware vSphere and Celerra considerations. The capability to use a Celerra NFS-exported file system adds significant flexibility to VMware vSphere: due to the open nature of NFS, multiple ESX hosts can access the same repository for files or folders containing ISO libraries as well as virtual disks. The combination of NFS, iSCSI, and Fibre Channel support in VMware vSphere and Celerra storage provides an extremely flexible and reliable environment. Because virtual disks on NFS are file objects stored in the Celerra file system, the management and mobility features of Celerra offer unique options to replicate and migrate virtual machines in the network environment. Aided by the flexibility and advanced functionality of the Celerra products, the ESX host and Celerra network storage platforms provide a very useful set of tools to establish and support the virtual infrastructure.

Related documents

There is a considerable amount of additional information available on the VMware website and discussion boards, as well as on numerous other websites focused on VMware Infrastructure. The VMware website (http://www.vmware.com) provides additional resources and user guides:

♦ Recommendations for Aligning VMFS Partitions
♦ VMware vSphere 4 Administrator Guide
♦ VMware vSphere 4 Configuration Maximums Guide
♦ VMware vSphere 4 SAN Configuration Guide
♦ VMware vSphere 4 Availability Guide
