Using EMC Celerra Storage with VMware vSphere and VMware Infrastructure Version 4.0

• Connectivity of VMware vSphere or VMware Infrastructure to Celerra Storage
• Backup and Recovery of VMware vSphere or VMware Infrastructure on Celerra Storage
• Disaster Recovery of VMware vSphere or VMware Infrastructure on Celerra Storage

Yossi Mesika

Copyright © 2008, 2009, 2010 EMC Corporation. All rights reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

H5536.9


Contents

Errata
EMC VSI for VMware vSphere: Unified Storage Management replaces EMC Celerra Plug-in for VMware ........ 21

Preface
Top five optimization recommendations ........ 28

Chapter 1 Introduction to VMware Technology
1.1 VMware vSphere and VMware Infrastructure virtualization platforms ........ 30
1.2 VMware vSphere and VMware Infrastructure ...

1. Type the following command to add a claim rule for the Celerra devices:
esxcli corestorage claimrule add --type=vendor --rule=<rule number> --vendor="EMC" --model="Celerra"

Figure 229 Claim rule to ESX server

2. Type the following command to update the kernel and esx.conf: esxcli corestorage claimrule load

Figure 230 Kernel and esx.conf

3. Type the following command to verify whether the claim rule is successfully loaded: esxcli corestorage claimrule list

4. Reboot the ESX host.
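Taken together, the claim rule configuration is a short sequence of service console commands. The following is a sketch only; the rule number (250) and the PowerPath plugin name are shown for illustration and should be replaced with values appropriate to the environment:

esxcli corestorage claimrule add --plugin="PowerPath" --type=vendor --rule=250 --vendor="EMC" --model="Celerra"
esxcli corestorage claimrule load
esxcli corestorage claimrule list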


Figure 231 Rescan the ESX host

3.13.4.4 Configure PowerPath/VE multipathing for Celerra iSCSI using hardware iSCSI initiators
1. Configure the hardware iSCSI initiator as given in Section 3.8.2.2, “ESX iSCSI hardware initiator,” on page 154.
2. To create a new iSCSI LUN, refer to step 10 onwards in Section 3.8.2.1, “ESX iSCSI software initiator,” on page 139.


Note: The iSCSI target must be made available on two network interfaces, which are on two different subnets.

Figure 232 iSCSI Target Properties

Note: Grant the iSCSI LUN to both hardware initiators (on two different subnets), which are connected to the target.

Figure 233 LUN Mask


Figure 234 Storage Adapters

3. In vCenter Server, right-click the HBA, and then click Rescan. Note: Rescan both hardware HBAs to discover the iSCSI LUN and to make sure a path is available for each HBA port.
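The rescan can also be issued from the ESX service console with esxcfg-rescan; a sketch, assuming the two hardware iSCSI HBAs appear as vmhba2 and vmhba3 (the adapter names are hypothetical):

# esxcfg-rescan vmhba2
# esxcfg-rescan vmhba3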


4. Click Storage in the left pane. The Storage page appears.

Figure 235 Storage

5. Click Add Storage. The Add Storage wizard appears.

Figure 236 Add Storage wizard


6. Select Disk/LUN, and then click Next. The Select Disk/LUN dialog box appears.

Figure 237 Select Disk/LUN

7. Select the appropriate iSCSI LUN from the list, and then click Next. The Current Disk Layout dialog box appears.

Figure 238 Current Disk Layout

8. Review the current disk layout, and then click Next. The Ready to Complete dialog box appears.


Figure 239 Ready to Complete

9. Review the layout, and then click Finish. The vCenter Server storage configuration page appears.


Figure 240 vCenter Server storage configuration

10. Select the .

Figure 263 Configuration file

20. Upload the updated virtual machine configuration file to the datastore by using the Datastore Browser.

Figure 264 Update virtual machine file configuration

21. Power on the virtual machines. The LSI StorPort Adapters are successfully installed in the virtual machine.


Figure 265 LSI Storport drivers are upgraded successfully

3.14.8 Using paravirtual drivers in vSphere 4 environments
This section describes in detail the procedure to configure a paravirtual SCSI adapter as a system boot disk in VMware vSphere 4 environments and to add paravirtual SCSI disks to existing virtual machines.
3.14.8.1 Configure a PVSCSI adapter as a system boot disk in VMware vSphere 4.1 environments
To configure a PVSCSI adapter as the system boot disk in VMware vSphere 4.1 environments:
1. Launch the vSphere Client, log in to the ESX host system, and create a new virtual machine.
2. Ensure that a guest operating system that supports PVSCSI is installed on the virtual machine.

3. Right-click the virtual machine, and then click Edit Settings. The Virtual Machine Properties dialog box appears. Note: These drivers are loaded during the installation of the guest operating system in the form of floppy disk images, which are available in the [Datastore]/vmimages/floppies folder.

Figure 266 Virtual Machine Properties

4. Select Use existing floppy image in datastore, and then click Browse. The Browse Datastore dialog box appears. Note: Connect the floppy disk image after the Windows CD-ROM is booted so that the system does not boot from the floppy drive.


Figure 267 Browse Datastores

5. Browse to vmimages > floppies, select the floppy image for the appropriate guest OS, and then click OK. The floppy image of the guest OS is displayed, and the device status of the floppy image is set to connect at power on.


Figure 268 Virtual Machine Properties

6. Power on the virtual machine. Note: The virtual machine boots from the CD-ROM drive.

7. When Windows Setup starts, press F6. Note: This is required to instruct the operating system that third-party SCSI drivers are used.


Figure 269 Install the third-party driver

8. In the Device Status area of the Virtual Machine Properties dialog box, select Connect at Power on and click OK. The newly created virtual machine points to the PVSCSI SCSI driver.


Figure 270 Select VMware PVSCSI Controller

9. Press ENTER. The third-party paravirtual SCSI drivers are successfully installed.
10. Continue the Windows guest OS setup. Note: Booting a Microsoft Windows guest from a disk attached to a PVSCSI adapter is not supported in versions of ESX prior to ESX 4.0 Update 1. In these situations, install the system software on a disk attached to an adapter that does support a bootable disk.

3.14.8.2 Add Paravirtual SCSI (PVSCSI) adapters
To add a hard disk with a paravirtual SCSI adapter:
1. Start a vSphere Client and log in to an ESX host system.
2. Select an existing virtual machine or create a new one.


3. Ensure that a guest operating system that supports PVSCSI is installed on the virtual machine. Note: The paravirtual drivers are currently supported on the following guest operating systems: Windows Server 2008, Windows Server 2003, and Red Hat Enterprise Linux (RHEL) 5. If the guest operating system does not support booting from a disk attached to a PVSCSI adapter, install the system software on a disk attached to an adapter that supports a bootable disk.

In the vSphere Client or vCenter Server, right-click the virtual machine, and then click Edit Settings. The Virtual Machine Properties dialog box appears.

Figure 271 Virtual Machine Properties

4. Click Hardware, and then click Add. The Add Hardware wizard appears.


Figure 272 Select Hard Disk

5. Select Hard Disk, and then click Next. The Select a Disk dialog box appears.


Figure 273 Select a Disk

6. Select Create a new virtual disk, and then click Next. The Create a Disk dialog box appears.


Figure 274 Create a Disk

7. Specify the virtual disk size and provisioning policy, and then click Next. The Advanced Options dialog box appears.


Figure 275 Advanced Options

8. Select a Virtual Device Node between SCSI (1:0) and SCSI (3:15), and then click Next. The Ready to Complete page appears.


Figure 276 Ready to Complete

9. Click Finish. A new disk and controller are created. 10. In the Virtual Machine Properties dialog box, select the newly created controller, and then click Change Type.


Figure 277 Virtual Machine Properties

The Change SCSI Controller Type dialog box appears.


Figure 278 Change SCSI Controller Type

11. Click VMware Paravirtual, and then click OK.
12. Power on the virtual machine.
13. Install VMware Tools. VMware Tools includes the PVSCSI driver.
14. Scan and format the hard disk.


4 Cloning Virtual Machines

This chapter presents these topics:
◆ 4.1 Introduction .................................................................................... 348
◆ 4.2 Cloning methodologies ................................................................. 349
◆ 4.3 Cloning virtual machines by using Celerra-based technologies ..... 353
◆ 4.4 Celerra-based cloning with Virtual Provisioning ...................... 359
◆ 4.5 Conclusion ....................................................................................... 363


4.1 Introduction
Cloning a virtual machine is the process of creating an exact copy of an existing virtual machine in the same or a different location. By cloning virtual machines, administrators can quickly deploy a group of virtual machines based on a single virtual machine that was already created and configured. To clone a virtual machine, copy the data on the virtual disk of the source virtual machine and transfer that data to the target virtual disk, which is the new cloned virtual disk.
System reconfiguration, also known as system customization, is the process of adjusting the migrated operating system to avoid any possible network and software conflicts, and enabling it to function on the virtual hardware. Perform this adjustment on the target virtual disk after cloning.
It is not mandatory to shut down virtual machines before they are cloned. However, ideally, administrators should shut down the virtual machines before copying the metadata and the virtual disks associated with the virtual machines. Copying the virtual machines after they are shut down ensures that all the data from memory has been committed to the virtual disk. Hence, the virtual disk will contain a fully consistent copy of the virtual machines, which can be used to back up or to quickstart cloned virtual machines.
This chapter explains the primary methods available in VMware vSphere and VMware Infrastructure to clone virtual machines. It also explains Celerra-based technologies that can be used to clone virtual machines.


4.2 Cloning methodologies
VMware vSphere and VMware Infrastructure provide two primary methods to clone virtual machines — VMware vCenter Converter and the Clone Virtual Machine wizard in vCenter Server.

4.2.1 Clone Virtual Machine wizard in vCenter Server
To clone a virtual machine by using the Clone Virtual Machine wizard:
1. Right-click the virtual machine in the inventory and select Clone. The Clone Virtual Machine wizard appears.
Note: For VMware Infrastructure 3.5, it is recommended that administrators shut down the virtual machine before cloning. For VMware vSphere and Celerra-based cloning, the state of the virtual machine does not matter.

Figure 279 Clone Virtual Machine wizard


2. Type the name of the virtual machine, select the inventory location, and then click Next. The Host/Cluster dialog box appears.

Figure 280 Host/Cluster

3. Select the host for running the cloned virtual machine and click Next. The Datastore dialog box appears.

Figure 281 Datastore


4. Select the datastore to store the virtual machine and click Next. The Disk Format dialog box appears.

Figure 282 Disk Format

5. Select the format to store the virtual machine disk and click Next. The Guest Customization dialog box appears.

Figure 283 Guest Customization

6. Select the option to use in customizing the guest operating system of the new virtual machine and click Next. The Ready to Complete dialog box appears.


Note: Select Do not customize if no customization is required.

Figure 284 Ready to Complete

7. Click Finish. The cloning is initiated. Note: After the clone operation is completed, a cloned virtual machine is created with an exact copy of the source virtual machine. The Clone Virtual Machine wizard can also handle system reconfiguration of the cloned virtual machine.

4.2.2 VMware vCenter Converter
VMware vCenter Converter is a tool integrated with vCenter Server that enables administrators to convert any type of physical or virtual machine that runs the Windows operating system into a virtual machine that runs on an ESX server. VMware vCenter Converter can also be used to clone an existing virtual machine. VMware vCenter Converter uses its cloning and system reconfiguration features to create a virtual machine that is compatible with an ESX server. Section 3.7, “Using NFS storage,” on page 128 provides more details about VMware vCenter Converter.


4.3 Cloning virtual machines by using Celerra-based technologies
Note: This section describes the Celerra technologies that are available with Celerra versions earlier than 5.6.48. Starting from that release, the Celerra Data Deduplication technology was enhanced to also support virtual machine cloning. EMC Celerra Plug-in for VMware—Solution Guide provides more information on this technology and how it can be used in this case.

Celerra provides two technologies that can be used to clone virtual machines — Celerra SnapSure™ for file systems when using the NFS protocol, and iSCSI snapshot for iSCSI LUNs when using the iSCSI protocol. When using Celerra-based technologies for cloning, the virtual machine data is not passed on the wire from Celerra to ESX and back. Instead, the entire cloning operation is performed optimally within the Celerra with no ESX cycles.
If the information stored on the snapshot or checkpoint needs to be application-consistent (recoverable), administrators should either shut down or quiesce the applications that are running on the virtual machines involved in the cloning process. This must be done before a checkpoint or snapshot is created. Otherwise, the information on the snapshot or checkpoint will only be crash-consistent (restartable). This means that although it is possible to restart the virtual machines and the applications in them from the checkpoint or snapshot, some of the most recent data will be missing because it is not yet committed by the application (data in flight).
When virtual machines are cloned by using Celerra SnapSure or iSCSI snapshots, the cloned virtual machines will be exact copies of the source virtual machines. Administrators should manually customize these cloned virtual machines to avoid any possible network or software conflicts. To customize a Windows virtual machine that was cloned by using Celerra SnapSure or iSCSI snapshots, install the Windows customization tool, System Preparation (Sysprep), on the virtual machine. Sysprep will resignature all details associated with the new virtual machine and assign new system details. Sysprep also avoids possible network and software conflict between the virtual machines. Appendix B, “Windows Customization,” provides information on Windows customization with Sysprep.


4.3.1 Clone virtual machines over NAS datastores using Celerra SnapSure
Celerra SnapSure makes it easier to clone virtual machines that are provisioned over a NAS datastore. SnapSure creates a logical point-in-time copy of the production file system called a checkpoint file system. The production file system contains a NAS datastore that holds the metadata and virtual disks associated with the virtual machines that must be cloned. For cloning virtual machines by using Celerra SnapSure, the checkpoint file system must be writeable (read/write mode). The writeable checkpoint file system is created using Celerra Manager as shown in Figure 285 on page 354.

Figure 285 Create a writeable checkpoint for NAS datastore

Alternatively, writeable checkpoint file systems can be created by using the Celerra CLI:
# fs_ckpt <PFS_name> -name <checkpoint_name> -Create -readonly n

Similar to a standard NAS file system, it is mandatory to grant the VMkernel read/write access in addition to root access to the checkpoint file system. Section 3.7.1, “Add a Celerra file system to ESX,” on page 128 explains how to provide VMkernel the required access permissions. To clone one or more virtual machines that reside on a checkpoint file system, add the writeable checkpoint file system to the ESX server as a new NAS datastore, browse for the new datastore, and add the VMX files of the virtual machines to the vCenter inventory. This creates new virtual machines with the help of the Add to Inventory wizard.
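As a sketch of the end-to-end flow, the writeable checkpoint can be created and exported from the Celerra Control Station and then added to ESX from the service console. The file system, checkpoint, and datastore names below are placeholders, and the export path depends on where the checkpoint is mounted on the Data Mover:

# fs_ckpt <PFS_name> -name <ckpt_name> -Create -readonly n
# server_export server_2 -Protocol nfs -option root=<ESX_VMkernel_IP>,access=<ESX_VMkernel_IP> /<ckpt_mount_path>

On the ESX service console:

# esxcfg-nas -a -o <data_mover_IP> -s /<ckpt_mount_path> <new_datastore_name>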


Section 2.5.2, “Celerra SnapSure,” on page 76 provides more details about Celerra SnapSure.

4.3.2 Clone virtual machines over iSCSI/vStorage VMFS datastores using iSCSI snapshots
iSCSI snapshots on Celerra offer a logical point-in-time copy of the iSCSI LUN. Virtual machines are created on vStorage VMFS over iSCSI: a Celerra iSCSI LUN is presented to the ESX server and formatted as a VMFS datastore. Because each snapshot needs the same amount of storage as the iSCSI LUN (when Virtual Provisioning is not used), ensure that the file system that stores the production LUN and its snapshots has enough free space to store the snapshots. Section 2.5.4, “Celerra iSCSI snapshots,” on page 77 provides more details about iSCSI snapshots.
4.3.2.1 Create a temporary writeable snap
Promoting a snapshot creates a temporary writeable snap (TWS). Mounting a TWS on an iSCSI LUN makes the snapshot visible to the iSCSI initiator. After a TWS is mounted on an iSCSI LUN, it can be configured as a disk device and used as a production LUN. Note: Only a snapshot can be promoted.

Use the following CLI command to promote the snapshot:
# server_iscsi <data_mover> -snap -promote <snap_name> -initiator <initiator_name>

Figure 286 shows how to promote a snapshot in CLI.

Figure 286 Promote a snapshot


The mounted LUN is assigned the next available number that is greater than 127. If this number is not available, the LUN is assigned the next available number in the range 0 through 127. After the LUN is promoted, the TWS becomes visible to the ESX server as a new iSCSI LUN.
With VMware Infrastructure, administrators must configure the advanced configuration parameters, LVM.DisallowsnapshotLun and LVM.EnableResignature, to control the clone behavior. To add the promoted LUN to the storage without VMFS formatting, set the LVM.EnableResignature parameter to 1, and set LVM.DisallowsnapshotLun to the default parameter value, which is 1. Refer to step 7 onwards in Section 3.8.3, “Create VMFS datastores on ESX,” on page 174 for more details on the LVM parameter combination.
With VMware vSphere, the configuration is much simpler because there is no need to configure any advanced configuration parameters. To resignature a vStorage VMFS datastore copy, select the Assign a new signature option when adding the LUN as a datastore. Datastore resignaturing must be used to retain the data stored on the vStorage VMFS datastore copy. The prerequisites for datastore resignaturing are:
◆ Unmount the mounted datastore copy.
◆ Rescan the storage on the ESX server so that it updates its view of the LUNs presented to it and discovers any LUN copies.

To resignature a vStorage VMFS datastore copy:
1. Log in to vSphere Client and select the host from the Inventory area.
2. Click Configuration, and then click Storage in the Hardware area.
3. Click Add Storage.
4. Select the Disk/LUN storage type and click Next.
5. From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column and click Next. The Select VMFS Mount Options dialog box appears.
Note: The name present in the VMFS Label column indicates that the LUN is a copy of an existing vStorage VMFS datastore.


6. Select Assign a new signature and click Next.

Figure 287 Assign a new signature option

The Ready to Complete page appears.
7. Review the datastore configuration information and click Finish. The promoted LUN is added and is visible to the host.
8. Browse for the virtual machine's VMX file in the newly created datastore, and add it to the vCenter inventory. The virtual machine clone is created.
Although a promoted snapshot LUN is writeable, all changes made to the LUN are allocated only to the TWS. When the snapshot is demoted, the LUN is unmounted and its LUN number is unassigned. After the snapshot demotion, data that was written to the promoted LUN is lost and cannot be retrieved. Therefore, back up the cloned virtual machines before the promoted LUN is demoted.
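With VMware vSphere, the resignaturing steps above can also be performed from the ESX service console by using esxcfg-volume; a minimal sketch (the volume label is hypothetical):

# esxcfg-volume -l
# esxcfg-volume -r <VMFS_label_or_UUID>

The -l option lists the detected snapshot or replica VMFS volumes, and -r resignatures the selected copy so that it can be mounted alongside the original datastore.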

4.3.3 Clone virtual machines over iSCSI or RDM volumes by using iSCSI snapshots
RDM allows a special file in a vStorage VMFS datastore to act as a proxy for a raw device, the RDM volume. iSCSI snapshots can be used to create a logical point-in-time copy of the RDM volume, which can be used to clone virtual machines. Multiple virtual machines cannot be cloned on the same RDM volume because only a single virtual machine can use an RDM volume. To clone a virtual machine that is stored on an RDM volume, create a snapshot of the iSCSI LUN that is mapped by the RDM volume.

Section 2.5.4, “Celerra iSCSI snapshots,” on page 77 provides more details about iSCSI snapshots. The procedure to create a TWS of the RDM volume is the same as the procedure to create a vStorage VMFS volume. To clone a virtual machine over RDM, create a virtual machine over the local datastore by using the Virtual Machine Creation wizard. After a virtual machine is created, select and edit the virtual machine settings by using the Edit Settings menu option. Using this option, remove the hard disk created on the local datastore. Add the newly promoted iSCSI LUN as the hard disk that contains the original virtual machine VMX files and power on the virtual machine. Section 3.8.4, “Create RDM volumes on ESX servers,” on page 182 provides detailed information about creating a virtual machine over an RDM volume.


4.4 Celerra-based cloning with Virtual Provisioning
To optimize the utilization of the file system, administrators can combine Celerra Virtual Provisioning technology and virtual machine cloning by using Celerra-based technologies. Celerra Virtual Provisioning includes two technologies that are used together — automatic file system extension and file system/LUN virtual provisioning. Section 2.5.1, “Celerra Virtual Provisioning,” on page 76 provides more information about Celerra Virtual Provisioning.

4.4.1 Clone virtual machines over NAS using SnapSure and Virtual Provisioning
Virtual Provisioning provides the advantage of presenting the maximum size of the file system to the ESX server, of which only a portion is actually allocated. To create a NAS datastore, a virtually provisioned file system must be selected. Cloning virtual machines on a virtually provisioned file system is similar to cloning virtual machines on a fully provisioned file system. The advantage of cloning virtual machines on a virtually provisioned file system is that the administrators can initially allocate a minimum amount of storage space required for the virtual machines, and as the data grows, they can automatically allocate additional space to the NAS datastore. When using a virtually provisioned file system during virtual machine cloning, it is important to monitor the file system utilization to ensure that enough space is available. The storage utilization of the file system can be monitored by checking the size of the file system using Celerra Manager. Figure 288 on page 360 shows the file system usage on Celerra Manager.


Figure 288 File system usage on Celerra Manager
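The same utilization information can also be obtained from the Celerra Control Station CLI; for example (the file system name is a placeholder):

# nas_fs -size <fs_name>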

Section 4.3.1, “Clone virtual machines over NAS datastores using Celerra SnapSure,” on page 354 explains the procedure to clone virtual machines from a file system.

4.4.2 Clone virtual machines over VMFS or RDM using iSCSI snapshot and Virtual Provisioning
To maximize overall storage utilization, ensure that virtually provisioned iSCSI LUNs are created to deploy the virtual machines. The iSCSI LUNs take advantage of the automatic file system extension when cloning virtual machines. Virtually provisioned iSCSI LUNs can be created only through the CLI. When using a virtually provisioned iSCSI LUN during virtual machine cloning, it is crucial to monitor the file system space to ensure that enough space is available. Monitor the LUN utilization by using the following CLI command:
# server_iscsi <data_mover> -lun -info


A virtually provisioned LUN does not reserve space on the file system. To avoid data loss or corruption, ensure that file system space is available for allocation when data is added to the LUN. Setting a conservative high water mark provides an added advantage when enabling the automatic file system extension. Cloning virtual machines on a virtually provisioned iSCSI LUN is the same as cloning virtual machines on normal iSCSI LUNs. Section 4.3.2, “Clone virtual machines over iSCSI/vStorage VMFS datastores using iSCSI snapshots,” on page 355 explains the procedure to clone virtual machines from the iSCSI LUN. Section 3.12, “Virtually provisioned storage,” on page 258 provides further information on deploying virtual machines over Celerra virtually provisioned file systems.
4.4.2.1 Celerra Data Mover parameter setting for TWS
To further maximize storage utilization, an extra step is required to ensure that the TWS is also virtually provisioned in all cases. Set the sparseTws Celerra Data Mover parameter to 1 so that the TWS of an iSCSI LUN is always virtually provisioned. The possible values of sparseTws are 0 and 1, and the default is 0. The value 0 indicates that a fully provisioned TWS is created if the production LUN is not virtually provisioned. The sparseTws parameter can be modified by using Celerra Manager (Figure 289 on page 362) or the CLI.


Figure 289 Parameter setting using Celerra Manager

In CLI, the following command updates the parameter: $ server_param server_2 -facility nbs -modify sparseTws -value 1
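The current value can be verified before or after the change with the -info option of server_param; a sketch, again assuming Data Mover server_2:

$ server_param server_2 -facility nbs -info sparseTws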

Section 3.12, “Virtually provisioned storage,” on page 258 provides further information on deploying virtual machines over Celerra virtually provisioned iSCSI LUNs.


4.5 Conclusion
Celerra-based virtual machine cloning is an alternative that can be used instead of conventional VMware-based cloning. The advantage of cloning virtual machines using Celerra-based technologies is that the cloning can be performed at the storage layer in a single operation for multiple virtual machines. The Celerra methodologies used to clone virtual machines are Celerra SnapSure and iSCSI snapshot. Celerra SnapSure creates a checkpoint of the NAS file system. Adding the checkpoint file system as storage to the ESX server provides the advantage of creating clones of the original virtual machines on the ESX server. The iSCSI snapshot creates an exact snap of the LUN that can be used as a datastore to clone the original virtual machine. Enabling Virtual Provisioning provides the advantage of efficiently managing the storage space used for virtual machine cloning on the file system and the LUN. Table 4 summarizes when to consider VMware-based cloning and Celerra-based cloning.

Table 4 Virtual machine cloning methodology comparison

Virtual machine cloning category: VMware-based
Consider when:
• The VMware administrator has limited access to the storage system.
• Only a few virtual machines from a datastore must be cloned.

Virtual machine cloning category: Celerra-based
Consider when:
• Most of the virtual machines from a datastore must be cloned.
• When using VMware Infrastructure, the production virtual machines should not be shut down during the cloning process.



5 Backup and Restore of Virtual Machines

This chapter presents these topics:
◆ 5.1 Backup and recovery options ....................................................... 366
◆ 5.2 Recoverable as compared to restartable copies of data ............ 367
◆ 5.3 Virtual machines data consistency............................................... 369
◆ 5.4 Backup and recovery of a NAS datastore ................................... 371
◆ 5.5 Backup and recovery of a vStorage VMFS datastore over iSCSI ..... 382
◆ 5.6 Backup and recovery of an RDM volume over iSCSI ............... 388
◆ 5.7 Backup and recovery using VCB ................................................. 389
◆ 5.8 Backup and recovery using VCB and EMC Avamar ................ 395
◆ 5.9 Backup and recovery using VMware Data Recovery ............... 398
◆ 5.10 Virtual machine single file restore from a Celerra checkpoint ..... 401
◆ 5.11 Other file-level backup and restore alternatives ...................... 404
◆ 5.12 Summary ....................................................................................... 406


5.1 Backup and recovery options
EMC Celerra combines with VMware vSphere or VMware Infrastructure to offer many possible ways to perform backup and recovery of virtual machines, regardless of whether an ESX server uses a NAS datastore, a vStorage VMFS datastore over iSCSI, or an RDM volume over iSCSI. It is critical to determine the customer RPO and RTO so that an appropriate method is used to meet the Service Level Agreements (SLAs) and minimize downtime.
At the storage layer, two types of backup are discussed in the context of this chapter: logical backup and physical backup.
A logical backup does not provide a physically independent copy of production data. It offers a view of the file system or iSCSI LUN as of a certain point in time. A logical backup can occur very rapidly and requires very little space to store. Therefore, a logical backup can be taken very frequently. Restoring from a logical backup can be quick as well, depending on the data changes. This dramatically reduces the mean time to recovery. However, a logical backup cannot replace a physical backup. The logical backup protects against logical corruption of the file system or iSCSI LUN, accidental deletion of files, and other similar human errors. However, it does not protect the data from hardware failures. Also, loss of the PFS or iSCSI LUN renders the checkpoints or snapshots unusable.
A physical backup takes a full and complete copy of the file system or iSCSI LUN to different physical media. Although the backup and recovery time may be longer, a physical backup protects the data from hardware failure.


5.2 Recoverable as compared to restartable copies of data
The Celerra-based replication technologies can generate a restartable or recoverable copy of the data. The difference between the two types of copies can be confusing. A clear understanding of the differences between the two is critical to ensure that the recovery goals for a virtual infrastructure environment can be met.

5.2.1 Recoverable disk copies
A recoverable (also called application-consistent) copy of the data is one that allows the application to apply logs and roll the data forward to an arbitrary point in time after the copy was created. This is only possible if recoverable disk copies are supported by the application. The recoverable copy is most relevant in the database realm where database administrators use it frequently to create backup copies of a database. It is critical to business applications that a database failure can be recovered to the last backup and that it can roll forward subsequent transactions. Without this capability, a failure may cause an unacceptable loss of all transactions that occurred since the last backup. To create a recoverable image of an application, either shut down the application or suspend writes when the data is copied. Most database vendors provide the functionality to suspend writes in their RDBMS engine. This functionality must be invoked inside the virtual machine when EMC technology is deployed to ensure that a recoverable copy of the data is generated on the target devices.

5.2.2 Restartable disk copies
When a copy of a running virtual machine is created by using EMC consistency technology while no action is taking place inside the virtual machine, the copy is normally a restartable (also called crash-consistent) image of the virtual machine. This means that when the data is used on cloned virtual machines, the operating system or the application goes into crash recovery. The exact implications of crash recovery in a virtual machine depend on the application that the virtual machine supports. These implications could be:
◆ If the source virtual machine is a file server or it runs an application that uses flat files, the operating system performs a file system check and fixes inconsistencies in the file system, if any. Modern file systems such as Microsoft NTFS use journals to accelerate the process.
◆ When the virtual machine is running a database or application with a log-based recovery mechanism, the application uses the transaction logs to bring the database or application to a point of consistency. The deployed process varies depending on the database or application, and is beyond the scope of this document.

Most applications and databases cannot perform roll-forward recovery from a restartable copy of the data. Therefore, it is inappropriate to use a restartable copy of data created from a virtual machine that is running a database engine for performing backups. However, applications that use flat files or virtual machines that act as file servers can be backed up from a restartable copy of the data. This is possible because none of the file systems provide a logging mechanism that enables roll-forward recovery. Note: Without additional steps, VCB creates a restartable copy of virtual disks associated with virtual machines. The quiesced copy of the virtual disks created by VCB is similar to the copy created by using EMC consistency technology.


5.3 Virtual machines data consistency
In environments where EMC Celerra is deployed to provide storage to the ESX server, crash consistency is generally offered by the Celerra backup technologies that are described in this chapter. In a simplified configuration where a virtual machine's guest OS, application, application data, and application log are encapsulated together in one datastore, crash consistency is achieved by using one of the Celerra technologies. However, many applications, especially database applications, strongly recommend separating data and log files in different file systems or iSCSI LUNs. By following this best practice, a virtual machine will have multiple virtual disks (vmdk files) spread across several datastores. It is therefore critical to maintain data consistency across these datastores when backup or replication occurs. VMware snapshots can be leveraged together with the Celerra technologies to provide crash consistency in such complicated scenarios.
VMware snapshot is a software-based technology that operates on a per-virtual machine basis. When a VMware snapshot is taken, it quiesces all I/Os and captures the entire state of a virtual machine including its settings, virtual disks, and optionally the memory state, if the virtual machine is up and running. The virtual machine ceases to write to the existing virtual disks and subsequently writes changed blocks to the newly created virtual disks, which essentially are the .vmdk delta files. Because I/Os are frozen to the original virtual disks, the virtual machine can revert to the snapshot by discarding the delta files. On the other hand, the virtual disks are merged together if the snapshot is deleted.
As soon as the VMware snapshot is taken, a virtual machine backup can be completed by initiating a SnapSure checkpoint if the virtual disk resides on a NAS datastore, or by taking an iSCSI snapshot if the virtual disk resides on vStorage VMFS/iSCSI or RDM/iSCSI. Snapshots of all datastores containing all virtual disks that belong to the virtual machine constitute the entire backup set. All the files related to a particular virtual machine must be restored together to revert to the previous state when the VMware snapshot was taken. Carefully isolate the placement of .vmdk files of multiple virtual machines in the same datastore so that a snapshot restore does not affect other virtual machines.


As long as the backup set is intact, crash consistency can be maintained even across protocols and storage types such as NAS, vStorage VMFS/iSCSI, and RDM/iSCSI, except RDM (physical mode), which is not supported by VMware snapshot technology.
To perform backup operations, do the following (a command-line sketch of this sequence is shown at the end of this section):
1. Initiate a VMware snapshot and capture the memory state if the virtual machine is up and running.
2. Take Celerra checkpoints or snapshots of all datastores that contain virtual disks that belong to the virtual machine.
Note: Optionally, replicate the datastores to a local or remote Celerra.
3. Delete the VMware snapshot to allow the virtual disks to merge after the deltas are applied to the original virtual disks.
To perform restore operations, do the following:
1. Power off the virtual machine.
2. Perform a checkpoint or snapshot restore of all datastores containing virtual disks that belong to the virtual machine.
3. Execute the service console command service mgmt-vmware restart to restart the ESX host agent, which updates the virtual machine status reported in the vSphere GUI.
Note: Wait for 30 seconds for the refresh and then proceed.
4. Open the VMware Snapshot Manager, revert to the snapshot taken in step 1, and delete the snapshot.
5. Power on the virtual machine.
Replication Manager, which is described later in this chapter, supports the creation of replicas of NAS and vStorage VMFS datastores containing virtual machines in a VMware ESX server environment. It also provides point-and-click backup and recovery of virtual machine-level images. It automates and simplifies the management of virtual machine backup and replication by leveraging VMware snapshots to create virtual machine consistent replicas of vStorage VMFS and NAS datastores that are ideal for creating image-level backups and instant restores of virtual machines.
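The backup sequence described above lends itself to scripting. The following sketch combines a VMware snapshot taken from the ESX service console with a SnapSure checkpoint taken on the Celerra Control Station. The virtual machine path, snapshot name, and file system names are placeholders, and the vmware-cmd arguments follow the ESX 3.x/4.x service console syntax:

On the ESX service console (quiesce flag 0, memory flag 1 to capture the memory state):
# vmware-cmd /vmfs/volumes/<datastore>/<vm>/<vm>.vmx createsnapshot backup_snap "pre-checkpoint" 0 1

On the Celerra Control Station (repeat for each file system backing a datastore used by the virtual machine):
# fs_ckpt <PFS_name> -name <ckpt_name> -Create

Back on the ESX service console, delete the VMware snapshot once the checkpoint exists (removesnapshots removes all snapshots of the virtual machine):
# vmware-cmd /vmfs/volumes/<datastore>/<vm>/<vm>.vmx removesnapshots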


5.4 Backup and recovery of a NAS datastore
The backup and recovery of virtual machines residing on NAS datastores can be performed in various ways. These are described in the following sections.

5.4.1 Logical backup and restore using Celerra SnapSure
Celerra SnapSure can be used to create and schedule logical backups of the file systems exported to an ESX server as NAS datastores. This is accomplished by using the Celerra Manager as shown in Figure 290.

Figure 290 Checkpoint creation in Celerra Manager GUI 5.6

Alternatively, this can also be accomplished by using the following two Celerra commands:
# /nas/sbin/fs_ckpt <PFS_name> -name <checkpoint_name> -Create -readonly y
# /nas/sbin/rootfs_ckpt <checkpoint_name> -Restore

For Celerra version 5.5, use the following commands to create and restore checkpoints:
# /nas/sbin/fs_ckpt <PFS_name> -name <checkpoint_name> -Create
# /nas/sbin/rootfs_ckpt <checkpoint_name> -name <new_checkpoint_name> -Restore


In general, this method works on a per-datastore basis. If multiple virtual machines share the same datastore, they can be backed up and recovered simultaneously and consistently, in one operation. To recover an individual virtual machine: 1. Change the Data Mover parameter cfs.showChildFsRoot from the default value of 0 to 1 as shown in Figure 291.

Figure 291 ShowChildFsRoot Server Parameter Properties in Celerra Manager
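The same change can be made from the Celerra Control Station CLI instead of Celerra Manager; a sketch, assuming Data Mover server_2 (the facility name follows the cfs.showChildFsRoot parameter named in step 1):

$ server_param server_2 -facility cfs -modify showChildFsRoot -value 1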

Note: A virtual directory is created for each checkpoint that is created with Celerra SnapSure. By default, these directories will be under a virtual directory named .ckpt. This virtual directory is located in the root of the file system. By default, the .ckpt directory is hidden. Therefore, the datastore viewer in vCenter Server will not be able to view the .ckpt directory. Changing the Data Mover parameter enables each mounted checkpoint of a PFS to be visible to clients as subdirectories of the root directory of the PFS as shown in Figure 292.

Figure 292 Datastore Browser view after checkpoints are visible

2. Power off the virtual machine.

3. Browse to the appropriate configuration and virtual disk files of the specific virtual machine as shown in Figure 292 on page 372.
4. Manually copy the files from the checkpoint and add them to the datastore under the directory /vmfs/volumes//VM_dir.
5. Power on the virtual machine.

5.4.2 Logical backup and restore using Replication Manager
Replication Manager can also be used to protect NAS datastores that reside on an ESX server managed by a VMware vCenter Server and attached to a Celerra system. Replication Manager uses Celerra SnapSure to create local replicas of VMware NAS datastores. VMware snapshots are taken for all the virtual machines that are online and that reside on the NAS datastore just prior to creating local replicas to ensure operating system consistency of the resulting replica. Operations are sent from a Linux proxy host, which is either a physical host or a separate virtual host. The Replication Manager Job Wizard (Figure 293) can be used to select the replica type and expiry options. Replication Manager version 5.2.2 must be installed for datastore support.

Figure 293 Job Wizard


Select the Restore option in Replication Manager (Figure 294) to restore the entire datastore.

Figure 294 Restoring the datastore replica from Replication Manager

Before restoring the replica, do the following:
1. Power off the virtual machines that are hosted within the datastore.
2. Remove those virtual machines from the vCenter Server inventory.
3. Restore the replica from Replication Manager.
4. After the restore is complete, add the virtual machines to the vCenter Server inventory.
5. Revert to the VMware snapshot taken by Replication Manager to obtain an operating system consistent replica and delete the snapshot.
6. Manually power on each virtual machine.
Note: Replication Manager creates a rollback snapshot for every Celerra file system that has been restored. The name of each rollback snapshot can be found in the restore details as shown in Figure 295 on page 375. The rollback snapshot may be deleted manually after the contents of the restore have been verified and the rollback snapshot is no longer needed. Retaining these snapshots beyond their useful life can cause resource issues.


Figure 295 Replica Properties in Replication Manager

A single virtual machine can be restored by using the Mount option in Replication Manager. Using this option, it is possible to mount a datastore replica to an ESX server as a read-only or read-write datastore. To restore a single virtual machine, do the following:
1. Mount the read-only replica as a datastore in the ESX server as shown in Figure 296 on page 376.
2. Power off the virtual machine residing in the production datastore.
3. Remove the virtual machine from the vCenter Server inventory.
4. Browse to the mounted datastore.
5. Copy the virtual machine files to the production datastore.
6. Add the virtual machine to the inventory again.
7. Revert to the VMware snapshot taken by Replication Manager to obtain an operating system consistent replica and delete the snapshot.
8. Unmount the replica through Replication Manager.


9. Power on the virtual machine.

Figure 296 Read-only copy of the datastore view in the vSphere client

5.4.3 Physical backup and restore using the nas_copy command
The Celerra command /nas/bin/nas_copy can be used for full or incremental physical backup. It is typically used to back up a file system to a volume on the Celerra that consists of ATA drives, or to another Celerra. Although using nas_copy for backup is convenient, it has some limitations during recovery. The nas_copy command cannot be used to copy data back to the source file system directly. The destination must be mounted and the files must be copied back to the source file system manually. This could unnecessarily prolong the recovery time. Therefore, using nas_copy to back up datastores is not encouraged. Note: Use the fs_copy command to perform a full physical backup in versions earlier than Celerra version 5.6.

5.4.4 Physical backup and restore using Celerra NDMP and NetWorker
One of the recommended methods for physical backup and recovery is to use the Network Data Management Protocol (NDMP) by utilizing Celerra Backup along with the Integrated Checkpoints feature and EMC NetWorker®, or any other compatible third-party backup software, in the following manner:
1. Create a Virtual Tape Library Unit (VTLU) on Celerra if the performance needs to be improved by backing up to disks instead of tapes.
2. Create a library in EMC NetWorker.
3. Configure NetWorker to create the bootstrap configuration, backup group, backup client, and so on.

4. Run NetWorker Backup. 5. Execute NetWorker Recover. The entire datastore or individual virtual machine can be selected for backup and recovery. Figure 297 shows NetWorker during the process.

Figure 297 NDMP recovery using EMC NetWorker

To utilize Celerra backup with integrated checkpoints, set the environment variable SNAPSURE=y. This feature automates the checkpoint creation, management, and deletion activities when the environment variable is entered in the qualified vendor backup software. The setting of the SNAPSURE variable for creating a backup client with EMC NetWorker is illustrated in Figure 298.

Figure 298 Backup with integrated checkpoint


When the variable is set in the backup software, each time a particular job is run, a checkpoint of the file system is automatically created (and mounted as read-only) before the NDMP backup starts. The checkpoint is automatically used for the backup, allowing production activity to continue uninterrupted on the file system. During the backup process, the checkpoint is automatically managed (for example, SavVol is auto-extended if needed, and if space is available). When the backup completes, the checkpoint is automatically deleted, regardless of whether it succeeds or fails.

5.4.5 Physical backup and restore using Celerra Replicator
Celerra Replicator can be used for the physical backup of the file systems exported to ESX servers as datastores. This is accomplished by using the Celerra /nas/bin/nas_replicate command or by using the Celerra Manager. Multiple virtual machines can be backed up together if they reside in the same datastore. If further granularity is required at an image level for an individual virtual machine, move the virtual machine to its own datastore.
The backup can either be local or remote. After the file system is completely backed up, stop the replication to make the target file system a stand-alone copy. If required, this target file system can be made read-writeable. After the target file system is attached to an ESX server, an individual virtual machine can be restored by copying its folder from the target file system to the PFS. If VMware snapshots already exist at the time of the backup, the Snapshot Manager in the VI client might not report all VMware snapshots correctly after a virtual machine restore. One way of updating the GUI information is to remove the virtual machine from the inventory and add it again. If an entire file system is to be recovered, a replication session can be established in the reverse direction from the target file system to the production file system with the nas_replicate command.
Note: For versions earlier than Celerra version 5.6, use the /nas/bin/fs_replicate command for physical backup of datastores.
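For example, a replication session for the file system backing the datastore might be created as follows. This is a sketch only; the session, file system, and interconnect names are placeholders, and the option names are indicative of the Celerra 5.6 syntax and should be confirmed against the nas_replicate man page for the installed version:

# nas_replicate -create <session_name> -source -fs <PFS_name> -destination -fs <target_fs_name> -interconnect <interconnect_name>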

5.4.6 Physical backup and restore using Replication Manager
Another method to take backups is to use Replication Manager to provide physical backup of the datastores. Replication Manager uses Celerra Replicator technology to create remote replicas in this scenario. These replicas are actually snapshots that represent a crash-consistent replica of the entire datastore. Similar to a logical backup and restore, Replication Manager version 5.2.2 must be installed for datastore support.
Before creating replicas on a target Celerra, create a read-only file system on the target Celerra to which the data will be transferred, and create a Celerra Replicator session between the source and target file systems by using Celerra Manager. While creating a replication session, it is recommended to use a Time out of Sync value of 1 minute. VMware snapshots are taken for all virtual machines that are online and reside on the datastore just prior to creating replicas to ensure the operating system consistency of the resulting replica.
The entire datastore can be restored by selecting the Restore option in Replication Manager. Replication Manager creates a rollback snapshot for a remote Celerra file system during restore. Before restoring a crash-consistent remote replica, do the following:
1. Power off the virtual machines that are hosted within the datastore.
2. Remove those virtual machines from the vCenter Server inventory.
3. Restore the remote replica from Replication Manager.
4. After the restore is complete, add the virtual machines into the vCenter Server inventory.
5. Revert to the VMware snapshot taken by Replication Manager to obtain an operating system consistent replica, and then delete the snapshot.
6. Manually power on each virtual machine.


A single virtual machine can be restored by using the Mount option in the Replication Manager. Using this option, it is possible to mount a datastore remote replica to an ESX server as a datastore as shown in Figure 299.

Figure 299 Mount Wizard - Mount Options

To restore a single virtual machine:
1. Mount the read-only remote replica as a datastore in the ESX server.
2. Power off the virtual machine that resides in the production datastore.
3. Remove the virtual machine from the vCenter Server inventory.
4. Browse the mounted datastore.
5. Copy the virtual machine files to the production datastore.
6. Add the virtual machine to the inventory again to report the VMware snapshot taken by Replication Manager.

7. Revert to the VMware snapshot taken by Replication Manager to obtain an operating system consistent replica and delete the snapshot.
8. Unmount the replica by using Replication Manager.
9. Power on the virtual machine.


5.5 Backup and recovery of a vStorage VMFS datastore over iSCSI
The backup and recovery of virtual machines residing on vStorage VMFS datastores over iSCSI can be done in many ways. A brief description of the methods is given here.

5.5.1 Logical backup and restore using Celerra iSCSI snapshots
When using vStorage VMFS over iSCSI, a Celerra iSCSI LUN is presented to the ESX server and formatted as type vStorage VMFS. In this case, users can create iSCSI snapshots on the Celerra to provide a point-in-time logical backup of the iSCSI LUN. Use the following commands to create and restore iSCSI snaps directly on the Celerra Control Station:
# server_iscsi <data_mover> -snap -create -target <target_alias> -lun <lun_number>
# server_iscsi <data_mover> -snap -restore <snap_name>
Note: To create and manage iSCSI snapshots in versions earlier than Celerra 5.6, a Linux host that contains the Celerra Block Management Command Line Interface (CBMCLI) package is required. The following commands are used to create snapshots and restore data on the Linux host:
# cbm_iscsi --snap --create
# cbm_iscsi --snap --restore

In general, this method works on a per-vStorage VMFS basis, unless the vStorage VMFS spans multiple LUNs. If multiple virtual machines share the same vStorage VMFS, back up and recover them together in one operation. When multiple snapshots are created from the PLU, restoring an earlier snapshot will delete all newer snapshots. Furthermore, ensure that the file system that stores the PLU and its snapshots has enough free space to create and restore from a snapshot. An individual virtual machine can be restored from a snapshot when the snapshot is made read-writeable and attached to the ESX server. With VMware vSphere, as part of the Select VMFS Mount Options screen, select Assign a new signature (Figure 300 on page 383) to enable disk re-signature if the snapped LUN is attached to the same ESX server.


Figure 300 VMFS mount options to manage snapshots

With VMware Infrastructure, however, this step is somewhat more complex. To present the snapshot correctly to ESX, administrators must set the advanced configuration parameters, LVM.DisallowsnapshotLun and LVM.EnableResignature, to control the clone behavior. Use a proper combination of the LVM advanced configuration parameters so that ESX discovers the storage. To add the promoted LUN to the storage without vStorage VMFS formatting, set the LVM.EnableResignature parameter to 1, and leave LVM.DisallowsnapshotLun at the default parameter value, which is 1.
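These advanced parameters can be inspected and set from the ESX service console with esxcfg-advcfg; a sketch using the resignature combination described above (the parameter path follows the ESX 3.x naming convention):

# esxcfg-advcfg -g /LVM/EnableResignature
# esxcfg-advcfg -s 1 /LVM/EnableResignature

After changing the parameter, rescan the storage adapters so that the snapped LUN is discovered and presented with a new signature.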


When the snapped vStorage VMFS is accessible from the ESX server, the virtual machine files can be copied from the snapped vStorage VMFS to the original vStorage VMFS to recover the virtual machine.

5.5.2 Logical backup and restore using Replication Manager
Replication Manager protects the vStorage VMFS datastore over iSCSI that resides on an ESX server managed by a VMware vCenter Server and attached to a Celerra. It uses Celerra iSCSI snapshots to create replicas of vStorage VMFS datastores. VMware snapshots are taken for all virtual machines, which are online and reside on the vStorage VMFS datastore, just prior to creating local replicas to ensure operating system consistency of the resulting replica. Operations are sent from a Windows proxy host, which is either a physical host or a separate virtual host.
The entire vStorage VMFS datastore can be restored by choosing the Restore option in Replication Manager. Before restoring a crash-consistent vStorage VMFS replica, do the following:
1. Power off the virtual machines that are hosted within the vStorage VMFS datastore.
2. Remove these virtual machines from the vCenter Server inventory.
3. Restore the replica from Replication Manager.
4. After the restore is completed, add the virtual machines to the vCenter Server inventory.
5. Revert to the VMware snapshot to obtain an operating system consistent replica, and delete the snapshots.
6. Manually power on each virtual machine.
A single virtual machine can be restored by using the Mount option in Replication Manager. Using this option, it is possible to mount a vStorage VMFS datastore replica to an ESX server as a vStorage VMFS datastore. To restore a single virtual machine:
1. Mount the replica as a vStorage VMFS datastore in the ESX server.
2. Power off the virtual machine residing in the production datastore.
3. Remove the virtual machine from the vCenter Server inventory.
4. Browse for the mounted datastore.
5. Copy and paste the virtual machine files to the production datastore.
6. Add the virtual machine to the inventory again to report the VMware snapshot taken by Replication Manager.
7. Revert to the VMware snapshot taken by Replication Manager to obtain an operating system consistent replica and delete the snapshot.
8. Unmount the replica through Replication Manager.
9. Power on the virtual machine.

5.5.3 Physical backup and restore using Celerra Replicator
For a physical backup in Celerra version 5.6, iSCSI clones can be created and managed with Celerra Replicator V2, either from Celerra Manager or with the following nas_replicate command from the CLI on the Control Station:
# nas_replicate -create <session_name> -source -lun <lun_number> -target <source_target_iqn> -destination -lun <lun_number> -target <destination_target_iqn> -interconnect <interconnect_name>
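For illustration only, a session created with hypothetical values (the session name, LUN numbers, target IQNs, and interconnect name below are examples, not values from this environment) might look like the following:

# nas_replicate -create lun5_clone -source -lun 5 -target iqn.1992-05.com.emc:apm000111111111-1 -destination -lun 5 -target iqn.1992-05.com.emc:apm000222222222-1 -interconnect NYs3_LAs2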

Figure 301 shows the new Replication Wizard in the Celerra Manager, which allows you to replicate an iSCSI LUN:

Figure 301 Celerra Manager Replication Wizard


Note: To create a physical backup in versions earlier than Celerra version 5.6, the Celerra iSCSI Replication-Based LUN Clone feature can be used. A target iSCSI LUN of the same size as the production LUN must be created on Fibre Channel or ATA disks to serve as the destination of a replication session initiated by the following command:
# cbm_replicate --dev <device> --session --create --alias <session_alias> --dest_ip <destination_IP> --dest_name <destination_name> --label <label>

The backup can be either local or remote. After the PLU is completely replicated, stop the replication session to make the target LUN a stand-alone copy. If required, this target LUN can be made read-writeable. The target LUN can be attached to the same or different ESX server. If the target LUN is attached to the same server, disk re-signature must be enabled. After the target LUN is attached to an ESX server, an individual virtual machine can be restored by copying its folder from the target LUN to the PLU. If VMware snapshots already exist at the time of backup and VMware snapshots are added or deleted later, the Snapshot Manager in the VI client might not report all VMware snapshots correctly after a virtual machine restore. One way to update the GUI information is to remove the virtual machine from the inventory and add it again. If an entire vStorage VMFS must be recovered, a replication session can be established in the reverse direction from the target LUN back to the PLU with the cbm_replicate command or the nas_replicate command. Storage operations, such as snapshot restore, can cause the vSphere client GUI to be out of sync with the actual state of the ESX server. For example, if VMware snapshots already exist at the time of backup and VMware snapshots are added or deleted later, the Snapshot Manager in the vSphere client may not report all VMware snapshots correctly after a LUN restore. One way of updating the GUI information is executing the following command in the service console to restart the ESX host agent: # service mgmt-vmware restart

When the Snapshot Manager is reopened, all VMware snapshots that existed prior to the backup are restored and refreshed. However, VMware snapshots taken after the backup are lost following an iSCSI LUN restore.


5.5.4 Physical backup and restore using Replication Manager
Replication Manager also provides a physical backup of the vStorage VMFS datastore over iSCSI that resides on an ESX server managed by VMware vCenter Server and attached to a Celerra. It uses Celerra Replicator to create remote replicas of vStorage VMFS datastores. For a single virtual machine recovery, the Mount option in Replication Manager can be used. To restore the entire vStorage VMFS datastore, use the Restore option as described in Section 5.5.2, "Logical backup and restore using Replication Manager," on page 384.


5.6 Backup and recovery of an RDM volume over iSCSI
The iSCSI LUNs presented to an ESX server as RDM are normal raw devices just like they are in a non-virtualized environment. RDM provides some advantages of a virtual disk in the vStorage VMFS file system while retaining some advantages of direct access to physical devices. For example, administrators can take full advantage of storage array-based data protection technologies regardless of whether the RDM is in a physical mode or virtual mode. For logical backup and recovery, point-in-time, Celerra-based iSCSI snapshots can be created. To back up an RDM volume physically, administrators can use the Celerra iSCSI Replication-Based LUN Clone feature to create clones for versions earlier than Celerra version 5.6. When using RDM, it is recommended that an RDM volume is not shared among different virtual machines or different applications, except when it is used as the quorum disk of a clustered application.

With RDM, administrators can create snapshots or clones in one of the following ways:

◆ Use the nas_replicate command or the Celerra Manager Replication Wizard. Alternatively, for Celerra version 5.5, administrators can install the CBMCLI package and use the cbm_iscsi and cbm_replicate commands as described in Section 5.5, "Backup and recovery of a vStorage VMFS datastore over iSCSI," on page 382.

◆ Install and use Replication Manager. Replication Manager offers customers a simple interface to manipulate and manage the disk-based snaps and replicas for Celerra and other platforms and integrates with Windows applications to provide application-level consistency.

Note: Only RDM volumes in the physical compatibility mode are supported at this time. Only RDM volumes formatted as NTFS can be recognized by Replication Manager. Therefore, Microsoft Windows guest machines can be backed up this way. Virtual machines of other OS types still require CBMCLI for crash-consistent backup.


5.7 Backup and recovery using VCB
VCB allows a virtual machine backup at any time by providing a centralized backup facility that leverages a centralized proxy server and reduces the load on production ESX server hosts. VCB integrates with existing backup tools and technologies to perform full and incremental file backups of virtual machines. VCB can perform full image-level backups for virtual machines running any OS, as well as file-level backups for virtual machines running Microsoft Windows, without requiring a backup agent in the guest hosts. Figure 302 on page 390 illustrates how VCB works.

In addition to the existing LAN and SAN modes, VMware introduced the Hot-Add mode in the VCB 1.5 release. This mode allows administrators to leverage VCB for any datastore by setting up one of the virtual machines as a VCB proxy and using it to back up other virtual machines residing on storage visible to the ESX server that hosts the VCB proxy. VCB creates a snapshot of the virtual disk to be protected and hot-adds the snapshot to the VCB proxy, allowing it to access virtual machine disk data. The VCB proxy reads the data through the I/O stack of the ESX host. In contrast to the LAN mode, which uses the service console network to perform backups, the Hot-Add mode uses the hypervisor I/O stack. In the LAN mode, the IP network can potentially be saturated. Testing has shown that the Hot-Add mode is more efficient than the LAN mode.

Figure 302 VCB

The Celerra array-based solutions for backup and recovery operate at the datastore level, or more granularly at the virtual machine image level. If individual files residing inside a virtual machine must be backed up, other tools are required. VCB is a great tool for file-level and image-level backup. A VCB proxy must be configured on a Windows system; it requires third-party backup software such as EMC NetWorker or EMC Avamar®, the VCB integration module for the backup software, and the VCB software itself. VMware provides the latter two components, which are downloadable at no cost. However, the VCB licenses must be purchased and enabled on the ESX or vCenter Server. After all three components are installed, the configuration file config.js, located in the \config directory, must be modified before the first backup can be taken. This file contains comments that define each parameter.

It is recommended to follow the README file in the integration module, which contains step-by-step instructions to prepare and complete the first VCB backup successfully. When a backup is initiated through EMC NetWorker, it triggers the scripts provided in the integration module, which in turn start the executable vcbMounter.exe (included in the VCB software) to contact the vCenter Server or the ESX server directly to locate the virtual machine to be backed up. The arguments passed to vcbMounter.exe come from config.js and the Save set syntax in EMC NetWorker.

VCB image-level backup supports virtual machines that run any type of OS. For NetWorker versions earlier than 7.4.1, the Save set in EMC NetWorker must include the keyword FULL and the name or IP address of the target virtual machine. Starting with release 7.4.1, each virtual machine to be backed up must be added as a client to NetWorker. Specify FULL in the Save set for a full machine backup as shown in Figure 303 on page 392. VCB first retrieves the virtual machine configuration files as well as its virtual disks into a local directory before NetWorker takes a backup of the directory. During a restore, NetWorker restores the directory on the VCB proxy. The administrator must take the final step to restore the virtual machine onto an ESX server by using the vcbRestore command or the VMware vCenter Server Converter tool. Because the vcbRestore command is unavailable on the VCB proxy, it must be run directly from the ESX service console.

VCB file-level backup supports only the Windows guest OS. For versions earlier than NetWorker version 7.4.1, the Save set in EMC NetWorker must include the name or IP address of the target virtual machine and a colon-separated list of paths that must be backed up. Starting with NetWorker release 7.4.1, each virtual machine that must be backed up must be added as a client to NetWorker. In the Save set, specify the colon-separated list of paths that must be backed up, or ALLVMFS to back up all the files and directories on all drives of the target machine. VCB first takes a VMware snapshot and uses mountvm.exe to mount the virtual disk on the VCB proxy before NetWorker backs up the list of paths provided in the Save set. During a restore, expose the target directory of the virtual machine as a CIFS share to the backup proxy. Use NetWorker User on the VCB proxy to restore the desired file to this network share.
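To make the moving parts concrete, the command that the integration module effectively assembles for an image-level backup resembles the following sketch; the host name, credentials, virtual machine IP address, and export directory are placeholders, not values from this environment:

vcbMounter -h vc.example.local -u administrator -p <password> -a ipaddr:10.6.119.52 -r D:\vcb-exports\vm1-fullvm -t fullvm

A file-level job is similar but uses -t file; the paths listed in the Save set are then backed up from the resulting mount point.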


Figure 303 NetWorker configuration settings for VCB

While planning to use VCB with vSphere, consider the following guidelines and best practices:




Ensure that all virtual machines that must be used with VCB have the latest version of VMware tools installed. Without the latest version of VMware tools, the snapshots that VCB creates for backups are crash-consistent only. This means that no virtual machine-level file system consistency is performed.



Image-level backup can be performed on virtual machines running any OS. File-level backup can be done only on Windows virtual machines.



RDM physical mode is not supported for VCB.



When an RDM disk in a virtual mode is backed up, it is converted to a standard virtual disk format. Hence, when it is restored, it will no longer be in the RDM format.



When using the LAN mode, each virtual disk cannot exceed 1 TB.




The default backup mode is SAN. To perform LAN-based backup, modify TRANSPORT_MODE to either nbd or nbdssl or hotadd in file config.js.



Even though Hot-Add transport mode is efficient, it does not support the backup of virtual disks belonging to different datastores.



vcbMounter and vcbRestore commands can be executed directly on the ESX server without the need for a VCB license. However, there will be a performance impact on the ESX server because additional resources are consumed during backup/restore.



vcbRestore is not available on the VCB proxy. It has to be run directly on the ESX server, or a VMware vCenter Server Converter must be installed, to restore a VCB image backup (see the sketch after this list).



Mountvm.exe on VCB proxy is a useful tool to mount a virtual disk that contains NTFS partitions.



Before taking a file-level backup, VCB creates a virtual machine snapshot named _VCB-BACKUP_. An EMC NetWorker job will hang if the snapshot with the same name already exists. This default behavior can be modified by changing the parameter PREEXISTING_VCB_SNAPSHOT to delete in config.js.



If a backup job fails, virtual machines can remain mounted in the snapshot mode. Run vcbCleanup to clean up snapshots and unmount virtual machines from the directory specified in BACKUPROOT of config.js.



Because VCB by default searches for the target virtual machines by IP address, the virtual machine has to be powered on the first time it is backed up so that VMware tools can relay the information to the ESX or VC server. This information is then cached locally on the VCB proxy after the first backup. A workaround is to switch to the virtual machine lookup by name setting VM_LOOKUP_METHOD=”name” in config.js. Note: The backup would fail if there are duplicated virtual machine names.



Beginning with release 7.4.1 of NetWorker, each virtual machine to be backed up must be added as a client to NetWorker. However, installing the NetWorker client software on the virtual machine itself is not required. It is recommended that with NetWorker release 7.4.1 or later, the VCB method to find virtual machines should be based on the virtual machine IP address (the default method).

If vcbMounter hangs, NetWorker will also hang waiting for it to complete. To troubleshoot this issue, download and run a copy of the Process Explorer utility from sysinternals.com, right-click the vcbMounter process, and select Properties. The Command line textbox on the Image tab displays the full syntax of the vcbMounter command. Copy the command, terminate the hung process, then paste and run the command manually in a DOS window to view the output and determine the cause.



vcbRestore by default restores the image to its original location. An alternate location can be specified by editing the paths listed in the catalog file.

When using Security Support Provider Interface (SSPI) authentication, ensure that the HOST in the config.js configuration file points to the vCenter Server. The NetWorker integration module that calls the VCB Framework must use the user credentials that reside on both the VCB and the vCenter Servers with identical passwords, or must use the domain account. The user account must have administrator privileges on the VCB proxy and at least VCB user privileges in the vCenter Server.
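As referenced in the vcbRestore guideline above, restoring an exported image from the ESX service console is, in sketch form (the host, credentials, and the directory holding the exported image are placeholders):

# vcbRestore -h vc.example.local -u administrator -p <password> -s /vmfs/volumes/nfs-datastore/restores/vm1-fullvm

By default the image is restored to its original location; an alternate location can be supplied by editing the paths in the catalog file, as noted above.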


5.8 Backup and recovery using VCB and EMC Avamar
EMC Avamar is a backup and recovery software product. Avamar’s source-based global data deduplication technology eliminates unnecessary network traffic and data duplication. By identifying redundant data at the source, this deduplication minimizes backup data before it is sent over the network, thereby slowing the pace of data growth in the core data centers and at remote offices. Avamar is very effective in areas where traditional backup solutions are inadequate, such as virtual machines, remote offices, and large LAN-attached file servers. Avamar solves traditional backup challenges by:

Reducing the size of backup data at the source.



Storing only a single copy of sub-file data segments across all sites and servers.



Performing full backups that can be recovered in just one step.



Verifying backup data recoverability.

Avamar Virtual Edition for VMware integrates with VCB for virtual environments by using the Avamar VCB Interoperability Module (AVIM). The AVIM is a series of .bat wrapper scripts that leverage VCB scripts to snap/mount and unmount running virtual machines. These scripts are called before and after an Avamar backup job. There are some scripts for full virtual machine backup (for all types of virtual machines) and some scripts for file-level backup (for Windows virtual machines only). These scripts can be used regardless of whether a NFS datastore or vStorage VMFS over iSCSI is used. Figure 304 on page 396 illustrates the full virtual machine backup and file-level backup process.


Figure 304 VCB backup with EMC Avamar Virtual Edition

The Avamar agent, AVIM, and the VCB software must be installed on the VCB proxy server. After all three software components are installed, the VCB configuration file (config.js), which is located in the \config directory, must be modified before the first backup can be taken. The VCB configuration file contains comments that define each parameter for Avamar backups. After initiating a backup job from Avamar, VCB retrieves configuration files as well as virtual disks to its local directory. Then Avamar copies the files to the backup destination. After the job completes successfully, Avamar removes the duplicate copy on the VCB proxy server. This type of backup can be performed on any guest OS, and the deduplication occurs at the .vmdk level.

VCB file-level backup with Avamar is similar to the VCB image-level backup with Avamar. When a backup is initiated through Avamar, it triggers the scripts provided in the integration module, which in turn start the executable vcbMounter.exe (included in the VCB software) to contact the vCenter Server or the ESX server directly to locate the virtual machine to be backed up. The arguments passed to vcbMounter.exe come from config.js and the Dataset syntax in EMC Avamar. In this case, data deduplication happens at the file level. However, presently, VCB file-level backup works only for virtual machines that run the Windows OS.


5.9 Backup and recovery using VMware Data Recovery In the VMware vSphere 4 release, VMware introduced VMware Data Recovery, which is a disk-based backup and recovery solution. It is built on the VMware vStorage API for data protection and uses a virtual machine appliance and a client plug-in to manage and restore backups. VMware Data Recovery can be used to protect any kind of OS. It incorporates capabilities such as block-based data deduplication and performs only incremental backups after the first full backup to maximize storage efficiency. Celerra-based CIFS and iSCSI storage can be used as destination storage for VMware Data Recovery. Backed-up virtual machines are stored on a target disk in a deduplicated store.

Figure 305 VMware Data Recovery


During the backup, VMware Data Recovery takes a snapshot of the virtual machine and mounts the snapshot directly to the VMware Data Recovery virtual appliance. After the snapshot is mounted, VMware Data Recovery begins streaming the blocks of data to the destination storage as shown in Figure 305 on page 398. During this process, VMware Data Recovery deduplicates the stream of data blocks to ensure that redundant data is eliminated prior to the backup data being written to the destination disk. VMware Data Recovery uses the change tracking functionality on ESX hosts to obtain the changes since the last backup. The deduplicated store creates a virtual full backup based on the last backup image and applies the changes to it. When all the data is written, VMware Data Recovery dismounts the snapshot and takes the virtual disk out of the snapshot mode. VMware Data Recovery supports only full and incremental backups at the virtual machine level and does not support backups at file level. Figure 306 on page 399 shows a sample backup screenshot.

Figure 306 VDR backup process

When using VMware Data Recovery, adhere to the following guidelines: ◆

A VMware Data Recovery appliance can protect up to 100 virtual machines. It supports the use of only two backup destinations simultaneously. If more than two backup destinations must be used, configure them to be used at different times. It is recommended that the backup destination size does not exceed 1 TB.




A VMware Data Recovery appliance is only supported if the mount is presented by an ESX server and the VMDK is assigned to the VDR appliance. Mounts cannot be mapped directly to the VDR appliance.



VMware Data Recovery supports both RDM virtual and physical compatibility modes as backup destinations. When using RDM as a backup destination, it is recommended to use the virtual compatibility mode. Using this mode, a VMware snapshot can be taken, which can be leveraged together with the Celerra technologies to provide crash consistency and protection for the backed-up data.



When creating vStorage VMFS over iSCSI as a backup destination, choose the block size that matches the storage requirements. Selecting the default 1 MB block size only allows for a maximum virtual disk size of 256 GB.



To realize increased space savings, ensure that similar virtual machines are backed up to the same destination. Because VMware Data Recovery performs data deduplication within and across virtual machines, virtual machines with the same OS will have only one copy of the OS data stored.



The virtual machine must not have a snapshot named _data recovery_ prior to backup by using VMware Data Recovery. This is because VDR creates a snapshot named _data recovery_ as a part of its backup procedure. If the snapshot with the same name exists already, the VDR will delete and re-create it.



Backups of virtual machines with RDM can be performed only when the RDM is running in virtual compatibility mode.



VMware Data Recovery provides an experimental capability called File Level Restore (FLR) to restore the individual files without restoring the whole virtual machine for Windows machines.



Because VMware Data Recovery will only copy the state of the virtual machine at the time of backup, pre-existing snaps are not a part of the VMware Data Recovery backup process.


5.10 Virtual machine single file restore from a Celerra checkpoint
VMware has introduced the Virtual Disk Development Kit (VDDK) to create or access VMware virtual disk storage. The VMware website (http://communities.vmware.com/community/developer/forums/vddk) provides more information. The VDDK Disk Mount utility allows administrators to mount a virtual disk as a separate drive or partition without having to connect to the virtual disk from within a virtual machine. Therefore, this tool provides a way to mount a Celerra checkpoint-based virtual disk or Celerra iSCSI snapshot-based virtual disk from which specific files can be restored to production virtual machines. A virtual disk cannot be mounted if any of its vmdk files have read-only permissions. Change these attributes to read/write before mounting the virtual disk.

To restore a single file for a Windows virtual machine residing on a Celerra-based file system read-only checkpoint:
1. Install VDDK either in the vCenter Server or in a virtual machine where the file has to be restored.
2. Identify the appropriate read-only checkpoint from the Celerra Manager GUI.
3. Create a CIFS share on the read-only checkpoint file system identified in step 2.
4. Map that CIFS share on to the vCenter Server or on to the virtual machine as mentioned in step 1.
5. Execute the following command syntax to mount the virtual disk from the mapped read-only checkpoint:
vmware-mount <driveletter> <path-to-vmdk> [/m:n] [/v:N]
where:
• driveletter—Specifies the drive letter where a virtual disk must be mounted or unmounted.
• path-to-vmdk—Specifies the location of the virtual disk that must be mounted.
• /m:n—Allows mounting of a Celerra file system read-only checkpoint.
• /v:N—Mounts volume N of a virtual disk. N defaults to 1.
The following example shows how to mount a virtual disk when the read-only checkpoint is mapped to the U: drive of the vCenter Server as shown in Figure 307 on page 402.

Figure 307 Mapped CIFS share containing a virtual machine in the vCenter Server

From the command prompt, execute the following command to list the volume partitions: vmware-mount "U:\DEMO\DEMO.vmdk" /p

From the command prompt, execute the following command to mount the virtual disk: vmware-mount P: "U:\DEMO\DEMO.vmdk" /m:n

6. After the virtual disk has been mounted as a P: drive on the vCenter Server, the administrator must copy the individual files through CIFS to the corresponding production machine.
7. After the copy is completed, unmount the virtual disk by using the following command:
vmware-mount P: /d

To restore the Windows files from a vmdk residing on a vStorage VMFS datastore over iSCSI: 1. Identify the Celerra iSCSI snap from which the files have to be restored.


2. Execute the server_iscsi command from Celerra to promote the identified snap.
3. Create a new datastore and add a copy of the virtual machine to the vCenter Server inventory.
4. Install VDDK in the vCenter Server and use the following syntax to mount the vmdk file:
vmware-mount <driveletter> /v:N /i:"<datacenter>/vm/<virtual_machine_name>" "[<datastore_name>] <vm_folder>/<vm_name>.vmdk" /h:<vCenter_or_ESX_host> /u:<username> /s:<password>

The following command mounts the vmdk of the testvm_copy machine on the Q: drive of the vCenter Server as shown in Figure 308. vmware-mount Q: /v:1 /i:"EMC/vm/testvm_copy" "[snap-63ac0294-iscsidatastore] testvm/testvm.vmdk" /h:10.6.119.201 /u:administrator /s:nasadmin

Figure 308 Virtual machine view from the vSphere client

5. After it is mounted, copy the files back to the production machine.
6. After the restore has completed, demote the snap by using the server_iscsi command from the Celerra Control Station.
A virtual disk in the RDM format can also be mounted in a similar manner to the single file restore described in this procedure.
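One housekeeping step worth calling out between copying the files and demoting the snap: unmount the virtual disk from the vCenter Server with the same /d switch shown earlier, for example:

vmware-mount Q: /d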


5.11 Other file-level backup and restore alternatives
There are other alternatives for virtual machine file-level backup and restore. A traditional file-level backup method is installing a backup agent on the guest operating system that runs in the virtual machine, in the same way as it is done on a physical machine. This is normally called guest-based backup.

Another method of file-level backup is to use a Linux host to mount the .vmdk file and access the files within the .vmdk directly. Do the following to achieve this:
1. Download the Linux NTFS driver located at http://linux-ntfs.org/doku.php, and install it on the Linux host.
2. Mount the file system being used as the datastore on the Linux host. Administrators can now access configuration and virtual disk files and can do an image-level backup of a virtual machine.
# mount <data_mover_IP>:/<file_system> /mnt/esxfs
3. Mount the virtual disk file of the virtual machine as a loopback mount. Specify the starting offset of 32,256 and the NTFS file system type in the mount command line.
# mount /mnt/esxfs/<vm_directory>/<vm_name>-flat.vmdk /mnt/vmdk -o ro,loop=/dev/loop2,offset=32256 -t ntfs

4. Browse the mounted .vmdk, which can be viewed as an NTFS file system. All the files in the virtual machine can be viewed.
5. Back up the necessary files.
Administrators must review the following carefully before implementing the Linux method:




The Linux method has been verified to work only for datastores.



VCB works only for Windows virtual machines. This alternative may work for any guest OS type whose file system can be loopback-mounted on a Linux host.



The offset for the loopback mount is not always the same. Determining the correct value may not be straightforward depending on the OS, partition, and so on (a sketch for determining it follows this list).



This alternative works only when flat virtual disks are allocated as opposed to thin-provisioned. Testing has shown that thinly provisioned virtual disks cannot be mounted by using any offset. In contrast, VCB comes with a utility mountvm.exe that allows mounting both flat and thin-provisioned virtual disks that contain NTFS partitions.

After a successful mount of the virtual disk file, the file backup is performed on a Linux system. Thus, the Windows ACL metadata is not maintained and will be lost after a restore.
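As noted in the offset caveat above, one way to determine the correct value is to read the partition table inside the flat .vmdk and multiply the start sector by 512. This is a sketch only; the virtual machine name is hypothetical, and the ability of fdisk to read a plain file varies by Linux distribution:

# fdisk -lu /mnt/esxfs/winvm/winvm-flat.vmdk

If the NTFS partition starts at sector 63, the offset is 63 x 512 = 32256, which matches the value used in the mount command shown earlier.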

File-level backup can also be performed for RDM devices, either in the physical compatibility mode or in the virtual compatibility mode, by using the CBMCLI package in the following manner:
1. Take an iSCSI snapshot of the RDM LUN.
2. Promote the snapshot and provide access to the backup server by using the following command:
# cbm_iscsi --snap /dev/sdh --promote --mask
3. Connect the snapshot to the backup server. The files in the snapshot can now be backed up.
4. Demote and remove the snapshot when finished.


5.12 Summary
Table 5 summarizes the backup and recovery options of Celerra storage presented to VMware vSphere or VMware Infrastructure.

Table 5 Backup and recovery options

NFS datastore
• Image-level: Celerra SnapSure, Celerra NDMP, VCB, Replication Manager, VDR
• File-level: VCB (Windows), loopback mount (all OS)

vStorage VMFS/iSCSI
• Image-level: Celerra iSCSI snapshot (CBMCLI or server_iscsi), Celerra iSCSI replication-based clone (CBMCLI, nas_replicate, or Celerra Manager), VCB, Replication Manager, VDR
• File-level: VCB (Windows)

RDM/iSCSI (physical)
• Image-level: Celerra iSCSI snapshot (CBMCLI, server_iscsi, or Replication Manager), Celerra iSCSI replication-based clone (CBMCLI, nas_replicate, Celerra Manager, or Replication Manager)
• File-level: Celerra iSCSI snapshot (CBMCLI or server_iscsi)

RDM/iSCSI (virtual)
• Image-level: Celerra iSCSI snapshot (CBMCLI or server_iscsi), Celerra iSCSI replication-based clone (CBMCLI, nas_replicate, or Celerra Manager), VDR
• File-level: Celerra iSCSI snapshot (CBMCLI or server_iscsi)

The best practices planning white papers on Powerlink provide more information and recommendations about protecting applications such as Microsoft Exchange and Microsoft SQL Server deployed on VMware vSphere or VMware Infrastructure. Access to Powerlink is based upon access privileges. If this information cannot be accessed, contact your local EMC representative.


6 Using VMware vSphere and VMware Virtual Infrastructure in Disaster Restart Solutions

This chapter presents these topics:
◆ 6.1 Overview
◆ 6.2 Definitions
◆ 6.3 Design considerations for disaster recovery and disaster restart
◆ 6.4 Geographically distributed virtual infrastructure
◆ 6.5 Business continuity solutions
◆ 6.6 Summary

6.1 Overview VMware technology virtualizes the x86-based physical infrastructure into a pool of resources. Virtual machines are presented with a virtual hardware environment independent of the underlying physical hardware. This enables organizations to leverage different physical hardware in the environment and provide low total cost of ownership. The virtualization of the physical hardware can also be used to create disaster recovery and business continuity solutions that would have been impractical otherwise. These solutions normally involve a combination of virtual infrastructure at one or more geographically separated data centers and EMC remote replication technology. One example of such an architecture has physical servers running various business applications in their primary data center while the secondary data center has a limited number of virtualized physical servers. During normal operations, the physical servers in the secondary data center are used to support workloads such as QA and testing. In case of a disruption in services at the primary data center, the physical servers in the secondary data center run the business applications in a virtualized environment. The purpose of this chapter is to discuss:


EMC Celerra Replicator configurations and their interaction with an ESX server



EMC Celerra Replicator and ESX server application-specific considerations



Integration of guest operating environments with EMC technologies and an ESX server



The use of VMware vCenter Site Recovery Manager to manage and automate a site-to-site disaster recovery with EMC Celerra


6.2 Definitions In the next sections, the terms dependent-write consistency, disaster restart, disaster recovery, and roll-forward recovery are used. A sound understanding of these terms is required to understand the context of this section.

6.2.1 Dependent-write consistency A dependent-write I/O cannot be issued until a related predecessor I/O is completed. Dependent-write consistency is a state where data integrity is guaranteed by dependent-write I/Os embedded in an application logic. Database management systems are good examples of the practice of dependent-write consistency. Database management systems must devise a protection against abnormal termination to successfully recover from one. The most common technique used is to guarantee that a dependent-write cannot be issued until a predecessor write is complete. Typically, the dependent-write is a data or index write, while the predecessor write is a write to the log. Because the write to the log must be completed before issuing the dependent-write, the application thread is synchronous to the log write. The application thread waits for the write to complete before continuing. The result is a dependent-write consistent database.

6.2.2 Disaster restart
Disaster restart involves the implicit use of active logs by various databases and applications during their normal initialization process to ensure a transactionally-consistent data state. If a database or application is shut down normally, the process of getting to a point of consistency during restart requires minimal work. If a database or application terminates abnormally, the restart process takes longer, depending on the number and size of in-flight transactions at the time of termination. An image of the database or application created by using EMC consistency technology such as Replication Manager while it is running, without any conditioning of the database or application, is in a dependent-write consistent data state, which is similar to that created by a local power failure. This is also known as a restartable image. The restart of this image transforms it to a transactionally consistent data state by completing committed transactions and rolling back uncommitted transactions during the normal initialization process.


6.2.3 Disaster recovery Disaster recovery is the process of rebuilding data from a backup image, and then explicitly applying subsequent logs to roll the data state forward to a designated point of consistency. The mechanism to create recoverable copies of data depends on the database and applications.

6.2.4 Roll-forward recovery With some databases, it may be possible to take a Database Management System (DBMS) restartable image of the database and apply subsequent archive logs to roll forward the database to a point in time after the image was created. This means the image created can be used in a backup strategy in combination with archive logs.


6.3 Design considerations for disaster recovery and disaster restart The effect of data loss or loss of application availability varies from one business type to another. For instance, the loss of transactions for a bank could cost millions of dollars, whereas system downtime may not have a major fiscal impact. In contrast, businesses primarily engaged in web commerce must have their applications available on a continual basis to survive in the market. The two factors, data loss and availability, are the business drivers that determine the baseline requirements for a disaster restart or disaster recovery solution. When quantified, loss of data is more frequently referred to as recovery point objective, while loss of uptime is known as recovery time objective. When evaluating a solution, the recovery point objective (RPO) and recovery time objective (RTO) requirements of the business must be met. In addition, the solution's operational complexity, cost, and its ability to return the entire business to a point of consistency need to be considered. Each of these aspects is discussed in the following sections.

6.3.1 Recovery point objective RPO is a point of consistency to which a user wants to recover or restart. It is measured by the difference between the time when the point of consistency was created or captured to the time when the disaster occurred. This time is the acceptable amount of data loss. Zero data loss (no loss of committed transactions from the time of the disaster) is the ideal goal, but the high cost of implementing such a solution must be weighed against the business impact and cost of a controlled data loss. Some organizations, such as banks, have zero data loss requirements. The transactions entered at one location must be replicated immediately to another location. This can affect application performance when the two locations are far apart. On the other hand, keeping the two locations close to one another might not protect the data against a regional disaster. Defining the required RPO is usually a compromise between the needs of the business, the cost of the solution, and the probability of a particular event happening.

6.3.2 Recovery time objective
The RTO is the maximum amount of time allowed after the declaration of a disaster for recovery or restart to a specified point of consistency.


This includes the time taken to: ◆

Provision power and utilities



Provision servers with the appropriate software



Configure the network



Restore the data at the new site



Roll forward the data to a known point of consistency



Validate the data

Some delays can be reduced or eliminated by choosing certain disaster recovery options such as having a hot site where servers are preconfigured and are on standby. Also, if storage-based replication is used, the time taken to restore the data to a usable state is completely eliminated. Like RPO, each solution with varying RTO has a different cost profile. Defining the RTO is usually a compromise between the cost of the solution and the cost to the business when applications are unavailable.

6.3.3 Operational complexity
The operational complexity of a disaster recovery solution may be the most critical factor that determines the success or failure of a disaster recovery activity. The complexity of a disaster recovery solution can be considered as three separate phases:
1. Initial setup of the implementation
2. Maintenance and management of the running solution
3. Execution of the disaster recovery plan in the event of a disaster
While initial configuration complexity and running complexity can be a demand on people resources, the third phase, that is, execution of the plan, is where automation and simplicity must be the focus. When a disaster is declared, key personnel may be unavailable in addition to loss of servers, storage, networks, and buildings. If the disaster recovery solution is so complex that it requires skilled personnel with an intimate knowledge of all systems involved to restore, recover, and validate application and database services, the solution has a high probability of failure.
Multiple database and application environments over time grow organically into complex federated database architectures. In these federated environments, reducing the complexity of disaster recovery is absolutely critical. Validation of transactional consistency within a business process is time-consuming, costly, and requires application and database familiarity. One of the reasons for this complexity is the heterogeneous applications, databases, and operating systems in these federated environments. Across multiple heterogeneous platforms, it is hard to establish time synchronization, and therefore hard to determine a business point of consistency across all platforms. This business point of consistency has to be created from intimate knowledge of the transactions and data flows.

6.3.4 Source server activity Disaster recovery solutions may or may not require additional processing activity on the source servers. The extent of that activity can impact both the response time and throughput of the production application. This effect should be understood and quantified for any given solution to ensure that the impact to the business is minimized. The effect for some solutions is continuous while the production application is running. For other solutions, the impact is sporadic, where bursts of write activity are followed by periods of inactivity.

6.3.5 Production impact Some disaster recovery solutions delay the host activity while taking actions to propagate the changed data to another location. This action only affects write activity. Although the introduced delay may only be for a few milliseconds, it can negatively impact response time in a high-write environment. Synchronous solutions introduce delay into write transactions at the source site; asynchronous solutions do not.

6.3.6 Target server activity Some disaster recovery solutions require a target server at the remote location to perform disaster recovery operations. The server has both software and hardware costs and requires personnel with physical access to the server to perform basic operational functions such as power on and power off. Ideally, this server must have some usage such as running development or test databases and applications. Some disaster recovery solutions require more target server activity and some require none.

6.3.7 Number of copies of data
Disaster recovery solutions require replication of data in one form or another. Replication of application data and associated files can be as simple as backing up data on a tape and shipping the tapes to a disaster recovery site or as sophisticated as an asynchronous array-based replication. Some solutions require multiple copies of the data to support disaster recovery functions. More copies of the data may be required to perform testing of the disaster recovery solution in addition to those that support the data replication process.

6.3.8 Distance for the solution Disasters, when they occur, have differing ranges of impact. For instance, a fire may be isolated to a small area of the data center or a building; an earthquake may destroy a city; or a hurricane may devastate a region. The level of protection for a disaster recovery solution must address the probable disasters for a given location. This means for protection against an earthquake, the disaster recovery site should not be in the same locale as the production site. For regional protection, the two sites need to be in two different regions. The distance associated with the disaster recovery solution affects the kind of disaster recovery solution that can be implemented.

6.3.9 Bandwidth requirements One of the largest costs for disaster recovery is to provision bandwidth for the solution. Bandwidth costs are an operational expense; this makes solutions with reduced bandwidth requirements attractive to customers. It is important to recognize in advance the bandwidth consumption of a given solution to anticipate the running costs. Incorrect provisioning of bandwidth for disaster recovery solutions can adversely affect production performance and invalidate the overall solution.

6.3.10 Federated consistency
Databases are rarely isolated islands of information with no interaction or integration with other applications or databases. Most commonly, databases are loosely or tightly coupled to other databases and applications using triggers, database links, and stored procedures. Some databases provide information downstream for other databases and applications using information distribution middleware, and other applications and databases receive feeds and inbound data from message queues and Electronic Data Interchange (EDI) transactions. The result can be a complex, interwoven architecture with multiple interrelationships. This is referred to as federated architecture. With federated environments, making a disaster recovery copy of a single database regardless of other components results in consistency issues and creates logical data integrity problems. All components in a federated architecture need to be recovered or restarted to the same dependent-write consistent point in time to avoid data consistency problems. With this in mind, it is possible that point solutions for disaster recovery, like host-based replication software, do not provide the required business point of consistency in federated environments. Federated consistency solutions guarantee that all components, databases, applications, middleware, and flat files are recovered or restarted to the same dependent-write consistent point in time.

6.3.11 Testing the solution Tested, proven, and documented procedures are also required for a disaster recovery solution. Often, the disaster recovery test procedures are operationally different from a true disaster set of procedures. Operational procedures need to be clearly documented. In the best-case scenario, companies should periodically execute the actual set of procedures for disaster recovery. This could be costly to the business because of the application downtime required to perform such a test, but is necessary to ensure validity of the disaster recovery solution.

6.3.12 Cost The cost of disaster recovery can be justified by comparing it with the cost of not following it. What does it cost the business when the database and application systems are unavailable to users? For some companies this is easily measurable and revenue loss can be calculated per hour of downtime or data loss. For all businesses, the disaster recovery cost is going to be an additional expense item and, in many cases, with little in return. The costs include, but are not limited to: ◆

Hardware (storage, servers, and maintenance)



Software licenses and maintenance



Facility leasing or purchase



Utilities



Network infrastructure



Personnel



Training



Creation and maintenance of processes


6.4 Geographically distributed virtual infrastructure
Currently, VMware does not provide any native tools to replicate the data from the ESX server to a geographically separated location. Software-based replication technology can be used inside virtual machines or the service console. However, these techniques add significantly to the network and CPU resource requirements. Integrating ESX server and storage-array based replication products adds a level of business data protection not attained easily. Using the SnapSure and Replicator families of Celerra products with VMware technologies enables customers to provide a cost-effective disaster recovery and business continuity solution. Some of these solutions are discussed in the following sections.
Note: Similar solutions are possible using host-based replication software such as RepliStor®. However, utilizing storage-array based replication enables customers to provide a disaster restart solution that can provide a business-consistent view of the data that includes multiple hosts, operating systems, and applications.


6.5 Business continuity solutions
The business continuity solution for a production environment with VMware vSphere and VMware Infrastructure includes the use of EMC Celerra Replicator as the mechanism to replicate data from the production data center to the remote data center. The copy of the data in the remote data center can be presented to a VMware ESX server cluster group. The remote virtual data center thus provides a business continuity solution. For disaster recovery purposes, a remote replica of the PFS or an iSCSI LUN that is used to provide ESX server storage is required. Celerra offers advanced data replication technologies to help protect a file system or an iSCSI LUN. In case of a disaster, fail over to the destination side with minimum administrator intervention. The replication session has to be maintained and the snapshots need to be refreshed periodically. The update frequency is determined based on the WAN bandwidth and the RPO.

6.5.1 NAS datastore replication
Providing high availability to virtual machines is crucial in large VMware environments. This section explains how Celerra replication technology provides high availability for virtual machines hosted on NAS datastores. Celerra Replicator technology along with Replication Manager can be used to provide the ability to instantly create virtual machine consistent replicas of NAS datastores containing virtual machines.
6.5.1.1 Replication using Celerra Replicator
Celerra Replicator can be used to replicate file systems exported to ESX servers as NAS datastores. This is done in one of the following ways (a CLI sketch follows this list):

◆ Using Celerra Manager: "Using Celerra Manager" on page 422 provides more details.

◆ Using the Celerra /nas/bin/nas_replicate command, or the /nas/bin/fs_replicate command for versions earlier than Celerra version 5.6.
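As a rough CLI equivalent of the wizard steps that follow, a file system replication session can be created with nas_replicate; the session, file system, and interconnect names here are placeholders, and the exact options available depend on the Celerra version:

# nas_replicate -create nasds_rep -source -fs nas_datastore_fs -destination -fs nas_datastore_fs_replica -interconnect NYs3_LAs2 -max_time_out_of_sync 10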

The replication operates at a datastore level. Multiple virtual machines will be replicated together if they reside in the same datastore. If further granularity is required at an image level for an individual virtual machine, move the virtual machine into its own NAS datastore. However, consider that the maximum number of NFS mounts per ESX server is 64 for VMware vSphere, and 32 for VMware Infrastructure. Section 3.6.1.5, "ESX host timeout settings for NFS," on page 118 provides details on how to increase the number from a default value of 8.

After the failover operation to promote the replica, the destination file system can be mounted as a NAS datastore on the remote ESX server. When configuring the remote ESX server, the network must be configured such that the replicated virtual machines will be accessible. Virtual machines residing in the file system need to register with the new ESX server using the vSphere client for VMware vSphere, or the VI Client for VMware Infrastructure. While browsing the NAS datastore, right-click a .vmx configuration file and select Add to Inventory to complete the registration as shown in Figure 309.

Figure 309 Registration of a virtual machine with ESX

Alternatively, the ESX service console command vmware-cmd can be used to automate the process if a large number of virtual machines need to be registered. Run a shell script similar to the following, where <datastore> is the name of the NAS datastore under /vmfs/volumes:
for vm in `ls /vmfs/volumes/<datastore>`
do
    /usr/bin/vmware-cmd -s register /vmfs/volumes/<datastore>/$vm/*.vmx
done


After registration, the virtual machine can be powered on. This may take a while to complete. During power on, a pop-up message box regarding msg.uuid.altered appears. Select I moved it to complete the power on procedure.
Using Celerra Manager
For remote replication using Celerra Manager, complete the following steps:
1. From the Celerra Manager, click Wizards in the left navigation pane. The Select a Wizard page opens in the right pane.

Figure 310 Select a Wizard


2. Click New Replication. The Replication Wizard - EMC Celerra Manager appears.

Figure 311 Select a Replication Type

3. Select the replication type as File System and click Next. The File System page appears.

Figure 312 File System

4. Select Ongoing File System Replication and click Next. The list of destination Celerra Network Servers appears. Note: It creates a read-only, point-in-time copy of a source file system at a destination and periodically updates this copy, making it consistent with the source file system. The destination for this read-only copy can be the


same Data Mover (loop back replication), another Data Mover in the same Celerra cabinet (local replication) or a Data Mover in a different Celerra cabinet (remote replication).

Figure 313 Specify Destination Celerra Network Server

5. Click New Destination Celerra. The Create Celerra Network Server page appears.

Figure 314 Create Celerra Network Server


6. Specify the name, IP address and passphrase of the destination Celerra Network Server and click Next. The Specify Destination Credentials page appears.

Figure 315 Specify Destination Credentials

Note: A trust relationship allows two Celerra systems to replicate data between them. This trust relationship is required for Celerra Replicator sessions that communicate between the separate file systems. The passphrase must be the same for both source and target Celerra systems.

7. Specify the username and password credentials of the Control Station on the destination Celerra to gain appropriate access and click Next. The Create Peer Celerra Network Server page appears.

Figure 316 Create Peer Celerra Network Server

Note: The system will also automatically create the reverse communication relationship on the destination side between the destination and source Celerra systems.


8. Specify the name by which the source Celerra system is known to the destination Celerra system. The time difference between the source and destination Control Station must be within 10 minutes. The Overview/Results page appears.

Figure 317 Overview/Results

9. Review the result and click Next. The Specify Destination Celerra Network Server page appears.

Figure 318 Specify Destination Celerra Network Server

10. Select the destination Celerra and click Next. The Select Data Mover Interconnect page appears. Note: Replication requires a connection between source Data Mover and peer Data Mover. This connection is called an interconnect.


Figure 319 Select Data Mover Interconnect

11. Click New Interconnect. The Source Settings page appears.

Figure 320 Source Settings

Note: An interconnect supports the Celerra Replicator™ V2 sessions by defining the communication path between a given Data Mover pair located on the same cabinet or different cabinets. The interconnect configures a list of local (source) and peer (destination) interfaces for all v2 replication sessions using the interconnect.


12. Enter the Data Mover interconnect name, select the source Data Mover and click Next. The Specify Destination Credentials page appears.

Figure 321 Specify Destination Credentials

13. Specify the username and password of the Control Station on the destination Celerra and click Next. The Destination Settings page appears.

Figure 322 Destination Settings

14. Specify the name for the peer Data Mover interconnect and then select the Celerra Network Server Data Mover on the other (peer) side of the interconnect and click Next. The Overview/Results page appears.


Figure 323 Overview/Results

15. Review the results and click Next. The Select Data Mover Interconnect page appears.

Figure 324 Select Data Mover Interconnect


16. Select an already created interconnect and click Next. The Select Replication Session's Interface page appears.

Figure 325 Select Replication Session's Interface

Note: Only one interconnect per Data Mover pair is available.

17. Specify a source interface and a destination interface for this replication session or use the default of any and click Next. The Select Source page appears.

Figure 326 Select Source

Note: By using the default, the system selects an interface from the source and destination interface lists for the interconnect.


18. Specify a name for this replication session, select an existing file system as the source for the session, and then click Next. The Select Destination page appears.

Figure 327 Select Destination

19. Use the existing file system at the destination or create a new destination file system and click Next. The Update Policy page appears.

Figure 328 Update Policy


Note: When replication creates a destination file system, it automatically assigns a name based on the source file system and ensures that the file system size is the same as the source. Administrators can select a storage pool for the destination file system, and can also select the storage pool used for future checkpoints.

20. Select the required update policy and click Next. The Select Tape Transport page appears.

Figure 329 Select Tape Transport

Note: Using this policy, replication can be configured either to update (refresh) the destination based on the source content only in response to an explicit request, or to specify a maximum time that the source and destination can be out of synchronization before an update occurs.

21. Click Next. The Overview/Results page appears.


Note: Select Use Tape Transport? if the initial copy (silvering) of the file system will be physically transported to the destination site using a disk array or tape unit. This option creates the replication session and then stops it so that the initial copy can be performed from the physically transported media.

Figure 330 Overview/Results

22. Review the result and click Finish. The job is submitted.

Figure 331 Command Successful

23. After the command is successful, click Close.
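The same ongoing file system replication session can also be created from the Control Station CLI rather than the wizard. The sketch below is illustrative only: the session name, file system names, interconnect name, and out-of-sync interval are placeholders, and the exact nas_replicate options should be verified against the man page for the DART release in use.

$ nas_replicate -create nasds_rep1 -source -fs nas_datastore_fs \
  -destination -fs nas_datastore_fs_replica \
  -interconnect NYC_to_NJ -max_time_out_of_sync 10

The -max_time_out_of_sync value corresponds to the time-based update policy selected in the wizard.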


6.5.1.2 Replication using Replication Manager and Celerra Replicator

Replication Manager can replicate a Celerra-based NAS datastore that resides on an ESX server managed by the VMware vCenter Server. Replication Manager uses Celerra Replicator to create remote replicas of NAS datastores. Replication Manager version 5.2.2 supports NAS datastore replication. Because all operations are performed using the VMware vCenter Server, neither the Replication Manager nor its required software needs to be installed on a virtual machine or on the ESX server where the NAS datastore resides. Operations are sent from a proxy host that is either a physical Linux host or a separate virtual host. VMware snapshots are taken for all virtual machines that are online and reside on the NAS datastore just before the remote replication to ensure operating system consistency of the resulting replica. Figure 332 shows the NAS datastore replica in the Replication Manager.

Figure 332 NFS replication using Replication Manager

Administrators should ensure that the Linux proxy host is able to resolve the addresses of the Replication Manager server, the mount host, and the Celerra Control Station by using DNS. After performing a failover operation, the destination file system can be mounted as a NAS datastore on the remote ESX server. When a NAS datastore replica is mounted to an alternate ESX server, Replication Manager performs all tasks necessary to make the NAS datastore visible to the ESX server. After that is complete, further administrative tasks such as restarting the virtual machines and the applications must be completed either by scripts or by manual intervention.
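As an illustration of the post-failover mount step, the failed-over destination file system can be added to the recovery ESX server as a NAS datastore either from vCenter Server or from the ESX service console with esxcfg-nas. This is a minimal sketch; the Data Mover IP address, export path, and datastore label are placeholders, not values from this environment.

# Mount the replicated file system as a NAS datastore on the recovery ESX server
esxcfg-nas -a -o 10.10.20.5 -s /nas_datastore_fs_replica NAS_DS_DR
# List the NAS datastores to confirm the mount
esxcfg-nas -l

After the datastore is visible, the virtual machines on it still need to be registered and powered on, as described above.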


6.5.2 VMFS datastore replication over iSCSI

Providing high availability to the virtual machines is crucial in large VMware environments. This section explains how Celerra replication technology provides high availability for virtual machines hosted on VMFS datastores over iSCSI. Celerra Replicator technology, along with Replication Manager, can be used to instantly create virtual machine-consistent replicas of VMFS datastores containing virtual machines.

6.5.2.1 Replication using Celerra Replicator

Celerra Replicator for iSCSI can be used to replicate the iSCSI LUNs exported to an ESX server as VMFS datastores.

1. From the Celerra Manager, click Wizards in the left navigation pane. The Select a Wizard page opens in the right pane.

Figure 333 Select a Wizard


2. Click New Replication. The Replication Wizard - EMC Celerra Manager appears.

Figure 334 Select a Replication Type

3. Select the replication type as iSCSI LUN and click Next. The Specify Destination Celerra Network Server page appears.

Figure 335 Specify Destination Celerra Network Server


4. Select an existing destination Celerra. If the destination Celerra is not in the list, click New Destination Celerra. The Create Celerra Network Server page appears.

Figure 336 Create Celerra Network Server

5. Specify the name, IP address, and passphrase of the destination Celerra Network Server and click Next. The Specify Destination Credentials page appears.

Figure 337 Specify Destination Credentials

Note: A trust relationship allows two Celerra systems to replicate data between them. This trust relationship is required for Celerra Replicator sessions that communicate between the separate file systems. The passphrase must be the same for both the source and target.

6. Specify the username and password credentials of the Control Station on the destination Celerra to gain appropriate access and click Next. The Create Peer Celerra Network Server page appears.


Note: The system will also automatically create the reverse communication relationship on the destination side between the destination and local Celerra.

Figure 338 Create Peer Celerra Network Server

7. Specify the name by which the source Celerra will be known to the destination Celerra and click Next. The time difference between the local and destination Control Stations must be within 10 minutes. The Overview/Results page appears.

Figure 339 Overview/Results

8. Review the result and click Next. The Specify Destination Celerra Network Server page appears.

Figure 340 Specify Destination Celerra Network Server


9. Select an existing destination Celerra and click Next. The Select Data Mover Interconnect page appears.

Figure 341 Data Mover Interconnect

10. Click New Interconnect. The Source Settings page appears.

Note: An interconnect supports the Celerra Replicator V2 sessions by defining the communication path between a given Data Mover pair located on the same cabinet or different cabinets. The interconnect configures a list of local (source) and peer (destination) interfaces for all V2 replication sessions using the interconnect.

Figure 342 Source Settings


11. Type the name of the Data Mover interconnect, select the Data Mover, and then click Next. The Specify Destination Credentials page appears.

Figure 343 Specify Destination Credentials

12. Type the username and password of the Control Station on the destination Celerra and click Next. The Destination Settings page appears.

Figure 344 Destination Settings


13. Type the name of the peer Data Mover interconnect and select the Celerra Network Server Data Mover on the other side (peer) of the interconnect and click Next. The Overview/Results page appears.

Figure 345 Overview/Results

14. Review the results of the changes and click Next. The Select Data Mover Interconnect page appears.

Figure 346 Select Data Mover Interconnect

15. Select an already created interconnect and click Next. The Select Replication Session's Interface page appears.


Note: Only one interconnect per Data Mover pair is available.

Figure 347 Select Replication Session's Interface

16. Specify a source interface and a destination interface for this replication session, or use the default of any, which lets the system select an interface from the source and destination interface lists for the interconnect, and then click Next. The Select Source page appears.

Figure 348 Select Source


17. Specify a name for this replication session, select an available iSCSI target and LUN as the source iSCSI LUN to be replicated, and then click Next. The Select Destination page appears.

Note: The target iSCSI LUN must be set to read-only and must be the same size as the source LUN.

Figure 349 Select Destination

18. Select an available iSCSI target and iSCSI LUN and click Next. The Update Policy page appears.

Figure 350 Update Policy

19. Select the Update policy and click Next. The Overview/Results page appears.


Note: Using this policy, replication can be configured to respond only to an explicit request to update (refresh) the destination based on the source content. Alternatively, the maximum time that the source and destination can be out of synchronization before an update occurs can be specified.

Figure 351 Overview/Results

20. Review the changes and then click Finish.

Figure 352 Command Successful

Because the replication operates at a LUN level, multiple virtual machines will be replicated together if they reside on the same iSCSI LUN. If better granularity is required at an image level for an individual virtual machine, place the virtual machine on its own iSCSI LUN. However, when using this design, note that the maximum number of VMFS file systems per ESX server is 256. As in the case of a NAS datastore, virtual machines need to be registered with the remote ESX server after a failover. Virtual machine registration can be done either through the datastore browser GUI or by scripting it with the vmware-cmd command.
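Such a registration pass can be scripted from the ESX service console, for example as part of a recovery runbook. This is a minimal sketch, assuming the failed-over LUN has already been promoted to read/write at the recovery site; the adapter name, datastore name, and paths are placeholders.

# Rescan the iSCSI adapter so the failed-over LUN and its VMFS datastore are discovered
esxcfg-rescan vmhba33
# Register every virtual machine configuration file found on the failed-over datastore
for vmx in /vmfs/volumes/VMFS_DS_DR/*/*.vmx; do
    vmware-cmd -s register "$vmx"
done

The same loop can be extended to power on each virtual machine once registration succeeds.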
6.5.2.2 Replication using Replication Manager and Celerra Replicator

Replication Manager can replicate a VMFS that resides on an ESX server managed by the VMware vCenter Server and is attached to a Celerra system. Replication Manager uses Celerra Replicator technology to create remote replicas. These replicas are actually snapshots that represent a crash-consistent replica of the entire VMFS. Because all operations are performed through the VMware vCenter Server, neither the Replication Manager nor its required software needs to be installed on a virtual machine or on the ESX server where the VMFS resides. Operations are sent from a proxy host that is either a physical Windows host or a separate virtual host. The Replication Manager proxy host can be the same physical or virtual host that serves as the Replication Manager server. In Celerra environments, the VMFS data may reside on more than one LUN. However, all LUNs must be from the same Celerra and must share the same target iSCSI qualified name (IQN). VMware snapshots are taken for all virtual machines that are online and reside on the VMFS just prior to replication. When a disaster occurs, the user can fail over this replica, enabling Replication Manager to make the clone LUN of the original production host's VMFS datastores available on the remote ESX server. Failover also makes the production storage read-only. After performing a failover operation, the destination LUN can be mounted as a VMFS datastore on the remote ESX server. After that is complete, further administrative tasks such as restarting the virtual machines and the applications must be completed either by scripts or by manual intervention. Figure 353 on page 445 shows the VMFS datastore replica in Replication Manager.

Figure 353 VMFS replication using Replication Manager


6.5.3 RDM volume replication over iSCSI

The iSCSI LUNs presented to an ESX server as RDM are normal raw devices just as they are in a non-virtualized environment. RDM provides some advantages of a virtual disk in the VMFS file system while retaining some advantages of direct access to physical devices. For example, administrators can take full advantage of storage-array-based data protection technologies regardless of whether the RDM is in physical mode or virtual mode. Another example of such a use case is physical-to-virtual clustering between a virtual machine and a physical server. Replication of RDM volumes is similar to the physical backup of RDM volumes. Celerra Replicator for iSCSI can be used to replicate iSCSI LUNs presented to the ESX server as RDM volumes, either by using the cbm_replicate command of the CBMCLI package, by using the Celerra nas_replicate command in Celerra version 5.6, or by using Replication Manager. Replication Manager can only be used with an RDM volume that is formatted as NTFS and is in the physical compatibility mode.
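Whichever interface created the session, its state can be checked from the Celerra Control Station with the listing and information options of nas_replicate. The session name below is a placeholder; confirm the exact options against the man page for the installed Celerra version.

$ nas_replicate -list
$ nas_replicate -info rdm_lun_rep1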

6.5.4 Site failover over NFS and iSCSI using VMware SRM and Celerra

VMware vCenter SRM is an integrated component of VMware vSphere and VMware Infrastructure that is installed within a vCenter-controlled VMware data center. SRM leverages the data replication capability of the underlying storage array to create a workflow that will fail over selected virtual machines from a protected site to a recovery site and bring the virtual machines and their associated applications back into production at the recovery site, as shown in Figure 354 on page 447. VMware vCenter SRM 4 supports both Celerra iSCSI and NFS-based replications in VMware vSphere. With VMware Infrastructure and versions earlier than VMware vCenter SRM 4, only Celerra iSCSI-based replications are supported. SRM accomplishes this by communicating with and controlling the underlying storage replication software through an SRM plug-in called the Storage Replication Adapter (SRA). The SRA is software provided by storage vendors that ensures integration of storage devices and replication with VMware vCenter SRM. These vendor-specific scripts support array discovery, replicated LUN discovery, test failover, and actual failover.


Figure 354 VMware vCenter SRM with VMware vSphere

The EMC Celerra Replicator SRA for VMware SRM is a software package that enables SRM to implement disaster recovery for virtual machines by using EMC Celerra systems running Celerra Replicator and Celerra SnapSure software. The SRA-specific scripts support array discovery, replicated LUN discovery, test failover, failback, and actual failover. Disaster recovery plans can be implemented for virtual machines running on NFS, VMFS, and RDM. Figure 355 on page 448 shows a sample screenshot of a VMware SRM configuration.


Figure 355 VMware vCenter SRM configuration

During the test failover process, the production virtual machines at the protected site continue to run and the replication connection remains active for all the replicated iSCSI LUNs or file systems. When the test failover command is run, SRM requests Celerra at the recovery site to take a writeable snap or checkpoint by using the local replication feature licensed at the recovery site. Based on the definitions in the recovery plan, these snaps or checkpoints are discovered and mounted, and pre-power-on scripts or callouts are executed. Virtual machines are powered up and the post-power-on scripts or callouts are executed. The same recovery plan is used for the test as for the real failover so that the users can be confident that the test process is as close to a real failover as possible without actually failing over the environment.

Companies realize a greater level of confidence in knowing that their users are trained on the disaster recovery process and can execute the process consistently and correctly each time. Users have the ability to add a layer of test-specific customization to the workflow that is only executed during a test failover to handle scenarios where the test may have differences from the actual failover scenario. If virtual machine power on is successful, the SRM test process is complete. Users can start applications and perform tests, if required. Prior to cleaning up the test environment, SRM uses a system callout to pause the simulated failover. At this point, the user should verify that the test environment is consistent with the expected results. After verification, the user acknowledges the callout and the test failover process concludes: it powers down and unregisters virtual machines, demotes and deletes the Celerra writeable snaps or checkpoints, and restarts any suspended virtual machines at the recovery site.

The actual failover is similar to the test failover, except that rather than leveraging snaps or checkpoints at the recovery site while keeping the primary site running, the storage array is physically failed over to a remote location, and the actual recovery site LUNs or file systems are brought online and the virtual machines are powered up. VMware will attempt to power off the protected site virtual machines if they are active when the failover command is issued. However, if the protected site is destroyed, VMware will be unable to complete this task. SRM will not allow a virtual machine to be active on both sites. Celerra Replicator has an adaptive mechanism that attempts to ensure that RPOs are met, even with varying VMware workloads, so that users can be confident that the crash-consistent datastores that are recovered by SRM meet their pre-defined service level specifications.

6.5.5 Site failback over NFS and iSCSI using VMware vCenter SRM 4 and EMC Celerra Failback Plug-in for VMware vCenter SRM

EMC Celerra Failback Plug-in for VMware vCenter SRM is a supplemental software package for VMware vCenter SRM 4. This plug-in enables users to fail back virtual machines and their associated datastores to the primary site after implementing and executing disaster recovery through VMware vCenter SRM for Celerra storage systems running Celerra Replicator V2 and Celerra SnapSure. The plug-in does the following:

◆ Provides the ability to input login information (hostname/IP, username, and password) for two vCenter systems and two Celerra systems

◆ Cross-references replication sessions with vCenter Server datastores and virtual machines

◆ Provides the ability to select one or more failed-over Celerra replication sessions for failback

◆ Supports both iSCSI and NAS datastores

◆ Manipulates vCenter Server at the primary site to rescan storage, unregister orphaned virtual machines, rename datastores, register failed-back virtual machines, reconfigure virtual machines, customize virtual machines, remove orphaned .vswp files for virtual machines, and power on failed-back virtual machines

◆ Manipulates vCenter Server at the secondary site to power off the orphaned virtual machines, unregister the virtual machines, and rescan storage

◆ Identifies failed-over sessions created by EMC Replication Manager and directs the user about how these sessions can be failed back

The Failback Plug-in version 4.0 introduces support for virtual machines on NAS datastores and support for virtual machine network reconfiguration before failback.

6.5.5.1 New features and changes

New features include:

◆ Support for virtual machines on NAS datastores

◆ Support for virtual machine network reconfiguration before failback

Changes include:

◆ Improved log file format for readability

◆ Installation utility automatically determines the IP address of the plug-in server

6.5.5.2 Environment and system requirements

The VMware infrastructure at both the protected (primary) and recovery (secondary) sites must meet the following minimum requirements:

◆ vCenter Server 2.5 or later

◆ VI Client

◆ SRM Server with the following installed:
  • SRM 1.0 or later
  • Celerra Replicator Adapter 1.X or later, available on the VMware website

This server can be the vCenter Server or a separate Windows host and should have one or more ESX 3.0.2, 3.5, 3i, or 4 servers connected to a Celerra storage system.

The EMC Celerra Failback Plug-in for VMware vCenter Site Recovery Manager Release Notes, available on Powerlink, provide information on specific system requirements.

6.5.5.3 Known problems and limitations

EMC Celerra Failback Plug-in for VMware vCenter SRM has the following known problems and limitations:

◆ Virtual machine dependencies are not checked.

◆ Fibre Channel LUNs are not supported.

6.5.5.4 Installing the EMC Celerra Failback Plug-in for VMware vCenter SRM

Before installing the EMC Celerra Failback Plug-in for VMware vCenter SRM, the following must be done:

◆ Install the VMware vCenter SRM on a supported Windows host (the SRM server) at both the protected and recovery sites.

Note: Install the EMC Celerra Replicator Adapter for VMware SRM on a supported Windows host (preferably the SRM server) at both the protected and recovery sites.

To install the EMC Celerra Failback Plug-in for VMware vCenter SRM, extract and run the executable EMC Celerra Failback Plug-in for VMware vCenter SRM.exe from the downloaded zip file. Follow the on-screen instructions and provide the username and password for the vCenter Server where the plug-in is registered.

6.5.5.5 Using the EMC Celerra Failback Plug-in for VMware vCenter SRM

To run the EMC Celerra Failback Plug-in for VMware vCenter SRM:

1. Open an instance of VI Client or vSphere Client to connect to the protected site vCenter.

2. Click Celerra Failback Plug-in.

3. Follow the on-screen instructions to connect to the protected and recovery site Celerras and vCenters.

4. Click Discover.

5. Select the desired sessions for failback from the list in the Failed Over Datastores, Virtual Machines, and Replication Sessions areas.

6. Click Failback.

Note: The failback progress is displayed in the Status Messages area.

EMC Celerra Failback Plug-in for VMware vCenter Site Recovery Manager Release Notes available on Powerlink provide further information on troubleshooting and support when using the plug-in.


6.6 Summary

The following table provides the data replication solutions for Celerra storage presented to an ESX server.

Table 6 Data replication solution

Type of virtual object    Replication
NAS datastore             • Celerra Replicator
                          • Replication Manager
                          • VMware vCenter SRM
VMFS/iSCSI                • Celerra Replicator (CBMCLI, nas_replicate, or Celerra Manager)
                          • Replication Manager
                          • VMware vCenter SRM
RDM/iSCSI (physical)      • Celerra Replicator (CBMCLI, nas_replicate, Celerra Manager, or Replication Manager) and SRM
RDM/iSCSI (virtual)       • Celerra Replicator (CBMCLI, nas_replicate, or Celerra Manager) and SRM



Appendix A: CLARiiON Back-End Array Configuration for Celerra Unified Storage

This appendix presents these topics:

◆ A.1 Back-end CLARiiON storage configuration ............................. 457
◆ A.2 Present the new CLARiiON back-end configuration to Celerra unified storage ...................................................................................... 468


Note: This appendix contains procedures to configure the captive back-end CLARiiON storage in the Celerra unified storage. As such, this procedure should only be performed by a skilled user who is experienced in CLARiiON configuration with Celerra. This appendix is provided only for completeness. Given the automation already included as part of the initial Celerra unified storage setup, a typical user will not need to perform this procedure.

The procedure in this appendix should be performed whenever there is a need to modify the configuration of the captive back-end CLARiiON storage of the Celerra unified storage. This procedure will include CLARiiON configuration and presenting this new configuration to Celerra in the form of new Celerra disk volumes that will be added to the existing Celerra storage pools.


A.1 Back-end CLARiiON storage configuration

To configure the back-end CLARiiON storage, create LUNs and add them to the storage group:

1. Create a RAID group.
2. Create LUNs from the RAID group.
3. Add LUNs to the storage group.

Create a RAID group

To create a RAID group:

1. In Navisphere Manager, right-click the RAID group, and then click Create RAID Group.

Figure 356 Create RAID Group option

The Create Storage Pool dialog box appears.


Figure 357 Create Storage Pool

2. Select the Storage Pool ID and RAID Type. Select Manual, and then click Select. The Disk Selection dialog box appears.

3. Select the disks for the RAID type from the Available Disks box, and then click OK. The selected disks appear in the Selected Disks box.


Figure 358 Disk Selection

4. Click Apply. The RAID group is created.

Create LUNs from the RAID group

After the RAID group is created, the LUNs must be created. With FC, SAS, and SATA disks, use the following RAID configuration: a RAID 5 (4+1) group in CLARiiON with two LUNs per RAID group. These LUNs should be load balanced between the CLARiiON storage processors (SPs). Section 3.5.3, "Storage considerations for using Celerra EFDs," on page 107 provides configuration details for EFDs.


To create LUNs from the RAID group: 1. In Navisphere Manager, right-click the RAID group, and then click Create LUN.

Figure 359 Create LUN option

The Create LUN dialog box appears.

Figure 360 Create LUN

2. Select the RAID Type, Storage Pool for new LUN, User Capacity, LUN ID, and Number of LUNs to create, and then click Apply. The Confirm: Create LUN dialog box appears.

Note: With FC disks, use a RAID 5 (4+1) group in CLARiiON. Create two LUNs per RAID group and load-balance the LUNs between the CLARiiON SPs.


Figure 361 Confirm: Create LUN

3. Click Yes. The Message: Create LUN dialog box appears when the LUN is created successfully.


Figure 362 Message: Create LUN

4. Click OK.

Add LUNs to the storage group

The host can access the required LUNs only when the LUNs are added to the storage group that is connected to the host. To add LUNs to the storage group:

1. In Navisphere Manager, right-click the storage group, and then click Select LUNs.


Figure 363 Select LUNs


The Storage Group Properties dialog box appears.

Figure 364 Storage Group Properties

2. Select the LUNs that need to be added, and then click Apply. The Confirm dialog box appears.


Figure 365 Confirm

3. Click Yes to confirm the operation. The Success dialog box appears when the LUNs are added successfully.


Figure 366 Success

4. Click OK.
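Where the Navisphere Secure CLI (naviseccli) is available on a management host, the same three operations can be approximated from the command line. This is a sketch under stated assumptions: the SP address, RAID group number, disk positions, LUN numbers, capacities, and storage group name are placeholders, and the flags should be checked against the naviseccli documentation for the FLARE release in use.

# Create a RAID 5 (4+1) RAID group from five disks (Bus_Enclosure_Disk notation)
naviseccli -h 10.10.30.10 createrg 10 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9
# Bind two LUNs in the RAID group, one owned by each storage processor
naviseccli -h 10.10.30.10 bind r5 20 -rg 10 -cap 200 -sq gb -sp a
naviseccli -h 10.10.30.10 bind r5 21 -rg 10 -cap 200 -sq gb -sp b
# Add the new LUNs to the storage group that is connected to the Celerra
naviseccli -h 10.10.30.10 storagegroup -addhlu -gname Celerra_SG -hlu 20 -alu 20
naviseccli -h 10.10.30.10 storagegroup -addhlu -gname Celerra_SG -hlu 21 -alu 21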


A.2 Present the new CLARiiON back-end configuration to Celerra unified storage

After the back-end CLARiiON storage is configured, this new configuration should be presented to Celerra. To add the disk volumes to the default storage pool, a disk mark is required. To perform the disk mark, type the following command at the Celerra CLI prompt:

$ nas_diskmark -mark -all -discovery y -monitor y

Figure 367 Disk mark

New disk volumes are added to the default storage pool.
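To confirm that the new disk volumes are visible and have been added to the storage pools, the standard Control Station listing commands can be used; output formats vary by Celerra version, and the pool name below is only an example.

$ nas_disk -list
$ nas_pool -list
$ nas_pool -size clar_r5_performance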


Appendix B: Windows Customization

This appendix presents these topics:

◆ B.1 Windows customization ............................................................... 470
◆ B.2 System Preparation tool ................................................................ 471
◆ B.3 Customization process for the cloned virtual machines .......... 472


B.1 Windows customization

Windows customization provides a mechanism to assign customized installations efficiently to different user groups. Windows Installer places all the information about the installation in a relational database. The installation of an application or product can be customized for particular user groups by applying transform operations to the package. Transforms can be used to encapsulate the various customizations of a base package required by different workgroups.

When a virtual machine is cloned, an exact copy of the virtual machine is built with the same asset ID, product key details, IP address, system name, and other system details. This leads to software and network conflicts. Customization of a clone's guest OS is therefore recommended to prevent possible network and software conflicts.
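As a generic illustration of applying a transform to a base Windows Installer package for a particular user group, the package and transform can be installed together from the command line. The package and transform file names below are placeholders.

msiexec /i application.msi TRANSFORMS=salesgroup.mst /qn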


B.2 System Preparation tool

The System Preparation tool (Sysprep) can be used with other deployment tools to install Microsoft Windows operating systems with minimal intervention by an administrator. Sysprep is typically used during large-scale rollouts when it would be too slow and costly to have administrators or technicians interactively install the operating system on individual computers.


B.3 Customization process for the cloned virtual machines

Install Sysprep on the source virtual machine to avoid possible network and software conflicts. Running Sysprep regenerates the system identity and resets the software and network settings of the source virtual machine.

To customize virtual machines:

1. Run Sysprep on the source virtual machine that is identified to be cloned. Figure 368 shows the welcome screen of the customization wizard when using Sysprep.

Figure 368 System Preparation tool

2. Click OK. The following screen appears.


Figure 369 Reseal option

3. Click Reseal. The following dialog box appears.


Figure 370 Generate new SID

4. Click OK. The virtual machine reboots and a new SID is created for the cloned system.

5. Clone the customized virtual machine using the Celerra-based technologies:

   a. Create the checkpoint/snap in Celerra Manager.

   b. Add the checkpoint/snap to the storage of vCenter Server.

   c. Create the cloned virtual machine.

   d. Switch on the cloned virtual machine.

   e. Confirm the details of the new cloned virtual machine.

Any possible conflict between the cloned virtual machine and the source virtual machine is avoided.
