Using EMC Celerra Storage with VMware vSphere and VMware Infrastructure Version 4.0

• Connectivity of VMware vSphere or VMware Infrastructure to Celerra Storage
• Backup and Recovery of VMware vSphere or VMware Infrastructure on Celerra Storage
• Disaster Recovery of VMware vSphere or VMware Infrastructure on Celerra Storage

Yossi Mesika

Copyright © 2008, 2009, 2010 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

H5536.9


Contents

Errata
EMC VSI for VMware vSphere: Unified Storage Management replaces EMC Celerra Plug-in for VMware ..... 21

Preface
Top five optimization recommendations ..... 28

Chapter 1

Introduction to VMware Technology
1.1 VMware vSphere and VMware Infrastructure virtualization platforms ..... 30
1.2 VMware vSphere and VMware Infrastructure data centers ..... 34
1.3 Distributed services in VMware vSphere and VMware Infrastructure ..... 45
1.4 Backup and recovery solutions with VMware vSphere and VMware Infrastructure ..... 50
1.4.1 VMware Data Recovery ..... 50
1.4.2 VMware Consolidated Backup ..... 52
1.5 VMware vCenter Site Recovery Manager ..... 54
1.5.1 Key benefits of VMware SRM ..... 55
1.6 VMware View ..... 57
1.6.1 Key benefits of VMware View ..... 57
1.6.2 Components of the VMware View solution ..... 58
1.7 VMware vCenter Converter ..... 60
1.7.1 Migration with vCenter Converter ..... 61


Chapter 2

EMC Foundation Products
2.1 EMC Celerra ..... 64
2.1.1 Celerra unified storage platform ..... 66
2.1.2 Celerra gateway ..... 68
2.2 Celerra Manager ..... 70
2.3 EMC CLARiiON ..... 71
2.4 EMC Symmetrix ..... 73
2.4.1 Symmetrix VMAX platform ..... 74
2.5 Relevant key Celerra features ..... 76
2.5.1 Celerra Virtual Provisioning ..... 76
2.5.2 Celerra SnapSure ..... 76
2.5.3 Temporary writeable snap ..... 77
2.5.4 Celerra iSCSI snapshots ..... 77
2.5.5 Celerra Replicator ..... 78
2.5.6 EMC Replication Manager and Celerra ..... 80
2.5.7 Celerra Data Deduplication ..... 81

Chapter 3

VMware vSphere and VMware Infrastructure Configuration Options
3.1 Introduction ..... 88
3.2 Storage alternatives ..... 89
3.3 Configuration roadmap ..... 90
3.4 VMware vSphere or VMware Infrastructure installation ..... 93
3.5 Storage considerations ..... 94
3.5.1 AVM ..... 96
3.5.2 MVM ..... 97
3.5.3 Storage considerations for using Celerra EFDs ..... 107
3.6 VMware vSphere or VMware Infrastructure configuration ..... 109
3.6.1 ESX and Celerra storage settings ..... 109
3.6.2 ESX iSCSI HBA and NIC driver configuration ..... 120
3.6.3 VMkernel port configuration in ESX ..... 120
3.7 Using NFS storage ..... 128
3.7.1 Add a Celerra file system to ESX ..... 128
3.7.2 Create a NAS datastore on an ESX server ..... 133
3.8 Using iSCSI storage ..... 137
3.8.1 Configuration considerations for Celerra iSCSI with VMware vSphere and VMware Infrastructure ..... 138
3.8.2 Add a Celerra iSCSI device/LUN to ESX ..... 139
3.8.3 Create VMFS datastores on ESX ..... 174
3.8.4 Create RDM volumes on ESX servers ..... 182
3.9 Introduction to using Fibre Channel storage ..... 205


3.9.1 Create LUNs and add them to a storage group ..... 205
3.9.2 Create a RAID group ..... 205
3.9.3 Present the LUN to VMware vSphere or VMware Infrastructure ..... 219
3.10 Virtual machine considerations ..... 222
3.10.1 Virtual machine disk partitions alignment ..... 222
3.10.2 Virtual machine swap file location ..... 229
3.10.3 Guest OS SCSI timeout settings ..... 236
3.10.4 Paravirtual SCSI (PVSCSI) adapters ..... 237
3.11 Monitor and manage storage ..... 248
3.11.1 Celerra file system notification ..... 248
3.11.2 vCenter Server storage monitoring and alarms ..... 254
3.12 Virtually provisioned storage ..... 258
3.12.1 Configure a NAS datastore on a virtually provisioned NFS file system ..... 259
3.12.2 Considerations to use Virtual Provisioning over NFS ..... 259
3.12.3 Create a virtually provisioned iSCSI LUN ..... 261
3.12.4 Configure a VMFS datastore on a virtually provisioned iSCSI LUN ..... 261
3.12.5 Considerations to use Virtual Provisioning over iSCSI/VMFS ..... 262
3.12.6 Leverage ESX thin provisioning and Celerra Virtual Provisioning ..... 263
3.12.7 Virtual storage expansion using Celerra storage ..... 265
3.13 Storage multipathing ..... 278
3.13.1 Configure VMware NMP with Celerra iSCSI and the ESX iSCSI software initiator ..... 278
3.13.2 Multipathing using Microsoft iSCSI Initiator and Celerra iSCSI inside a Windows guest OS ..... 284
3.13.3 Scaling bandwidth of NAS datastores on Celerra NFS ..... 292
3.13.4 VMware vSphere configuration with Celerra iSCSI using PowerPath/VE ..... 298
3.14 VMware Resiliency ..... 315
3.14.1 The rationale for VMware Resiliency ..... 315
3.14.2 EMC recommendations for VMware Resiliency with Celerra ..... 315
3.14.3 Install appropriate SCSI drivers ..... 316
3.14.4 Summary for VMware Resiliency with Celerra ..... 319
3.14.5 Considerations for Windows virtual machines ..... 319
3.14.6 Considerations for Linux virtual machines ..... 320


3.14.7 Upgrade LSI Logic Parallel drivers to LSI Logic Storport drivers ..... 321
3.14.8 Using paravirtual drivers in vSphere 4 environments ..... 333

Chapter 4

Cloning Virtual Machines
4.1 Introduction ..... 348
4.2 Cloning methodologies ..... 349
4.2.1 Clone Virtual Machine wizard in vCenter Server ..... 349
4.2.2 VMware vCenter Converter ..... 352
4.3 Cloning virtual machines by using Celerra-based technologies ..... 353
4.3.1 Clone virtual machines over NAS datastores using Celerra SnapSure ..... 354
4.3.2 Clone virtual machines over iSCSI/vStorage VMFS datastores using iSCSI snapshots ..... 355
4.3.3 Clone virtual machines over iSCSI or RDM volumes by using iSCSI snapshots ..... 357
4.4 Celerra-based cloning with Virtual Provisioning ..... 359
4.4.1 Clone virtual machines over NAS using SnapSure and Virtual Provisioning ..... 359
4.4.2 Clone virtual machines over VMFS or RDM using iSCSI snapshot and Virtual Provisioning ..... 360
4.5 Conclusion ..... 363

Chapter 5

Backup and Restore of Virtual Machines
5.1 Backup and recovery options ..... 366
5.2 Recoverable as compared to restartable copies of data ..... 367
5.2.1 Recoverable disk copies ..... 367
5.2.2 Restartable disk copies ..... 367
5.3 Virtual machines data consistency ..... 369
5.4 Backup and recovery of a NAS datastore ..... 371
5.4.1 Logical backup and restore using Celerra SnapSure ..... 371
5.4.2 Logical backup and restore using Replication Manager ..... 373
5.4.3 Physical backup and restore using the nas_copy command ..... 376
5.4.4 Physical backup and restore using Celerra NDMP and NetWorker ..... 376


5.4.5 Physical backup and restore using Celerra Replicator ..... 378
5.4.6 Physical backup and restore using Replication Manager ..... 378
5.5 Backup and recovery of a vStorage VMFS datastore over iSCSI ..... 382
5.5.1 Logical backup and restore using Celerra iSCSI snapshots ..... 382
5.5.2 Logical backup and restore using Replication Manager ..... 384
5.5.3 Physical backup and restore using Celerra Replicator ..... 385
5.5.4 Physical backup and restore using Replication Manager ..... 387
5.6 Backup and recovery of an RDM volume over iSCSI ..... 388
5.7 Backup and recovery using VCB ..... 389
5.8 Backup and recovery using VCB and EMC Avamar ..... 395
5.9 Backup and recovery using VMware Data Recovery ..... 398
5.10 Virtual machine single file restore from a Celerra checkpoint ..... 401
5.11 Other file-level backup and restore alternatives ..... 404
5.12 Summary ..... 406

Chapter 6

Using VMware vSphere and VMware Virtual Infrastructure in Disaster Restart Solutions
6.1 Overview ..... 410
6.2 Definitions ..... 411
6.2.1 Dependent-write consistency ..... 411
6.2.2 Disaster restart ..... 411
6.2.3 Disaster recovery ..... 412
6.2.4 Roll-forward recovery ..... 412
6.3 Design considerations for disaster recovery and disaster restart ..... 413
6.3.1 Recovery point objective ..... 413
6.3.2 Recovery time objective ..... 413
6.3.3 Operational complexity ..... 414
6.3.4 Source server activity ..... 415
6.3.5 Production impact ..... 415
6.3.6 Target server activity ..... 415
6.3.7 Number of copies of data ..... 415
6.3.8 Distance for the solution ..... 416
6.3.9 Bandwidth requirements ..... 416


6.3.10 Federated consistency ..... 416
6.3.11 Testing the solution ..... 417
6.3.12 Cost ..... 417
6.4 Geographically distributed virtual infrastructure ..... 419
6.5 Business continuity solutions ..... 420
6.5.1 NAS datastore replication ..... 420
6.5.2 VMFS datastore replication over iSCSI ..... 435
6.5.3 RDM volume replication over iSCSI ..... 446
6.5.4 Site failover over NFS and iSCSI using VMware SRM and Celerra ..... 446
6.5.5 Site failback over NFS and iSCSI using VMware vCenter SRM 4 and EMC Celerra Failback Plug-in for VMware vCenter SRM ..... 449
6.6 Summary ..... 453

Appendix A

CLARiiON Back-End Array Configuration for Celerra Unified Storage
A.1 Back-end CLARiiON storage configuration ..... 457
A.2 Present the new CLARiiON back-end configuration to Celerra unified storage ..... 468

Appendix B

Windows Customization
B.1 Windows customization ..... 470
B.2 System Preparation tool ..... 471
B.3 Customization process for the cloned virtual machines ..... 472


Figures


VMware vSphere architecture ................................................................. 30 VMware vSphere data center physical topology.................................. 35 vNIC, vSwitch, and port groups ............................................................. 36 VMware vNetwork Distributed Switch ................................................. 38 VMware vSphere and VMware Infrastructure storage architecture . 39 Raw device mapping................................................................................. 41 Storage map of vSphere inventory objects ............................................ 44 VMware vMotion ...................................................................................... 45 Storage vMotion......................................................................................... 46 VMware DRS.............................................................................................. 47 VMware HA ............................................................................................... 48 VMware Fault Tolerance .......................................................................... 49 VMware Data Recovery............................................................................ 51 Site Recovery Manager ............................................................................. 55 VMware View with VMware vSphere 4 ................................................ 57 VMware vCenter Converter..................................................................... 60 Celerra block diagram............................................................................... 64 Celerra storage topology .......................................................................... 66 Celerra unified storage ............................................................................. 68 Celerra gateway storage ........................................................................... 69 Celerra Manager GUI................................................................................ 70 Celerra Replicator ...................................................................................... 79 EMC Celerra Plug-in for VMware .......................................................... 83 Celerra Data Deduplication calculator ................................................... 85 Celerra storage with VMware vSphere and VMware Infrastructure 89 Configuration roadmap............................................................................ 90 Storage layout ............................................................................................ 99 Volumes .................................................................................................... 101 Create a stripe volume ............................................................................ 102 New Volume ............................................................................................ 103 File Systems .............................................................................................. 105


New File System ...................................................................................... Sample storage layout............................................................................. Network Interface Properties ................................................................ Modify NFS.MaxVolumes on each ESX host ...................................... Set Net.TcpipHeapSize and Net.TcpipHeapMax parameters .......... Configure ESX NFS heartbeat parameters........................................... VMkernel configuration - Add Networking ....................................... Add Network Wizard - Connection Type ........................................... VMkernel - Network Access .................................................................. Add Network Wizard - VMkernel - Connection Settings ................. Add Network Wizard - VMkernel - IP Connection Settings ............ DNS Configuration ................................................................................. Routing...................................................................................................... Add Network Wizard - Ready to Complete ....................................... File Systems .............................................................................................. New File System ...................................................................................... NFS Exports.............................................................................................. NFS Export Properties ............................................................................ Add Storage.............................................................................................. Add Storage - Select Storage Type........................................................ Add Storage - Locate Network File System ........................................ Add Storage - Network File System ..................................................... Security Profile......................................................................................... Firewall Properties .................................................................................. Storage Adapters ..................................................................................... iSCSI Initiator Properties........................................................................ General Properties................................................................................... Add Send Target Server ......................................................................... iSCSI Initiator Properties - Dynamic Discovery ................................. Wizards - Select a Wizard ...................................................................... New iSCSI Lun Wizard .......................................................................... Select/Create Target ............................................................................... Select/Create File System ...................................................................... Enter LUN Info ........................................................................................ LUN Masking........................................................................................... Overview/Results ................................................................................... Storage Adapters ..................................................................................... 
Rescan ....................................................................................................... Storage Adapters ..................................................................................... Storage Adapters - Properties................................................................ iSCSI Initiator Properties........................................................................ General Properties................................................................................... iSCSI Initiator Properties - Dynamic Discovery .................................


Add Send Target Server ......................................................................... Wizards - Select a Wizard ...................................................................... Remove from Inventory option ............................................................. Datastores ................................................................................................. Datastores - Delete................................................................................... iSCSI Target Properties - Target ............................................................ iSCSi Target Properties - LUN Mask .................................................... Networking .............................................................................................. iSCSi Initiator Properties ........................................................................ iSCSi Initiator Properties - Discovery ................................................... Add Target Portal .................................................................................... iSCSI Initiator Properties - Target portal added ................................. iSCSI Initiator Properties - Targets ....................................................... Log On to Target...................................................................................... Advanced Settings................................................................................... iSCSI Initiator Properties - Targets ....................................................... Datastores ................................................................................................. Add Storage - Select Storage Type ........................................................ Add Storage - Select Disk/LUN............................................................ Add Storage - Select VMFS Mount Options ........................................ Add Storage - Current Disk Layout...................................................... Add Storage - Properties ........................................................................ Add Storage - Disk/LUN - Formatting................................................ Add Storage - Ready to Complete ........................................................ New Virtual Machine option ................................................................. Create New Virtual Machine ................................................................. Create New Virtual Machine - Name and Location........................... Create New Virtual Machine - Datastore............................................. Create New Virtual Machine - Virtual Machine Version .................. Create New Virtual Machine - Guest Operating System................... Create New Virtual Machine - CPUs.................................................... Create New Virtual Machine - Memory............................................... Create New Virtual Machine - Network .............................................. Create New Virtual Machine - SCSI Controller .................................. Create New Virtual Machine - Select a Disk ....................................... Create New Virtual Machine - Select a Disk ....................................... Create New Virtual Machine - Create a Disk ...................................... Create New Virtual Machine - Advanced Options ............................ Create New Virtual Machine - Ready to Complete............................ 
Edit Settings.............................................................................................. Virtual Machine Properties .................................................................... Add Hardware......................................................................................... Add Hardware - Select a Disk ...............................................................


Add Hardware - Select and Configure a Raw LUN........................... 202 Add Hardware - Select a Datastore ...................................................... 202 Add Hardware - Advanced Options.................................................... 203 Add Hardware - Ready to Complete ................................................... 204 Create a Storage Pool .............................................................................. 206 Create Storage Pool ................................................................................. 207 Disk Selection........................................................................................... 208 Create LUNs from a RAID group ......................................................... 209 Create LUN .............................................................................................. 210 Confirm: Create LUN ............................................................................. 210 Message: Create LUN - LUN created successfully............................. 211 Create Storage Group ............................................................................. 212 Create Storage Group ............................................................................. 212 Confirm: Create Storage Group ............................................................ 213 Success: Create Storage Group .............................................................. 213 Connect Hosts .......................................................................................... 214 Select a host for the storage group........................................................ 215 Confirm the connected host................................................................... 215 Connect Host operation succeeded ...................................................... 216 Select LUNs for the storage group........................................................ 217 Select LUNs .............................................................................................. 218 Confirm addition of LUNs to the storage group ................................ 219 Successful addition of LUNs to the storage group............................. 219 Rescan FC adapter................................................................................... 220 Rescan dialog box.................................................................................... 220 FC LUN added to the storage................................................................ 221 Command prompt - diskpart ................................................................ 224 Select the disk........................................................................................... 224 Create a partition with a 1 MB disk boundary.................................... 224 Computer Management ......................................................................... 225 NTFS data partition alignment (Windows system Information) ..... 227 NTFS data partition alignment (wmic command) ............................. 228 Allocation unit size of a formatted NTFS data partition ................... 228 Output for a Linux partition aligned to a 1 MB disk boundary (starting sector 2048) ................................................................................................. 229 Output for an unaligned Linux partition (starting sector 63) ........... 229 Edit Virtual Machine Swapfile Location.............................................. 
231 Virtual Machine Swapfile Location ...................................................... 232 List of datastores...................................................................................... 233 Advanced Settings................................................................................... 234 Mem.Host.LocalSwapDirEnabled parameter ..................................... 235 Mem.Host.LocalSwapDir parameter.................................................... 236 Edit DWORD Value ................................................................................ 237


Edit Settings for the virtual machine .................................................... Virtual Machine Properties .................................................................... Add Hardware......................................................................................... Select a Disk.............................................................................................. Create a Disk ............................................................................................ Advanced Options................................................................................... Ready to Complete .................................................................................. Change the SCSI controller type ........................................................... Change SCSI Controller Type................................................................ Virtual Machine Properties .................................................................... Disk Management.................................................................................... Notifications ............................................................................................. New Notification: Storage Projection ................................................... Storage Usage........................................................................................... Notifications page.................................................................................... New Notification: Storage Projection ................................................... Notifications page.................................................................................... List of Datastores ..................................................................................... General tab................................................................................................ Alarm settings .......................................................................................... Actions tab ................................................................................................ Create virtually provisioned NFS file system ..................................... NAS datastore in vCenter Server .......................................................... Creating the virtually provisioned iSCSI LUN ................................... iSCSI VMFS datastore in vCenter Server ............................................. Virtual machines provisioned ............................................................... Create a Disk ............................................................................................ File Systems .............................................................................................. Extend File System .................................................................................. Auto Extend Enabled .............................................................................. iSCSI LUN expansion.............................................................................. Extend iSCSI LUN ................................................................................... Configuration tab..................................................................................... Data capacity ............................................................................................ iSCSI datastore expansion ...................................................................... 
Additional available space ..................................................................... iSCSI datastore expansion ...................................................................... Test Properties.......................................................................................... Increase Datastore Capacity................................................................... Disk Layout .............................................................................................. Extent Size................................................................................................. Ready to complete page.......................................................................... Add Extent in VMware Infrastructure .................................................


iSCSI Target Properties........................................................................... LUN Mask ................................................................................................ vSwitch configuration............................................................................. Rescan ....................................................................................................... Properties.................................................................................................. iSCSI_ppve Properties ............................................................................ iSCSI Disk Manage Paths ....................................................................... iSCSI Target Properties........................................................................... LUN Mask ................................................................................................ vSwitches .................................................................................................. iSCSI Initiator Properties........................................................................ Discovery .................................................................................................. Log On to Target...................................................................................... Advanced Settings................................................................................... Advanced Settings................................................................................... Target Properties ..................................................................................... Device Details .......................................................................................... Network page .......................................................................................... New Network Device ............................................................................. Interfaces................................................................................................... New Network Interface .......................................................................... Create a VMkernel port .......................................................................... vSwitch3 Properties ................................................................................ vSwitch3 Properties ................................................................................ Celerra Data Mover interfaces............................................................... PowerPath architecture .......................................................................... Claim rule to ESX server ........................................................................ Kernel and esx conf ................................................................................. Rescan the ESX host ................................................................................ iSCSI Target Properties........................................................................... LUN Mask ................................................................................................ Storage Adapters ..................................................................................... Storage....................................................................................................... Add Storage wizard ................................................................................ 
Select Disk/LUN ..................................................................................... Current Disk Layout ............................................................................... Ready to Complete.................................................................................. vCenter Server storage configuration .................................................. iSCSI_ppve Properties ............................................................................ iSCSI Disk Manage Paths ....................................................................... PowerPath ................................................................................................ iSCSI Target Properties........................................................................... vSwitches ..................................................................................................


iSCSI Target Properties........................................................................... Rescan........................................................................................................ Add a new iSCSI LUN to the ESX host ................................................ iSCSI_ppve Properties ............................................................................ iSCSI Disk Manage Paths ....................................................................... Windows virtual machines system event viewer ............................... Upgrade the LSI Logic PCI-X Ultra 320 driver ................................... Hardware Update Wizard...................................................................... Install software......................................................................................... Select device driver from a list............................................................... Select the device driver ........................................................................... Install from Disk ...................................................................................... Locate File ................................................................................................. Select device driver ................................................................................. Completing the Hardware Update Wizard ......................................... ESX host .................................................................................................... Datastore Browser ................................................................................... Configuration file .................................................................................... Update virtual machine file configuration .......................................... LSI Storport drivers are upgraded successfully.................................. Virtual Machine Properties .................................................................... Browse Datastores ................................................................................... Virtual Machine Properties .................................................................... Install the third-party driver .................................................................. Select VMware PVSCSI Controller ....................................................... Virtual Machine Properties .................................................................... Select Hard Disk ...................................................................................... Select a Disk.............................................................................................. Create a Disk ............................................................................................ Advanced Options................................................................................... Ready to Complete .................................................................................. Virtual Machine Properties .................................................................... Change SCSI Controller Type................................................................ Clone Virtual Machine wizard .............................................................. Host/Cluster ............................................................................................ Datastore ................................................................................................... 
Disk Format .............................................................................................. Guest Customization............................................................................... Ready to Complete .................................................................................. Create a writeable checkpoint for NAS datastore .............................. Promote a snapshot ................................................................................. Assign a new signature option .............................................................. File system usage on Celerra Manager.................................................


Parameter setting using Celerra Manager ........................................... Checkpoint creation in Celerra Manager GUI 5.6 .............................. ShowChildFsRoot Server Parameter Properties in Celerra Manager Datastore Browser view after checkpoints are visible ....................... Job Wizard ................................................................................................ Restoring the datastore replica from Replication Manager .............. Replica Properties in Replication Manager ......................................... Read-only copy of the datastore view in the vSphere client............. NDMP recovery using EMC NetWorker............................................. Backup with integrated checkpoint...................................................... Mount Wizard - Mount Options ........................................................... VMFS mount options to manage snapshots........................................ Celerra Manager Replication Wizard................................................... VCB............................................................................................................ NetWorker configuration settings for VCB ......................................... VCB backup with EMC Avamar Virtual Edition ............................... VMware Data Recovery ......................................................................... VDR backup process ............................................................................... Mapped CIFS share containing a virtual machine in the vCenter Server......................................................................................................... Virtual machine view from the vSphere client ................................... Registration of a virtual machine with ESX......................................... Select a Wizard......................................................................................... Select a Replication Type........................................................................ File System................................................................................................ Specify Destination Celerra Network Server ...................................... Create Celerra Network Server ............................................................. Specify Destination Credentials ............................................................ Create Peer Celerra Network Server .................................................... Overview/Results ................................................................................... Specify Destination Celerra Network Server ...................................... Select Data Mover Interconnect ............................................................ Source Settings......................................................................................... Specify Destination Credentials ............................................................ Destination Settings ................................................................................ Overview/Results ................................................................................... Select Data Mover Interconnect ............................................................ Select Replication Session's Interface.................................................... Select Source............................................................................................. 
Select Destination .................................................................................... Update Policy........................................................................................... Select Tape Transport ............................................................................. Overview/Results ...................................................................................


Command Successful .............................................................................. NFS replication using Replication Manager........................................ Select a Wizard......................................................................................... Select a Replication Type........................................................................ Specify Destination Celerra Network Server ...................................... Create Celerra Network Server ............................................................. Specify Destination Credentials ............................................................ Create Peer Celerra Network Server .................................................... Overview/Results ................................................................................... Specify Destination Celerra Network Server ...................................... Data Mover Interconnect........................................................................ Source Settings ......................................................................................... Specify Destination Credentials ............................................................ Destination Settings................................................................................. Overview/Results ................................................................................... Select Data Mover Interconnect............................................................. Select Replication Session's Interface.................................................... Select Source ............................................................................................. Select Destination .................................................................................... Update Policy ........................................................................................... Overview/Results ................................................................................... Command Successful .............................................................................. VMFS replication using Replication Manager .................................... VMware vCenter SRM with VMware vSphere................................... VMware vCenter SRM configuration................................................... Create RAID Group option .................................................................... Create Storage Pool ................................................................................. Disk Selection ........................................................................................... Create LUN option .................................................................................. Create LUN............................................................................................... Confirm: Create LUN.............................................................................. Message: Create LUN ............................................................................. Select LUNs .............................................................................................. Storage Group Properties ....................................................................... Confirm ..................................................................................................... Success....................................................................................................... 
Disk mark.................................................................................................. System Preparation tool.......................................................................... Reseal option ............................................................................................ Generate new SID ....................................................................................


Tables

1 Default and recommended values of ESX NFS heartbeat parameters ..... 118
2 SCSI driver recommendations for Windows guest OSs ..... 320
3 Linux guest OS recommendations ..... 321
4 Virtual machine cloning methodology comparison ..... 363
5 Backup and recovery options ..... 406
6 Data replication solution ..... 453


Errata

EMC VSI for VMware vSphere: Unified Storage Management replaces EMC Celerra Plug-in for VMware

Description

Following the recent release of EMC VSI for VMware vSphere: Unified Storage Management, EMC Celerra Plug-in for VMware is no longer supported, and is not available for download. Therefore, customers should use EMC VSI for VMware vSphere: Unified Storage Management instead. All references in this document to EMC Celerra Plug-in for VMware or its documentation should be regarded instead as references to EMC VSI for VMware vSphere: Unified Storage Management or its documentation.

Affected sections

References to EMC Celerra Plug-in for VMware and its documentation in the following sections must be regarded as references to EMC VSI for VMware vSphere: Unified Storage Management:

◆ Section 2.5.7, "Celerra Data Deduplication," on page 81: References to the plug-in, including the figures.

◆ Section 3.3, "Configuration roadmap," on page 90: References to the plug-in in the note.

◆ Section 4.3, "Cloning virtual machines by using Celerra-based technologies," on page 353: References to the plug-in in the note.

Information on EMC VSI for VMware vSphere: Unified Storage Management

The EMC VSI for VMware vSphere: Unified Storage Management Release Notes contain installation instructions and supplemental information. The EMC VSI for VMware vSphere: Unified Storage Management Product Guide contains prerequisites and best practices. These two documents provide more information on the features of EMC VSI for VMware vSphere: Unified Storage Management, which are broader than those of EMC Celerra Plug-in for VMware. Read both these documents before installing EMC VSI for VMware vSphere: Unified Storage Management.


Contents of the EMC VSI for VMware vSphere: Unified Storage Management solution

The EMC VSI for VMware vSphere: Unified Storage Management solution contains the following:

◆ EMC VSI for VMware vSphere: Unified Storage Management 4.0 Zip file (P/N 300-012-064)

◆ EMC VSI for VMware vSphere: Unified Storage Management Read Me First (P/N 300-012-099)

◆ EMC VSI for VMware vSphere: Unified Storage Management Release Notes (P/N 300-012-098)

◆ EMC VSI for VMware vSphere: Unified Storage Management Product Guide (P/N 300-012-100)

Installing EMC VSI for VMware vSphere: Unified Storage Management

EMC VSI for VMware vSphere: Unified Storage Management is distributed as a Zip file containing a single-file installer. After you download the Zip file from the EMC Powerlink® website and unzip the file, the EMC VSI for VMware vSphere: Unified Storage Management software can be installed. The EMC VSI for VMware vSphere: Unified Storage Management Release Notes contain detailed installation instructions.


Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to the product release notes.

Audience

This TechBook describes how VMware vSphere and VMware Infrastructure work with EMC Celerra storage systems and software technologies. The intended audience for this TechBook is storage administrators, system administrators, and VMware vSphere and VMware Infrastructure administrators. This document can also be used by individuals who are involved in acquiring, managing, or operating EMC Celerra storage arrays and host devices.

Readers of this guide are expected to be familiar with the following topics:

◆ EMC Celerra system operation

◆ EMC Celerra Manager

◆ EMC CLARiiON

◆ EMC Symmetrix

◆ VMware vSphere and VMware Infrastructure operation

Note: This TechBook was previously called VMware ESX using EMC Celerra Storage Systems.


Related documentation


Related documents include the following from EMC:

◆ EMC Celerra Replicator Adapter for VMware vCenter Site Recovery Manager Version 4.0—Release Notes

◆ EMC Celerra Failback Plug-in for VMware vCenter Site Recovery Manager—Release Notes

◆ Managing EMC Celerra Volumes and File Systems with Automatic Volume Management Technical Module

◆ Managing EMC Celerra Volumes and File Systems Manually Technical Module

◆ PowerPath/VE for VMware vSphere Installation and Administration Guide 5.4

◆ Implementing Virtual Provisioning on EMC CLARiiON and Celerra with VMware Virtual Infrastructure—Applied Technology white paper

◆ EMC Infrastructure for Deploying VMware View in the Enterprise EMC Celerra Unified Storage Platforms—Solution Guide

◆ Configuring NFS on Celerra TechModule Version 5.6

◆ Configuring iSCSI Targets on Celerra TechModule Version 5.6

◆ Managing Celerra Volumes and File Systems with Automatic Volume Management TechModule Version 5.6

◆ Configuring and Managing Celerra Networking TechModule Version 5.6

◆ Configuring and Managing Celerra Network High Availability TechModule Version 5.6

◆ Configuring Standbys on Celerra TechModule Version 5.6

◆ Configuring NDMP Backups on Celerra TechModule Version 5.6

◆ Using SnapSure on Celerra TechModule Version 5.6

◆ Using Celerra Replicator (V2) TechModule Version 5.6

◆ Using Celerra AntiVirus Agent Technical Module Version 5.6

◆ Using the Celerra Data Deduplication Technical Module Version 5.6

◆ E-Lab Interoperability Navigator utility

◆ Using EMC CLARiiON Storage with VMware vSphere and VMware Infrastructure TechBook


The following related documents are from VMware:

◆ ESX Configuration Guide - ESX 4.0 and vCenter Server 4.0

◆ ESX Server 3 Configuration Guide Update 2 and later for ESX Server 3.5 and VirtualCenter 2.5

◆ Recommendations for Aligning VMFS Partitions—VMware Performance Study

◆ SAN System—Design and Deployment Guide

Conventions used in this document

EMC uses the following conventions for special notices.

Note: A note presents information that is important, but not hazard-related.

! CAUTION
A caution contains information essential to avoid data loss or damage to the system or equipment.

! IMPORTANT
An important notice contains information essential to operation of the software or hardware.

WARNING
A warning contains information essential to avoid a hazard that can cause severe personal injury, death, or substantial property damage if you ignore the warning.

DANGER
A danger notice contains information essential to avoid a hazard that will cause severe personal injury, death, or substantial property damage if you ignore the message.


Typographical conventions

EMC uses the following type style conventions in this document:

Normal
Used in running (nonprocedural) text for:
• Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
• Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, utilities
• URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, notifications

Bold
Used in running (nonprocedural) text for:
• Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, man pages
Used in procedures for:
• Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
• What the user specifically selects, clicks, presses, or types

Italic
Used in all text (including procedures) for:
• Full titles of publications referenced in text
• Emphasis (for example, a new term)
• Variables

Courier
Used for:
• System output, such as an error message or script
• URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold
Used for:
• Specific user input (such as commands)

Courier italic
Used in procedures for:
• Variables on the command line
• User input variables

< >
Angle brackets enclose parameter or variable values supplied by the user

[ ]
Square brackets enclose optional values

|
Vertical bar indicates alternate selections; the bar means "or"

{ }
Braces indicate content that you must specify (that is, x or y or z)

...
Ellipses indicate nonessential information omitted from the example


The team that wrote this TechBook

This TechBook was authored by a team from Unified Storage Solutions based at Research Triangle Park in North Carolina. The main author is Yossi Mesika. Yossi has 15 years of experience in software engineering in the areas of virtualization, network-attached storage, and databases. Additional contributors to this TechBook are:

◆ Saranya Balasundaram

◆ Venkateswara Rao Etha

◆ Bala Ganeshan - Symmetrix Partner Engineering

◆ John Jom

◆ Sheetal Kochavara - USD Corporate Systems Engineering

◆ Vivek Srinivasa

We'd like to hear from you!

Your feedback on our TechBooks is important to us! We want our books to be as helpful and relevant as possible, so please feel free to send us your comments, opinions and thoughts on this or any other TechBook: [email protected]


Top five optimization recommendations

When using EMC Celerra storage with VMware vSphere or VMware Infrastructure, consider the following key optimization recommendations. However, this list does not replace the need to review the various recommendations that are included in this document.

Virtual machine alignment — Follow the guidelines for virtual machine alignment when deploying virtual machines on Celerra storage (NFS and iSCSI). This guideline includes partition alignment and, for Windows virtual machines, adjustment of the NTFS allocation unit size. Section 3.10.1, "Virtual machine disk partitions alignment," on page 222 provides more details.

Uncached write mechanism for Celerra NFS — When deploying virtual machines on Celerra NFS storage, it is recommended to enable the uncached write mechanism because it can improve the overall virtual machine performance. Section 3.6.1.1, "Celerra uncached write mechanism," on page 109 provides more details.

VMware resiliency with Celerra storage — Follow the guidelines for VMware resiliency with Celerra. This includes settings in ESX, in the virtual machines, and in the guest operating systems. These settings allow the virtual machines to better withstand Celerra events such as Data Mover reboot and Data Mover failover. Section 3.14, "VMware Resiliency," on page 315 provides more details.

VMware multipathing and failover with Celerra storage — Follow the guidelines for I/O multipathing when using Celerra with VMware vSphere or VMware Infrastructure. This can improve the overall performance and resource utilization of the virtual data center. Section 3.13, "Storage multipathing," on page 278 provides more details.

ESX thin provisioning and Celerra Virtual Provisioning — With VMware vSphere and VMware Infrastructure, leverage Celerra Virtual Provisioning for better storage utilization. In addition, use ESX thin provisioning with VMware vSphere. This can improve the overall storage utilization. Section 3.11.1, "Celerra file system notification," on page 248 provides more details.
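Several of these recommendations come down to ESX-side configuration that can be applied either through the vSphere Client or programmatically through the vSphere Web Services API. The following is a minimal sketch, not part of the original TechBook, that adjusts ESX advanced NFS settings (such as the NFS heartbeat parameters summarized in Table 1) using the open-source pyVmomi Python bindings. The vCenter host name, credentials, and numeric values shown are placeholders only; consult Table 1 and the sections referenced above for the values that EMC actually recommends.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.local', user='administrator',
                  pwd='password',
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Each key below is a real ESX advanced NFS setting; the numeric values
        # are illustrative placeholders, not EMC's recommendations (see Table 1),
        # and must match the option's declared type on the host.
        host.configManager.advancedOption.UpdateOptions(changedValue=[
            vim.option.OptionValue(key='NFS.HeartbeatFrequency', value=12),
            vim.option.OptionValue(key='NFS.HeartbeatTimeout', value=5),
            vim.option.OptionValue(key='NFS.HeartbeatMaxFailures', value=10),
        ])
    view.Destroy()
finally:
    Disconnect(si)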


1 Introduction to VMware Technology

This chapter presents these topics:

◆ 1.1 VMware vSphere and VMware Infrastructure virtualization platforms .......... 30
◆ 1.2 VMware vSphere and VMware Infrastructure data centers .......... 34
◆ 1.3 Distributed services in VMware vSphere and VMware Infrastructure .......... 45
◆ 1.4 Backup and recovery solutions with VMware vSphere and VMware Infrastructure .......... 50
◆ 1.5 VMware vCenter Site Recovery Manager .......... 54
◆ 1.6 VMware View .......... 57
◆ 1.7 VMware vCenter Converter .......... 60


1.1 VMware vSphere and VMware Infrastructure virtualization platforms

VMware vSphere and VMware Infrastructure are two virtualization platforms from VMware. VMware Infrastructure 3.5 is the previous major release of the platform, whereas VMware vSphere 4 is the next generation of the platform that VMware recently released. VMware vSphere and VMware Infrastructure virtualization platforms consist of various components including ESX/ESXi hosts and VMware vCenter Server. In addition, VMware vSphere and VMware Infrastructure offer a set of services like distributed resource scheduling, high availability, and backup. The relationship between the various components within the VMware vSphere platform is shown in Figure 1 on page 30.

Figure 1    VMware vSphere architecture


ESX and ESXi — ESX and ESXi are the foundation to deliver virtualization-based distributed services to IT environments. As a core building block of VMware vSphere and VMware Infrastructure, both ESX and ESXi form a production-proven virtualization layer that abstracts processor, memory, storage, and networking resources into multiple virtual machines running side-by-side on the same physical server. Sharing hardware resources across a large number of virtual machines increases hardware utilization and decreases capital and operating costs. The two versions of ESX available are:

◆ VMware ESX — Contains a built-in service console, which is installed as the first component and is used to bootstrap the ESX server installation. Using a command line interface, the service console can then be used to configure ESX. ESX is available as an installable DVD-ROM boot image. The service console is a virtual machine consisting of a Red Hat Linux kernel. It runs inside the ESX server and can be used to run local commands, scripts, or agents within it.

◆ VMware ESXi — VMware ESXi does not contain a service console. It is available in two forms: VMware ESXi Embedded and VMware ESXi Installable. ESXi Embedded is firmware that is built into a server's physical hardware or supplied as an internal USB drive. ESXi Installable is software that is available as an installable CD-ROM boot image. The ESXi Installable software can be installed on a server's hard drive or on an external USB drive.

vCenter Server — vCenter Server delivers centralized management, operational automation, resource optimization, and high availability to IT environments. Virtualization-based distributed services provided by vMotion, VMware Distributed Resource Scheduler (DRS), and VMware High Availability (HA) equip the dynamic data center with unprecedented levels of serviceability, efficiency, and reliability. Automated resource optimization with VMware DRS aligns available resources with predefined business priorities while streamlining labor-intensive and resource-intensive operations. Migration of live virtual machines with vMotion makes the maintenance of IT environments nondisruptive. VMware HA enables cost-effective application availability independent of hardware and operating systems.

VMware Virtual Machine — A virtual machine is a representation of a physical machine by software. A virtual machine exists as a series of files on disk: for example, a file for the hard drives, a file for the memory swap space, and a file for the virtual machine configuration. A virtual machine has its own set of virtual hardware (such as RAM, CPU, NIC, and hard disks) upon which an operating system and an application are loaded. The operating system sees a consistent and normalized set of hardware regardless of the actual physical hardware components. VMware virtual machines use advanced hardware features such as 64-bit computing and virtual symmetric multiprocessing.

VMware vSphere Client and VMware Infrastructure Client — Interfaces that allow administrators and users to connect remotely to vCenter Server or ESX/ESXi from any Windows machine.

VMware vSphere Web Access and VMware Infrastructure Web Access — Web interfaces for virtual machine management and remote console access.

Some of the optional components of VMware vSphere 4 and VMware Infrastructure are:

VMware vMotion — VMware vMotion enables the live migration of running virtual machines from one physical server to another.

VMware Storage vMotion — Storage vMotion enables the migration of virtual machine files from one datastore to another, even across storage arrays, without service interruption.

VMware High Availability (HA) — VMware HA provides high availability for applications running on virtual machines. In the event of a server failure, affected virtual machines are automatically restarted on other production servers with spare capacity.

VMware Distributed Resource Scheduler (DRS) — VMware DRS leverages vMotion to dynamically allocate and balance computing capacity across a collection of hardware resources aggregated into logical resource pools.

VMware Fault Tolerance (FT) — When VMware FT is enabled for a virtual machine, a secondary copy of the original (or primary) virtual machine is created in the same data center. All actions completed on the primary virtual machine are also applied to the secondary virtual machine. If the primary virtual machine becomes unavailable, the secondary machine becomes active and provides continuous availability. VMware Fault Tolerance is unique to VMware vSphere.

vNetwork Distributed Switch — This feature, which is also unique to VMware vSphere, includes a distributed virtual switch that is created and maintained by vCenter Server and spans many ESX/ESXi hosts, enabling significant reduction of ongoing network maintenance activities and increasing network capacity. This allows virtual machines to maintain consistent network configuration and advanced network features and statistics as they migrate across multiple hosts.

VMware Consolidated Backup (VCB) — This feature provides a centralized facility for agent-free backup of virtual machines with VMware Infrastructure. It simplifies backup administration and reduces the impact of backups on ESX/ESXi performance.

VMware Data Recovery — A backup and recovery product for VMware vSphere environments that provides quick and complete data protection for virtual machines. VMware Data Recovery is a disk-based solution that is built on the VMware vStorage API for data protection and is fully integrated with vCenter Server.

Pluggable Storage Architecture (PSA) — A modular partner plug-in storage architecture that enables greater array certification flexibility and improved array-optimized performance. PSA is a multipath I/O framework that allows storage partners to enable array compatibility asynchronously to ESX release schedules. VMware partners can deliver performance-enhancing multipath load-balancing behaviors that are optimized for each array.

VMware vSphere Software Development Kit (SDK) and VMware Infrastructure SDK — SDKs that provide a standard interface for VMware and third-party solutions to access VMware vSphere and VMware Infrastructure.

vStorage APIs for data protection — This API leverages the benefits of Consolidated Backup and makes it significantly easier to deploy, while adding several new features that deliver efficient and scalable backup and restore of virtual machines. Like Consolidated Backup, this API offloads backup processing from ESX servers, thus ensuring that the best consolidation ratio is delivered without disrupting applications and users. This API enables backup tools to directly connect to the ESX servers and the virtual machines running on them, without any additional software installation. The API enables backup tools to do efficient incremental, differential, and full-image backup and restore of virtual machines.


1.2 VMware vSphere and VMware Infrastructure data centers

VMware vSphere and VMware Infrastructure virtualize the entire IT infrastructure including servers, storage, and networks. VMware vSphere and VMware Infrastructure aggregate these resources and present a uniform set of elements in the virtual environment. With VMware vSphere and VMware Infrastructure, IT resources can be managed like a shared utility and resources can be dynamically provisioned to different business units and projects.

A typical VMware vSphere or VMware Infrastructure data center consists of basic physical building blocks such as x86 virtualization servers, storage networks and arrays, IP networks, a management server, and desktop clients. The physical topology of a VMware vSphere data center is illustrated in Figure 2 on page 35.


Figure 2

VMware vSphere data center physical topology


Network architecture

The virtual environment provides networking elements similar to those in the physical world: virtual network interface cards (vNIC), virtual switches (vSwitch), and port groups. VMware vSphere introduced a new type of switch architecture, called vNetwork Distributed Switch, that expands this network architecture. The network architecture is depicted in Figure 3.

Figure 3

vNIC, vSwitch, and port groups

Like a physical machine, each virtual machine has one or more vNICs. The guest operating system and applications communicate with the vNIC through a standard device driver or a VMware optimized device driver in the same way as with a physical NIC. Outside the virtual machine, the vNIC has its own MAC address and one or more IP addresses, and responds to the standard Ethernet protocol in the same way as a physical NIC. An outside agent cannot detect that it is communicating with a virtual machine.

VMware vSphere 4 offers two types of switch architecture: vSwitch and vNetwork Distributed Switch. A vSwitch works like a layer 2 physical switch. Each ESX host has its own vSwitch. One side of the vSwitch has port groups that connect to virtual machines. The other side has uplink connections to physical Ethernet adapters on the server where the vSwitch resides. Virtual machines connect to the outside world through the physical Ethernet adapters that are connected to the vSwitch uplinks. A vSwitch can connect its uplinks to more than one physical Ethernet adapter to enable NIC teaming. With NIC teaming, two or more physical adapters can be used to share the traffic load or provide passive failover in the event of a physical adapter hardware failure or a network outage. With VMware Infrastructure, only vSwitch is available.

Port group is a unique concept in the virtual environment. A port group is a mechanism for setting policies that govern the network connected to it. A vSwitch can have multiple port groups. Instead of connecting to a particular port on the vSwitch, a virtual machine connects its vNIC to a port group. All virtual machines that connect to the same port group belong to the same network inside the virtual environment, even if they are on different physical servers.

The vNetwork Distributed Switch is a distributed network switch that spans many ESX hosts and aggregates networking to a centralized cluster level. Therefore, vNetwork Distributed Switches are available at the data center level of the vCenter Server inventory. vNetwork Distributed Switches abstract the configuration of individual virtual switches and enable centralized provisioning, administration, and monitoring through VMware vCenter Server. Figure 4 on page 38 illustrates a vNetwork Distributed Switch that spans multiple ESX server hosts.


Figure 4

VMware vNetwork Distributed Switch

Storage architecture

The VMware vSphere and VMware Infrastructure storage architecture consists of abstraction layers to manage the physical storage subsystems. The key layer in the architecture is the datastores layer. Figure 5 on page 39 shows the storage architecture.


Figure 5

VMware vSphere and VMware Infrastructure storage architecture

A datastore is like a storage appliance that allocates storage space for virtual machines across multiple physical storage devices. The datastore provides a model to allocate storage space to the individual virtual machines without exposing them to the complexity of the physical storage technologies, such as Fibre Channel SAN, iSCSI SAN, direct-attached storage, or NAS.


A virtual machine is stored as a set of files in a datastore directory. A virtual disk, inside each virtual machine, is also a set of files in the directory. Therefore, operations such as copy, move, and backup can be performed on a virtual disk just as with a file. New virtual disks can be hot-added to a virtual machine without powering it down. In such a case, either a virtual disk file (.vmdk) is created in a datastore to provide new storage space for the hot-added virtual disk, or an existing virtual disk file is added to a virtual machine.

The two types of datastores available in this storage architecture are vStorage VMFS and NAS. A VMFS datastore is a clustered file system built across one or more physical volumes (LUNs) originating from block storage systems. A NAS datastore is an NFS volume on a file storage system. In this case, the storage is managed entirely by the file storage system.

VMFS datastores can span multiple physical storage subsystems. A single VMFS volume can contain one or more LUNs from a local SCSI disk array on a physical host, a Fibre Channel SAN disk farm, or an iSCSI SAN disk farm. New LUNs added to any of the physical storage subsystems are detected and can be made available to all existing or new datastores. The storage capacity of a previously created VMFS datastore (volume) can be hot-extended by adding a new physical LUN from any of the storage subsystems that are visible to it, as long as the VMFS volume extent has not reached the 2 TB minus 1 MB limit. Alternatively, a VMFS volume can be extended (Volume Grow) within the same LUN. With VMware vSphere, this can be done without powering off physical hosts or storage subsystems. If any of the LUNs within a VMFS volume (except for the LUN that has the first extent of the spanned volume) fails or becomes unavailable, only virtual machines that interact with that LUN are affected. All other virtual machines with virtual disks residing in other LUNs continue to function as normal.

Furthermore, a VMFS datastore can be configured to be mapped to a physical volume on a block storage system. To achieve this, the datastore can be configured with virtual disks that map to a physical volume on a block storage system. This functionality of vStorage VMFS is called Raw Device Mapping (RDM). RDM is illustrated in Figure 6 on page 41.
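As an illustration of how a NAS datastore is presented to an ESX host, the following minimal sketch (not from the original document) mounts an NFS export, for example a file system exported by a Celerra Data Mover, as a datastore through the vSphere API, using the open-source pyVmomi Python bindings. The server name, export path, and datastore name are assumed placeholders.

from pyVmomi import vim

def create_nas_datastore(host):
    # host: a vim.HostSystem object already retrieved from the inventory
    spec = vim.host.NasVolume.Specification(
        remoteHost='celerra-dm2.example.local',   # Data Mover NFS interface (placeholder)
        remotePath='/vsphere_nfs_fs',             # exported Celerra file system (placeholder)
        localPath='celerra_nfs_ds',               # datastore name as seen by ESX (placeholder)
        accessMode='readWrite')
    return host.configManager.datastoreSystem.CreateNasDatastore(spec)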


Figure 6

Raw device mapping

With RDM functionality, a virtual machine can be given direct access to a physical LUN in the storage system. This is helpful in various use cases in which the guest OS or the applications within the virtual machine require direct access to the physical volume. One example of such a use case is physical-to-virtual clustering between a virtual machine and a physical server.
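To make the RDM concept concrete, the following minimal sketch (an assumption-laden illustration, not from the original document) adds an RDM disk to an existing virtual machine through the vSphere API with the pyVmomi Python bindings. The LUN device path, controller key, unit number, and capacity are hypothetical and depend on the LUN presented by the array and on the virtual machine's existing hardware.

from pyVmomi import vim

def add_rdm_disk(vm, lun_device_path, capacity_kb):
    # lun_device_path: e.g. '/vmfs/devices/disks/naa....' (hypothetical value)
    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
        deviceName=lun_device_path,
        compatibilityMode='physicalMode',    # or 'virtualMode'
        diskMode='independent_persistent',
        fileName='')                         # mapping file is placed with the VM
    disk = vim.vm.device.VirtualDisk(
        backing=backing,
        controllerKey=1000,                  # key of an existing SCSI controller
        unitNumber=1,                        # a free unit on that controller
        capacityInKB=capacity_kb)            # size reported by the mapped LUN
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))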


New VMware vSphere 4 storage-related features

The key storage-related features that are new and available with VMware vSphere 4 are:




◆ Virtual disk thin provisioning — VMware vSphere offers an option to create thin provisioned virtual disks when deploying or migrating virtual machines. VMware vCenter Server has also been updated with new management screens and capabilities such as raising alerts, alarms, and improved datastore utilization reports to enable the management of over-provisioned datastores. Virtual disk thin provisioning increases the efficiency of storage utilization for virtualization environments by using only the amount of underlying storage resources needed for that virtual disk. In the past, thin provisioning was the default format only for virtual disks created on NAS datastores in VMware Infrastructure. However, VMware has integrated the management of virtual disk thin provisioning and now fully supports this format for all virtual disks with the release of vSphere. Virtual disk thin provisioning should not be confused with thin provisioning capabilities that an array vendor might offer. In fact, with vSphere, it is even possible to thin provision a virtual disk at the datastore level that resides on a thinly provisioned device on the storage array. (A configuration sketch of a thin provisioned virtual disk follows this list.)



◆ Storage vMotion — This technology performs the migration of the virtual machine while the virtual machine is active. With VMware vSphere, Storage vMotion can be administered through vCenter Server and works across all storage protocols including NFS (in addition to Fibre Channel and iSCSI). In addition, Storage vMotion allows the user to move between different provisioning states, for example, from a thick to a thin virtual disk.



◆ VMFS Volume Grow — VMFS Volume Grow offers a new way to increase the size of a datastore that resides on a VMFS volume. It complements the dynamic LUN expansion capability that exists in many storage array offerings today. If a LUN is increased in size, then the VMFS Volume Grow enables the VMFS volume extent to dynamically increase in size as well (up to the standard 2 TB minus 1 MB limit).



◆ Pluggable Storage Architecture (PSA) — In vSphere, leveraging third-party storage vendor multipath software capabilities is introduced through a modular storage architecture that allows storage partners to write a plug-in for their specific capabilities. These modules communicate with the intelligence running in the storage array to determine the best path selection, and leverage parallel paths to increase performance and reliability of the I/O from the ESX to the storage array. Typically the native multipath driver (NMP) supplied by VMware will be used. It can be configured to support round-robin multipath as well. However, if the storage vendor module is available, it can be configured to manage the connections between the ESX and the storage. EMC PowerPath®/VE is an excellent example of such a storage vendor module.

◆ Datastore alarms — Datastore alarms track and warn users on potential resource over-utilization or event conditions for datastores. With the release of vSphere, alarms can be set to trigger on events and notify the administrator when critical error conditions occur.



◆ Storage reports and maps — Storage reports help monitor storage information like datastore, LUNs, virtual machines on datastore, and host access to datastore. Storage maps help to visually represent and understand the relationship between a vSphere datacenter inventory object and the virtual and physical storage resources available for this object. Figure 7 on page 44 shows a storage map that includes both NFS and iSCSI storage resources from EMC Celerra®.
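The following minimal sketch relates to the virtual disk thin provisioning item above. It adds a thin provisioned virtual disk to an existing virtual machine through the vSphere API, using the open-source pyVmomi Python bindings; the controller key, unit number, and disk size are placeholders, and the same operation can of course be performed from the vSphere Client.

from pyVmomi import vim

def add_thin_disk(vm, size_gb=20):
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        thinProvisioned=True,                # blocks are allocated only as the guest writes
        diskMode='persistent',
        fileName='')                         # vCenter places the .vmdk with the VM
    disk = vim.vm.device.VirtualDisk(
        backing=backing,
        controllerKey=1000,                  # key of an existing SCSI controller
        unitNumber=2,                        # a free unit on that controller
        capacityInKB=size_gb * 1024 * 1024)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))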


Figure 7    Storage map of vSphere inventory objects


1.3 Distributed services in VMware vSphere and VMware Infrastructure

VMware vSphere and VMware Infrastructure include distributed services that enable efficient and automated resource management and high availability of virtual machines. These services include VMware vMotion, VMware Storage vMotion, VMware DRS, and VMware HA. The VMware vSphere platform introduced a new distributed service, VMware Fault Tolerance (FT). This section describes these services and illustrates their functionality. Shared storage, such as EMC Celerra, EMC CLARiiON®, and EMC Symmetrix®, is required to use these services.

VMware vMotion

Virtual machines run on and consume resources from ESX/ESXi. vMotion enables the migration of running virtual machines from one physical server to another without service interruption, as shown in Figure 8 on page 45. vMotion can help perform maintenance activities such as upgrade or security patches on ESX hosts without any downtime. vMotion is also the foundation for DRS.

Figure 8

VMware vMotion


Storage vMotion

Storage vMotion enables the migration of virtual machines from one datastore to another datastore without service interruption, as shown in Figure 9 on page 46. This allows administrators, for example, to offload virtual machines from one storage array to another to perform maintenance, reconfigure LUNs, resolve out-of-space issues, and upgrade VMFS volumes. Administrators can also use Storage vMotion to optimize the storage environment for improved performance by seamlessly migrating virtual machine disks. With VMware vSphere, Storage vMotion is supported across all available storage protocols, including NFS. Furthermore, with VMware vSphere, Storage vMotion is fully integrated into vCenter Server and does not require any CLI execution.

Figure 9

Storage vMotion
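Storage vMotion can also be driven through the vSphere API. The following minimal sketch (not from the original document) relocates a running virtual machine's files to another datastore with the open-source pyVmomi Python bindings; the virtual machine and target datastore objects are assumed to have already been looked up in the vCenter inventory.

from pyVmomi import vim

def storage_vmotion(vm, target_datastore):
    # vm: vim.VirtualMachine; target_datastore: vim.Datastore
    spec = vim.vm.RelocateSpec(datastore=target_datastore)
    # On vSphere 4, spec.transform = 'sparse' can additionally convert the disks
    # to a thin format during the move (verify against the API reference in use).
    return vm.RelocateVM_Task(spec=spec)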

VMware Distributed Resource Scheduler (DRS)

VMware DRS helps to manage a cluster of physical hosts as a single compute resource. A virtual machine can be assigned to a cluster. DRS will then find an appropriate host on which the virtual machine will run. DRS places virtual machines in such a way that the load across the cluster is balanced, and cluster-wide resource allocation policies (such as reservations, priorities, and limits) are enforced. When a virtual machine is powered on, DRS performs an initial placement of the virtual machine on a host. As cluster conditions change (such as the load and available resources), DRS migrates virtual machines (leveraging vMotion) to other hosts as necessary. When a new physical server is added to a cluster, DRS enables virtual machines to immediately and automatically take advantage of the new resources because it distributes the running virtual machines by way of vMotion. Figure 10 on page 47 shows the DRS.

Figure 10

VMware DRS
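As a sketch of how DRS is turned on for a cluster, the following example (an illustration using the open-source pyVmomi Python bindings, not taken from the original document) enables DRS on an existing cluster object in fully automated mode; the automation level is only an example and should match the operational policy discussed below.

from pyVmomi import vim

def enable_drs(cluster):
    # cluster: a vim.ClusterComputeResource object from the vCenter inventory
    spec = vim.cluster.ConfigSpecEx(
        drsConfig=vim.cluster.DrsConfigInfo(
            enabled=True,
            defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated))
    # modify=True merges this change into the existing cluster configuration
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)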

DRS can be configured to automatically execute virtual machine placement, virtual machine migration, and host power actions, or to provide recommendations, which the data center administrator can assess and manually act upon. For host power actions, DRS leverages the VMware Distributed Power Management (DPM) feature. DPM allows a DRS cluster to reduce its power consumption by powering hosts on and off based on cluster resource utilization.

VMware High Availability (HA)

If a host or virtual machine fails, VMware HA automatically restarts the virtual machines on a different physical server within a cluster. All applications within the virtual machines have the high availability benefit through application clustering. HA monitors all physical hosts and virtual machines in a cluster and detects failure of hosts and virtual machines. An agent placed on each physical host maintains a heartbeat with the other hosts in the resource pool. Loss of a heartbeat initiates the process of restarting all affected virtual machines on other hosts. VMware Tools helps HA check the health of virtual machines. Figure 11 on page 48 gives an example of VMware HA. HA ensures that sufficient resources are available in the cluster at all times to restart virtual machines on different physical hosts in the event of a host failure.

Figure 11

VMware HA

VMware Fault Tolerance (FT)

VMware FT, which was introduced in VMware vSphere, provides continuous availability by protecting a virtual machine (the primary virtual machine) with a shadow copy (secondary virtual machine) that runs in virtual lockstep on a separate host. Figure 12 on page 49 shows an example of VMware FT. It is worth noting that at this time FT is provided as an initial release that is supported in a limited configuration. VMware vSphere documentation provides further details on the configuration supported for FT.


Figure 12

VMware Fault Tolerance

Inputs and events performed on the primary virtual machine are recorded and replayed on the secondary virtual machine to ensure that the two remain in an identical state. Actions such as mouse-clicks and keystrokes that are recorded on the primary virtual machine are replayed on the secondary virtual machine. Because the secondary virtual machine is in virtual lockstep with the primary virtual machine, it can take over execution at any point without interruption or loss of data.


1.4 Backup and recovery solutions with VMware vSphere and VMware Infrastructure

VMware vSphere and VMware Infrastructure platforms include a backup and recovery solution for virtual machines that resides in the data center. VMware Consolidated Backup (VCB) is such a solution for VMware Infrastructure environments. VMware Data Recovery is a backup application for VMware vSphere environments that is based on the VMware vStorage for Data Protection API. The following two sections provide further details on these two solutions.

1.4.1 VMware Data Recovery

VMware Data Recovery is a new backup and recovery solution for VMware vSphere. VMware Data Recovery, distributed as a VMware virtual appliance, creates backups of virtual machines without interrupting their use or the data and services they provide. VMware Data Recovery manages existing backups and removes backups as they become older. It also supports target-based deduplication to remove redundant data. VMware Data Recovery supports the Microsoft Windows Volume Shadow Copy Service (VSS), which provides the backup infrastructure for certain Windows operating systems.

VMware Data Recovery is built on the VMware vStorage API for data protection. It is integrated with VMware vCenter Server and enables centralized scheduling of backup jobs. Integration with vCenter Server also enables virtual machines to be backed up, even when they are moved using VMware vMotion or VMware DRS. Figure 13 on page 51 illustrates how VMware Data Recovery works.


Figure 13

VMware Data Recovery

Backups can be stored on any virtual disk supported by virtual machines hosted on VMware ESX, including SANs, NAS devices, or Common Internet File System (CIFS) based storage such as SAMBA. All backed-up virtual machines are stored in a deduplicated store.

1.4.1.1 Benefits of deduplication store

VMware deduplication store technology used by VMware Data Recovery provides tight integration, evaluating patterns to be saved to restore points and performing checks to see if identical sections have already been saved. To maximize deduplication rates, ensure that similar virtual machines are backed up to the same destination because VMware supports storing the results of multiple backup jobs in the same deduplication store. While backing up similar virtual machines to the same deduplication store may increase space savings, similar virtual machines do not need to be backed up during the same job. (Deduplication is evaluated for all virtual machines stored, even if some are not currently being backed up.)

VMware Data Recovery is designed to support deduplication stores that are up to 1 TB in size, and each backup appliance is designed to support the use of two deduplication stores. VMware Data Recovery does not impose limits on the size of deduplication stores or the number of deduplication stores, but if more than two stores are used or if the size of a store exceeds 1 TB, performance may be affected.

1.4.2 VMware Consolidated Backup

VCB integrates with third-party software to perform backups of virtual machine disks with VMware Infrastructure. The following are the key features of VCB:

◆ Integrates with most major backup applications to provide a fast and efficient way to back up data in virtual machines.

◆ Eliminates the need for a backup agent in a virtual machine (for crash-consistent backup only).

◆ Reads virtual disk data directly from the SAN storage device by using Fibre Channel or iSCSI, or by using a network connection to an ESX server host.

◆ Can run in a virtual machine to back up virtual machines that reside on a storage device accessed over a network connection.

◆ When used with iSCSI, VCB can run in a virtual machine.

◆ Supports file-level full and incremental backup for virtual machines running the Microsoft Windows operating system, and image-level backup for virtual machines running any operating system.

◆ Can be used with a single ESX/ESXi host or with a vCenter Server.

◆ Supports the Volume Shadow Copy Service (VSS), which provides the backup infrastructure for certain Windows operating systems running inside ESX 3.5 update 2 and later.

1.4.2.1 How VCB works

VCB consists of a set of utilities and scripts that work in conjunction with third-party backup software. To ensure that VCB works with specific backup software, either VMware or the backup software vendor provides integration modules that contain the required pre-backup and post-backup scripts.

The third-party backup software, integration module, and VCB run on the VCB proxy, which is either a physical or a virtual machine that has the Microsoft Windows operating system installed.


1.5 VMware vCenter Site Recovery Manager

VMware vCenter Site Recovery Manager (SRM) delivers advanced capabilities for disaster recovery management, nondisruptive testing, and automated failover. VMware SRM can manage the failover from production data centers to disaster recovery sites, as well as the failover between two sites with active workloads. Multiple sites can even recover into a single shared recovery site. VMware SRM can also help with planned data center failovers such as data center migrations.

VMware SRM is integrated with a range of storage replication technologies including EMC SRDF® for Symmetrix, EMC MirrorView™ for CLARiiON, EMC Celerra Replicator™, and EMC RecoverPoint. VMware SRM 4 introduces NFS storage replication support, many-to-one failover using shared recovery sites, and full integration with VMware vSphere 4.


Figure 14

Site Recovery Manager

1.5.1 Key benefits of VMware SRM

VMware SRM provides capabilities to do the following:

Disaster recovery management

◆ Create and manage recovery plans directly from VMware vCenter Server. These recovery plans can be extended with custom scripts. Access to these recovery plans can be controlled with granular role-based access controls.

◆ Discover and display virtual machines protected by storage replication using integration certified by storage vendors.

◆ Monitor the availability of remote sites and alert users of possible site failures.

◆ Store, view, and export results of test and failover execution from VMware vCenter Server.

◆ Leverage iSCSI, Fibre Channel, or NFS-based storage replication solutions.

◆ Recover multiple sites into a single shared recovery site.

Nondisruptive testing

◆ Use storage snapshot capabilities to perform recovery tests without losing replicated data.

◆ Connect virtual machines to an existing isolated network for testing purposes.

◆ Automate the execution of recovery plan tests. Customize the execution of tests for recovery plan scenarios. Automate the cleanup of testing environments after completing tests.

Automated failover

◆ Initiate the recovery plan execution from VMware vCenter Server with a single button. Manage and monitor the execution of recovery plans within VMware vCenter Server.

◆ Automate the promotion of replicated datastores for recovery by using adapters created by leading storage vendors for their replication platforms.

◆ Execute user-defined scripts and pauses during recovery.

◆ Reconfigure virtual machine IP addresses to match the network configuration at the failover site.


1.6 VMware View

VMware View is an end-to-end desktop virtualization solution that leverages VMware vSphere or VMware Infrastructure to enable customers to manage and secure virtual desktops across the enterprise from within the data center.

Figure 15

VMware View with VMware vSphere 4

1.6.1 Key benefits of VMware View

VMware View provides the capabilities to do the following:

◆ Get control and manageability in a single solution — VMware View is a comprehensive solution that provides the functionality that most organizations need to connect and manage their remote clients and centralized virtual desktops while keeping data safe and secure in the data center. Designed for desktop administrators, VMware View offers an intuitive web-based management interface with Microsoft Active Directory (AD) integration for user authentication and policy enforcement. Centralized administration of all desktop images helps simplify upgrades, patches, and desktop maintenance, and enables the use of VMware View to manage connections between remote clients and their centralized virtual desktops.

◆ Support remote users without sacrificing security — Since all the data is maintained within the corporate firewall, VMware View minimizes overall risk and data loss. Built-in SSL encryption provides secure tunneling to virtual desktops from unmanaged devices. Furthermore, optional integration with RSA SecurID enables two-factor authentication.

◆ Provide end users with a familiar desktop experience — With VMware View, end users get the same desktop experience that they would have with a traditional desktop. The VMware View display protocol, PC over IP (PCoIP), provides a superior end-user experience over any network on up to four different displays. Adaptive technology ensures an optimized virtual desktop delivery on both the LAN and the WAN and addresses the broadest list of use cases and deployment options with a single protocol. Personalized virtual desktops, complete with applications and end-user data and settings, can be accessed anywhere and anytime with VMware View.

◆ Extend the power of VMware vSphere to the desktop — VMware View is built on VMware vSphere 4 and can automate desktop backup and recovery as a business process in the data center.

1.6.2 Components of the VMware View solution

The components of the VMware View solution are:

◆ VMware View Manager — VMware View Manager is an enterprise-class desktop management solution that streamlines the management, provisioning, and deployment of virtual desktops.

◆ VMware View Composer — VMware View Composer is an optional tool that uses VMware Linked Clone technology to rapidly create desktop images that share virtual disks by using a master image. This conserves disk space and streamlines management.

◆ VMware ThinApp — VMware ThinApp is optional application virtualization software that decouples applications from operating systems and packages them into an isolated and encapsulated file. This allows multiple versions of applications to execute on a single desktop without conflict, or the same version of an application to run on multiple operating systems without modification.

◆ Offline Desktop (experimental) — Offline Desktop is a technology that allows complete virtual desktops to be moved between the data center and the physical desktop devices, with the security policies intact. Changes to the virtual desktop are intelligently synchronized between the data center and the physical desktop devices.


1.7 VMware vCenter Converter

VMware vCenter Converter is an optional module of VMware vCenter Server to import, export, or reconfigure source physical machines, virtual machines, or system images of VMware virtual machines.

Figure 16    VMware vCenter Converter


1.7.1 Migration with vCenter Converter

Migration with vCenter Converter involves cloning a source machine or image, encapsulating it, configuring the virtual hardware, and registering it with the destination. The tool allows the conversion of virtual machines, which are managed by vCenter Server, to different VMware virtual machine formats and exports those virtual machines for use with other VMware products. vCenter Converter can be used to perform the following tasks:

◆ Convert running remote physical machines to virtual machines and import the virtual machines to standalone ESX/ESXi hosts or to ESX/ESXi hosts that are managed by vCenter Server.

◆ Convert and import virtual machines, such as those created with VMware Workstation or Microsoft Virtual Server 2005, to ESX/ESXi hosts that are managed by vCenter Server.

◆ Convert third-party backup or disk images to ESX/ESXi hosts that are managed by vCenter Server.

◆ Restore VCB images to ESX/ESXi hosts that are managed by vCenter Server.

◆ Export virtual machines managed by vCenter Server hosts to other VMware virtual machine formats.

◆ Reconfigure virtual machines managed by vCenter Server hosts so that they are bootable.

◆ Customize virtual machines in the vCenter Server inventory (for example, to change the hostname or to update network settings).

It is important to note that vCenter Converter does not support creating thin provisioned target disks on ESX 4 and ESXi 4. However, this can be achieved by performing a Storage vMotion migration after the virtual machines have been imported using vCenter Converter. Furthermore, thin provisioned virtual disks are supported using the standalone edition of this tool, VMware vCenter Converter Standalone. This edition runs separately from vCenter Server.

Depending on the vCenter Converter component installed, perform hot or cold cloning by using a command line interface, or with the vCenter Converter Import, Export, or Reconfigure wizard available in the VMware vSphere Client.


2 EMC Foundation Products

This chapter presents these topics:

◆ 2.1 EMC Celerra .......... 64
◆ 2.2 Celerra Manager .......... 70
◆ 2.3 EMC CLARiiON .......... 71
◆ 2.4 EMC Symmetrix .......... 73
◆ 2.5 Relevant key Celerra features .......... 76


2.1 EMC Celerra

EMC Celerra platforms cover a broad range of configurations and capabilities that scale from midrange to high-end networked storage. Although differences exist along the product line, there are some common building blocks. These building blocks are combined to fill out a broad and scalable product line with consistent support and configuration options.

A Celerra frame provides n+1 power and cooling redundancy and supports a scalable number of physical disks, depending on the model and the needs of the solution. The primary building blocks in a Celerra system are:

◆ Data Movers

◆ Control Stations

Data Movers move data back and forth between the LAN and the back-end storage (disks). The Control Station is the management station for the system. The Celerra system is configured and controlled through the Control Station. Figure 17 shows how Celerra works.

Figure 17    Celerra block diagram


Data Movers

A Celerra system has one or more Data Movers installed in its frame. A Data Mover is an independent server running EMC's optimized NAS operating system, Data Access in Real Time (DART). Each Data Mover has multiple network ports, network identities, and connections to back-end storage. Each Data Mover can support multiple iSCSI, NFS, and/or Common Internet File System (CIFS) shares. In many ways, a Data Mover operates as an independent server, bridging the LAN and the back-end storage disk array. Multiple Data Movers are grouped together as a single system for high availability and user friendliness. To ensure high availability, Celerra supports a configuration in which one Data Mover acts as a standby for one or more active Data Movers. When an active Data Mover fails, the standby boots and takes over the identity and storage of the failed device. Data Movers in a cabinet are logically grouped together so that they can be managed as a single system by using the Control Station.

Control Station

The Control Station is the single point of management and control of a Celerra frame. Regardless of the number of Data Movers or disk drives in the system, the administration of the system is done through the Control Station. Control Stations not only provide the interface to configure Data Movers and back-end storage, but they also provide heartbeat monitoring of the Data Movers. Even if a Control Station is inoperable for any reason, the Data Movers continue to operate normally. The Celerra architecture provides an option for a redundant Control Station to support continuous management for an increased level of availability. The Control Station runs a version of the Linux OS that EMC has optimized for Celerra and NFS/CIFS administration.

Figure 17 on page 64 shows a Celerra system with two Data Movers. The Celerra NAS family supports up to eight Data Movers depending on the product model.

Basics of storage on Celerra

Celerra provides access to block and file data using the iSCSI, CIFS, NFS, and Fibre Channel protocols. These storage protocols provide standard TCP/IP and Fibre Channel network services. Using these network services, EMC Celerra platforms deliver a complete multi-protocol foundation for a VMware vSphere virtual data center, as depicted in Figure 18 on page 66.


Figure 18

Celerra storage topology

Celerra supports a range of advanced features such as Virtual Provisioning™, advanced VMware integrated local and remote replication, advanced storage tiering, and mobility. Furthermore, Celerra also includes advanced IP-based technologies such as IPv6 and 10 GbE. These are now supported with VMware vSphere 4. The Celerra family includes two platforms: Celerra unified storage and Celerra gateway. The Celerra unified storage and gateway configurations are described in the following sections.

2.1.1 Celerra unified storage platform

The Celerra unified storage platform is comprised of one or more autonomous Data Movers, also called X-Blades, a Control Station blade (one or two blades for NS-960), and a Storage Processor Enclosure (SPE). The X-Blades control data movement from the disks to the network. Each X-Blade contains two Intel processors and runs EMC's DART operating system, designed and optimized for high performance, multi-protocol network file and block access. The SPE manages the back-end CLARiiON disk arrays that include disk array enclosures (DAEs), which can hold up to 15 disk drive modules. The SPE has two storage processors (SPs) that deliver the same processing power as the X-Blades and is based on the industry-leading EMC UltraScale™ architecture. The SPs provide Fibre Channel connectivity to the X-Blades and to the external Fibre Channel clients by using additional Fibre Channel ports in the SPs. The X-Blades provide NAS and iSCSI connectivity by using IP ports. The combination of the front-end X-Blades with the SPE back end forms an integrated and high-availability offering in the midtier IP storage market.

Depending on the operating needs, Celerra can be deployed in several operating modes including primary/standby, primary/primary, or advanced N+1 clustering. Primary/standby is designed for environments that cannot tolerate any system downtime due to hardware failure. In this mode, one of the X-Blades operates in standby mode while the second one manages all the data movement between the network and the storage. Other environments that value performance over continuous availability can choose to operate their dual X-Blade Celerra unified storage systems in primary/primary mode. Through a simple menu selection, both X-Blades can be made available to handle unusually large loads and user populations that can bring standard file servers to a virtual standstill.

All Celerra unified storage platforms deliver NFS, CIFS, iSCSI, and Fibre Channel (FC) capabilities to consolidate application storage and file servers. The Celerra unified storage platforms include the NX4, NS-120, NS-480, and NS-960. Figure 19 on page 68 shows Celerra unified platforms that support NFS/CIFS, FC, and iSCSI.



Figure 19    Celerra unified storage

2.1.2 Celerra gateway

The Celerra gateway is a dedicated IP storage gateway optimized to bring fast file access, high availability, and advanced functionality to existing SAN infrastructures (CLARiiON or Symmetrix storage arrays). It offers the same features as the Celerra unified storage platforms but combines a NAS head with existing SAN storage for a flexible, cost-effective implementation that maximizes the utilization of existing resources. The Celerra gateway platforms include the NS-40G and NS-G8. Figure 20 on page 69 shows Celerra gateway platforms that support NFS/CIFS, FC, and iSCSI.



Figure 20    Celerra gateway storage



2.2 Celerra Manager

Celerra Manager is a web-based software tool that enables intuitive management of the EMC Celerra IP storage (NFS, CIFS, and iSCSI) solution and ensures high availability. Celerra Manager helps to configure, administer, and monitor Celerra networked storage from a single online interface, saving time and eliminating the need for a dedicated management workstation. Celerra Manager Basic Edition supports the most common functions to configure and manage a single device, from at-a-glance statistics to simple user/group quota controls. Celerra Manager Advanced Edition offers greater configuration, data migration, and monitoring capabilities across multiple Celerra environments. An example of the Celerra Manager GUI is shown in Figure 21.

Figure 21    Celerra Manager GUI


2.3 EMC CLARiiON

EMC CLARiiON is a midtier storage system that can be connected to a Celerra gateway platform. EMC CLARiiON is a highly available storage system designed for no single point of failure, and it delivers industry-leading performance for mission-critical applications and databases. CLARiiON storage systems provide both iSCSI and Fibre Channel connectivity options for open systems hosts, and support advanced data replication capabilities. The core software that runs on CLARiiON, called FLARE®, provides a robust set of functions including data protection, host connectivity, and local and remote data replication such as RecoverPoint and MirrorView™.

CLARiiON uses a modular architecture that allows the system to grow nondisruptively as business requirements change. The two major components are the SPE and the DAE. The SPE contains two independent high-performance storage processors that provide front-end connectivity, read and write cache, and connectivity to the back end. The DAE provides the back-end storage, and each DAE can hold up to 15 disk drive modules. Multiple DAEs can be interconnected across multiple back-end loops to meet capacity and performance requirements.

CX4 is the current generation of CLARiiON. It uses UltraFlex™ technology with cut-through switching and full 4 Gb/s back-end disk drives, along with 8 Gb/s and 4 Gb/s Fibre Channel front-end connections; 10 Gb/s and 1 Gb/s iSCSI connections are also available. The UltraScale architecture provides both high performance and reliability with advanced fault-detection and isolation capabilities. High-performance Flash and Fibre Channel disks and low-cost, high-capacity SATA disk technologies can be deployed within the same storage system, enabling tiered storage solutions within a single system.

CLARiiON implements a LUN ownership model where I/O operations for a LUN are serviced by the owning storage processor. Because physical disk drives are shared by both storage processors, in the event of a path failure the LUN ownership can be moved (trespassed) to the peer storage processor, allowing the I/O operation to proceed. This ownership model provides high availability and performance by balancing the workload across processing resources. With release 26 of the FLARE operating environment, the Asymmetric Logical Unit Access (ALUA) standard is supported. ALUA provides asymmetric active/active LUN ownership for the CLARiiON. With ALUA, either storage processor can accept an I/O operation and will forward it to the owning storage processor through the internal high-speed messaging interface. This capability requires that the path management software support the ALUA standard. EMC PowerPath leverages the ALUA architecture to optimize performance and to provide advanced failover intelligence for CLARiiON. VMware vSphere 4 supports ALUA connectivity to CLARiiON.

CLARiiON arrays provide the flexibility to configure data protection levels appropriate for the application performance and availability requirements. A combination of RAID 0, 1, 3, 1/0, 5, and 6 can be configured within the same system. Additional availability features include nondisruptive software and hardware upgrades, proactive diagnostics, alerts, and phone-home capabilities. CLARiiON also supports global hot sparing and provides automatic and online rebuilds of redundant RAID groups when any of the group's disk drives fail. The current CX4 family includes the midrange CX4-960, CX4-480, CX4-240, and CX4-120. The AX4 is an entry-level storage system with a similar architecture and many of the same features and interfaces as the arrays in the CX4 family. Compatibility and interoperability between CLARiiON systems enable customers to perform data-in-place upgrades of their storage solutions from one generation to the next, protecting their investment as their capacity and connectivity demands increase.



2.4 EMC Symmetrix

EMC Symmetrix is a high-end storage system that can be connected to a Celerra gateway platform. Symmetrix hardware architecture and the EMC Enginuity™ operating environment are the foundations for the Symmetrix storage platform. This environment consists of the following components:

◆ Symmetrix hardware
◆ Enginuity-based operating functions
◆ Solutions Enabler
◆ Symmetrix application program interface (API)
◆ Symmetrix-based applications
◆ Host-based Symmetrix applications
◆ Independent software vendor (ISV) applications

Symmetrix storage systems provide advanced data replication capabilities, full mainframe and open systems support, and flexible connectivity options, including Fibre Channel, FICON, ESCON (DMX-4 and earlier), Gigabit Ethernet, and iSCSI. Interoperability between Symmetrix storage systems enables customers to migrate storage solutions from one generation to the next, protecting their investment even as their storage demands expand.

Symmetrix-enhanced cache director technology allows configurations of up to 512 GB of cache on the DMX-4 and up to 1 TB for the VMAX. Symmetrix storage arrays feature Dynamic Cache Partitioning, which optimizes the storage by allowing administrators to allocate and reserve portions of the cache for specific devices or groups of devices. Dynamic Cache Partitioning allows the definition of a maximum of eight cache partitioned groups, including the default group to which all devices initially belong.

The Symmetrix on-board data integrity features include:

◆ Continuous cache and on-disk data integrity checking and error detection/correction
◆ Fault isolation
◆ Nondisruptive hardware and software upgrades
◆ Automatic diagnostics and phone-home capabilities



At the software level, advanced integrity features ensure that information is always protected and available. By choosing a mix of RAID 1 (mirroring), RAID 1/0, high-performance RAID 5 (3+1 and 7+1), and RAID 6 protection, users have the flexibility to choose the protection level most appropriate to the value and performance requirements of their information. Symmetrix DMX-4 and VMAX are EMC's latest generation of high-end storage solutions.

From the perspective of the host operating system, a Symmetrix system appears as multiple physical devices connected through one or more I/O controllers. The host operating system addresses each of these devices using a physical device name. Each physical device includes attributes such as vendor ID, product ID, revision level, and serial ID. The host physical device maps to a Symmetrix device. In turn, the Symmetrix device is a virtual representation of a portion of the physical disk called a hypervolume.

2.4.1 Symmetrix VMAX platform

The EMC Symmetrix VMAX Series with Enginuity is a new entry to the Symmetrix product line. Built on the strategy of simple, intelligent, and modular storage, it incorporates a new scalable fabric interconnect design that allows the storage array to seamlessly grow from an entry-level configuration into the world's largest storage system. Symmetrix VMAX scales up to 2 PB of usable protected capacity and consolidates more workloads with a much smaller footprint than alternative arrays. The Symmetrix VMAX provides improved performance and scalability for demanding enterprise storage environments while maintaining support for EMC's broad portfolio of platform software offerings.

The Enginuity operating environment for Symmetrix version 5874 is a feature-rich Enginuity release supporting Symmetrix VMAX storage arrays. With the release of Enginuity 5874, Symmetrix VMAX systems deliver new software capabilities that improve capacity utilization, ease of use, business continuity, and security. The Symmetrix VMAX also maintains customer expectations for high-end storage in terms of availability. High-end availability is more than just redundancy; it means nondisruptive operations and upgrades, and being always online. Symmetrix VMAX provides:

◆ Nondisruptive expansion of capacity and performance at a lower price point
◆ Sophisticated migration for multiple storage tiers within the array
◆ The power to maintain service levels and functionality as consolidation grows
◆ Simplified control for provisioning in complex environments

Many of the features provided by the EMC Symmetrix VMAX platform can reduce operational costs for customers deploying virtualization solutions and enhance the functionality available to those deployments.



2.5 Relevant key Celerra features

This section describes key features of EMC Celerra storage systems to consider with ESX. For a complete description of all Celerra features, refer to the Celerra documentation.

2.5.1 Celerra Virtual Provisioning

Celerra Virtual Provisioning is a thin provisioning feature used to improve capacity utilization. With Virtual Provisioning, available through Celerra file systems and iSCSI LUNs, storage is consumed only as it is actually needed. Virtual Provisioning allows the creation of storage devices that do not pre-allocate capacity for virtual disk space until the virtual machine application writes data to the virtual disk. This model avoids the need to overprovision disks based on expected growth. Storage devices still present their full configured size to the hosts that access them, but in most cases the actual disk usage falls well below the apparent allocated size. The benefit is that, like virtual resources in the ESX server architecture, storage is presented as a set of virtual devices that draw from a shared pool of disk resources. Disk consumption increases based on the needs of the virtual machines in the ESX environment. To address future growth, Celerra monitors the available space and can be configured to automatically extend the file system as the amount of free space decreases.
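As an illustration of how this looks in practice, a virtually provisioned file system with automatic extension can be created from the Control Station command line. The following is a minimal sketch only: the file system and pool names are hypothetical, and the option names shown (-auto_extend, -vp, -max_size, -hwm) and their defaults should be verified against the nas_fs man page for the installed DART release:

$ nas_fs -name vmnfs01 -create size=10G pool=clar_r5_performance -auto_extend yes -vp yes -max_size 100G -hwm 90%

In this sketch, the file system starts at 10 GB, presents a virtual maximum size of 100 GB, and is extended automatically when it reaches 90 percent full.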

2.5.2 Celerra SnapSure

The Celerra SnapSure™ feature creates a read-only or read-writeable logical point-in-time image (checkpoint) of a production file system (PFS). SnapSure can maintain up to 96 PFS checkpoints and 16 read-writeable checkpoints while allowing applications continued read-only access to the real-time PFS data. The principle of SnapSure is copy old on modify: when a block within the PFS is modified, a copy containing the block's original content is saved to a separate volume called the SavVol. Subsequent changes made to the same block in the PFS are not copied into the SavVol. The original blocks from the PFS (in the SavVol) and the unchanged PFS blocks (remaining in the PFS) are read by SnapSure according to a bitmap and blockmap data-tracking structure. Together, these blocks provide a complete point-in-time file system image called a checkpoint. Celerra version 5.6 and later support the creation of writeable checkpoints. A writeable checkpoint is always created from a baseline read-only checkpoint, and each baseline checkpoint can have only one writeable checkpoint associated with it at a time.
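For reference, checkpoints can also be created from the Control Station command line with the fs_ckpt command. The following is a minimal sketch assuming a production file system named ufs1; the exact syntax, particularly for writeable checkpoints, should be confirmed in the fs_ckpt man page for the installed release:

$ fs_ckpt ufs1 -name ufs1_ckpt1 -Create

A writeable checkpoint is then created from that read-only baseline (for example, with a command of the form fs_ckpt ufs1_ckpt1 -Create -readonly n), consistent with the requirement described above that every writeable checkpoint be built on a baseline checkpoint.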

2.5.3 Temporary writeable snap

A read/write snap or checkpoint is a feature of the SnapSure product that provides a mountable and shareable file system created from a baseline read-only snap. A read/write snap allows the creation of a fully usable, lightweight copy of a file system without the space requirements of a full clone. This type of snap can be used for applications such as database testing, VMware VDI, and general data re-purposing that requires update access to the file system copy (such as reporting and data mining).

Each writeable snap is built on a read-only snap. When an NFS or a CIFS client writes to a writeable snap, SnapSure saves the changed blocks in the SavVol save area. While using a writeable snap, SnapSure uses the snapshot bitmap and blockmap to locate file system blocks in the same way as for a read-only snap, while also tracking the blocks written by hosts to the mounted read/write snap in the SavVol, to provide an updateable point-in-time picture of the PFS. SnapSure can maintain up to 16 writeable snaps per file system. Only one writeable snap is allowed per read-only snap.

2.5.4 Celerra iSCSI snapshots

A Celerra iSCSI snapshot is a point-in-time representation of the data stored on an iSCSI LUN. Snapshots can be created either by a host application (such as the CBMCLI commands on a Linux host or Replication Manager on a Windows host) or on the Control Station. Each snapshot requires only as much space as the data that has changed on the production LUN. In addition to available space restrictions, the Celerra Network Server supports a maximum of 2,000 snapshots per iSCSI LUN.

Each snapshot creates a copy of the production LUN (PLU). The currently allocated (modified) blocks in the PLU are transferred to the snapshot, which becomes the owner of those blocks, and the PLU shares the allocated blocks with the snapshot. Subsequent snapshots of the PLU repeat the process: the latest snapshot takes ownership of blocks written (allocated) to the PLU since the previous snapshot, and also shares the allocated blocks owned by previous snapshots.

Unless promoted, a snapshot is not visible to the iSCSI initiator. The promotion operation creates a temporary writeable snapshot (TWS) and mounts it to an iSCSI LUN so that it can be configured as a disk device and used as a production LUN. The TWS also shares the allocated blocks owned by the promoted snapshot. When promoting an iSCSI LUN snapshot (TWS), the Celerra administrator can choose to reserve the same space as the size of the production LUN or only enough to hold the changes. For most use cases, including VDI, EMC recommends no extra space reservation for promoted iSCSI LUN snapshots (TWS). A snapshot can be promoted only once (that is, an already promoted snapshot cannot be promoted again until it is demoted). After a snapshot is demoted, it can be promoted again. Typical uses of the promoted snapshot are to restore damaged or lost data by using the data on the snapshot LUN (SLU) or to provide an alternative data source.

Snapshots can be used within a backup process. However, snapshots are only crash-consistent (as if taken after a power failure) and cannot guarantee application-consistent data. If an application-consistent backup of virtual machines or datastores is required, EMC recommends leveraging EMC Replication Manager in combination with Celerra snapshots to orchestrate an application-consistent backup.

Although a promoted snapshot LUN is writeable, any changes made to the LUN are allocated to the TWS alone. When the snapshot is demoted, the LUN is unmounted and its LUN number is unassigned. Any data written to the promoted LUN is lost and irretrievable. A production LUN can also be restored from a snapshot. This operation performs a fast (destructive) restore, which deletes all newer snapshots.

2.5.5 Celerra Replicator

Celerra Replicator is an asynchronous remote replication infrastructure tool for Celerra. It is accessed through Celerra Manager and provides a single, intuitive interface for all types of replication: file systems, virtual Data Movers, and iSCSI LUNs. It produces a read-only, point-in-time copy of a source file system, an iSCSI LUN, or a virtual Data Mover and periodically updates this copy, making it consistent with the source object.



Figure 22    Celerra Replicator

With Celerra Replicator, users can set granular recovery point objectives (RPOs) for each of the objects being replicated, allowing business compliance service levels to be met, especially when scaling a large NAS infrastructure. The recovery time objective (RTO) is the maximum amount of time allowed after the declaration of a disaster for recovery or restart to a specified point of consistency. As with RPO, each solution with a different RTO has a different cost profile; defining the RTO is usually a compromise between the cost of the solution and the cost to the business when applications are unavailable.

Celerra version 5.6 and later support the use of Celerra Replicator V2. This version consolidates data replication and can be used to replicate both file systems and iSCSI LUNs with the same mechanism. Celerra Replicator can maintain up to 1,024 replication sessions per Data Mover, so administrators can protect extremely large IP storage deployments or segment their information with finer granularity based on RTO/RPO requirements. Users can implement policy-based, adaptive QoS by specifying a schedule of times, days, and bandwidth limits on the source-to-destination IP network interconnects.

Celerra Replicator supports 1-to-N replication and cascading. With 1-to-N replication, an object can be replicated from a single source to up to four remote locations. With cascading, a single object can be replicated from a source site (Site A) to a secondary site (Site B), and from there to a tertiary site (Site C). Cascading replication is typically used in a multi-tiered disaster recovery strategy. The first, local hop allows operational recovery with a short recovery point objective; where there is network locality, a local office can actually run applications from the Celerra located at the secondary location. RPOs for this hop are typically on the order of minutes, given that Celerra Replicator is an asynchronous data replication solution. The tertiary location is used for major, wide-reaching disaster scenarios and for protection of the local disaster recovery site; RPOs from the secondary site to this tertiary site would be on the order of hours.
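As an orientation to how a session is defined, a file system replication session can be created from the Control Station with the nas_replicate command provided by Replicator V2. The following is only a sketch: the file system, interconnect, and session names are hypothetical, and the complete option set should be taken from the nas_replicate documentation for the installed release:

$ nas_replicate -create nfs01_rep -source -fs nfs01 -destination -fs nfs01_replica -interconnect NYtoNJ -max_time_out_of_sync 10

Here the RPO intent is expressed through the -max_time_out_of_sync value, which asks Replicator to keep the destination no more than 10 minutes behind the source.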

2.5.6 EMC Replication Manager and Celerra

EMC Replication Manager manages EMC point-in-time replication technologies and coordinates the entire data replication process, from discovery and configuration to the management of multiple disk-based replicas. With Replication Manager, the right data can be put in the right place at the right time, on demand or based on schedules and policies that are defined. Replication Manager provides a graphical user interface for managing the replication of iSCSI LUNs. It controls the creation of snapshots, marks the snapshots for replication, and initiates the copy job from the source to the destination. Before creating a snapshot, Replication Manager ensures that applications are in a quiescent state and that the cache is flushed so that the snapshot is consistent from the point of view of client applications.



2.5.7 Celerra Data Deduplication

The Celerra Data Deduplication feature provides data reduction through data compression and data deduplication. The main objective of Celerra Data Deduplication is to increase file storage efficiency by eliminating redundant data from files located on the file system. By reducing the data stored in the file system, the cost of storing the information is decreased. With VMware vSphere and VMware Infrastructure, Celerra Data Deduplication provides data reduction cost savings in two usage categories:

◆ Efficient deployment and cloning of virtual machines that are stored on Celerra file systems using NFS
◆ Efficient storage of file-based business data stored on NFS/CIFS network shares that are mounted or mapped by virtual machines

The following two sections describe how each of these usage categories uses the capabilities of the Celerra Data Deduplication technology. The Using the Celerra Data Deduplication Technical Module, available on Powerlink®, provides further information on the Celerra Data Deduplication feature.

Efficient deployment and cloning of virtual machines that are stored on Celerra file systems using NFS
Starting from Celerra version 5.6.48, Celerra Data Deduplication was enhanced to also target active virtual disk files (VMDK files) for data compression and cloning purposes. This feature is for VMware vSphere virtual machines that are deployed on Celerra-based NFS datastores. Celerra Data Deduplication allows the VMware administrator to compress a virtual machine at the Celerra level. This can reduce storage consumption by up to 50 percent and permit the storage of additional virtual machines on the same file system.

The compression of virtual machines places an added overhead on the file system that can cause some performance impact on the virtual machines. Celerra Data Deduplication includes several optimization techniques to greatly minimize this performance impact. Read operations from a compressed virtual machine are performed by uncompressing only the requested portion of the file. Write operations, on the other hand, are performed on a set-aside file from which the data is periodically compressed into the original file. The outcome of both these techniques for handling read and write operations to a compressed virtual machine is a performance impact that is typically negligible compared to a non-compressed virtual machine.

Furthermore, Celerra Data Deduplication also provides the ability to perform efficient, array-level cloning of virtual machines. Two cloning alternatives are available:

◆ Full Clone - With this operation, the VMware administrator can create a full virtual machine clone. This operation is comparable to a native VMware vSphere clone operation. A full clone operation can be done across Celerra file systems (provided they are on the same Data Mover). However, a full clone operation is performed at the Celerra level rather than at the ESX level, so the ESX cycles that would have been spent performing the cloning operation natively are freed up. Because the clone operation is done at the Celerra level, data need not pass over the wire to and from the ESX server. The result is a virtual machine clone operation that is more efficient and can be up to 2 to 3 times faster than a native vSphere virtual machine clone operation.
◆ Fast Clone - With this operation, the VMware administrator can create a clone of a virtual machine that holds only the changes to the cloned virtual machine while referring to the source virtual machine for unchanged data. A fast clone operation is done within a single file system. Here too the clone operation is done at the Celerra array level. But in the case of a fast clone, the operation is almost instantaneous because no data needs to be copied from the source virtual machine at the time the cloned virtual machine is created. This is very similar to a Celerra iSCSI snapshot operation, except that in this case the operation is done on files rather than LUNs.

Furthermore, all the virtual machine compression and cloning operations available in Celerra Data Deduplication are virtual machine based rather than file system based. This gives the administrator high flexibility to use Celerra Data Deduplication with VMware vSphere to further reduce Celerra storage consumption. To perform these operations, Celerra Data Deduplication can be configured with the EMC Celerra Plug-in for VMware. This vCenter Server plug-in allows the VMware administrator to perform Celerra-based virtual machine compression and cloning operations using only the VMware vSphere Client.



The plug-in also allows the VMware administrator to provision and manage Celerra NFS storage for virtual machine deployment. Figure 23 shows an example of using the EMC Celerra Plug-in for VMware on a VMware vSphere virtual machine.

Figure 23    EMC Celerra Plug-in for VMware

EMC Celerra Plug-in for VMware—Solution Guide provides more information on the VMware vCenter Plug-in and how it can be used with VMware vSphere.



Efficient storage of file-based business data stored on NFS/CIFS network shares that are mounted or mapped by virtual machines
In addition, Celerra Data Deduplication provides a high degree of storage efficiency by eliminating redundant files with minimal impact on the end-user experience. This feature goes one step further and compresses the remaining data. This two-step process can reduce required storage space by up to 50 percent. Celerra Data Deduplication automatically targets files that are the best candidates for deduplication and subsequent compression in terms of file access frequency and file size. Furthermore, with Celerra version 5.6.47.11 or later, Celerra Data Deduplication was enhanced to target large files, as well as active files, using the CIFS compressed file attribute (for files exported using the CIFS protocol). When using a tiered storage architecture, Celerra Data Deduplication can also be enabled on the secondary tier to reduce the archived data set size.

With VMware vSphere and VMware Infrastructure, Celerra Data Deduplication can be used on Celerra file systems that are mounted or mapped by virtual machines using NFS or CIFS. This is suitable for business data such as home directories and network-shared folders. Similarly, Celerra Data Deduplication can be used on archived virtual machines. This eliminates redundant data in these file systems and improves their storage efficiency.

Celerra Data Deduplication calculator
EMC provides a deduplication calculator that produces a printable graph of the estimated savings that can be realized with the Celerra Data Deduplication feature. The calculator can be used with both usage categories of Celerra Data Deduplication with VMware vSphere and VMware Infrastructure. It is a web application that estimates the effect of Celerra Data Deduplication on a data set based on user-entered information about the size and type of stored data. Figure 24 on page 85 shows an example of this application. The Celerra Data Deduplication calculator is available on EMC.com.



Figure 24    Celerra Data Deduplication calculator





3 VMware vSphere and VMware Infrastructure Configuration Options

◆ 3.1 Introduction .................................................................... 88
◆ 3.2 Storage alternatives ........................................................ 89
◆ 3.3 Configuration roadmap ................................................. 90
◆ 3.4 VMware vSphere or VMware Infrastructure installation .... 93
◆ 3.5 Storage considerations ................................................... 94
◆ 3.6 VMware vSphere or VMware Infrastructure configuration .... 109
◆ 3.7 Using NFS storage ........................................................ 128
◆ 3.8 Using iSCSI storage ...................................................... 137
◆ 3.9 Introduction to using Fibre Channel storage ............ 205
◆ 3.10 Virtual machine considerations ................................ 222
◆ 3.11 Monitor and manage storage .................................... 248
◆ 3.12 Virtually provisioned storage ................................... 258
◆ 3.13 Storage multipathing ................................................. 278
◆ 3.14 VMware Resiliency .................................................... 315



3.1 Introduction

Celerra unified storage provides flexible network deployment options for VMware vSphere and VMware Infrastructure, including CIFS, NFS, iSCSI, and FC connectivity. This chapter contains the following information about integrating Celerra unified storage with VMware vSphere and VMware Infrastructure:

◆ Storage considerations for using Celerra with VMware vSphere or VMware Infrastructure
◆ Configuration of VMware vSphere and VMware Infrastructure when using Celerra storage
◆ Use of Celerra CIFS, NFS, iSCSI, and FC storage with VMware vSphere and VMware Infrastructure
◆ Virtual machine considerations when using VMware vSphere and VMware Infrastructure with Celerra
◆ Storage multipathing of VMware vSphere and VMware Infrastructure
◆ VMware resiliency with EMC Celerra


3.2 Storage alternatives

VMware vSphere and VMware Infrastructure support the NFS, iSCSI, and FC protocols as storage for virtual machines. Celerra unified storage supports all of these protocols. NFS is the only NAS protocol supported for virtual machines. Celerra CIFS can also be used to store and share user data and can be mounted inside the virtual machines. With the iSCSI and FC protocols, ESX builds VMFS volumes on top of the LUNs. Celerra NFS file systems and iSCSI LUNs can be provisioned using Celerra Manager or the CLI. FC LUNs are provisioned from the CLARiiON back end by using Navisphere Manager. Using these network services, Celerra platforms deliver a complete multi-protocol foundation for a VMware vSphere and VMware Infrastructure virtual data center, as shown in Figure 25.

Figure 25    Celerra storage with VMware vSphere and VMware Infrastructure



3.3 Configuration roadmap

Figure 26 shows the roadmap that identifies the configuration steps to use Celerra storage with VMware vSphere and VMware Infrastructure.

Figure 26    Configuration roadmap


The configuration blocks in Figure 26 on page 90 are:

1. NIC and iSCSI HBA driver configuration with ESX server — Configure the physical NIC or the iSCSI HBA that will be used to connect the ESX server to Celerra. Section 3.6.2, ”ESX iSCSI HBA and NIC driver configuration,” on page 120 provides details about the configuration.

2. VMkernel port configuration in ESX server — Configure the ESX server for IP storage connections to Celerra for both the NFS and iSCSI network storage protocols. Section 3.6.3, ”VMkernel port configuration in ESX,” on page 120 provides details about the configuration.

3. Based on the storage protocol, complete the NFS, iSCSI, or FC configuration steps.

• NFS
Add Celerra file systems to the ESX server — Create and export the Celerra file system to the ESX server. Section 3.7.1, ”Add a Celerra file system to ESX,” on page 128 provides details about these procedures.
Create NAS datastores on the ESX server — Configure NAS datastores in the ESX server on the file system provisioned from Celerra. Section 3.7.2, ”Create a NAS datastore on an ESX server,” on page 133 provides details about this procedure.

• iSCSI
Add and remove iSCSI LUNs to or from the ESX server — Configure a Celerra iSCSI LUN and link it to the ESX server. Section 3.8.2, ”Add a Celerra iSCSI device/LUN to ESX,” on page 139 provides details about these procedures.
Create VMFS datastores on the ESX server — Configure a VMFS datastore over the iSCSI LUN that was provisioned from Celerra. Section 3.8.3, ”Create VMFS datastores on ESX,” on page 174 provides details about this procedure.

• FC
Add and remove CLARiiON LUNs to or from the ESX server — Configure a CLARiiON FC LUN and link it to the ESX server. Section 3.9, “Introduction to using Fibre Channel storage,” on page 205 provides details about these procedures.
Create VMFS datastores on the ESX server — Configure a VMFS datastore over the FC LUN that was provisioned from CLARiiON. Section 3.9, “Introduction to using Fibre Channel storage,” on page 205 provides details about this procedure.

Note: This chapter includes the applicable procedures for using Celerra storage with VMware vSphere and VMware Infrastructure. However, starting from Celerra version 5.6.48, the EMC Celerra Plug-in for VMware is available. This plug-in allows administrators to conveniently configure and provision Celerra-based NAS datastores from the VMware vSphere Client interface. Using this plug-in, administrators can also perform Celerra-based virtual machine compression and cloning operations leveraging the enhanced Celerra Data Deduplication technology. For details about this plug-in and how it can be used, refer to the EMC Celerra Plug-in for VMware—Solution Guide.



3.4 VMware vSphere or VMware Infrastructure installation

VMware ESX 3.5 and 4 can be installed on a local disk of the physical server. No special configuration is required on ESX during its installation with Celerra storage. Similarly, the VMware ESXi Installable editions 3.5 and 4.0 can be installed on a local disk of the physical server. The VMware documentation provides additional information about installing ESX. VMware vCenter Server should also be installed as a part of the VMware vSphere and VMware Infrastructure suite. The VMware documentation provides additional information about installing vCenter Server. Other components can also be installed based on the requirements.

The following link provides information about optional components that are available for VMware vSphere and VMware Infrastructure:
http://www.vmware.com/products/

The following link provides information about VMware vSphere installation and administration:
http://www.vmware.com/support/pubs/vs_pubs.html

The following link provides information about VMware Infrastructure installation and administration:
http://www.vmware.com/support/pubs/vi_pubs.html



3.5 Storage considerations

This section explains the storage considerations for using Celerra with VMware vSphere and VMware Infrastructure. Celerra offers several disk types, which are selected based on the use cases. Similarly, the Celerra RAID type selection also depends on the protection and performance required. After RAID is configured on the required number of spindles, the volumes should also be configured using one of the volume management alternatives available in Celerra.

Celerra disk types
Four types of storage device technologies can be used on EMC unified platforms: Enterprise Flash Drive (EFD), Fibre Channel (FC), Serial Attached SCSI (SAS), and Serial ATA (SATA). Celerra NX4 can use SAS and SATA hard drives, whereas all other Celerra models can use EFD, FC, and SATA hard drives. EFDs are recommended for virtual machines, or parts of virtualized applications, with low response time and high-throughput requirements. FC hard drives are recommended for large-capacity, high-performance VMware environments. SAS hard drives provide performance and reliability equivalent to FC drives. SATA drives are recommended for backing up the VMware environment and for storing virtual machine templates and ISO images.

RAID configuration with Celerra unified storage
Celerra unified storage provides mirrored and striped (RAID 1/0) and striped with parity (RAID 3/RAID 5/RAID 6) options for performance and protection of the devices that are used to create ESX volumes. RAID protection is actually provided by the underlying captive CLARiiON array of the Celerra unified storage system. The storage and RAID algorithm chosen is largely based on the throughput requirements of the applications or virtual machines. Parity RAID such as RAID 5 and RAID 6 provides the most efficient use of disk space to satisfy the requirements of the applications. RAID 1/0 mirrors and stripes, with data written to two disks simultaneously. Data transfer rates are higher than with RAID 5, but RAID 1/0 uses more disk space for mirroring. From tests performed in EMC labs, RAID with parity protection was chosen for both virtual machine boot disk images and the virtual disk storage used for application data. RAID 6 provides added disk protection over RAID 5. An understanding of the application and storage requirements in the computing environment will help to identify the appropriate RAID configuration for servers where very large pools of disks are used. Celerra uses advanced on-disk parity and proactive soft-error detection and is not susceptible to dual-disk failures during a RAID 5 rebuild.

Storage pool
Storage pools are used to allocate available storage to Celerra file systems. Storage pools can be created automatically by Celerra Automatic Volume Management (AVM) or manually by the system administrator. A storage pool must contain volumes from only one disk type and must be created from equally sized CLARiiON LUNs. Use one or both types of AVM storage pools to create file systems:

◆ System-defined storage pools
◆ User-defined storage pools

System-defined storage pools
System-defined storage pools are predefined and available with the Celerra Network Server. The predefined storage pools cannot be created or deleted because they are set up to make managing volumes and file systems easier than managing them manually. Some of the attributes of the system-defined storage pools can be modified, but this is generally unnecessary.

User-defined storage pools
If applications require precise placement of file systems on particular disks or on specific locations on disks, AVM user-defined storage pools enable greater control. Disk volumes can be reserved so that the system-defined storage pools cannot use them.

Celerra volume management
In Celerra, users can create and manage Celerra volumes and file systems manually or automatically for VMware. Celerra offers flexible volume and file system management. Volume management provides the flexibility to create and aggregate different volume types into usable file system storage that meets the configuration needs of VMware vSphere and VMware Infrastructure. A variety of volume types and configurations are available to optimize the file system's storage potential. Users can divide, combine, and group volumes to meet specific configuration needs. Users can also manage Celerra volumes and file systems without having to create and manage the underlying volumes.



Two types of volume management are available in Celerra to provision storage:

◆ Automatic Volume Management (AVM)
◆ Manual Volume Management (MVM)

For most VMware deployments, Celerra AVM works well. Some deployments such as virtualized databases and e-mail servers will benefit from MVM. This is because MVM allows administrators to configure storage locations that are tailored and sized for each application object to maximize the performance of I/O-intensive applications. Section 3.5.2, ”MVM,” on page 97 provides more details about MVM. With the current release of Celerra (5.6.47 or later), it is recommended that MVM be used for EFDs. The following sections provide further details about the two volume management types.

3.5.1 AVM

AVM automates volume creation and management. AVM runs an internal algorithm that identifies the optimal location of the disks that make up a file system. Storage administrators are only required to select the storage pool type and the desired capacity to establish a file system that can be presented to ESX as NAS or block storage, without creating and managing the underlying volumes. The storage pools are configured as part of the Celerra unified storage installation, based on the configuration of the back-end CLARiiON captive storage. This allows users to conveniently deploy file systems on these storage pools without the need to configure the back-end storage or to specify storage locations for these file systems. However, a skilled user can modify the configuration of the back-end CLARiiON storage and present it to Celerra (for example, when new disks are added to Celerra unified storage). Appendix A, “CLARiiON Back-End Array Configuration for Celerra Unified Storage,” provides further details.

The Celerra AVM feature automatically creates and manages usable file system storage. After the disk volumes are added to the storage pool, AVM uses an internal algorithm that identifies the optimal location of the disks that make up a file system. Users can directly create file systems on this usable storage.
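Before provisioning file systems with AVM, it can be useful to confirm which storage pools exist and how much capacity they hold. The following is a minimal sketch assuming the nas_pool command on the Control Station and a hypothetical pool name; verify the options against the nas_pool man page for the installed release:

$ nas_pool -list
$ nas_pool -size clar_r5_performance

The first command lists the configured storage pools; the second reports the used and available capacity of the selected pool.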



Managing EMC Celerra Volumes and File Systems with Automatic Volume Management Technical Module provides detailed information about Celerra AVM configuration.

File system configuration with AVM
File systems should be provisioned from Celerra storage pools to be used as VMware storage. Multiple Celerra file systems can be provisioned from the Celerra system-defined storage pools using the Celerra Manager GUI or the CLI. This is the default method to create file systems in Celerra. All that is required is to select the storage pool and to specify the size of the file system. AVM then optimally allocates capacity from the storage pool for the file system. To create a file system using the Celerra Manager GUI, refer to steps 1 to 4 in Section 3.7.1, ”Add a Celerra file system to ESX,” on page 128.

To create a file system using the CLI with a system-defined storage pool, use the following command:

$ nas_fs -name <name> -create size=<size> pool=<pool> storage=<system_name>

where:
<name> is the name of the file system
<size> is the amount of space to allocate to the file system. Enter the size in gigabytes by typing G (for example, 250 G), in megabytes by typing M (for example, 500 M), or in terabytes by typing T (for example, 1 T)
<pool> is the storage pool name
<system_name> is the storage system from which space for the file system is allocated

Example:
$ nas_fs -name ufs1 -create size=10G pool=symm_std storage=00018350149

3.5.2 MVM

MVM enables the storage administrator to create and aggregate different volume types into usable file system storage that meets the configuration needs. Various volume types and configuration options are available from which the file system can be optimized to use the storage system's potential.



Note: Given the complexity of MVM and the wide range of possible configuration options that it includes, it is only recommended for NAS experts. AVM is more appropriate for most users.

RAID and LUN configuration with MVM
With FC, SAS, and SATA disks, use RAID 5 (4+1) groups in CLARiiON. Create two LUNs per RAID group and load balance the LUNs between the CLARiiON SPs. Stripe across all RAID groups with a 32 KB Celerra stripe element size (the default). Create a metavolume on the stripe volume. Section 3.5.3, ”Storage considerations for using Celerra EFDs,” on page 107 provides configuration details for EFDs. Managing EMC Celerra Volumes and File Systems Manually Technical Module provides detailed information about Celerra MVM configuration.

Sample storage layout with MVM
Figure 27 on page 99 is a sample storage layout that shows the storage configuration for three shelves. It is recommended to have one hot spare for every 15 disks. This layout shows seven 4+1 RAID 5 groups (the leftmost RAID group in shelf 0_0 is not counted because it is used for the Celerra software). Two LUNs are created on each RAID group with alternate SP ownership. A stripe volume is created across one LUN with alternate SP ownership in each RAID group (for example, a stripe volume is created with LUNs d41+d45+d43+d32+d33+d40+d36). Using this stripe volume, a metavolume is created, and a single file system is created on the metavolume. A condensed CLI sketch of this sequence follows Figure 27.



Figure 27    Storage layout
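Taken together, the MVM steps behind a layout such as the one in Figure 27 reduce to a short CLI sequence. The volume and file system names below are hypothetical, and each command is described in detail in the subsections that follow:

$ nas_disk -list
$ nas_volume -name stv1 -create -Stripe 32768 d41,d45,d43,d32,d33,d40,d36
$ nas_volume -name mtv1 -create -Meta stv1
$ nas_fs -name ufs1 -create mtv1

This sequence identifies the unused disk volumes, stripes across one LUN from each RAID group with a 32 KB stripe element, concatenates the stripe into a metavolume, and creates the file system on that metavolume.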



3.5.2.1 Determine storage availability
Before a new volume is created, unused disk space must be identified. If a disk is unused, its space is available for volume and file system creation. The simplest way to determine storage availability on the Celerra Network Server is to find out which disks are unused. To view a list of unused disks and their sizes, use the following command:

$ nas_disk -list

3.5.2.2 Create volumes
Different types of volumes can be created.

Stripe volumes
A stripe volume is a logical arrangement of participating disk, slice, or metavolumes organized, as equally as possible, into a set of interlaced stripes. Stripe volumes achieve greater performance and higher aggregate throughput because all participating volumes can be active concurrently.

Stripe size
The stripe size can be 32 KB, 64 KB, or 256 KB. The stripe depth must be typed in multiples of 8,192 bytes, with a recommended size of 32,768 bytes (the default) for file systems running in an NFS environment with a CLARiiON storage system.

Metavolume
File systems can only be created and stored on metavolumes. A metavolume is an end-to-end concatenation of one or more disk volumes, slice volumes, stripe volumes, or metavolumes. A metavolume is required to create a file system because metavolumes provide the expandable storage capacity that is needed to dynamically expand file systems. A metavolume also provides a way to form a logical volume larger than a single disk.

Create a stripe volume
To create a stripe volume using Celerra Manager:

1. Select Storage > Volumes in Celerra Manager. The Volumes page appears.



Figure 28    Volumes

2. Click New. The New Volume dialog box appears.
3. Select Stripe and name the new storage volume. Select the disk volumes. From the Stripe Size (KB) box, select a stripe size of 32.



Figure 29    Create a stripe volume

In the CLI, to create a stripe volume, use the following command:

$ nas_volume -name <name> -create -Stripe <stripe_size> <volume_name>,<volume_name>,...

where:
<name> is the name of the stripe volume
<stripe_size> is the stripe depth in bytes (for example, 32768 for a 32 KB stripe)
<volume_name> is the name of a participating volume



Example: To create a stripe volume called stv1, type:

$ nas_volume -name stv1 -create -Stripe 8192 d10,d12,d13,d15

Create a metavolume
To create a metavolume using Celerra Manager:

1. Select Storage > Volumes in Celerra Manager. The Volumes page appears.
2. Click New. The New Volume dialog box appears.

Figure 30    New Volume

3. In Type, select Meta.
4. In the Volume Name field, type the name of the new storage volume.


5. In the Volumes field, select the stripe volume, and then click OK. The metavolume is created.

In the CLI, to create a metavolume from a stripe volume, use the following command:

$ nas_volume -name <name> -create -Meta <volume_name>

where:
<name> is the name assigned to the metavolume
<volume_name> is the name of the stripe volume

3.5.2.3 File system configuration with MVM
A file system is simply a method of naming and logically organizing files and directories on a storage system. A file system on a Celerra Network Server must be created and stored on a metavolume. A metavolume is required to create a file system; the metavolume provides the expandable storage capacity that might be needed to dynamically expand a file system.

Create a single file system on the metavolume using the Celerra Manager GUI or the CLI:

1. In Celerra Manager, click File Systems on the left pane, and then click New. The New File System page appears.



Figure 31    File Systems

2. Select Meta Volume and name the file system. Select the metavolume created in Section 3.5.2.2, ”Create volumes,” on page 100 for the file system creation.



Figure 32    New File System

To create a file system using the CLI, use the following command:

$ nas_fs -name <name> -create <volume_name>

where:
<name> is the name assigned to the file system
<volume_name> is the name of the existing volume

Example: To create a file system called ufs1 on the existing metavolume mtv1, type:

$ nas_fs -name ufs1 -create mtv1

The file system created will be used for the NFS export to create the NAS datastore, or for creating the iSCSI LUN used to create the VMFS datastore. Section 3.7, ”Using NFS storage,” on page 128 provides details about configuring the NFS export, and Section 3.8.3, ”Create VMFS datastores on ESX,” on page 174 provides details about creating a VMFS datastore.


3.5.3 Storage considerations for using Celerra EFDs

An EFD is based on a single-level cell-based flash technology and is suitable for high-performance and mission-critical applications. Celerra supports EFDs beginning from version 5.6.43.8. EMC EFDs are currently available in two variants: tuned performance and tuned capacity. All EFDs that are 100 GB and larger are tuned-capacity drives.

CLARiiON cache settings when using Celerra EFDs
The CLARiiON write cache for all EFD LUNs used by Celerra must be enabled, and the CLARiiON read cache for all EFD LUNs must be disabled.

RAID and LUN configuration when using Celerra EFDs
Currently, Celerra only supports RAID 5 (4+1 or 8+1) with EFDs. Furthermore, with EFDs, it is recommended that four LUNs per EFD RAID group be created. Balance the ownership of the LUNs between the CLARiiON storage processors.

Sample EFD storage layout
Figure 33 is a sample EFD storage layout that shows five disks configured in RAID 5 (4+1). An EFD hot spare is also configured. Four LUNs are configured on the RAID group with alternate SP ownership.

Figure 33    Sample storage layout

With the current release (5.6.47 or later), Celerra MVM is recommended to configure EFD volumes. Users can stripe across multiple dvols from the same EFD RAID group. A Celerra stripe element size of 256 KB is recommended and can greatly improve sequential write performance without impacting other workloads.



File system configuration when using Celerra EFDs
File system creation is similar to the procedure followed with non-EFDs using MVM. Section 3.5.2.3, ”File system configuration with MVM,” on page 104 provides further details.



3.6 VMware vSphere or VMware Infrastructure configuration

Celerra platforms cover a broad range of configurations and capabilities that scale from midrange to high-end network storage. Although differences exist along the product line, there are some common building blocks. These building blocks are combined to fill out a broad, scalable product line with consistent support and configuration options.

3.6.1 ESX and Celerra storage settings

When using VMware vSphere or VMware Infrastructure with Celerra, consider the following ESX and Celerra settings for optimal functionality and performance:

◆ Celerra uncached write mechanism
◆ Celerra AntiVirus Agent
◆ Jumbo frames
◆ Maximum number of NAS datastores
◆ ESX NFS heartbeat settings for NFS timeout

Use the default ESX and Celerra settings otherwise. The following sections provide further details on each of these settings; a brief sketch of how the ESX-side values are adjusted appears below.
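As a sketch of the ESX side of this tuning, the maximum number of NAS datastores and the NFS heartbeat behavior are exposed as ESX advanced settings that can be read and changed with esxcfg-advcfg from the service console (or through the vSphere Client under Advanced Settings). The values shown below are placeholders only; the recommended values for Celerra are discussed in the sections that follow and in the EMC and VMware documentation:

$ esxcfg-advcfg -g /NFS/MaxVolumes
$ esxcfg-advcfg -s 32 /NFS/MaxVolumes
$ esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency

The -g option reads the current value and -s sets a new one. Note that raising NFS.MaxVolumes may also require adjusting the TCP/IP heap settings (Net.TcpipHeapSize and Net.TcpipHeapMax), as described in the VMware documentation.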

3.6.1.1 Celerra uncached write mechanism
The uncached write mechanism is recommended because it can enhance write performance to Celerra over the NFS protocol. This mechanism allows well-formed writes (for example, writes of multiple disk blocks that are aligned to disk block boundaries) to be sent directly to the disk without being cached on the server. The uncached write mechanism is designed to improve the performance of applications with many connections to a large file, such as the virtual disk file of a virtual machine. This mechanism can enhance access to such large files through the NFS protocol.

By default, the uncached mechanism is turned off on Celerra. However, it can be turned on for a specific file system. When replication software is used, enable the uncached option on the primary file system. The uncached option should also be enabled on the secondary file system to maintain performance in case of a failover.



Celerra version 5.6.46.3 or later is required to use the uncached write mechanism on Celerra NFS with VMware vSphere or VMware Infrastructure. The uncached write mechanism can be turned on for a specified file system using the Control Station command line or the CLI interface in Celerra Manager. To enable the uncached option on a mounted file system, it is not necessary to unmount the file system and disrupt access to it. Simply issue the server_mount command using this procedure as though the file system is not mounted.

From the Control Station command line or the CLI interface in Celerra Manager, type the following command to enable the uncached write mechanism for a file system:

$ server_mount <movername> -option <options>,uncached <fs_name> <mount_point>

where:
<movername> is the name of the specified Data Mover
<options> specifies the mount options, separated by commas
<fs_name> is the name of the file system
<mount_point> is the path to the mount point for the specified Data Mover

Example:
$ server_mount server_2 -option uncached ufs1 /ufs1

Output: server_2: done

To turn off the uncached option, use the following command:

$ server_umount <movername> -perm {<fs_name>|<mount_point>}

Example: $ server_umount server_2 -perm ufs1

Output: server_2: done



It is possible to disable the uncached option on a mounted file system that has it enabled. However, doing so requires disrupting access to the file system and shutting down the virtual machines associated with it. To disable the uncached option on a mounted file system:

1. Power off the virtual machines that are running on the datastore that is configured on the affected file system. Alternatively, critical virtual machines can be migrated online to another datastore using Storage vMotion.

2. When no virtual machine is running on the affected file system, enter the following two commands from the Control Station command line or the CLI interface in Celerra Manager. It is important to ensure that the commands are entered one after the other.

$ server_umount <movername> -perm <fs_name>
$ server_mount <movername> -option <options> <fs_name> <mount_point>

where:
<options> does not include uncached

Example:
$ server_umount server_2 -perm ufs1
$ server_mount server_2 -option rw ufs1 /ufs1

After executing both commands, all virtual machines that were running on the affected datastore can be powered on.

After executing both commands, all virtual machines that were running on the affected datastore can be powered on. 3.6.1.2 Celerra AntiVirus Agent Celerra AntiVirus Agent (CAVA) provides an antivirus solution to file-based clients using an EMC Celerra Network Server. It uses industry-standard CIFS protocols in a Microsoft Windows Server 2003, Windows 2000, or Windows NT domain. CAVA uses third-party antivirus software to identify and eliminate known viruses before they infect files on the storage system. CAVA provide benefits such as scan on first read, scan on write, and automatic update of virus definition files to ensure that infected files will not be stored in the Celerra based shared storage. Further details on CAVA can be found in the Using Celerra AntiVirus Agent Technical Module. VMware vSphere or VMware Infrastructure configuration


The Celerra antivirus solution is only for clients running the CIFS protocol. If the NFS or FTP protocols are used to move or modify files, the files are not scanned for viruses. Therefore, files accessed by ESX as part of virtual machine deployment (that is, files encapsulated in virtual disks) are not scanned for viruses. Furthermore, because CAVA is a file-based solution, block-level storage that is presented to ESX from Celerra is not scanned for viruses either. However, files accessed by Windows virtual machines through the CIFS protocol (that is, by using mapped network shares from Celerra) are scanned for viruses. CAVA is most suitable for user data that is accessed using CIFS, such as home directories and network shares. This permits a centralized solution for virus scanning and avoids the need to scan these files locally on each virtual machine. When CAVA is used, action is required to ensure that the third-party antivirus software that is configured as part of CAVA does not attempt to scan virtual disk files. If CAVA is not used in the system, no further action is required. Therefore, if CAVA is used, take either one of the following steps:

◆ For NFS file systems that are presented to ESX, mount the file system on the Celerra Data Mover with the noscan option. This will instruct CAVA not to scan this file system. This is the optimal alternative, as it would get CAVA to focus solely on the file systems that hold files that should be scanned. If CAVA is not used, using the noscan option will have no performance impact on the file system because this option is only used by CAVA.



◆ Alternatively, if a file system is presented to ESX using NFS and simultaneously also to virtual machines using CIFS, then CAVA can be configured to exclude all file types that are used for file encapsulation of a virtual machine. This involves using the excl= parameter in the viruschecker.conf configuration file.

EMC recommends using the noscan mount option for NFS file systems presented to ESX. File systems containing virtual machine files and shared as NFS exports should not also be shared as CIFS shares. Therefore, the remainder of this section focuses on the first alternative.


Disable CAVA virus scanning on file systems presented to ESX using NFS
By default, a Celerra file system is mounted with virus scanning enabled. If CAVA is used, this setting should be turned off for file systems that are presented to ESX using NFS. This can be done using the noscan option. When using replication software, enable the noscan option on the primary file system. The noscan option should also be enabled on the secondary file system to prevent virus scanning in case of a failover. Use the following procedure to turn on the noscan option for a file system that is presented to ESX using NFS. From the Control Station command line or the CLI interface in Celerra Manager, enter the following command to turn on the noscan option for a file system:

$ server_mount <movername> -option <options>,noscan <fs_name> <mount_point>

where:
<movername> is the name of the specified Data Mover
<options> specifies the mount options, separated by commas
<fs_name> is the name of the file system
<mount_point> is the path to the mount point for the specified Data Mover

Example:
$ server_mount server_2 -option rw,uncached,noscan ufs1 /ufs1

Output: server_2: done

As seen in the example, it is possible and recommended to enable both the noscan option and the uncached write mechanism in a single server_mount command. To enable noscan on a mounted file system, it is not necessary to unmount the file system and disrupt access to it. Simply issue the server_mount command using this procedure as though the file system is not mounted.


3.6.1.3 Jumbo frames
Jumbo frames improve the performance of certain I/O-intensive applications such as databases and backup. VMware vSphere supports jumbo frames for IP-based storage. It is important to enable jumbo frames on every component in the data path from ESX to Celerra. Therefore, jumbo frames should be set on the VMkernel port group and vSwitch in ESX, on the physical network switch, and on the Data Mover ports in Celerra. Set the Maximum Transmission Unit (MTU) to 9,000. Jumbo frames must be enabled on the following devices:

◆ The ESX host. The ESX host CLI should be used to configure jumbo frames on the VMkernel port group and vSwitch:
1. Use the following command to set the MTU on the vSwitch:
esxcfg-vswitch -m 9000 <vSwitch>

2. Create a port group to attach the VMkernel interface.
3. Create a new VMkernel interface with jumbo frames enabled by using the following command:
esxcfg-vmknic -a -i <IP address> -n <netmask> -m 9000 -p <port group name>
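As an illustration, the vSwitch portion of this configuration might look like the following from the ESX service console; the vSwitch and port group names are examples only, and the resulting MTU values can be checked with the list options:

# Set the MTU of an existing vSwitch to 9000 (vSwitch1 is an example name)
esxcfg-vswitch -m 9000 vSwitch1
# Add a port group on that vSwitch for the jumbo-frame VMkernel interface
esxcfg-vswitch -A VMkernelJumbo vSwitch1
# Verify that the vSwitch and the VMkernel NIC report an MTU of 9000
esxcfg-vswitch -l
esxcfg-vmknic -l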




◆ The physical switch. Set the MTU on the physical switch that connects ESX and Celerra. For details about setting jumbo frames, refer to the switch documentation.



◆ The Celerra Data Mover ports. In the Celerra Manager GUI, MTU can be set for an interface from the Network folder.


Figure 34

Network Interface Properties

3.6.1.4 Adjust the ESX maximum number of NAS datastores
By default, ESX 4 supports eight NAS datastores and can support a maximum of 64 datastores. In contrast, ESX 3.5 can support a maximum of 32 datastores. If this ESX setting is adjusted, more datastores can be presented to an ESX host simultaneously, for example when many virtual machines are deployed across a large number of file systems. To adjust the ESX maximum number of NAS datastores, perform the following steps on each ESX host:
1. Log in to the vSphere Client or the VMware Infrastructure Client and select the server from the Inventory area.
2. Click the Configuration tab and click Advanced Settings from the left pane. The Advanced Settings dialog box appears.
3. Select NFS to display the NFS settings for the ESX host.
4. Modify NFS.MaxVolumes to 64 (or 32 with VMware Infrastructure) as shown in Figure 35 on page 116.


Figure 35

Modify NFS.MaxVolumes on each ESX host

Note that presenting additional datastores to ESX may require additional server resources. To accommodate the additional datastores, more ESX heap memory should be allocated. Heap is the memory allocated at runtime during VMkernel program execution. To do this, set Net.TcpipHeapSize to 30 and Net.TcpipHeapMax to 120 on each ESX host, as shown in Figure 36 on page 117. Refer to VMware KB article 2239 at http://kb.vmware.com/kb/2239 for more details and for the steps to define the settings. A command line alternative is shown after Figure 36.


Figure 36

Set Net.TcpipHeapSize and Net.TcpipHeapMax parameters
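For reference, the same values can also be set from the ESX service console with esxcfg-advcfg. This is a sketch of the equivalent commands; the advanced option paths assume the parameter names shown in the figures:

# Raise the maximum number of NAS datastores (use 32 with VMware Infrastructure)
esxcfg-advcfg -s 64 /NFS/MaxVolumes
# Increase the VMkernel TCP/IP heap to accommodate the additional datastores
esxcfg-advcfg -s 30 /Net/TcpipHeapSize
esxcfg-advcfg -s 120 /Net/TcpipHeapMax
# Confirm a value
esxcfg-advcfg -g /NFS/MaxVolumes

A host reboot is typically required before the TCP/IP heap settings take effect.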


3.6.1.5 ESX host timeout settings for NFS
The following NFS heartbeat parameters should be tuned to increase NAS datastore availability:

◆ NFS.HeartbeatTimeout – The amount of time before an NFS heartbeat request is timed out.

◆ NFS.HeartbeatMaxFailures – The number of consecutive heartbeat requests that must fail before the NFS server is marked as unavailable.

◆ NFS.HeartbeatDelta – The amount of time after a successful GETATTR request before the heartbeat world issues a heartbeat request for a volume. If a NAS datastore is in an unavailable state, an update is sent every time the heartbeat world runs (every NFS.HeartbeatFrequency seconds).

◆ NFS.HeartbeatFrequency – The frequency at which the NFS heartbeat world runs to check whether any NAS datastore needs a heartbeat request.

Table 1 lists the default and recommended values of the ESX NFS heartbeat parameter settings, which ensure that virtual machines remain consistent during Data Mover outages.

Table 1    Default and recommended values of ESX NFS heartbeat parameters

ESX NFS parameters          Default    Recommended
NFS.HeartbeatFrequency      9          12
NFS.HeartbeatTimeout        5          5
NFS.HeartbeatDelta          5          5
NFS.HeartbeatMaxFailures    3          10

To view and modify the ESX NFS heartbeat parameters, perform the following steps on each ESX host: 1. Log in to the vSphere Client or the VMware Infrastructure Client and select the server from the Inventory area. 2. Select the ESX host.


3. Click Configuration and click Advanced Settings. The Advanced Settings dialog box appears. 4. Select NFS as shown in Figure 37 on page 119. 5. Modify the required NFS heartbeat parameters, and then click OK.

Figure 37

Configure ESX NFS heartbeat parameters
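Where console access is preferred, the same heartbeat values can be applied with esxcfg-advcfg. This sketch assumes the recommended values from Table 1:

# Apply the recommended heartbeat values
esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures
# NFS.HeartbeatTimeout and NFS.HeartbeatDelta keep their default value of 5
# Confirm a value
esxcfg-advcfg -g /NFS/HeartbeatFrequency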


Section 3.14, “VMware Resiliency,” on page 315 provides more details on this recommended setting.

3.6.2 ESX iSCSI HBA and NIC driver configuration
Drivers for supported iSCSI HBA and NIC cards are provided by VMware as part of the VMware ESX distribution. The VMware Compatibility Guide provides information about supported HBA and NIC cards with VMware vSphere or VMware Infrastructure. The EMC E-Lab Interoperability Navigator utility available on EMC Powerlink provides information about supported HBA and NIC cards for connectivity of VMware vSphere or VMware Infrastructure to Celerra.

3.6.3 VMkernel port configuration in ESX
The VMkernel port group enables the use of iSCSI and NFS storage on ESX. When storage is configured on Celerra, the ESX host must have a VMkernel port group defined with network access to the Celerra storage. At a functional level, the VMkernel manages the IP storage interfaces, including those used for iSCSI and NFS access to Celerra. When ESX is configured for IP storage with Celerra, the VMkernel network interfaces are configured to access one or more Data Mover iSCSI targets or NFS servers.

To configure the VMkernel interface:

Note: This configuration also applies to VMware Infrastructure.

1. Log in to the vSphere Client or the VMware Infrastructure Client and select the server from the Inventory area. 2. Click Configuration and click Networking from the left pane. The Networking page appears. 3. Click Add Networking.


Figure 38

VMkernel configuration - Add Networking

The Add Network Wizard appears.

Figure 39

Add Network Wizard - Connection Type

4. Select VMkernel, and then click Next. The VMkernel - Network Access dialog box appears.


Figure 40

VMkernel - Network Access

5. To set the network access:
a. Select the vSwitch that will handle the network traffic for the connection.
b. Select the checkboxes for the network adapters the vSwitch will use. Select adapters for each vSwitch so that virtual machines or other services that connect through the adapter can reach the correct Ethernet segment. If no adapters appear under Create a virtual switch, all network adapters in the system are being used by existing vSwitches. Create a new vSwitch without a network adapter, or select a network adapter that an existing vSwitch uses.
c. Click Next. The VMkernel - Connection Settings dialog box appears.


Figure 41

Add Network Wizard - VMkernel - Connection Settings

6. To set the connection settings:
a. Type a Network Label to identify the VMkernel connection when managing the connection, and identify the VLAN ID (optional) that the port group's network traffic will use.
b. Select Use this port group for VMotion to enable this port group to advertise itself to another host as the network connection where vMotion traffic should be sent. This property can be enabled for only one vMotion and IP storage port group for each host. If this property is not enabled for any port group, migration with vMotion to this host is not possible.
c. If required, select Use this port group for Fault Tolerance logging, and then click Next. The VMkernel - IP Connection Settings dialog box appears.

Note: It is recommended that Fault Tolerance logging be done on a dedicated interface. The VMware Availability guide provides further information (VMware vSphere only).


Figure 42

Add Network Wizard - VMkernel - IP Connection Settings

7. To specify the VMkernel IP settings, do one of the following:
• Select Obtain IP settings automatically to use DHCP to obtain IP settings.
• Select Use the following IP settings to specify IP settings manually.
8. If Use the following IP settings is selected, provide the following details:
a. Type the IP Address and Subnet Mask for the VMkernel interface. This address must be different from the IP address set for the service console.
b. Click Edit to set the VMkernel Default Gateway for VMkernel services, such as vMotion, NAS, and iSCSI.
c. Click DNS Configuration. The name of the host is entered by default. The DNS server addresses that were specified during installation and the domain are also preselected.


Figure 43

DNS Configuration

d. Click Routing. The service console and the VMkernel each need their own gateway information. A gateway is needed for connectivity to machines not on the same IP subnet as the service console or VMkernel. The default is static IP settings.


Figure 44

Routing

e. Click OK, and then click Next.
9. On an IPv6-enabled host, select No IPv6 settings to use only IPv4 settings on the VMkernel interface, or select Use the following IPv6 settings to configure IPv6 for the VMkernel interface.

Note: This dialog box does not appear when IPv6 is disabled on the host.

10. If IPv6 is used for the VMkernel interface, select one of the following options to obtain IPv6 addresses:
• Obtain IPv6 addresses automatically through DHCP
• Obtain IPv6 addresses automatically through router advertisement
• Static IPv6 addresses
11. If static IPv6 addresses are used:
a. Click Add to add a new IPv6 address.
b. Type the IPv6 address and subnet prefix length, and then click OK.
c. To change the VMkernel default gateway, click Edit.
12. Click Next.
13. In the Ready to Complete dialog box, verify the settings and click Finish to complete the process.


Figure 45

Add Network Wizard - Ready to Complete

Because the VMkernel interface is in effect the I/O path to the data, it is a recommended practice to segment the Celerra network traffic from other network traffic. This can be achieved either through a private LAN in a virtual LAN environment, or through a dedicated IP SAN switch and a dedicated physical NIC. Based on the throughput requirements for the virtual machines, additional interfaces can be configured for additional network paths to the Celerra Data Mover. Section 3.13, “Storage multipathing,” on page 278 provides further details about implementing advanced multipathing configurations with Celerra.


3.7 Using NFS storage
The configuration of Celerra NFS with VMware vSphere and VMware Infrastructure includes two primary steps:

◆ Add a Celerra file system to ESX – In Celerra, create a Celerra file system and export it to ESX.

◆ Create a NAS datastore on ESX – In ESX, configure a NAS datastore on the provisioned Celerra file system.

The following sections provide details about these two steps.

3.7.1 Add a Celerra file system to ESX
The administrator should make the appropriate changes on the EMC Celerra storage, such as creating and exporting a file system, and then create an NFS export using Celerra Manager. Exported file systems are available across the network and can be presented to the ESX hosts. To create a Celerra file system and add it to ESX:
1. Create a file system using Celerra Manager. Section 3.5, "Storage considerations," on page 94 provides details about the file system configuration considerations.
2. In Celerra Manager, click File Systems on the left pane.


Figure 46

File Systems

3. Click New. The New File System page appears. Enter the details, and then click OK.


Figure 47

New File System

4. Enable the recommended settings for the file system by using the following command:

$ server_mount <movername> -option <options>[,uncached][,noscan] <fs_name> <mount_point>

Section 3.6.1.1, "Celerra uncached write mechanism," on page 109 provides steps on how to turn on the uncached option when mounting the file system. Section 3.6.1.2, "Celerra AntiVirus Agent," on page 111 provides information for the noscan option if CAVA is used.


5. To create an NFS export, click NFS Exports in Celerra Manager, and then click New. The NFS Exports page appears.

Figure 48

NFS Exports

6. Select the File System and Path and click New. The NFS export is created. The NFS export must include the following permissions for the VMkernel port that is configured in the VMware ESX server: • Root Hosts – Provides the VMkernel port with root access to the file system • Access Hosts – Provides only the VMkernel port mount with access to the file system (while denying such access from other hosts)


Figure 49

NFS Export Properties

7. Configure access permissions for the VMkernel port on the NFS export.
8. To grant the access permissions, type the VMkernel port IP address in the Root Hosts and Access Hosts fields. Figure 49 shows an NFS export that consists of the required access permissions for the VMkernel port.
9. Click OK. The NFS export is created.


3.7.2 Create a NAS datastore on an ESX server
vCenter Server or the VMware Infrastructure Client is used to configure and mount NFS file systems from Celerra to the ESX hosts. The vSphere Client is also used to assign a datastore name to the export. The datastore name is the key reference that is used to manage the datastore within the ESX environment. A NAS datastore is viewed as a pool of space used to provision virtual machines. One or more virtual disks and all the virtual machine's encapsulated files are created within the datastore and assigned to each newly created virtual machine. Each virtual machine can have one or more virtual disks that contain the guest OS and the applications. Section 3.10, "Virtual machine considerations," on page 222 provides the configuration considerations to create virtual machines. NAS datastores offer support for virtual disks, virtual machine configuration files, snapshots, disk extension, vMotion, and disaster recovery services. Additionally, Celerra provides support for replication, local snapshots, NDMP backups, and virtual provisioning of the file systems used for ESX.

To create a NAS datastore on a Celerra file system that was exported to ESX:

Note: This configuration also applies to VMware Infrastructure.

1. Log in to the vSphere Client (or VMware Infrastructure Client) and select the server from the Inventory area. 2. Click Configuration and click Storage from the left pane.

Figure 50

Add Storage


3. Click Add Storage. The Add Storage wizard appears.

Figure 51

Add Storage - Select Storage Type

4. Select Network File System, and then click Next. The Locate Network File System dialog box appears.


Figure 52

Add Storage - Locate Network File System

5. In the Server field, type the Celerra network interface IP. In the Folder field, type the name of the file system with NFS Export. Section 3.7.1, ”Add a Celerra file system to ESX,” on page 128 provides details about how to create the NFS export. Click Next. The Network File System dialog box appears.


Figure 53

Add Storage - Network File System

6. Click Finish to complete the creation of the NAS datastore.
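For environments that script datastore creation, a NAS datastore can also be mounted from the ESX service console with esxcfg-nas. This is a sketch in which the Data Mover IP address, export path, and datastore name are examples only:

# Mount the Celerra NFS export as a NAS datastore
esxcfg-nas -a -o 192.168.1.50 -s /ufs1 Celerra_NFS_DS1
# List the NAS datastores known to this host
esxcfg-nas -l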


3.8 Using iSCSI storage
The configuration of Celerra iSCSI with VMware vSphere and VMware Infrastructure includes three primary steps:

◆ Add a Celerra iSCSI LUN/device to ESX – In Celerra and ESX, configure a Celerra iSCSI LUN and present it to the ESX server.

◆ Create a VMFS datastore on ESX – In ESX, configure a VMFS datastore over the iSCSI LUNs that were provisioned from Celerra.

◆ Create RDM volumes on ESX – Alternatively, configure RDM volumes on the iSCSI LUNs that were provisioned from Celerra.

VMware ESX, the key component of the VMware vSphere and VMware Infrastructure virtualization platforms, supports iSCSI storage provisioning. iSCSI is a transport protocol for sending SCSI packets over TCP/IP networks. The iSCSI architecture is based on the client/server model, in which an iSCSI host system (client) encapsulates SCSI packets through an iSCSI initiator and sends them to a storage device (server) through an iSCSI target. Similar to FC storage, storage provisioning of iSCSI storage on ESX servers is accomplished by creating a VMFS datastore that contains iSCSI LUNs configured on the iSCSI storage system. A dynamic virtualized environment requires changes to the storage infrastructure. This may include the addition and removal of storage devices presented to an ESX server. Both of these functions can be performed while ESX is online. However, because the removal of storage from an existing environment poses a high level of risk, extreme care is recommended if storage is removed from an ESX server. Adding or removing EMC Celerra iSCSI devices to and from an ESX server requires two steps:

◆ Configuration changes must be made to the Celerra storage array

◆ Configuration changes must be made to the VMware ESX server

The configuration changes on the EMC Celerra storage array can be made using Celerra Manager. Subsequently, steps must be taken to make the VMkernel discover the new configuration.


3.8.1 Configuration considerations for Celerra iSCSI with VMware vSphere and VMware Infrastructure
This section describes items to review when configuring Celerra iSCSI.

3.8.1.1 iSCSI HBA and NIC
VMware vSphere and VMware Infrastructure support iSCSI hardware initiators and Ethernet NICs with Celerra iSCSI. An iSCSI hardware initiator reduces the load on the ESX host CPU because it has its own I/O processor. iSCSI host bus adapters (HBAs) allow virtual machines access to logical SCSI devices, just as a physical HBA allows access to physical storage devices. NICs used for iSCSI that connect to the Ethernet network support NIC teaming, which in turn provides NIC failover and load balancing of iSCSI traffic across the NICs to multiple iSCSI targets. For information on supported HBAs to be used with Celerra iSCSI, refer to EMC E-Lab Navigator on Powerlink.

3.8.1.2 ESX iSCSI initiator and guest OS iSCSI initiator
The ESX iSCSI initiator provides an interface for an iSCSI client (ESX host) to access storage on the iSCSI target (storage device). Two implementations of iSCSI initiators are available in ESX: software initiator and hardware initiator. A software initiator is a driver that interacts with the ESX host to connect to the iSCSI target through an Ethernet adapter attached to the ESX host. A hardware initiator is an adapter card installed on the ESX host that implements connectivity from the iSCSI client (ESX host) to the iSCSI target. A third implementation is the guest OS iSCSI initiator. Guest OS iSCSI initiators are third-party software iSCSI initiators available for download that can be installed on a supported guest operating system running in a virtual machine. Tests conducted by VMware have shown that the performance of the Microsoft software initiator running inside a virtual machine is almost equal to that of the software initiator running on a physical server. The "Running a Third-Party iSCSI Initiator in the Virtual Machine" section in the SAN System - Design and Deployment Guide provides more information about how to use a third-party initiator in a VMware environment.


3.8.2 Add a Celerra iSCSI device/LUN to ESX
This is the first primary step to configure Celerra iSCSI with ESX. This section provides details on how to add an iSCSI LUN in Celerra and how to present it to ESX for iSCSI-based connectivity. Before adding an iSCSI device in Celerra, install and configure an iSCSI initiator. There are three types of initiators that can be installed based on the preferred iSCSI configuration alternative — ESX software initiator, ESX hardware initiator, and Microsoft software initiator. Each type has a distinct installation method. Section 3.8.2.1, "ESX iSCSI software initiator," on page 139, Section 3.8.2.2, "ESX iSCSI hardware initiator," on page 154, and Section 3.8.2.4, "Microsoft iSCSI software initiator," on page 163 provide further details. The configuration of iSCSI devices requires the ESX host to have a network connection configured for IP storage and to have the iSCSI service enabled. Section 3.6.3, "VMkernel port configuration in ESX," on page 120 provides details about how to configure the VMkernel.

3.8.2.1 ESX iSCSI software initiator
To configure an iSCSI LUN using the ESX software initiator:

Note: This configuration also applies to VMware Infrastructure.

1. Log in to the vSphere Client and select the server from the Inventory pane. 2. Click Configuration and click Security Profile from the left pane. The Security Profile page appears.


Figure 54

Security Profile

3. Click Properties. The Firewall Properties page appears.


Figure 55

Firewall Properties

4. Select Software iSCSI Client, and then click OK. 5. In the vSphere Client, click Configuration and select Storage Adapters from the Hardware pane. The Storage Adapters page appears.


Figure 56

Storage Adapters

6. Click Properties. The iSCSI Initiator Properties dialog box appears. It displays iSCSI properties such as Name, Alias, Target discovery methods, and Software Initiator Properties.

Figure 57


iSCSI Initiator Properties


7. Click Configure. The General Properties dialog box appears. It displays the iSCSI Properties and Status.

Figure 58

General Properties

8. Select Enabled. At this point, it is recommended to change the iSCSI Name to a more user-friendly name of the form iqn.1998-01.com.vmware:<hostname>. For example: iqn.1998-01.com.vmware:esx001-lab-corp-emc-com. Click OK to save the changes.
9. In the iSCSI Initiator Properties dialog box, click Dynamic Discovery, and then click Add. The Add Send Target Server dialog box appears.


Note: Target discovery addresses are set up so that the iSCSI initiator can determine which storage resources on the network are available for access. VMware vSphere supports two discovery methods: Dynamic Discovery and Static Discovery. With Dynamic Discovery, each time the initiator contacts a specified iSCSI server, the initiator sends a Send Targets request to the server. The server responds by supplying a list of available targets to the initiator.

Figure 59

Add Send Target Server

10. Type the IP addresses of the Data Mover interfaces through which the iSCSI initiator communicates, and then click OK. After the host establishes the Send Targets session with this system, any newly discovered targets appear in the Dynamic Discovery list.


Figure 60

iSCSI Initiator Properties - Dynamic Discovery

11. Configure the iSCSI LUNs on Celerra and mask them to the IQN of the software initiator defined for this ESX server host. Note: The IQN can be identified in the iSCSI Initiator Properties dialog box as shown in Figure 29 on page 102. The default device name for the software initiator is vmhba32. The IQN name can also be obtained by using the vmkiscsi-iname command from the service console. The management interface is used to enable the iSCSI service and to define the network portal that is used to access the Celerra iSCSI target.
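The ESX-side portion of these steps can also be performed from the service console. The following sketch assumes the software initiator appears under the default device name mentioned above (vmhba32) and uses an example Data Mover IP address:

# Enable the ESX software iSCSI initiator
esxcfg-swiscsi -e
# Add the Celerra Data Mover interface as a Send Targets (dynamic discovery) address
vmkiscsi-tool -D -a 192.168.1.50 vmhba32
# Display the IQN assigned to the software initiator
vmkiscsi-tool -I -l vmhba32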

The Celerra Manager iSCSI wizard can be used to configure an iSCSI LUN.


Figure 61

Wizards - Select a Wizard

12. Click New iSCSI Lun. The New iSCSI Lun Wizard appears.


Figure 62

New iSCSI Lun Wizard

13. Select the Data Mover information, and then click Next. The Select/Create Target dialog box appears.


Figure 63

Select/Create Target

14. Select the target for the new iSCSI LUN, and then click Next. The Select/Create File System dialog box appears.


Figure 64

Select/Create File System

15. Select the file system to create the new LUN, and then click Next. The Enter LUN Info dialog box appears.


Figure 65

Enter LUN Info

16. Type the new LUN number and the size of the new LUN, and then click Next. The LUN Masking dialog box appears. Click Next. Note: If the IQN of the ESX server software is known, the LUN can be masked to the host for further configuration. LUNs are provisioned through Celerra Manager from the Celerra file system and masked to the IQN of the ESX server host iSCSI software initiator. Like NFS, the VMkernel network interface is used to establish the iSCSI session with the Celerra target.


Figure 66

LUN Masking

17. If the IQN of the ESX server software initiator is known, the LUN can be masked to the host for further configuration. If there are multiple ESX servers, click Enable Multiple Access and add the IQNs of the remaining ESX servers in the VMware DRS cluster. Click Next. The CHAP Access (Optional) dialog box appears.


Figure 67

Overview/Results

18. Click Next. A final summary of the LUN creation appears. Click Finish. 19. After the configuration steps are complete on the ESX server and Celerra, return to the Storage Adapters page of the vCenter Server as shown in Figure 68 on page 153. Scan the iSCSI bus to identify the LUNs that have been configured for this ESX server host. In the Hardware area of the Configuration tab, select Storage Adapters. In the Storage Adapters page, click Rescan or right-click an individual adapter and click Rescan to rescan just that adapter.


Figure 68

Storage Adapters

To discover new disks or LUNs, select Scan for New Storage Devices. To discover new datastores or to update a datastore after its configuration has been changed, select Scan for New VMFS Volumes.


Figure 69

Rescan
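The rescan can also be triggered from the service console. This is a sketch in which the adapter name matches the default software initiator device noted earlier:

# Rescan the iSCSI adapter for new LUNs and VMFS volumes
esxcfg-rescan vmhba32
# List the SCSI devices now visible to the host (vSphere hosts)
esxcfg-scsidevs -l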

If new VMFS datastores are discovered, they appear in the datastore list. New storage devices (devices without an existing VMFS datastore) will need to be named and formatted.

3.8.2.2 ESX iSCSI hardware initiator
To configure an iSCSI LUN using an ESX iSCSI hardware initiator:
1. Log in to the vSphere Client or VMware Infrastructure Client and select the server from the Inventory area.
2. Click Configuration, and then select Storage Adapters from the left pane. The Storage Adapters page appears.


Figure 70

Storage Adapters

3. Verify that the iSCSI HBA is successfully installed on the ESX host and functioning correctly.

Note: The HBA appears in the Storage Adapters section of the Configuration page.

Figure 71

Storage Adapters - Properties


4. Select the hardware adapter, and then click Properties. The iSCSI Initiator Properties dialog box appears.

Figure 72

iSCSI Initiator Properties

5. Click Configure. The General Properties dialog box appears.


Figure 73

General Properties

6. Type the required details such as IP Address of the hardware initiator, Subnet Mask, and Default Gateway, and then click OK. 7. In the iSCSI Initiator Properties dialog box, click Dynamic Discovery.


Figure 74

iSCSI Initiator Properties - Dynamic Discovery

8. Click Add. The Add Send Target Server dialog box appears.


Figure 75

Add Send Target Server

9. Type the IP addresses of the Data Mover interfaces through which the iSCSI initiator communicates, and then click OK.
10. Configure the iSCSI LUNs on Celerra and mask them to the IQN of the hardware initiator defined for this ESX server host.

Note: The IQN can be identified in the iSCSI Initiator Properties dialog box.

Note: The Celerra Manager iSCSI Wizard can be used to configure an iSCSI LUN. If the IQN of the ESX server is known, the LUN can be masked to the host for further configuration. LUNs are provisioned through Celerra Manager from the Celerra file system and masked to the IQN of the ESX server host iSCSI hardware initiator. Similar to NFS, the VMkernel network interface is used to establish the iSCSI session with the Celerra target.

11. To create a new iSCSI LUN, use the New ISCSI Lun wizard available in Celerra Manager as shown in Figure 76.

Figure 76


Wizards - Select a Wizard


The procedure to create a new iSCSI LUN using the New iSCSI Lun Wizard is explained in Section 3.8.2.1, "ESX iSCSI software initiator," on page 139, starting from step 10.
12. After the configuration steps have been completed on the ESX server and the Celerra, return to the Storage Adapters page as shown in Figure 76 on page 160, and scan the iSCSI bus to identify the LUNs that have been configured for this ESX server host.

3.8.2.3 Remove iSCSI devices from the ESX server
Extreme care must be taken before removing iSCSI devices from an existing environment. A datastore can be removed by using the VMware Infrastructure Client. To remove iSCSI devices from the ESX server:

Note: This configuration also applies to VMware Infrastructure.

1. Power off or migrate all virtual machines that use the datastore to be removed. Virtual machines that are still required should be migrated to another VMFS datastore without disruption using Storage vMotion. 2. For each virtual machine remaining in the affected datastore, select it from the inventory. Right-click the virtual machine, and then select Remove from Inventory.


Figure 77

Remove from Inventory option

3. Select the ESX server, and then click Configuration. 4. Click Storage from the Hardware area. The Datastores page appears.


Figure 78

Datastores

5. Select the datastore to be removed, and then click Delete. The datastore is removed from the list of datastores.

Figure 79

Datastores - Delete

6. Mask or remove the LUN from the Celerra storage array, and then rescan the ESX server to prevent it from discovering the LUN again.

3.8.2.4 Microsoft iSCSI software initiator
VMware vSphere and VMware Infrastructure support the ability to run an iSCSI software initiator on the guest OS that runs inside the virtual machine. This is needed in cases where the virtual machine should leverage the iSCSI features available in the Windows guest OS and bypass the iSCSI features of ESX. An example is using MSCS for virtual-to-physical Windows-based clustering. This configuration is fully supported with Celerra. In this case, the iSCSI software initiator must be supported to run in the guest OS. For Windows, Microsoft provides such a software initiator.


Configure Celerra to use Microsoft iSCSI To configure Celerra to use Microsoft iSCSI: 1. Provision the iSCSI LUN using Celerra Manager on a file system and add it to an iSCSI target.

Figure 80

iSCSI Target Properties - Target

2. From Celerra Manager, click iSCSI followed by Targets. Right-click the appropriate iSCSI target, and then select Properties to display the iSCSI Target Properties as shown in Figure 80. 3. To grant the iSCSI LUN to the ESX software initiator, which is connected to the target, click the LUN Mask tab to display the list of configured LUN masks for the selected iSCSI target as shown in Figure 81.


Figure 81

iSCSI Target Properties - LUN Mask

4. Select the respective IQN, right-click it, and then select Properties. Edit Grant LUNs by typing in the corresponding LUN, and click Apply.

Configure ESX and virtual machines
To configure ESX and virtual machines:
1. Add vSwitches and add physical NICs to the vSwitches.


Figure 82

Networking

2. Add a virtual NIC to the virtual machine and connect the NIC to a different virtual machine network.

Configure Microsoft iSCSI initiator in the Windows guest OS using Celerra iSCSI
To configure the Microsoft iSCSI initiator in the Windows guest OS using Celerra iSCSI:
1. Install the latest Microsoft iSCSI initiator.
2. From the Control Panel, start iSCSI Initiator Properties. The iSCSI Initiator Properties dialog box appears.


Figure 83

iSCSI Initiator Properties

3. Click Discovery. The Discovery tab appears.


Figure 84

iSCSI Initiator Properties - Discovery

4. Click Add Portal. The Add Target Portal dialog box appears.


Figure 85

Add Target Portal

5. Type the Celerra target portal IP addresses from two different subnets (on two different switches for network redundancy), and then click OK. The target portals are added.

Figure 86

iSCSI Initiator Properties - Target portal added


6. In the iSCSI Initiator Properties dialog box, click Targets. The list of targets appears.

Figure 87

iSCSI Initiator Properties - Targets

7. Select the appropriate Celerra target, and then click Log on. The Log On to Target dialog box appears.


Figure 88

Log On to Target

8. Select Automatically restore this connection when the computer starts. 9. Click Advanced. The Advanced Settings dialog box appears.


Figure 89

Advanced Settings

10. Select the Source IP and Target portal for the session, and then click OK. A new session is created. 11. Similarly, create another session by specifying another Source IP and Target portal.


Figure 90

iSCSI Initiator Properties - Targets

12. In the iSCSI Initiator Properties dialog box, verify that the initiator status in the targets list is Connected to complete the process.

Note: If a Windows virtual machine is configured to use the Microsoft iSCSI initiator, the virtual machine can access Celerra iSCSI LUNs directly without going through the virtualization layer. Refer to the Microsoft website (www.microsoft.com) to download the software initiator.
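Where a scripted setup is preferred, the Microsoft initiator can also be driven with the iscsicli command inside the guest. This is a sketch only; the portal IP addresses and target IQN are placeholders:

REM Add the two Celerra target portals
iscsicli QAddTargetPortal 192.168.1.50
iscsicli QAddTargetPortal 192.168.2.50
REM List the discovered targets, then log in to the Celerra target
iscsicli ListTargets
iscsicli QLoginTarget <Celerra target IQN>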


3.8.3 Create VMFS datastores on ESX
A VMFS datastore can be created on ESX after the Celerra iSCSI device is added using one of the methods explained in Section 3.8.2, "Add a Celerra iSCSI device/LUN to ESX," on page 139. To create a VMFS datastore using the VMware vSphere Client or the VMware Infrastructure Client:
1. Log in to the vSphere Client and select the server from the Inventory area.
2. Click Configuration, and then click Storage from the left pane. All available datastores on ESX are displayed.

Figure 91

Datastores

3. To create a datastore, click Add Storage. The Add Storage wizard appears.


Figure 92

Add Storage - Select Storage Type

4. Select Disk/LUN, and then click Next. The Select Disk/LUN dialog box appears. Note: The wizard presents all available iSCSI or SCSI attached devices. Devices that have existing VMware file systems are not presented. This is independent of whether or not that device contains free space. However, devices with existing non-VMFS formatted partitions but with free space are visible in the wizard.


Figure 93

Add Storage - Select Disk/LUN

5. Select the appropriate device in the list, and then click Next. The iSCSI device can be selected based on the LUN number. 6. Based on whether the LUN is blank or not, the following dialog box appears: • If the LUN is blank, the Current Disk Layout dialog box appears as shown in Figure 95 on page 179. • If the LUN presented has a VMFS volume, the Select VMFS Mount Options dialog box appears (Figure 94 on page 177).


Figure 94

Add Storage - Select VMFS Mount Options

7. To resignature a VMFS datastore copy, select Assign a new signature. In VMware Infrastructure, administrators must configure the LVM advanced configuration parameters, LVM.DisallowSnapshotLUN and LVM.EnableResignature, to control the clone behavior. To discover the storage on the ESX server, select the appropriate combination of the LVM advanced configuration parameters. The LVM.DisallowSnapshotLUN and LVM.EnableResignature parameters have the following combinations:
• Combination 1 (default combination): EnableResignature=0, DisallowSnapshotLUN=1
In this combination, snapshots of VMFS volumes cannot be discovered by the ESX server, regardless of whether the ESX server has access to the original LUN. The VMFS formatted LUNs must have the same ID for each ESX server.
• Combination 2: EnableResignature=1, DisallowSnapshotLUN=1 (default value)
In this combination, snapshots of the VMFS volumes are discovered by the same ESX servers without any VMFS formatting. The LUNs are resignatured automatically.
• Combination 3: EnableResignature=0, DisallowSnapshotLUN=0
In this combination, only one snapshot of a given LUN is available to the ESX server and this LUN is not resignatured.
To modify the ESX LVM parameters:
a. From the VMware Infrastructure Client, connect to the vCenter Server.
b. Select the ESX server.
c. Click Configuration and click Advanced Settings. The Advanced Settings dialog box appears.
d. Select LVM.
e. Change the value of LVM.EnableResignature to 1, and retain the default value (1) of the LVM.DisallowSnapshotLUN parameter (as per combination 2).
f. Click OK. The values are accepted.
g. Rescan the HBAs. The promoted LUNs are added to the storage without the VMFS formatting.
8. Click Next. The Current Disk Layout dialog box appears.

Note: Use the datastore resignature to retain the data stored on the VMFS datastore. In VMware Infrastructure, to add the LUN to the storage without resignature, the LVM advanced configuration parameter LVM.EnableResignature must be set to 1. To configure these advanced parameters, select the server from the Inventory area, select Configuration, and select Advanced Settings from the Software area.
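For VMware Infrastructure hosts, these LVM parameters can also be set from the service console with esxcfg-advcfg. This sketch assumes combination 2 and uses an example adapter name for the rescan:

# Allow resignaturing of snapshot VMFS volumes (combination 2)
esxcfg-advcfg -s 1 /LVM/EnableResignature
# Confirm that DisallowSnapshotLun keeps its default value of 1
esxcfg-advcfg -g /LVM/DisallowSnapshotLun
# Rescan so the resignatured copies are discovered
esxcfg-rescan vmhba32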


Figure 95

Add Storage - Current Disk Layout

9. The entire disk space for the storage configuration is automatically presented. Click Next. The Properties dialog box appears.


Figure 96

Add Storage - Properties

10. Type the datastore name, and then click Next. The Disk/LUN Formatting dialog box appears.

Figure 97


Add Storage - Disk/LUN - Formatting


11. Maintain the default selections, and then click Next. The Ready to Complete dialog box appears. Note: The block size of the VMware file system influences the maximum size of a single file on the file system. The default block size (1 MB) should not be changed unless a virtual disk larger than 256 GB has to be created on that file system. However, unlike other file systems, VMFS-3 is a self-tuning file system that changes the allocation unit based on the size of the file that is being created. This approach reduces wasted space commonly found in file systems with an average file size smaller than the block size.

Figure 98

Add Storage - Ready to Complete


12. Click Finish. The VMFS datastore is now created on the selected device.

3.8.4 Create RDM volumes on ESX servers RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical device, which is a SCSI device used directly by a virtual machine. The RDM contains metadata to manage and redirect disk access to the physical device. An RDM volume provides a virtual machine with direct access to a LUN in Celerra. This is unlike a VMFS datastore where the virtual machine really accesses a file in the VMFS datastore. With RDM, the virtual machine and the applications running in it can be configured knowing that its virtual disk is a one-to-one mapping to a physical LUN in Celerra. An RDM volume enables applications that need access to a physical LUN, such as SAN management applications and virtual-to-physical clustering, to be run in a virtual machine. Previously, it was regarded that an RDM volume will have better performance than a VMFS datastore. However, VMware has shown that there is no significant performance difference from using an RDM volume. VMware, therefore, does not recommend using an RDM volume unless it is required by the applications, when access to the storage would be configured and managed centrally within ESX for all virtual machines. RDM volumes are created on ESX servers by presenting LUNs to the ESX server and then adding the raw LUN through the virtual machine's Edit Settings option. To create a RDM volume on ESX: 1. Create a virtual machine. 2. Configure RDM on the virtual machine. 3.8.4.1 Create a virtual machine To create and configure a virtual machine: 1. Log in to the VMware vSphere Client or the VMware Infrastructure Client.


Figure 99

New Virtual Machine option

2. Right-click the host and select New Virtual Machine. The Create New Virtual Machine wizard appears.


Figure 100 Create New Virtual Machine

3. Select Custom, and then click Next. The Name and Location dialog box appears.


Figure 101 Create New Virtual Machine - Name and Location

4. Type the name and select the location for the new virtual machine, and then click Next. The Datastore dialog box appears.


Figure 102 Create New Virtual Machine - Datastore

5. Select the datastore where the new virtual machine must be created, and then click Next. The Virtual Machine Version dialog box appears.


Figure 103 Create New Virtual Machine - Virtual Machine Version

6. Select Virtual Machine Version: 7 if the following conditions are fulfilled:
• If the virtual machine created will run on ESX server version 4 and later and VMware Server 2.0.
• If the virtual machine should have any of the latest VMware vSphere virtual machine features.
• If the virtual machine does not need to be migrated to ESX 3.

Note: This option is not available in VMware Infrastructure.

7. Click Next. The Guest Operating System dialog box appears.


Figure 104 Create New Virtual Machine - Guest Operating System

8. Choose the desired compatibility mode for the RDM volume, virtual or physical:
• Virtual compatibility mode for an RDM specifies full virtualization of the mapped device. It appears to the guest operating system exactly the same as a virtual disk file in a VMFS volume. The real hardware characteristics are hidden. Virtual mode allows customers using raw disks to realize the benefits of VMFS, such as advanced file locking for data protection and snapshots for streamlining development processes. Virtual mode is also more portable across storage hardware than physical mode, presenting the same behavior as a virtual disk file.
• Physical compatibility mode for an RDM specifies minimal SCSI virtualization of the mapped device, allowing the greatest flexibility for SAN management software. In physical mode, the VMkernel passes all SCSI commands to the device, with one exception: the REPORT LUNs command is virtualized, so that the VMkernel can isolate the LUN for the owning virtual machine. Otherwise, all physical characteristics of the underlying hardware are exposed. Physical mode is useful to run SAN management agents or other SCSI target-based software in the virtual machine. Physical mode also allows virtual-to-physical clustering for cost-effective high availability.
9. Select the Guest Operating System, and then click Next. The CPUs dialog box appears.

Figure 105 Create New Virtual Machine - CPUs

10. Select the number of processors in the virtual machine, and then click Next. The Memory dialog box appears.


Figure 106 Create New Virtual Machine - Memory

11. Type or select the virtual machine's memory size, and then click Next. The Network dialog box appears.


Figure 107 Create New Virtual Machine - Network

12. Select the type of network connection the virtual machine will use. Select the number of NICs to connect. Click Next. The SCSI Controller dialog box appears.


Figure 108 Create New Virtual Machine - SCSI Controller

13. Select the SCSI controller, and then click Next. VMware Infrastructure only has the BusLogic Parallel and LSI Logic Parallel SCSI controllers available. The Select a Disk dialog box appears.


Figure 109 Create New Virtual Machine - Select a Disk

14. Select the type of disk based on the following: • If the virtual machine is booted from the RDM volume, select Do not create disk, and then click Next. The Ready to Complete dialog box appears (Step 16). • If the boot disk is created in the datastore and the RDM volume is added as an additional disk, select Create a new virtual disk (Figure 110 on page 194).


Figure 110 Create New Virtual Machine - Select a Disk

15. Click Next. The Create a Disk dialog box appears.


Figure 111 Create New Virtual Machine - Create a Disk

16. Specify the Disk Size and the Disk Provisioning policy, and then click Next. The Advanced Options dialog box appears.


Figure 112 Create New Virtual Machine - Advanced Options

17. Maintain the default values, and then click Next. The Ready to Complete dialog box appears.


Figure 113 Create New Virtual Machine - Ready to Complete

18. Click Finish to create the virtual machine.

3.8.4.2 Configure an RDM volume
To configure an RDM volume on a virtual machine:

Note: This configuration also applies to VMware Infrastructure.


1. Log in to the VMware vSphere Client or the VMware Infrastructure Client.
2. Create a virtual machine using the Create New Virtual Machine wizard. Section 3.8.4.1, "Create a virtual machine," on page 182 provides details about the procedure.

Figure 114 Edit Settings


3. Right-click the virtual machine, and then select Edit Settings. The Virtual Machine Properties dialog box appears.

Figure 115 Virtual Machine Properties

4. Click Add. The Add Hardware wizard appears. This wizard allows users to add raw device mapping to a virtual machine.


Figure 116 Add Hardware

5. Select Hard Disk, and then click Next. The Select a Disk dialog box appears. Note: Because these are raw devices, VMware file systems do not exist on these LUNs.


Figure 117 Add Hardware - Select a Disk

6. Select Raw Device Mappings, and then click Next. The Select and Configure a Raw LUN dialog box appears.


Figure 118 Add Hardware - Select and Configure a Raw LUN

7. Select the target LUN, and then click Next. The Select a Datastore dialog box appears.

Figure 119 Add Hardware - Select a Datastore


8. Select the VMware file system that hosts the mapping file for the RDM, and then click Next. The Advanced Options dialog box appears.

Figure 120 Add Hardware - Advanced Options

9. Maintain the default value for the Virtual Device Node, and then click Next. The Ready to Complete dialog box appears.


Figure 121 Add Hardware - Ready to Complete

10. Verify the settings, and then click Finish to create the volume. Ensure that the RDM volumes are aligned for application data volumes.
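An RDM mapping file can also be created from the service console with vmkfstools. This is a sketch only; the device identifier, datastore, and file names are placeholders:

# Create a virtual compatibility mode RDM mapping file for a raw LUN
vmkfstools -r /vmfs/devices/disks/<device identifier> /vmfs/volumes/<VMFS datastore>/<VM folder>/<VM name>_rdm.vmdk
# Use -z instead of -r to create the mapping in physical compatibility mode
vmkfstools -z /vmfs/devices/disks/<device identifier> /vmfs/volumes/<VMFS datastore>/<VM folder>/<VM name>_rdmp.vmdk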


3.9 Introduction to using Fibre Channel storage

Note: The following section includes an introduction to provisioning Fibre Channel storage from Celerra unified storage to VMware vSphere and VMware Infrastructure. As such, this section highlights the steps to provision such storage. The Using EMC CLARiiON Storage with VMware vSphere and VMware Infrastructure TechBook, available on Powerlink, provides further details and considerations for using Celerra Fibre Channel storage with VMware vSphere and VMware Infrastructure.

EMC Celerra unified storage is based on a CLARiiON storage system consisting of two storage processors (SPs). To leverage the FC storage, the front-end ports on the CLARiiON SPs can be connected to an FC SAN switch or directly connected to FC HBAs on the ESX host. The prerequisite to configure an FC LUN with EMC Celerra and VMware vSphere or VMware Infrastructure is that the FC zoning process must be completed. Zoning is required for security and management of the FC fabric. The host can access the storage only after zoning. The zoning described in this section, called Name Zoning, is based on the World Wide Name (WWN) of the ports.

3.9.1 Create LUNs and add them to a storage group
To create a storage device and add it to ESX:

◆ Create a RAID group

◆ Create LUNs from a RAID group

◆ Create a storage group

◆ Connect hosts

◆ Add LUNs to a storage group

3.9.2 Create a RAID group

Note: RAID 1/0 is used for the FC configuration.

To create a RAID group: 1. In Navisphere Manager, right-click the RAID Group, and then click Create RAID Group.


Figure 122 Create a Storage Pool

The Create Storage Pool dialog box appears.


Figure 123 Create Storage Pool

2. Select the Storage Pool ID and RAID Type. Select Manual, and then click Select. The Disk Selection dialog box appears.


Figure 124 Disk Selection

3. From the Available Disks area, select the required disks for the RAID type. It is recommended to use the disks on the same bus in consecutive order. Click OK to complete the disk selection. The selected disks appear in the Selected Disks area. 4. Click Apply. The RAID group is created.


3.9.2.1 Create LUNs from a RAID group
To create LUNs from a RAID group:
1. In Navisphere Manager, right-click the RAID group on which the LUNs must be created, and then click Create LUN.

Figure 125 Create LUNs from a RAID group


The Create LUN dialog box appears.

Figure 126 Create LUN

2. Select the RAID Type, User Capacity, LUN ID, and Number of LUNs to create, and then click Apply. A confirmation message appears.

Figure 127 Confirm: Create LUN

3. Click Yes. A confirmation message appears.


Figure 128 Message: Create LUN - LUN created successfully

4. Click OK. A LUN is created.
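Alternatively, a LUN can be bound from the CLI with Navisphere Secure CLI. This is a hedged example only; the SP address, LUN ID, RAID group ID, and capacity are assumptions:

naviseccli -h <SP IP address> bind r1_0 <LUN ID> -rg <RAID group ID> -cap <capacity> -sq gb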


3.9.2.2 Create a storage group
To create a storage group that is connected to an ESX host:
1. In Navisphere Manager, right-click Storage Groups, and then click Create Storage Group.

Figure 129 Create Storage Group

The Create Storage Group dialog box appears.

Figure 130 Create Storage Group


2. Type an appropriate name for the storage group, and then click OK. A confirmation dialog box appears. Note: The name refers to the domain name of the ESX server.

Figure 131 Confirm: Create Storage Group

3. Click Yes. A confirmation message appears.

Figure 132 Success: Create Storage Group

4. Click OK.
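The equivalent Navisphere Secure CLI command is shown below as a hedged sketch; the SP address and storage group name are assumptions:

naviseccli -h <SP IP address> storagegroup -create -gname <storage group name>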


3.9.2.3 Connect hosts
To add the host, which will access the LUNs, to the storage group:
1. In Navisphere Manager, right-click the storage group created, and then click Connect Hosts.

Figure 133 Connect Hosts

The Storage Group Properties dialog box appears.


Figure 134 Select a host for the storage group

2. From the Available Hosts area, select the appropriate host and click the arrow to move the host into the Hosts to be Connected area. Click OK. A confirmation message appears.

Figure 135 Confirm the connected host

3. Click Yes. A confirmation message appears.


Figure 136 Connect Host operation succeeded

4. Click OK. Another confirmation message appears when the hosts are added successfully to the storage group.
3.9.2.4 Add LUNs to the storage group
The host can access the required LUNs only when the LUN is added to the storage group that is connected to the host. To add LUNs to the storage group:
1. In Navisphere Manager, right-click the storage group, and then click Select LUNs.


Figure 137 Select LUNs for the storage group

The Storage Group Properties dialog box appears.


Figure 138 Select LUNs

2. From the Available LUNs area, select the LUNs that must be added, and then click Apply. The selected LUNs are added to the Selected LUNs area.


3. Click OK. A confirmation message appears.

Figure 139 Confirm addition of LUNs to the storage group

4. Click Yes. Another confirmation message appears.

Figure 140 Successful addition of LUNs to the storage group

5. Click OK. The LUNs are added successfully.
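The connect host and add LUN operations can also be performed with Navisphere Secure CLI. The following is a hedged sketch; the SP address, host name, storage group name, and LUN IDs are assumptions (-hlu is the host LUN ID presented to ESX and -alu is the array LUN ID created earlier):

naviseccli -h <SP IP address> storagegroup -connecthost -host <ESX host name> -gname <storage group name> -o
naviseccli -h <SP IP address> storagegroup -addhlu -gname <storage group name> -hlu <host LUN ID> -alu <array LUN ID>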

3.9.3 Present the LUN to VMware vSphere or VMware Infrastructure
To present the LUN to VMware vSphere or VMware Infrastructure, complete the configuration steps on Celerra and then go to the Storage Adapters view on vCenter Server. Scan the FC adapter to identify the LUN that is configured for this ESX server host. Subsequently, complete the following steps:
1. In the Hardware area of the Configuration tab, select Storage Adapters, and then click Rescan in the Storage Adapters area.


Note: Alternatively, you can right-click the selected storage adapter, and then click Rescan.

Figure 141 Rescan FC adapter

The Rescan dialog box appears. 2. To discover new LUNs, select Scan for New Storage Devices. To discover new datastores or to update a datastore after its configuration has been changed, select Scan for New VMFS Volumes.

Figure 142 Rescan dialog box


After rescan is completed, the FC LUN is added to the storage.

Figure 143 FC LUN added to the storage
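The rescan can also be triggered from the ESX service console. This is a hedged example; the adapter name (vmhba2) is an assumption and should be replaced with the FC adapter name shown in the Storage Adapters view:

esxcfg-rescan vmhba2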


3.10 Virtual machine considerations
When using Celerra NFS and iSCSI storage, consider the following items to help achieve optimal performance and functionality in virtual machines:
◆ Virtual machine disk partitions alignment
◆ Virtual machine swap file location
◆ Guest operating system SCSI timeout settings
◆ Paravirtual SCSI adapter (PVSCSI)

3.10.1 Virtual machine disk partitions alignment
The alignment of disk partitions in a virtual machine can greatly affect its performance. A misaligned disk partition in a virtual machine may lead to degraded overall performance for the applications running in the virtual machine. The best practice from VMware is to align virtual machine partitions. Recommendations for Aligning VMFS Partitions — VMware Performance Study, available on the VMware website, provides more information about alignment. Furthermore, Microsoft recently recommended aligning disk partitions to 1 MB track boundaries for most Windows systems in cases that are applicable when using shared storage such as Celerra (refer to Microsoft TechNet article 929491). For optimal overall performance, EMC also recommends aligning virtual machines that are deployed over Celerra NFS and iSCSI. The following recommendations should therefore be considered:
◆ Create the datastore by using the VMware vSphere Client or the VMware Infrastructure Client instead of using the CLI.
◆ The benefits of aligning boot partitions are generally marginal. It is more important to align the app/data disk partitions that sustain the heaviest I/O workload. If there is only a single virtual disk, consider adding an app/data disk partition.
◆ Align application/data disk partitions to a 1 MB disk boundary in both Windows and Linux.
Note: This step is not required for Windows 2008, Windows Vista, or Windows 7, where disk partitions are aligned to 1 MB by default.
◆ For Windows app/data partitions, use the allocation unit size recommended by the application. Use a multiple of 8 KB if no allocation unit size is recommended.
◆ For NFS, consider using the uncached option on Celerra (Section 3.6.1.1, “Celerra uncached write mechanism,” on page 109). This can be particularly helpful with random workloads that contain writes. The uncached option can also help with Linux data partitions and with Windows data partitions that were formatted with a 4 KB allocation unit size.

Note: The procedures in this section are also applicable to VMware Infrastructure.

3.10.1.1 Datastore alignment
Datastore alignment refers to aligning the datastore in the storage location on which it is configured. A NAS datastore that is configured on a Celerra file system is aligned by default. A VMFS datastore that is created using the VMware vSphere Client or the VMware Infrastructure Client is also aligned by default.
3.10.1.2 Virtual machine alignment
Virtual machine alignment refers to aligning the virtual machine disk partitions to 64 KB track boundaries. VMware recommends this for VMFS data partitions to reduce latency and increase throughput. Furthermore, Microsoft recently recommended aligning virtual machine partitions to 1 MB track boundaries for most Windows systems in cases that are applicable when using shared storage such as Celerra (see Microsoft TechNet article 929491).
3.10.1.3 Align virtual machines provisioned from Celerra storage
To be used with Celerra storage, Windows virtual machines and Linux virtual machines must be aligned. This section explains the alignment procedures.
Aligning Windows virtual machines
Note: This step is not required for Windows 2008, Windows Vista, or Windows 7 because on these newer operating systems, partitions are created on 1 MB boundaries by default for disks larger than 4 GB (64 KB for disks smaller than 4 GB).


To create an aligned data partition, use the diskpart.exe command. This example assumes that the data disk to be aligned is disk 1: 1. At the command prompt, type diskpart.

Figure 144 Command prompt - diskpart

2. Type select disk 1.

Figure 145 Select the disk

3. Type the create partition primary align=1024 command to create a partition to align to a 1 MB disk boundary.

Figure 146 Create a partition with a 1 MB disk boundary

4. Type Exit.
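The interactive steps above can also be scripted. The following is a minimal sketch, assuming the data disk is disk 1 and that a script file named align_disk1.txt (a hypothetical name) contains the two diskpart commands:

diskpart /s align_disk1.txt

where align_disk1.txt contains:
select disk 1
create partition primary align=1024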


Set the allocation unit size of a Windows partition
Windows Disk Manager is used to format an NTFS data partition with an allocation unit size of 8 KB. If the application recommends another value, use that value instead. To set the allocation unit size of a Windows partition:
1. Right-click My Computer on the desktop, and then select Manage. The Computer Management dialog box appears.
2. In the left pane, select Disk Management. The disks are displayed in the right pane.
3. Select the unformatted disk, right-click, and then select Format. The Format dialog box appears.

Figure 147 Computer Management

4. Select the allocation unit size as 8192 (8 KB), and then click OK. A confirmation message appears. 5. Click OK.


Align Linux virtual machines
Use the fdisk command to create an aligned data partition:
1. At the command prompt, type fdisk /dev/sd<x>, where <x> is the device suffix.
2. Type n to create a new partition.
3. Type p to create a primary partition.
4. Type 1 to create partition number 1.
5. Select the defaults to use the complete disk.
6. Type t to set the partition's system ID.
7. Type fb to set the partition system ID to fb.
8. Type x to go into expert mode.
9. Type b to adjust the starting block number.
10. Type 1 to choose partition 1.
11. Type 2048 to set the starting block number to 2048 for a 1 MB disk partition alignment.
12. Type w to write the label and partition information to disk.
3.10.1.4 Identify the alignment of virtual machines
The following section explains the procedures to identify the Windows and Linux virtual machine alignment. To check whether a Windows virtual machine is aligned:
1. From the Start menu, select Programs > Accessories > System Tools > System Information. The System Information dialog box appears.
2. Select Components > Storage > Disks. The right pane lists information about all the configured disks.


Figure 148 NTFS data partition alignment (Windows system Information)

3. Scroll the list to locate the information for the data disk. The Partition Starting Offset information for the data disk should display 1,048,576 bytes to indicate alignment to a 1 MB disk boundary. An alternative command line based method to check if the virtual machine is aligned is to type wmic partition get StartingOffset, Name at the command prompt. The partition starting offset is displayed.


Figure 149 NTFS data partition alignment (wmic command)

3.10.1.5 Partition allocation unit size
To identify the allocation unit size of an existing data partition, use the fsutil command. In the following example, the E drive is the NTFS data partition that is formatted with an allocation unit size of 8 KB. At the command prompt, type fsutil fsinfo ntfsinfo E:. The details appear. The Bytes Per Cluster field shows the allocation unit size of the data partition in bytes.

Figure 150 Allocation unit size of a formatted NTFS data partition

3.10.1.6 Identify Linux virtual machine alignment
To identify the current alignment of an existing Linux data partition, use the fdisk command. In the following example, /dev/sdb is the data partition that was configured on a Linux virtual machine. In the terminal session, type fdisk -lu /dev/sdb. The results are displayed.


Figure 151 Output for a Linux partition aligned to a 1 MB disk boundary (starting sector 2048)

The unaligned disk shows the starting sector as 63.

Figure 152 Output for an unaligned Linux partition (starting sector 63)

3.10.2 Virtual machine swap file location
When a virtual machine is powered on, a corresponding swap file is created. The virtual machine can power on only when the swap file is available. With both VMware Infrastructure and VMware vSphere, the swap file of a virtual machine is placed by default in the same location as the virtual machine configuration file (.vmx file). Nevertheless, ESX provides the option to place the swap file in another datastore or in the local storage of the ESX host.
For optimum performance, an ESX server uses the balloon approach whenever possible to reclaim memory that is considered least valuable by the guest operating system. However, swapping is used when the balloon driver is temporarily unable to reclaim memory quickly enough to satisfy current system demands. The balloon driver, also known as the vmmemctl driver, collaborates with the ESX server to reclaim memory that is considered least valuable by the guest operating system. The balloon driver may be unavailable either because VMware Tools is not installed or because the driver has been disabled or is not running (for example, while the guest operating system is booting). The balloon driver essentially acts like a native program in the operating system that requires memory. The driver uses a proprietary ballooning technique that provides predictable performance, which closely matches the behavior of a native system under similar memory constraints. This technique effectively increases or decreases memory pressure on the guest operating system, causing the guest to invoke its own native memory management algorithms. Swapping is a reliable mechanism of last resort that a host uses only when necessary to reclaim memory. Standard demand-paging techniques swap pages back in when the virtual machine needs them. Such swapping is done for each virtual machine on a specific swap file.
The recommended configuration for the swap file location is to place the virtual machine's swap file on a high-speed/high-bandwidth storage system that results in minimal performance impact. It is important to note that placing the swap file in the local storage does not limit the ability to perform vMotion on the virtual machine. Furthermore, because this file contains only dynamic information that is relevant only to the current run of the virtual machine, there is no need to protect this file. Also, network usage is reduced by 6 percent to 12 percent when the swap file is placed in the local storage. Therefore, it is recommended to place the virtual machine swap space on the local storage because it offers backup and replication storage savings. Virtual machine swap data is the part of the virtual machine that does not need to be backed up or replicated.
Using the local storage of the ESX host for placing the swap file can affect DRS load balancing and HA failover in certain situations. While designing an ESX environment that places the swap file on the local storage of the ESX host, some areas must be focused on to guarantee HA and DRS functionality. Moreover, vMotion performance may be affected because of copying swap files from one host's local storage to another host's local storage. When using the host local storage swap setting to store the virtual machine swap files, the following factors must be considered:

◆ Number of ESX hosts inside the cluster
◆ HA configured host failover capacity
◆ Number of active virtual machines inside the cluster
◆ Consolidation ratio (virtual machines per host)
◆ Average swap file size
◆ Free disk space on local VMFS datastores

Using host local swap can be a valid option for some environments, but additional calculation of the factors mentioned above is necessary to ensure sustained HA and DRS functionality.
To adjust the swap file location for virtual machines running on a specific ESX host:
1. Log in to the VMware vSphere Client or the VMware Infrastructure Client and select the server from the Inventory area.
2. Click Configuration on the ESX server host.

Figure 153 Edit Virtual Machine Swapfile Location

3. Click Virtual Machine Swapfile Location, and then click Edit. The Virtual Machine Swapfile Location dialog box appears.


Figure 154 Virtual Machine Swapfile Location

4. Select Store the swapfile in a swapfile datastore selected below. A list of datastores is presented.


Figure 155 List of datastores

5. Select the local storage of the ESX host where the virtual machine swap files must be placed, and then click OK. 6. Restart the virtual machines. The location of the virtual machine swap file can also be adjusted from the virtual machine advanced parameters tab: To configure the virtual machine advanced parameters, select the server from the Inventory area, select the Configuration tab, and then select Advanced Settings from the Software area.


Figure 156 Advanced Settings

The Advanced Settings dialog box appears.


Figure 157 Mem.Host.LocalSwapDirEnabled parameter

7. Select Mem from the left pane. The parameters are displayed in the right pane.
◆ Mem.Host.LocalSwapDirEnabled: This parameter enables the use of the host-local swap directory. The values for this parameter are 0 (Min) and 1 (Max). If Mem.HostLocalSwapDir is set with the directory path of the local storage, this parameter value must be set to 1. For performance testing, the value of this parameter must be 1 to place the swap file on the local storage and 0 to place the swap file along with the virtual machine on the deployed NFS storage.
◆ Mem.HostLocalSwapDir: This parameter allows specifying the host-local directory for the virtual machine swap file. Updating this parameter allows the administrator to set the swap file location manually and also control settings for the swap memory directory location. Changing this parameter value can help fine-tune the running of virtual machines. For example, the parameter value for Mem.HostLocalSwapDir for local storage can be /vmfs/volumes/48c935cc-9fae65b6-3e7d-001ec9e34ca0.


Figure 158 Mem.Host.LocalSwapDir parameter
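For reference, these advanced options can also be set from the ESX service console with esxcfg-advcfg. This is a hedged sketch only; it assumes that the advanced option paths match the parameter names shown in the client and that the local VMFS volume path is substituted:

esxcfg-advcfg -s 1 /Mem/HostLocalSwapDirEnabled
esxcfg-advcfg -s /vmfs/volumes/<local VMFS volume> /Mem/HostLocalSwapDir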

3.10.3 Guest OS SCSI timeout settings
For virtual machines with a Windows guest OS, the disk SCSI timeout registry parameter controls the I/O wait time for completion of I/Os. This parameter setting should be tuned to help the virtual machines survive SCSI timeout errors (such as disk, symmpi) and to sustain the guest OS and applications for an extended time during Celerra failure events.
To modify the disk SCSI timeout registry parameter setting:
1. From the Start menu, click Run and type regedit. The Registry Editor appears.


2. In the left pane, double-click HKEY_LOCAL_MACHINE > SYSTEM > CURRENT CONTROL SET > SERVICES > DISK. 3. Right-click TimeOutValue. The Edit DWORD Value dialog box appears.

Figure 159 Edit DWORD Value

4. In the Value data field, type 360. Select Decimal, and then click OK. The DWORD values are updated.
5. Restart the virtual machine. The new parameter is applied.
Section 3.14, ”VMware Resiliency,” on page 315 provides more information about VMware resiliency with EMC Celerra.
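The same registry change can be applied from a command prompt inside the guest, which can be convenient when updating many virtual machines. This is a minimal sketch equivalent to steps 1 through 4 (a restart of the virtual machine is still required):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 360 /f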

3.10.4 Paravirtual SCSI (PVSCSI) adapters
PVSCSI adapters are high-performance storage adapters that can result in greater throughput and lower CPU utilization. They are best suited for environments, especially SAN environments, where hardware or applications drive a very high amount of I/O throughput. PVSCSI adapters are recommended because they offer improved I/O performance, with as much as an 18 percent reduction in ESX 4 host CPU usage. A PVSCSI adapter also reduces the cost of virtual interrupts and batches the processing of I/O requests. With vSphere Update 1, the PVSCSI adapter is supported for both boot and data virtual disks. With Windows 2003 and Windows 2008 guest OS, the PVSCSI adapter was found to improve the virtual machine resiliency during Celerra and storage network failure events.
Paravirtual SCSI adapters are supported on the following guest operating systems:
◆ Windows Server 2008
◆ Windows Server 2003
◆ Red Hat Enterprise Linux (RHEL) 5
Paravirtual SCSI adapters have the following limitations:
◆ Hot-add or hot-remove requires a bus rescan from within the guest.
◆ Disks with snapshots might not experience performance gains when used on Paravirtual SCSI adapters or if memory on the ESX host is overcommitted.
◆ If RHEL 5 is upgraded to an unsupported kernel, data might not be accessible from the virtual machine's PVSCSI disks. Run vmware-config-tools.pl with the kernel-version parameter to regain access.
◆ Because the default type of newly hot-added SCSI adapter depends on the type of primary (boot) SCSI controller, hot-adding a PVSCSI adapter is not supported.
◆ Booting a Linux guest from a disk attached to a PVSCSI adapter is not supported. A disk attached using PVSCSI can be used as a data drive, not a system or boot drive. Booting a Microsoft Windows guest from a disk attached to a PVSCSI adapter is not supported in versions of ESX prior to ESX 4.0 Update 1.

Refer to the VMware KB article #1010398 for further details.
To configure a disk to use a PVSCSI adapter:
Note: This PVSCSI configuration does not apply to VMware Infrastructure.

1. Launch a vSphere Client and log in to an ESX host.


2. Select a virtual machine, or create a new one. Section 3.8.4, “Create RDM volumes on ESX servers,” on page 182 provides information about creating a virtual machine. 3. Ensure that a guest operating system that supports PVSCSI is installed on the virtual machine. Note: Booting a Linux guest from a disk attached to a PVSCSI adapter is not supported. Booting a Microsoft Windows guest from a disk attached to a PVSCSI adapter is not supported in versions of ESX before ESX 4.0 Update 1. In these situations, the system software must be installed on a disk attached to an adapter that supports the bootable disk.

4. In the vSphere Client, right-click the virtual machine, and then click Edit Settings.

Figure 160 Edit Settings for the virtual machine

The Virtual Machine Properties dialog box appears.


Figure 161 Virtual Machine Properties

5. Click Hardware, and then click Add. The Device Type dialog box appears.

Figure 162 Add Hardware


6. Select Hard Disk, and then click Next. The Select a Disk dialog box appears.

Figure 163 Select a Disk


7. Select Create a new virtual disk, and then click Next. The Create a Disk dialog box appears.

Figure 164 Create a Disk

8. Select the Disk Size, Disk Provisioning, and Location for the disk. Click Next. The Advanced Options dialog box appears.


Figure 165 Advanced Options

9. Select a Virtual Device Node between SCSI (1:0) to SCSI (3:15). Select the Mode, and then click Next. The Ready to Complete dialog box appears.


Figure 166 Ready to Complete

10. Click Finish. A new disk and controller are created. 11. Select the new controller, and then click Change Type.

Figure 167 Change the SCSI controller type


The Change SCSI Controller Type dialog box appears.

Figure 168 Change SCSI Controller Type

12. Select VMware Paravirtual, and then click OK. The SCSI controller type changes.


Figure 169 Virtual Machine Properties

13. Click OK. The Virtual Machine Properties dialog box closes.
14. Power on the virtual machine.
15. Install VMware Tools. VMware Tools includes the PVSCSI driver.
16. Power on the virtual machine and select Start > Programs > Administrative Tools > Computer Management.
17. From the right pane, select Disk Management from the Storage menu.
18. Right-click the newly added disk in the left pane.
19. Scan and format the hard disk.


Figure 170 Disk Management


3.11 Monitor and manage storage
When using Celerra storage with VMware vSphere and VMware Infrastructure, monitor the storage resource utilization at the Celerra and vCenter Server levels. When Celerra Virtual Provisioning is used in conjunction with vSphere thin provisioning, it is critical to monitor the storage utilization by using Celerra notifications and datastore alarms to prevent an accelerated out-of-space condition. Datastore alarms are only available in VMware vSphere. At the vCenter level, create datastore alarms for the corresponding datastores to monitor their utilization. This section explains how to configure Celerra notifications and vSphere alarms.

3.11.1 Celerra file system notification
Use Celerra notifications to monitor Celerra file systems used for NAS or VMFS datastores. Celerra notifications can be used for storage usage and for storage projections. Notifications are actions that the Control Station takes to respond to particular events. Some examples of notifications are e-mail messages or an SNMP trap after a hardware failure. Resource notifications are based on resource usage parameters that the user specifies to receive notifications at various stages of usage problems. The user defines the conditions or threshold for an event that triggers a notification. The three types of resource notifications are:
◆ Storage usage
◆ Storage projections
◆ Data Mover load

Users must monitor the space utilization in file systems, storage pools, and virtually provisioned file systems because these can fill up and possibly result in a denial of write access. Notifications can be configured and customized based on the file system, storage pool usage, and time-to-fill predictions. These predictions (also known as projection notifications) can take into account automatic file system extension (with and without specified maximum sizes) and automatic storage pool extension.


Notifications are particularly important as they provide a warning about overprovisioned resources. When an overprovisioned resource reaches its limit, it is considered to be more critical than a regular resource reaching its limit. For example, if an overprovisioned file system that uses a virtually provisioned LUN runs out of space, disk errors occur. If a storage pool is overprovisioned, an automatically extending file system will not be able to automatically extend.
3.11.1.1 Configure storage usage notifications
To configure a notification based on the percentage of the maximum automatically extending file system size used:
1. In Celerra Manager, select Notifications in the left pane. The Notifications page appears.

Figure 171 Notifications


2. Click Storage Usage, and then click New. The New Notification: Storage Usage page appears.

Figure 172 New Notification: Storage Projection

3. Complete the following steps: a. In the Storage Type field, select File System. b. In the Storage Resource list box, select the name of the file system. Note: Notifications can be added for all file systems.

c. In the Notify On field, select Maximum Size.


Note: Maximum Size is the auto-extension maximum size and is valid only for auto-extending file systems.

d. In the Condition field, type the percentage of storage (% Used), and select the % Used from the list. The notification is sent when the file system reaches this value. Note: Select Notify Only If Over-Provisioned to trigger the notification only if the file system is overprovisioned. If this checkbox is not selected, a notification is always sent when the condition is met.

e. Type the e-mail or SNMP address, which consists of an IP address or host name and community name. Separate multiple e-mail addresses or trap addresses with commas. f. Click OK. The configured notification is displayed in the Storage Usage page.

Figure 173 Storage Usage


Configure Celerra notifications - Storage projections
To configure notifications based on the projected time to fill the maximum automatically extending file system size:
1. In Celerra Manager, select Notifications in the left pane. The Notifications page appears.

Figure 174 Notifications page


2. Click Storage Usage, and then click New. The New Notification: Storage Usage page appears.

Figure 175 New Notification: Storage Projection

3. Complete the following steps: a. In Storage Type field, select File System. b. In the Storage Resource list box, select the name of the file system. Note: Notifications can be added for all file systems.

c. In the Warn Before field, type the number of days to send the warning notification before the file system is projected to be full.


Note: Select Notify Only If Over-Provisioned checkbox to trigger this notification only if the file system is overprovisioned. If this checkbox is not selected, a notification is always sent when the condition is met.

d. Specify optional e-mail or SNMP addresses, which consist of an IP address or host name and community name. Multiple e-mail addresses or trap addresses must be separated by commas. e. Click OK.

Figure 176 Notifications page

3.11.2 vCenter Server storage monitoring and alarms
Alarms are notifications that occur in response to selected events, conditions, and states of the objects in the inventory. A vSphere Client connected to a vCenter Server can be used to create and modify alarms.
Note: This is applicable only to VMware vSphere.

3.11.2.1 Create datastore alarms
Datastore alarms can be set for an entire data center, host, or a single datastore. To create a datastore alarm:
1. From vCenter Server, select the host, click Configuration, and then click Storage. The list of datastores appears.


Figure 177 List of Datastores

2. Right-click the required datastore, and select Alarm > Add Alarm. The Alarm Settings dialog box appears.

Figure 178 General tab


3. Click General and complete the following steps:
a. In the Alarm name field, type the name of the alarm.
b. In the Description field, type the description of the alarm.
c. In the Monitor list box, select Datastore.
d. Select Monitor for specific conditions or state, for example, CPU usage or power state.
Note: To disable the alarm, clear this option.

4. Click Triggers and complete the following steps: a. Click Add. b. Select Datastore Disk Usage (%) as the trigger type and set the warning and alert percentages. Note: Multiple trigger types can be added to the same alarm. The Trigger if any of the conditions are satisfied option is selected by default.

Figure 179 Alarm settings


5. Click Actions and complete the following steps: Note: Alarm actions are operations that occur in response to triggered alarms.

a. Click Add. A notification, trap, or a command is added. b. In the third column, select Once or Repeat (to repeat actions) when the alarm changes from normal to warning and from warning to alert.

Figure 180 Actions tab

Note: Similarly, an action can be added when the alarm changes from warning or alert to normal.


3.12 Virtually provisioned storage
Celerra Manager can be used to set up Virtual Provisioning on a file system. To enable Virtual Provisioning on a file system, the Auto Extend Enabled and the Virtual Provisioning Enabled checkboxes must both be selected, as shown in Figure 181. Note that High Water Mark (HWM) is the trigger point at which the Celerra Network Server extends the file system. The default is 90 percent. Maximum Capacity (MB) is the maximum size to which the file system can grow, and it is the virtual size seen by the ESX server.

Figure 181 Create virtually provisioned NFS file system

After the virtually provisioned file system is created, it is presented to the ESX server with its maximum capacity.


3.12.1 Configure a NAS datastore on a virtually provisioned NFS file system
A NAS datastore can be created on a virtually provisioned Celerra file system. The virtually provisioned NFS export appears on the ESX server as a datastore. The NAS datastore capacity that appears in the vCenter Server is the file system's maximum capacity assigned when creating the Celerra virtually provisioned file system.

Figure 182 NAS datastore in vCenter Server

Note: The ESX server is unaware of the file system's allocated capacity.

3.12.2 Considerations to use Virtual Provisioning over NFS
Consider the following points when using Virtual Provisioning with VMware vSphere or VMware Infrastructure over NFS:
◆ Additional virtual machines can be created on the datastore even when the aggregated capacity of all their virtual disks exceeds the datastore size. Therefore, it is important to monitor the utilization of the Celerra file system to address, in advance, any upcoming storage shortage.
◆ The Virtual Provisioning characteristics of the virtual disk of an affected virtual machine are preserved for the following operations:
• Virtual machine creation and Windows/Linux guest OS installation
• Virtual disk extension using the vmkfstools CLI utility
• Virtual machine cloning and virtual disk extension using VMware Converter
◆ All Celerra-based operations to manipulate the virtual machine storage also preserve the virtual-provisioning characteristics of the virtual disk of an affected virtual machine. These operations are:
• NAS datastore extension using Celerra file system extension
• NAS datastore cloning using Celerra SnapSure
• NAS datastore replication using Celerra Replicator
◆ For VMware vSphere and VMware Infrastructure (with vCenter Server v2.5 Update 6 and ESX v3.5 Update 5), Virtual Provisioning characteristics of the virtual disk of an affected virtual machine are preserved for the following VMware operations:
• Virtual machine cloning using vCenter Server (including cloning from a virtual machine and from a template)
• Offline virtual machine migration (such as cold migration) using vCenter Server
• Online virtual machine migration (such as hot migration) using Storage vMotion
Note: For VMware Infrastructure, it is important to note that even with the above VMware Infrastructure release installed, this will not be the default behavior. To ensure that the Virtual Provisioning characteristics of the virtual disk will be preserved, some manual configuration is required on ESX (for using the zeroedthick formatting policy). Refer to VMware KB article #1017666 for details.
However, for previous VMware Infrastructure releases, this is not the case. These operations will result in the virtual disk becoming fully provisioned or thick (the allocated capacity of the virtual disk will become equal to the maximum capacity specified during the creation of the virtual disk). With these previous VMware Infrastructure releases, the VMware vCenter Converter can be used for virtual machine cloning instead of native cloning. To do this, extend or shrink the disk by at least 1 MB to preserve the virtual provisioning characteristics of the virtual disk.


3.12.3 Create a virtually provisioned iSCSI LUN
A virtually provisioned iSCSI LUN should be created on a virtually provisioned file system. Section 3.12.1, “Configure a NAS datastore on a virtually provisioned NFS file system,” provides more information about how to create a virtually provisioned file system. However, instead of using Celerra Manager or the Celerra Manager New iSCSI LUN wizard to create the LUNs, use the following Celerra CLI command to create a virtually provisioned iSCSI LUN on a virtually provisioned file system:
$ server_iscsi <movername> -lun -number <lun_number> -create <target_alias_name> -size <size> -fs <fs_name> -vp yes

Figure 183 Creating the virtually provisioned iSCSI LUN

In this example, a virtually provisioned iSCSI LUN is created on the file system file_vp with a maximum size of 100 GB. After an iSCSI LUN is created, it is presented to the ESX server with its maximum capacity. The iSCSI LUN must be provisioned using Celerra Manager on a file system and added to an iSCSI target. Grant the ESX server, which is connected to the target, access to the iSCSI LUN. Refer to Section 3.8.2, “Add a Celerra iSCSI device/LUN to ESX,” on page 139 to complete the configuration of the iSCSI LUNs.
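For illustration, a hedged example of the command for this case is shown below; the Data Mover name (server_2), LUN number, target alias name, and the size unit syntax are assumptions and should be adjusted to the actual environment:

server_iscsi server_2 -lun -number 10 -create iscsitarget1 -size 100G -fs file_vp -vp yes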

3.12.4 Configure a VMFS datastore on a virtually provisioned iSCSI LUN
A VMFS datastore can be created on a virtually provisioned iSCSI LUN. Section 3.8.3, “Create VMFS datastores on ESX,” on page 174 provides more information about creating a VMFS datastore.


The VMFS datastore capacity that appears on the vCenter Server is the iSCSI LUN capacity assigned while it was created. The ESX server is unaware of the iSCSI LUN allocated capacity.

Figure 184 iSCSI VMFS datastore in vCenter Server

3.12.5 Considerations to use Virtual Provisioning over iSCSI/VMFS
Consider the following points when using Virtual Provisioning with VMware vSphere or VMware Infrastructure over iSCSI/VMFS:
◆ Additional virtual machines cannot be created on the datastore when the aggregated capacity of all their virtual disks exceeds the datastore size. However, the file system on which the iSCSI LUN is created can become full, preventing further allocation to the iSCSI LUN that contains the VMFS datastore. Therefore, to avoid potential data loss, monitor the file system utilization to ensure that the file system has enough space for LUN growth.
◆ The Virtual Provisioning characteristics of the virtual disk of an affected virtual machine are preserved for the following VMware operations:
• Virtual machine creation and Windows/Linux guest OS installation
• Virtual disk extension using the vmkfstools CLI utility
• Virtual machine cloning and virtual disk extension using VMware Converter
◆ All Celerra-based operations to manipulate the virtual machine storage also preserve the virtual-provisioning characteristics of the virtual disk of an affected virtual machine. These operations are:
• VMFS datastore extension using Dynamic iSCSI LUN Extension
• VMFS datastore cloning using Celerra iSCSI snapshots
• VMFS datastore replication using Celerra Replicator
◆ For VMware vSphere and VMware Infrastructure (with vCenter Server v2.5 Update 6 and ESX v3.5 Update 5), Virtual Provisioning characteristics of the virtual disk of an affected virtual machine are preserved for the following VMware operations:
• Virtual machine cloning using vCenter Server (including cloning from a virtual machine and from a template)
• Offline virtual machine migration (for example, cold migration) using vCenter Server
• Online virtual machine migration (for example, hot migration) using Storage vMotion
Note: For VMware Infrastructure, it is important to note that even with this VMware Infrastructure release installed, this will not be the default behavior. To ensure that the Virtual Provisioning characteristics of the virtual disk will be preserved, some manual configuration is required on ESX (for using the zeroedthick formatting policy). Refer to VMware KB article #1017666 for details.
However, for previous VMware Infrastructure releases, this is not the case. These operations will result in the virtual disk becoming fully provisioned or thick (the allocated capacity of the virtual disk will become equal to the maximum capacity specified during the creation of the virtual disk). With these previous VMware Infrastructure releases, the VMware vCenter Converter can be used for virtual machine cloning instead of native cloning. To do this, extend or shrink the disk by at least 1 MB to preserve the virtual provisioning characteristics of the virtual disk.

3.12.6 Leverage ESX thin provisioning and Celerra Virtual Provisioning
With VMware vSphere 4, thin provisioning is supported at the virtual disk level for virtual machines.


Virtual machines that are provisioned on a Celerra NAS datastore with a thin provisioned or a normal file system will have thin virtual disks by default. Virtual disk provisioning policy setting for NFS is shown in Figure 185.

Figure 185 Virtual machines provisioned

Virtual machines that are provisioned on a Celerra thin provisioned file system or on a standard iSCSI LUN can be configured to have thin virtual disks as shown in Figure 186. It should be noted that fault tolerance cannot be used on virtual machines with VMFS-based thin virtual disks.


Figure 186 Create a Disk

Thin provisioned virtual disks on thin provisioned Celerra NFS and iSCSI are beneficial because they maximize the storage utilization from using both layers of virtual provisioning (VMFS and Celerra). However, even more than before, such datastores should be monitored for usage by using vCenter datastore alarms and Celerra notifications to prevent an accelerated out-of-space condition. Section 3.11.2, “vCenter Server storage monitoring and alarms,” on page 254 provides more information about how to configure datastore alarms and Celerra notifications.

3.12.7 Virtual storage expansion using Celerra storage
This section describes how to expand a virtual datastore provisioned on a Celerra system. This includes two parts:
◆ Celerra storage expansion (NFS file system and iSCSI LUN)
◆ VMware datastore expansion (NAS datastore, VMFS datastore)

The section also covers cases where it is possible to perform this expansion nondisruptively.


3.12.7.1 Celerra storage expansion
For both NFS file systems and iSCSI LUNs, the load on the affected datastore can continue while the Celerra storage is expanded. All virtual machines that are running from this datastore can remain powered on throughout this operation.
NFS file system expansion
To extend a Celerra NFS file system:
1. Select the NFS file system to be extended, and then click Extend.

Figure 187 File Systems

The Extend File System dialog box appears.


Figure 188 Extend File System

2. In the Extend Size by (MB) field, type the size by which to extend the file system.
Alternatively, the Auto Extend Enabled option can also be selected when creating the file system. When this option is selected, automatic file system extension is enabled on a file system created with AVM. This option is disabled by default. If enabled, the file system automatically extends when the high water mark is reached (the default is 90 percent). Figure 189 shows the New File System dialog box where this option is available.


Figure 189 Auto Extend Enabled
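The same extension can also be performed from the Celerra Control Station CLI. This is a hedged sketch only; the file system name and the size syntax are assumptions:

nas_fs -xtend <file system name> size=10G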

iSCSI LUN expansion
To expand an iSCSI LUN:
1. From Celerra Manager, click the iSCSI link on the left panel, select the LUN tab, and then select the LUN to extend. Click Extend as shown in Figure 190.


Figure 190 iSCSI LUN expansion

The Extend iSCSI LUN dialog box appears.


Figure 191 Extend iSCSI LUN

2. If the underlying file system does not have enough space, expand the file system and then extend the LUN.
3. Type the size to extend the iSCSI LUN, and then click OK.
3.12.7.2 VMware datastore expansion
After the NFS file system or iSCSI LUN is expanded, the VMware datastore can be expanded. NAS datastore expansion and VMFS datastore expansion are explained in the following sections.
NAS datastore expansion with Celerra
To expand the NAS datastore with Celerra:
1. In Celerra Manager, extend the underlying file system as described in Section 3.12.7.1, “Celerra storage expansion,” on page 266.


2. From vCenter Server, review the datastore capacity. For this, select the ESX host from the Inventory area, and click the Configuration tab. Click Storage from the Hardware area, and review the datastore capacity.

Figure 192 Configuration tab

3. Click Refresh. The datastore capacity is updated. Note: In the figure, the capacity is extended by 10 GB.

Figure 193 Data capacity

Note: The load on the NAS datastore can be maintained as before. The virtual machines running on the expanded NAS datastore can also remain powered on.


VMFS datastore expansion with Celerra iSCSI
With VMware vSphere, VMFS volumes provide a new way to increase the size of a datastore that resides on them. If a LUN is increased in size, VMFS Volume Grow enables the VMFS volume to dynamically increase in size as well. With VMFS Volume Grow, the process of increasing the size of the VMFS volume is integrated into the vCenter Server GUI, where the size can be entered in the VMFS Volume Properties dialog box. Provided that additional capacity exists on the existing extent, or the LUN has recently been increased in capacity, the VMFS volume can be expanded dynamically up to the 2 TB limit per LUN. For VMFS volumes that already span multiple extents, VMFS Volume Grow can be used to grow each of those extents up to 2 TB as well.
Add extent
In Celerra Manager, extend the underlying iSCSI LUN as described in the section “iSCSI LUN expansion” on page 268. To extend the VMFS datastore:
1. Rescan the HBA.

Figure 194 iSCSI datastore expansion


The additional space on the LUN is displayed in the vCenter Server after the rescan.

Figure 195 Additional available space

2. After the rescan, select the ESX host from the Inventory area, and then click Configuration. Select Storage from the Hardware pane. All the available datastores are listed in the right pane.

Figure 196 iSCSI datastore expansion


3. Select the appropriate datastore that must be extended, and then click Properties. The test Properties page appears.

Figure 197 Test Properties

4. Click Increase. The Increase Datastore Capacity dialog box appears.

Figure 198 Increase Datastore Capacity

Note: To add a new extent, select the device for which the expandable column reads No. To expand an existing extent, select the device for which the expandable column reads Yes.


5. Select a device from the list of storage devices, and then click Next. The Current Disk Layout dialog box appears.

Figure 199 Disk Layout

6. Review the disk layout, and then click Next. The Extent Size dialog box appears.

Figure 200 Extent Size

7. Clear Maximize capacity, and then select the capacity for the extent. 8. Click Next. The Ready to Complete dialog box appears.


Note: Select the Maximize capacity checkbox to take the maximum extended size in the case of the extended existing device. When extending the LUN with a new added LUN, the maximum available space in the new LUN appears. By default, the Maximize capacity checkbox is selected.

Figure 201 Ready to complete page

9. Review the proposed layout and the new configuration of the datastore, and then click Finish. Note: After growing an extent in a shared VMFS datastore, refresh the datastore on each host that can access this datastore.


However, in VMware Infrastructure, select the appropriate datastore that must be extended, and then click Properties. The iscsidatastore Properties page appears. Click Add Extent. Review the datastore expansion, and then click the Finish button. Also, in VMware Infrastructure, power off or suspend all virtual machines running in the affected VMFS datastore if the same LUN is extended. VMFS datastores can also be extended using another LUN. In that case, the virtual machines do not need to be powered off to complete the Add Extent operation.

Figure 202 Add Extent in VMware Infrastructure


3.13 Storage multipathing
A Celerra iSCSI SAN provides multiple paths between the ESX host and the Celerra storage. This protects against single-point failures and enables load balancing. Pluggable Storage Architecture (PSA) is introduced in vSphere. PSA is a VMkernel layer that coordinates the simultaneous operation of multiple multipathing plug-ins (MPPs). The default plug-in shipped with vSphere is the VMware Native Multipathing plug-in (NMP). The two NMP sub plug-ins are Storage Array Type plug-ins (SATPs) and Path Selection plug-ins (PSPs). The specific details to handle the path failover for a given storage array are delegated to a Storage Array Type Plug-in (SATP). A PSP handles the specific details to determine which physical path is used to issue an I/O request to a storage device. SATPs and PSPs are provided by VMware, and additional plug-ins are provided by third-party vendors. Also, for additional multipathing functionality, a third-party MPP can be used in addition to or as a replacement for NMP. EMC PowerPath/VE for vSphere is the industry's first MPP that supports both EMC arrays and third-party arrays. With VMware Infrastructure, only VMware NMP can be used to provide multiple paths between the Celerra storage and the ESX host. For Celerra NFS, it is possible to design high-availability configurations with multiple paths for scaling the bandwidth with both VMware vSphere and VMware Infrastructure.

3.13.1 Configure VMware NMP with Celerra iSCSI and the ESX iSCSI software initiator
VMware vSphere and VMware Infrastructure have built-in native multipathing (NMP) capabilities for iSCSI. This section explains the process to configure VMware NMP with Celerra iSCSI and the ESX iSCSI software initiator. VMware NMP has three built-in PSPs:
◆ Fixed — This is the policy that is selected by default for Celerra. The preferred path is used for I/O. In case of a preferred path failure, I/O reverts to another available path. I/O reverts to the preferred path after it is restored.
◆ Most Recently Used (MRU) — All I/O uses the available active path. In case of failure, the I/O moves to another available path. The I/O continues on this path even after the original path is restored.
◆ Round Robin — The active path is used for a specified number of I/O operations, and then the next available path is used for the same number of I/O operations. The paths are rotated after reaching the specified number of I/O operations. The Round Robin PSP can be used for Celerra iSCSI to use multiple active paths simultaneously, which effectively increases the available bandwidth between the Celerra iSCSI LUN and the ESX host.


Using vSphere NMP with Celerra iSCSI
To use vSphere NMP with Celerra iSCSI:
1. Create a new iSCSI LUN. Refer to step 10 onwards in Section 3.8.2.1, “ESX iSCSI software initiator,” on page 139.
Note: Make the iSCSI target available on two network interfaces that are on two different subnets.

Figure 203 iSCSI Target Properties

Note: Grant the iSCSI LUN to the ESX software initiator that is connected to the target.

Figure 204 LUN Mask


2. Create two vSwitches and one VMkernel port for each vSwitch. Section 3.6.3, “VMkernel port configuration in ESX,” on page 120 provides more information.
Note: Ensure that the VMkernel ports are on two different subnets, matching those configured on Celerra. One physical NIC should be connected to each vSwitch.

Figure 205 vSwitch configuration
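As a hedged example, the equivalent vSwitch and VMkernel port configuration can be performed from the ESX service console; the vSwitch names, port group names, uplink NICs, and IP addresses below are assumptions:

esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A iSCSI-A vSwitch1
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 iSCSI-A

Repeat the commands with vSwitch2, the second physical NIC, and a VMkernel IP address on the second subnet.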


3. Rescan the ESX iSCSI software initiator to find the Celerra iSCSI LUN and to ensure that the two paths are available.

Figure 206 Rescan

Figure 207 Properties


4. Click Add Storage. The Add Storage wizard appears. 5. Add the new iSCSI LUN to the ESX host. 6. Select the datastore, and then click Properties. The iSCSI_ppve Properties page appears.

Figure 208 iSCSI_ppve Properties

7. Click Manage Paths. The iSCSI Disk Manage Paths dialog box appears.


Figure 209 iSCSI Disk Manage Paths

8. From the Path Selection list, select Round Robin (VMware). The path changes after the specified number of I/O operations.
3.13.1.1 Use hardware iSCSI initiators with vSphere NMP
Hardware iSCSI initiators can also be used with vSphere NMP. The procedure is similar to the ESX software iSCSI initiator configuration as explained in Section 3.8.1.1, “iSCSI HBA and NIC,” on page 138.
3.13.1.2 Change the default number of I/O operations per path in Round Robin
The default number of I/O operations per path before the next path is used is 1000. The following command can be used to increase the speed of switching between paths. The value can be set to 1 for Celerra, so that the paths are switched for each I/O operation. If the switching value is changed to 1, multiple paths are used at the same time. However, this results in some CPU overhead on the ESX host.
esxcli --server <server> nmp roundrobin setconfig --device <device> --iops <number of I/O operations> --type iops
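As a hedged example, the device UID can first be listed and then used with the command above (the output fields and the exact device name depend on the environment):

esxcli nmp device list
esxcli nmp roundrobin setconfig --device <device UID from the list> --iops 1 --type iops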

3.13.2 Multipathing using Microsoft iSCSI Initiator and Celerra iSCSI inside a Windows guest OS
This method allows the Windows guest OS to directly access and manage Celerra iSCSI LUNs. The Microsoft iSCSI initiator provides guest-based multipathing and load balancing through MPIO (or MC/S). Some applications such as virtualized databases, e-mail systems, and clusters benefit from host-based multipathing. This method can be used with both vSphere and VMware Infrastructure.


Multiple connections per session (MCS) and MPIO are the two technologies supported by the Microsoft iSCSI initiator to enable redundancy and load balancing. EMC Celerra supports multiple sessions using Microsoft MPIO and MCS. MCS enables multiple TCP/IP connections from the Microsoft iSCSI initiator to the target for the same iSCSI session. Microsoft MPIO enables the Microsoft iSCSI initiator to log in to multiple sessions to the same target and aggregate the duplicate devices into a single device exposed to Windows. Each session to the target can be established using different NICs, network infrastructure, and target ports. If one session fails, another session can continue the I/O processing without interrupting the application. For MCS, the load-balancing policies apply to connections in a session and to all LUNs exposed in the session. For Microsoft MPIO, the load-balancing policies apply to each LUN individually. Microsoft MPIO must be used when different load-balancing policies are required for different LUNs.
Note: Microsoft does not support the layering of MPIO and MCS, although it is technically possible.

3.13.2.1 Configure Celerra to use Microsoft iSCSI
To configure Celerra to use Microsoft iSCSI:
1. Create a new iSCSI LUN. Refer to step 10 onwards in Section 3.8.2.1, “ESX iSCSI software initiator,” on page 139.
Note: Make the iSCSI target available on two network interfaces that are on two different subnets.


Figure 210 iSCSI Target Properties

2. Grant the iSCSI LUN to the ESX software initiator, which is connected to the target.

Figure 211 LUN Mask

3.13.2.2 Configure ESX and virtual machines
To configure ESX and virtual machines:
1. Add two vSwitches and add physical NICs to the vSwitches.


Note: Ensure that the physical NICs are connected to two different subnets similar to Celerra.

Figure 212 vSwitches

2. Add two virtual NICs to the virtual machine and connect each NIC to a different virtual machine network.

3.13.2.3 Configure Microsoft iSCSI MPIO inside a Windows guest OS using Celerra iSCSI
To configure Microsoft iSCSI MPIO inside a Windows guest OS:
1. Install the latest Microsoft iSCSI initiator along with multipathing support.


2. From the Control Panel, start iSCSI Initiator Properties. The iSCSI Initiator Properties dialog box appears.

Figure 213 iSCSI Initiator Properties

3. Click Discovery. The Discovery dialog box appears.
4. Click Add. The Add Target Portal dialog box appears.
5. Type the Celerra target portal IP addresses from two different subnets (connected through two different switches for network redundancy), and then click OK. The target portals are added. A scripted alternative using iscsicli is sketched below.
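If the same discovery needs to be scripted inside the Windows guest, the Microsoft iscsicli utility can be used. The following is a minimal sketch only; the portal IP addresses are placeholders for the two Celerra interfaces on different subnets:

iscsicli AddTargetPortal 192.168.10.50 3260
iscsicli AddTargetPortal 192.168.20.50 3260
iscsicli ListTargets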


Figure 214 Discovery

6. Click Targets. The Targets dialog box appears.
7. Select the appropriate Celerra target, and then click Log On. The Log On to Target dialog box appears.


Figure 215 Log On to Target

8. Select Automatically restore this connection when the system boots and Enable multi-path, and then click Advanced. The Advanced Settings dialog box appears.

Figure 216 Advanced Settings


9. Select the Source IP and Target Portal for the session, and then click OK. A new session is created.
10. Similarly, create another session by specifying another Source IP and Target Portal.

Figure 217 Advanced Settings

The selected target will have two sessions.

Figure 218 Target Properties


11. Click Devices, and then select the appropriate Celerra disk and click Advanced. The Device Details dialog box appears.

Figure 219 Device Details

12. Click MPIO, and then select the load-balancing policy to view details on the paths available for the device.

3.13.3 Scaling bandwidth of NAS datastores on Celerra NFS
A single NAS datastore uses a single TCP session. Even when multiple links are available, a single NAS datastore still sends its data traffic over a single physical link, because the data flow uses only one TCP session. Therefore, higher aggregate throughput can be achieved by using multiple NAS datastores. This method can be used with both vSphere and VMware Infrastructure.

The use of link aggregation on Celerra and the network switch provides fault tolerance against NIC failures and also enables load balancing between multiple paths. Cross-stack EtherChannel (link aggregation across physical switches) support is required in the physical switch. The switch can be configured for static or dynamic LACP for the Data Mover ports and static link aggregation for the ESX NIC ports. The load-balancing policy on the switch must be set to route based on IP hash for EtherChannel.
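As an illustrative sketch only, the link aggregation device on the Data Mover can also be created from the Celerra CLI; the Data Mover name, virtual device name, and port names below are assumptions for this example:

$ server_sysconfig server_2 -virtual -name lacp0 -create trunk -option "device=cge0,cge1 protocol=lacp"
$ server_sysconfig server_2 -virtual -info lacp0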


Configure multiple paths for a NAS datastore
To configure multiple paths for a NAS datastore:
1. In Celerra Manager, click Network in the left pane. The Network page appears in the right pane.

Figure 220 Network page

2. Click Devices, and then click New. The New Network Device dialog box appears.


Figure 221 New Network Device

3. In the Type field, select Link Aggregation.
4. In the 10/100/1000 ports field, select two Data Mover ports (link aggregation must also be configured on the switches for the corresponding Celerra Data Mover interfaces and ESX host network ports).
5. Click Apply, and then click OK.
6. In the Network page, click Interfaces, and then click New.


Figure 222 Interfaces

The New Network Interface page appears. Note: Ensure that the interface IP addresses are on the same subnet.

Figure 223 New Network Interface


7. Create two Celerra file systems and export them as NFS mounts from the same Data Mover where the interfaces were created. Section 3.7.2, "Create a NAS datastore on an ESX server," on page 133 provides more information about NFS exports.
8. Create a single VMkernel port in a vSwitch on the same subnet, and add two physical NICs to the vSwitch; both NICs must be connected to the same subnet. An illustrative service console command sequence for this step follows Figure 224.

Figure 224 Create a VMkernel port
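The following service console sketch illustrates step 8; the vSwitch name, port group name, uplink names, and IP address are placeholders for this example:

esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic2 vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3
esxcfg-vswitch -A NFS_VMkernel vSwitch3
esxcfg-vmknic -a -i 192.168.30.10 -n 255.255.255.0 NFS_VMkernel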

9. Click Properties. The vSwitch3 Properties dialog box appears.


Figure 225 vSwitch3 Properties

10. Select vSwitch, and then click Edit. The vSwitch3 Properties page appears.

Figure 226 vSwitch3 Properties


11. Click NIC Teaming, and select Route based on IP hash from the Load Balancing list box.
12. Add the NAS datastore using two different Celerra Data Mover interfaces.

Figure 227 Celerra Data Mover interfaces

Virtual machines can be distributed between the two datastores and both the physical links will be used.

3.13.4 VMware vSphere configuration with Celerra iSCSI using PowerPath/VE
3.13.4.1 PowerPath/VE introduction
PowerPath/VE contains advanced features to streamline and automate the I/O performance of VMware vSphere with EMC Celerra, CLARiiON, Symmetrix, and non-EMC arrays. vSphere native multipathing supports basic failover and manual load-balancing policies, whereas PowerPath/VE automates path utilization to dynamically optimize performance and availability. In addition, PowerPath/VE offers dynamic load balancing, auto-restore of paths, automated performance optimization, dynamic path failover, and path recovery. The following figure shows the architecture of PowerPath/VE.


Figure 228 PowerPath architecture

3.13.4.2 PowerPath/VE setup requirements
ESX software iSCSI initiators with NICs, as well as hardware iSCSI initiators, can be used with PowerPath/VE and Celerra iSCSI. The PowerPath/VE for VMware vSphere Installation and Administration Guide 5.4, available on EMC Powerlink, provides information about installing and licensing PowerPath/VE. As prerequisites, the PowerPath remote CLI (rpowermt) and the vSphere vCLI must be installed as part of the PowerPath/VE installation.

Both NMP and PowerPath/VE can exist on the same ESX host to manage the storage available to it, but they cannot simultaneously manage the same storage device connected to the ESX host. Claim rules are used to assign storage devices either to NMP or to PowerPath/VE. After installation, PowerPath/VE claims all Symmetrix, CLARiiON, and supported third-party array devices by default. For Celerra, users must add a new claim rule using the esxcli command from a remote client where the vSphere vCLI is installed.


3.13.4.3 Claim Celerra iSCSI LUNs from PowerPath/VE
To claim Celerra iSCSI LUNs from PowerPath/VE:
1. From the vSphere vCLI, add the following claim rule to the ESX server:
esxcli corestorage claimrule add --plugin="PowerPath" --type=vendor --rule <rule number> --vendor="EMC" --model="Celerra"
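For illustration only, the command might be run as shown below; the ESX host name and the rule number 250 are assumptions, and any unused user-defined rule number can be chosen for the environment:

esxcli --server esx01.example.com corestorage claimrule add --plugin="PowerPath" --type=vendor --rule 250 --vendor="EMC" --model="Celerra"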

Figure 229 Claim rule to ESX server

2. Type the following command to update the kernel and esx.conf:
esxcli corestorage claimrule load

Figure 230 Kernel and esx conf

3. Type the following command to verify that the claim rule loaded successfully:
esxcli corestorage claimrule list

4. Reboot the ESX host.


Figure 231 Rescan the ESX host

3.13.4.4 Configure PowerPath/VE multipathing for Celerra iSCSI using hardware iSCSI initiators
1. Configure the hardware iSCSI initiator as described in Section 3.8.2.2, "ESX iSCSI hardware initiator," on page 154.
2. To create a new iSCSI LUN, refer to step 10 onwards in Section 3.8.2.1, "ESX iSCSI software initiator," on page 139.


Note: The iSCSI target must be made available on two network interfaces, which are on two different subnets.

Figure 232 iSCSI Target Properties

Note: Grant the iSCSI LUN to both hardware initiators (on two different subnets), which are connected to the target.

Figure 233 LUN Mask


Figure 234 Storage Adapters

3. From vCenter Server, select and right-click the HBA, and then click Rescan.
Note: Rescan both hardware HBAs to discover the iSCSI LUN and to make sure a path is available for each HBA port.


4. Click Storage in the left pane. The Storage page appears.

Figure 235 Storage

5. Click Add Storage. The Add Storage wizard appears.

Figure 236 Add Storage wizard


6. Select Disk/LUN, and then click Next. The Select Disk/LUN dialog box appears.

Figure 237 Select Disk/LUN

7. Select the appropriate iSCSI LUN from the list, and then click Next. The Current Disk Layout dialog box appears.

Figure 238 Current Disk Layout

8. Review the current disk layout, and then click Next. The Ready to Complete dialog box appears.


Figure 239 Ready to Complete

9. Review the layout, and then click Finish. The vCenter Server storage configuration page appears.


Figure 240 vCenter Server storage configuration

10. Select the datastore, and then click Properties. The iSCSI_ppve Properties dialog box appears.

Figure 241 iSCSI_ppve Properties


11. Click Manage Paths. The iSCSI Disk Manage Paths dialog box appears.

Figure 242 iSCSI Disk Manage Paths

Note: The vSphere NMP policy cannot be selected because the Path Selection list is not available. Both paths listed in the Paths area are used for active I/O by PowerPath/VE.

PowerPath is managed through a remote CLI using rpowermt. Using rpowermt, the state of the paths and devices can be monitored. The default PowerPath policy for the Celerra iSCSI LUN is adaptive. The other load-balancing policies available are Round Robin, Streaming I/O, Least Block, and Least I/O.

Figure 243 PowerPath

Using rpowermt, the load-balancing and failover policy for devices can be managed. The PowerPath/VE for VMware vSphere Installation and Administration Guide 5.4, available on Powerlink, provides more information.
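As a hedged illustration of the rpowermt workflow, the host name below is a placeholder, and the device value should be replaced with the identifier reported by the display command:

rpowermt display dev=all host=esx01.example.com
rpowermt set policy=rr dev=<device> host=esx01.example.com
rpowermt display dev=<device> host=esx01.example.com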


3.13.4.5 Configure PowerPath/VE for Celerra iSCSI using an ESX software iSCSI initiator and NICs
To configure PowerPath/VE for Celerra iSCSI using an ESX software iSCSI initiator and NICs:
1. Create a new iSCSI LUN. Refer to step 10 onwards in Section 3.8.2.1, "ESX iSCSI software initiator," on page 139.
Note: Make the iSCSI target available on two network interfaces.

Figure 244 iSCSI Target Properties

2. Create two vSwitches with one VMkernel port on each vSwitch.
Note: Ensure that the two VMkernel ports are on two different subnets, matching the Celerra interfaces. One physical NIC should be connected to each vSwitch.


Figure 245 vSwitches

3. Grant the Celerra LUN to the ESX iSCSI software initiator connected to the target.

Figure 246 iSCSI Target Properties


Figure 247 Rescan

4. Rescan the ESX iSCSI software initiator to discover the Celerra iSCSI LUN and ensure that the two paths are available.
5. Add the new iSCSI LUN to the ESX host using the Add Storage wizard as described in Section 3.13.4.4, "Configure PowerPath/VE multipathing for Celerra iSCSI using hardware iSCSI initiators," on page 301.


Figure 248 Add a new iSCSI LUN to the ESX host


6. Select the datastore, and then click Properties. The iSCSI_ppve Properties page appears.

Figure 249 iSCSI_ppve Properties

7. Click Manage Paths. The iSCSI Disk Manage Paths dialog box appears.


Note: The active paths and path owner are visible.

Figure 250 iSCSI Disk Manage Paths


3.14 VMware Resiliency
During Celerra Data Mover outages, customers may face several challenges in production virtual environments, such as application unavailability, guest operating system crashes, data corruption, and data loss. In production environments, the availability of virtual machines is the most important factor in ensuring that data is available when required. Several events affect the availability of virtual machines: a Data Mover panic due to software errors, a Data Mover failover due to connectivity issues that disrupts clients by restarting the services on the standby Data Mover, or a Data Mover reboot due to operations such as Celerra DART upgrades, which results in Data Mover downtime and makes the application unavailable for the duration of the operation.

3.14.1 The rationale for VMware Resiliency
During Celerra failure events, the guest operating system (OS) loses its connection to the NAS datastore created on the Celerra file system, and the datastore becomes inactive and unresponsive to user I/O. Meanwhile, virtual machines hosted on the NAS datastore start experiencing Disk SCSI timeout errors in the OS system event viewer. To avoid these errors, EMC recommends several best practices on ESX and the guest operating systems to keep applications and virtual machines available during Celerra Data Mover outage events.

3.14.2 EMC recommendations for VMware Resiliency with Celerra
To avoid the downtime caused by Celerra Data Mover outage events, EMC recommends tuning various components of the VMware environment, such as ESX, the guest operating system, and the disk adapters. These best practices for VMware resiliency with Celerra apply to both VMware Infrastructure 3.5 and VMware vSphere 4 environments:

◆ Configure customer environments with at least one standby Data Mover to avoid a guest OS crash and the unavailability of applications.

◆ Increase the ESX NFS Heartbeat parameters to keep the NAS datastore active during failure events.

◆ Increase the disk timeout parameter on the guest operating system to keep the virtual machines accessible.

3.14.2.1 Calculate the effective timeout period for a Celerra NFS volume on ESX
The formula to calculate the time taken for the ESX server to mark a NAS datastore as unavailable is:

RoundUp(NFS.HeartbeatDelta, NFS.HeartbeatFrequency) + (NFS.HeartbeatFrequency * (NFS.HeartbeatMaxFailures - 1)) + NFS.HeartbeatTimeout

Let:
A = RoundUp(NFS.HeartbeatDelta, NFS.HeartbeatFrequency)
B = NFS.HeartbeatFrequency * (NFS.HeartbeatMaxFailures - 1)
C = NFS.HeartbeatTimeout
Total (A + B + C) = effective timeout period for ESX to mark the NAS datastore as unavailable.

The following example calculates the NAS datastore unavailability timing with the recommended values:
NFS.HeartbeatDelta = 5 s
NFS.HeartbeatFrequency = 12
NFS.HeartbeatMaxFailures = 10
NFS.HeartbeatTimeout = 5 s

The effective timeout period for an ESX NFS volume is then:
A = RoundUp(5, 12) = 5, B = 12 * (10 - 1) = 108, and C = 5
Total A + B + C = 5 + 108 + 5 = 118 seconds
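Assuming the ESX service console is used, the following sketch shows one way to query and apply the values from the worked example with esxcfg-advcfg; the values simply mirror that example and are not a sizing recommendation:

esxcfg-advcfg -g /NFS/HeartbeatFrequency
esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
esxcfg-advcfg -s 5 /NFS/HeartbeatTimeout
esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures
esxcfg-advcfg -s 5 /NFS/HeartbeatDelta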

3.14.3 Install appropriate SCSI drivers
Increasing the Disk SCSI timeout and NFS Heartbeat parameters alone does not help Windows guest operating systems survive the timeout errors caused during extended Celerra Data Mover outage events, such as a Data Mover panic or a Data Mover reboot. Virtual machines may experience I/O failures, and disk and symmpi event errors are logged in the system event viewer.


Figure 251 Windows virtual machines system event viewer

To avoid such I/O failures, EMC provides several workarounds using VMware virtual SCSI drivers with guest operating systems:

◆ LSI Storport SCSI driver (Windows virtual machines)

◆ VMware Paravirtual SCSI adapter (VMware vSphere–based virtual machines)

◆ The correct Linux guest operating system version (Linux virtual machines)

The following sections provide details about each of these workarounds.

3.14.3.1 LSI Storport SCSI driver
For Windows Server 2003 virtual machines, EMC recommends the LSI Storport SCSI driver instead of the native SCSI driver that is used with ESX. The LSI Storport SCSI driver has architectural enhancements that provide performance improvements on large server systems with many storage adapters. LSI Storport drivers are third-party drivers and can be downloaded from the following link:
http://www.lsi.com/storage_home/products_home/host_bus_adapters/scsi_hbas/lsi20320r/index.html
With ESX 3.5 Update 3 and Update 4, LSI Storport SCSI drivers are used to avoid SCSI timeout errors in Windows 2003 guest operating systems. Section 3.14.7, "Upgrade LSI Logic Parallel drivers to LSI Logic Storport drivers," on page 321 shows how to upgrade from the native LSI Logic parallel drivers to LSI Storport drivers in Windows Server 2003 guest operating systems in VMware ESX 3.5 and VMware vSphere 4 environments. For versions earlier than ESX 3.5 Update 3, use LSI Logic Storport driver version 1.20.18 (or older) or the LSI Logic SCSI port driver for the Windows guest operating system. With ESX 3.5 Update 3 and Update 4, use LSI Storport SCSI driver version 1.26.05 or later for the Windows guest operating system.

3.14.3.2 VMware Paravirtual SCSI adapter (PVSCSI)
Paravirtualization is a virtualization technique that presents a software interface to virtual machines that is similar, but not identical, to that of the underlying hardware. For VMware vSphere–based virtual machines, EMC recommends configuring virtual machines to use the VMware Paravirtual SCSI adapter. Introduced with VMware vSphere 4, the PVSCSI adapter can be used with virtual disks. VMware Paravirtual SCSI (PVSCSI) is an enhanced and optimized special-purpose driver for high-performance storage adapters that offers greater throughput and lower CPU utilization for virtual machines. PVSCSI adapters are best suited for environments where guest applications are very I/O intensive, and they avoid the guest operating system SCSI timeout errors caused during Celerra Data Mover failure events.

PVSCSI with the VMware vSphere 4 initial release
In the initial release of VMware vSphere 4, VMware Paravirtual SCSI drivers are not supported for the OS boot disk of a virtual machine. Hence, use LSI Storport drivers for the system disk and Paravirtual drivers for the other virtual data disks of the virtual machine. Section 3.14.8, "Using paravirtual drivers in vSphere 4 environments," on page 333 describes how to configure a PVSCSI adapter for virtual disks in VMware vSphere environments.


PVSCSI with VMware vSphere 4 Update 1 and later
In VMware vSphere 4 Update 1 (and later), Paravirtual drivers are supported on the OS boot disk and on virtual data disks in Windows 2003, Windows 2008, and RHEL guest operating systems. Paravirtual drivers are available as floppy disk images that can be used during the Windows installation by selecting the F6 option during the guest operating system setup. The floppy images for the PVSCSI driver are available in the /vmimages/floppies/ folder on the ESX 4 host. The floppy images for the different operating system versions are:

◆ Windows 2003 guest operating systems: pvscsi-1.0.0.5-signed-Windows2003.flp

◆ Windows 2008 guest operating systems: pvscsi-1.0.0.5-signed-Windows2008.flp

Note: VMware recommends creating a primary adapter for the disk that hosts the system software (boot disk) and a separate PVSCSI adapter for the disk that stores user data (data disks). The primary adapter is the default for the guest operating system on the virtual machine.

3.14.4 Summary for VMware Resiliency with Celerra
This section summarizes the considerations presented for the improved resiliency of virtual machines with VMware vSphere or VMware Infrastructure and Celerra. The ESX NFS Heartbeat parameter settings must be increased on the ESX hosts to ensure that the NAS datastore remains available during a Celerra Data Mover outage, as discussed in Section 3.6.1.5, "ESX host timeout settings for NFS," on page 118. The following sections provide further resiliency considerations specific to Windows and Linux based virtual machines.

3.14.5 Considerations for Windows virtual machines
Consider the following resiliency aspects with Windows virtual machines:

◆ Increase the Disk SCSI timeout registry parameter setting from its default value so that the guest survives Celerra Data Mover outages. Involve EMC Professional Services to determine the appropriate Disk SCSI timeout parameter setting for the customer environment, as discussed in Section 3.10.3, "Guest OS SCSI timeout settings," on page 236. An illustrative registry sketch follows Table 2.

◆ Install the SCSI driver recommended for the Windows guest operating system. Table 2 shows the SCSI driver recommendations for Windows guest operating systems in the various VMware ESX environments.

Table 2  SCSI driver recommendations for Windows guest OSs

Guest OS version                                           | ESX 3.5 U3 and later | VMware vSphere 4                                                              | VMware vSphere 4 U1
Windows Server 2003 Enterprise Edition R2 (32-bit, 64-bit) | LSI Storport drivers | LSI Storport drivers - guest OS disk; Paravirtual drivers - additional disks | Paravirtual drivers
Windows Server 2008 Enterprise Edition Service Pack 2      | Default drivers      | Default drivers - guest OS disk; Paravirtual drivers - additional disks      | Default drivers or Paravirtual drivers
Windows XP Professional                                    | Default drivers      | Default drivers                                                               | Default drivers
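A minimal registry sketch for the Disk SCSI timeout setting referenced in the first bullet is shown below. The 0x3c (60-second) value is only an illustration; the appropriate value should be determined with EMC Professional Services for the specific environment:

Windows Registry Editor Version 5.00

; Illustrative value only (0x3c = 60 seconds)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk]
"TimeoutValue"=dword:0000003c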

3.14.6 Considerations for Linux virtual machines
During Celerra Data Mover outage events, the NAS datastore becomes inactive and the Linux partitions on guest operating systems become read-only while continuously retrying I/O operations. VMware has identified this problem with several versions of Linux guest operating systems, such as RHEL4 Update 3, RHEL4 Update 4, RHEL5, SLES10, and SLES9 SP3. A Data Mover outage causes virtual machine unavailability in all Linux distributions based on early 2.6 kernels. Detailed information is available in VMware KB article 51306. The resiliency considerations for Linux virtual machines are:

◆ To avoid virtual machine unavailability, VMware recommends upgrading Linux guest operating systems to the recommended versions listed in Table 3.

Table 3  Linux guest OS recommendations

Guest OS version                   | Recommended OS version
Red Hat Enterprise Linux 4 U4      | Red Hat Enterprise Linux 4 U6
Red Hat Enterprise Linux 5         | Red Hat Enterprise Linux 5 U1 or later
SUSE Linux Enterprise Server 10    | SUSE Linux Enterprise Server 10 SP2
SUSE Linux Enterprise Server 9 SP3 | SUSE Linux Enterprise Server 9 SP4
Ubuntu 7.04                        | Ubuntu 7.10

◆ Increase the disk timeout setting to a larger value to protect the guest operating systems from outage events. The Linux command to increase the timeout value for the virtual machines is shown below; a sketch for making the setting persistent across reboots follows this list:
echo "360" > /sys/block/sda/device/timeout

◆ The default SCSI drivers are recommended on RHEL guest operating systems.
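A minimal sketch for making the disk timeout persistent across reboots, assuming /etc/rc.local is used and sda is the affected device (both are assumptions for this example):

# Apply immediately, as in the bullet above
echo "360" > /sys/block/sda/device/timeout
# Re-apply the same setting at boot time
echo 'echo "360" > /sys/block/sda/device/timeout' >> /etc/rc.local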

3.14.7 Upgrade LSI Logic Parallel drivers to LSI Logic Storport drivers
Upgrading the Windows Server 2003 guest OS from the LSI Logic parallel driver to the LSI Logic Storport driver for the guest OS boot disk causes Windows to crash with the blue screen of death (BSOD). To avoid this, additional parameters are needed in the virtual machine configuration file (*.vmx) of the guest OS. Detailed information on these parameters is available in VMware KB article 1006224.
To upgrade the LSI Logic parallel drivers to LSI Storport drivers in ESX 3.5 and vSphere 4:
1. Log in to the Windows guest OS that runs in the virtual machine as Administrator, and right-click My Computer.
2. Select Manage > Device Manager > SCSI and RAID Controllers.


3. Right-click LSI Logic PCI-X Ultra320 SCSI Host Adapter, and select Update Driver.

Figure 252 Upgrade the LSI Logic PCI-X Ultra 320 driver

The Hardware Update Wizard appears.


Figure 253 Hardware Update Wizard

4. Select Yes, this time only, and then click Next.
Note: Download the driver from the LSI website and store it in a specific location in the guest operating system.


Figure 254 Install software

5. Select Install from a list or specific location (Advanced), and then click Next.


Figure 255 Select device driver from a list

6. Select Don't search. I will choose the driver to install, and then click Next. The Select the device driver you want to install for this hardware dialog box appears.


Figure 256 Select the device driver

7. Click Have Disk. The Install from Disk dialog box appears.

Figure 257 Install from Disk


8. Click Browse. The Locate File dialog box appears.

Figure 258 Locate File

9. Browse to the path of the device driver to upgrade the hardware, and then click Open. The path of the file is displayed.
10. Click OK. The selected Storport driver is available to upgrade the hardware.


Figure 259 Select device driver

11. Click Next. The Completing the Hardware Update Wizard dialog box appears.


Figure 260 Completing the Hardware Update Wizard

12. Click Finish.
13. Restart the virtual machine. The newly installed Storport drivers are applied to the guest OS.
Note: To resolve the Windows BSOD in ESX 3.5, complete the following steps in addition to the steps described earlier.
14. Power off the virtual machines.


15. In the ESX host, click Configuration in the right pane, and then click Storage in the left pane.

Figure 261 ESX host

16. Right-click the required datastore, and then select Browse Datastore. The Datastore Browser dialog box appears.

Figure 262 Datastore Browser


17. From the left pane, select the required virtual machine hosted on the datastore. The components of the virtual machine are listed in the right pane.
18. Right-click the virtual machine configuration file, and then click Download. The configuration file downloads to the specified location.
19. Open the configuration file and add the parameter lsilogic.iobar256="true" (a fragment of the edited file is shown after Figure 263).

Figure 263 Configuration file
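The resulting entry in the .vmx file looks like the fragment below; the scsi0 lines are illustrative context only and will differ per virtual machine:

scsi0.present = "true"
scsi0.virtualDev = "lsilogic"
lsilogic.iobar256 = "true"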

20. Upload the updated virtual machine configuration file to the datastore by using the Datastore Browser.


Figure 264 Update virtual machine file configuration

21. Power on the virtual machines. The LSI StorPort Adapters are successfully installed in the virtual machine.


Figure 265 LSI Storport drivers are upgraded successfully

3.14.8 Using paravirtual drivers in vSphere 4 environments
This section discusses the procedure to configure a paravirtual SCSI adapter as the system boot disk in VMware vSphere 4 environments and to add paravirtual SCSI disks to existing virtual machines.

3.14.8.1 Configure a PVSCSI adapter as a system boot disk in VMware vSphere 4.1 environments
To configure a disk on a PVSCSI adapter as the system boot disk in VMware vSphere 4.1 environments:
1. Launch the vSphere Client, log in to the ESX host system, and create a new virtual machine.
2. Ensure that a guest operating system that supports PVSCSI is installed on the virtual machine.


3. Right-click the virtual machine, and then click Edit Settings. The Virtual Machine Properties dialog box appears. Note: These drivers are loaded during the installation of the guest operating system in the form of floppy disk images, which are available in the [Datastore]/vmimages/floppies folder.

Figure 266 Virtual Machine Properties

4. Select Use existing floppy image in datastore, and then click Browse. The Browse Datastore dialog box appears. Note: Connect the floppy disk image after the Windows CD-ROM is booted so that the system does not boot from the floppy drive.


Figure 267 Browse Datastores

5. Browse to vmimages > floppies and select the floppy images of the appropriate guest OS, and then click OK. The floppy image of the guest OS is displayed, and the device status of the floppy image is connected at power on.


Figure 268 Virtual Machine Properties

6. Power on the virtual machine. Note: The virtual machine boots from the CD-ROM drive.

7. Press F6. Windows Setup appears. Note: This is required to instruct the operating system that third-party SCSI drivers are used.


Figure 269 Install the third-party driver

8. In the Device Status area of the Virtual Machine Properties dialog box, select Connect at Power on and click OK. The newly created virtual machine points to the PVSCSI SCSI driver.


Figure 270 Select VMware PVSCSI Controller

9. Press ENTER. The third-party paravirtual SCSI drivers are successfully installed.
10. Continue the Windows guest OS setup.
Note: Booting a Microsoft Windows guest from a disk attached to a PVSCSI adapter is not supported in versions of ESX prior to ESX 4.0 Update 1. In these situations, install the system software on a disk attached to an adapter that does support a bootable disk.

3.14.8.2 Add Paravirtual SCSI (PVSCSI) adapters
To add a hard disk with a paravirtual SCSI adapter:
1. Start a vSphere Client and log in to an ESX host system.
2. Select an existing virtual machine or create a new one.


3. Ensure that a guest operating system that supports PVSCSI is installed on the virtual machine.
Note: The guest operating systems that currently support the paravirtual drivers are Windows Server 2008, Windows Server 2003, and Red Hat Enterprise Linux (RHEL) 5. If the guest operating system does not support booting from a disk attached to a PVSCSI adapter, install the system software on a disk attached to an adapter that supports a bootable disk.
In the vSphere Client or vCenter Server, right-click the virtual machine, and then click Edit Settings. The Virtual Machine Properties dialog box appears.

Figure 271 Virtual Machine Properties

4. Click Hardware, and then click Add. The Add Hardware wizard appears.


Figure 272 Select Hard Disk

5. Select Hard Disk, and then click Next. The Select a Disk dialog box appears.


Figure 273 Select a Disk

6. Select Create a new virtual disk, and then click Next. The Create a Disk dialog box appears.


Figure 274 Create a Disk

7. Specify the virtual disk size and provisioning policy, and then click Next. The Advanced Options dialog box appears.


Figure 275 Advanced Options

8. Select a Virtual Device Node between SCSI (1:0) and SCSI (3:15), and then click Next. The Ready to Complete page appears.


Figure 276 Ready to Complete

9. Click Finish. A new disk and controller are created.
10. In the Virtual Machine Properties dialog box, select the newly created controller, and then click Change Type.


Figure 277 Virtual Machine Properties

The Change SCSI Controller Type dialog box appears.


Figure 278 Change SCSI Controller Type

11. Click VMware Paravirtual, and then click OK.
12. Power on the virtual machine.
13. Install VMware Tools. VMware Tools includes the PVSCSI driver.
14. Scan and format the hard disk.


4 Cloning Virtual Machines

This chapter presents these topics:

◆ 4.1 Introduction ......................................................................... 348
◆ 4.2 Cloning methodologies ...................................................... 349
◆ 4.3 Cloning virtual machines by using Celerra-based technologies ... 353
◆ 4.4 Celerra-based cloning with Virtual Provisioning ........... 359
◆ 4.5 Conclusion ............................................................................ 363


4.1 Introduction
Cloning a virtual machine is the process of creating an exact copy of an existing virtual machine in the same or a different location. By cloning virtual machines, administrators can quickly deploy a group of virtual machines based on a single virtual machine that was already created and configured. To clone a virtual machine, copy the data on the virtual disk of the source virtual machine and transfer that data to the target virtual disk, which is the new cloned virtual disk. System reconfiguration, also known as system customization, is the process of adjusting the migrated operating system to avoid any possible network and software conflicts, and enabling it to function on the virtual hardware. Perform this adjustment on the target virtual disk after cloning.

It is not mandatory to shut down virtual machines before they are cloned. However, ideally, administrators should shut down the virtual machines before copying the metadata and the virtual disks associated with them. Copying the virtual machines after they are shut down ensures that all the data from memory has been committed to the virtual disk. Hence, the virtual disk will contain a fully consistent copy of the virtual machines, which can be used to back up or to quickstart cloned virtual machines.

This chapter explains the primary methods available in VMware vSphere and VMware Infrastructure to clone virtual machines. It also explains the Celerra-based technologies that can be used to clone virtual machines.


4.2 Cloning methodologies
VMware vSphere and VMware Infrastructure provide two primary methods to clone virtual machines: VMware vCenter Converter and the Clone Virtual Machine wizard in vCenter Server.

4.2.1 Clone Virtual Machine wizard in vCenter Server
To clone a virtual machine by using the Clone Virtual Machine wizard:
1. Right-click the virtual machine in the inventory and select Clone. The Clone Virtual Machine wizard appears.
Note: For VMware Infrastructure 3.5, it is recommended that administrators shut down the virtual machine before cloning. For VMware vSphere and Celerra-based cloning, the state of the virtual machine does not matter.

Figure 279 Clone Virtual Machine wizard


2. Type the name of the virtual machine, select the inventory location, and then click Next. The Host/Cluster dialog box appears.

Figure 280 Host/Cluster

3. Select the host for running the cloned virtual machine and click Next. The Datastore dialog box appears.

Figure 281 Datastore


4. Select the datastore to store the virtual machine and click Next. The Disk Format dialog box appears.

Figure 282 Disk Format

5. Select the format to store the virtual machine disk and click Next. The Guest Customization dialog box appears.

Figure 283 Guest Customization

6. Select the option to use in customizing the guest operating system of the new virtual machine and click Next. The Ready to Complete dialog box appears.


Note: Select Do not customize if no customization is required.

Figure 284 Ready to Complete

7. Click Finish. The cloning is initiated.
Note: After the clone operation is completed, a cloned virtual machine is created as an exact copy of the source virtual machine. The Clone Virtual Machine wizard can also handle system reconfiguration of the cloned virtual machine.

4.2.2 VMware vCenter Converter
VMware vCenter Converter is a tool integrated with vCenter Server that enables administrators to convert any type of physical or virtual machine running the Windows operating system into a virtual machine that runs on an ESX server. VMware vCenter Converter can also be used to clone an existing virtual machine. It uses its cloning and system reconfiguration features to create a virtual machine that is compatible with an ESX server. Section 3.7, "Using NFS storage," on page 128 provides more details about VMware vCenter Converter.


4.3 Cloning virtual machines by using Celerra-based technologies
Note: The following section covers the Celerra technologies available with Celerra versions earlier than 5.6.48. Starting from that release, the Celerra Data Deduplication technology was enhanced to also support virtual machine cloning. EMC Celerra Plug-in for VMware—Solution Guide provides more information on this technology and how it can be used in this case.

Celerra provides two technologies that can be used to clone virtual machines: Celerra SnapSure™ for file systems when using the NFS protocol, and iSCSI snapshot for iSCSI LUNs when using the iSCSI protocol. When using Celerra-based technologies for cloning, the virtual machine data is not passed on the wire from Celerra to ESX and back. Instead, the entire cloning operation is performed optimally within the Celerra with no ESX cycles.

If the information stored on the snapshot or checkpoint needs to be application-consistent (recoverable), administrators should either shut down or quiesce the applications that are running on the virtual machines involved in the cloning process. This must be done before a checkpoint or snapshot is created. Otherwise, the information on the snapshot or checkpoint will only be crash-consistent (restartable). This means that although it is possible to restart the virtual machines and the applications in them from the checkpoint or snapshot, some of the most recent data will be missing because it is not yet committed by the application (data in flight).

When virtual machines are cloned by using Celerra SnapSure or iSCSI snapshots, the cloned virtual machines will be exact copies of the source virtual machines. Administrators should manually customize these cloned virtual machines to avoid any possible network or software conflicts. To customize a Windows virtual machine that was cloned by using Celerra SnapSure or iSCSI snapshots, install the Windows customization tool, System Preparation (Sysprep), on the virtual machine. Sysprep will resignature all details associated with the new virtual machine and assign new system details. Sysprep also avoids possible network and software conflicts between the virtual machines. Appendix B, "Windows Customization," provides information on Windows customization with Sysprep.


4.3.1 Clone virtual machines over NAS datastores using Celerra SnapSure
Celerra SnapSure simplifies the cloning of virtual machines that are provisioned over a NAS datastore. SnapSure creates a logical point-in-time copy of the production file system, called a checkpoint file system. The production file system contains a NAS datastore that holds the metadata and virtual disks associated with the virtual machines that must be cloned. To clone virtual machines by using Celerra SnapSure, the writeable checkpoint file system must be in read/write mode. The writeable checkpoint file system is created using Celerra Manager as shown in Figure 285 on page 354.

Figure 285 Create a writeable checkpoint for NAS datastore

Alternatively, writeable checkpoint file systems can be created by using the Celerra CLI:

# fs_ckpt <fs_name> -Create -readonly n
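For example, assuming a production file system named vm_nfs_fs (an illustrative name only), the command would be:

# fs_ckpt vm_nfs_fs -Create -readonly n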

Similar to a standard NAS file system, it is mandatory to grant the VMkernel read/write access in addition to root access to the checkpoint file system. Section 3.7.1, “Add a Celerra file system to ESX,” on page 128 explains how to provide VMkernel the required access permissions. To clone one or more virtual machines that reside on a checkpoint file system, add the writeable checkpoint file system to the ESX server as a new NAS datastore, browse for the new datastore, and add the VMX files of the virtual machines to the vCenter inventory. This creates new virtual machines with the help of the Add to Inventory wizard.


Section 2.5.2, “Celerra SnapSure,” on page 76 provides more details about Celerra SnapSure.

4.3.2 Clone virtual machines over iSCSI/vStorage VMFS datastores using iSCSI snapshots
iSCSI snapshots on Celerra provide a logical point-in-time copy of an iSCSI LUN. In this configuration, virtual machines are created on vStorage VMFS over iSCSI: a Celerra iSCSI LUN is presented to the ESX server and formatted as a VMFS datastore. Because each snapshot needs the same amount of storage as the iSCSI LUN (when Virtual Provisioning is not used), ensure that the file system that stores the production LUN and its snapshot has enough free space for the snapshot. Section 2.5.4, "Celerra iSCSI snapshots," on page 77 provides more details about iSCSI snapshots.

4.3.2.1 Create a temporary writeable snap
Promoting a snapshot creates a temporary writeable snap (TWS). Mounting a TWS on an iSCSI LUN makes the snapshot visible to the iSCSI initiator. After a TWS is mounted on an iSCSI LUN, it can be configured as a disk device and used as a production LUN.
Note: Only a snapshot can be promoted.

Use the following CLI command to promote the snapshot:

# server_iscsi <movername> -snap -promote <snap_name> -initiator <initiator_name>

Figure 286 shows how to promote a snapshot in CLI.

Figure 286 Promote a snapshot


The mounted LUN is assigned the next available number greater than 127. If no such number is available, the LUN is assigned the next available number in the range 0 through 127. After the LUN is promoted, the TWS becomes visible to the ESX server as a new iSCSI LUN.

With VMware Infrastructure, administrators must configure the advanced configuration parameters LVM.DisallowSnapshotLun and LVM.EnableResignature to control the clone behavior. To add the promoted LUN to the storage without VMFS formatting, set the LVM.EnableResignature parameter to 1 and leave LVM.DisallowSnapshotLun at its default value, which is 1. Refer to step 7 onwards in Section 3.8.3, "Create VMFS datastores on ESX," on page 174 for more details on the LVM parameter combination.

With VMware vSphere, the configuration is much simpler because there is no need to configure any advanced configuration parameters. To resignature a vStorage VMFS datastore copy, select the Assign a new signature option when adding the LUN as a datastore. Datastore resignaturing must be used to retain the data stored on the vStorage VMFS datastore copy. The prerequisites for datastore resignaturing are:

◆ Unmount the mounted datastore copy.

◆ Rescan the storage on the ESX server so that it updates its view of LUNs presented to it and discovers any LUN copies.

To resignature a vStorage VMFS datastore copy:
1. Log in to vSphere Client and select the host from the Inventory area.
2. Click Configuration, and then click Storage in the Hardware area.
3. Click Add Storage.
4. Select the Disk/LUN storage type and click Next.
5. From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column and click Next. The Select VMFS Mount Options dialog box appears.
Note: The name present in the VMFS Label column indicates that the LUN contains a copy of an existing vStorage VMFS datastore.


6. Select Assign a new signature and click Next.

Figure 287 Assign a new signature option

The Ready to Complete page appears.
7. Review the datastore configuration information and click Finish. The promoted LUN is added and is visible to the host.
8. Browse for the virtual machine's VMX file in the newly created datastore, and add it to the vCenter inventory. The virtual machine clone is created.

Although a promoted snapshot LUN is writeable, all changes made to the LUN are allocated only to the TWS. When the snapshot is demoted, the LUN is unmounted and its LUN number is unassigned. After the snapshot demotion, data that was written to the promoted LUN is lost and cannot be retrieved. Therefore, back up the cloned virtual machines before the promoted LUN is demoted.

4.3.3 Clone virtual machines over iSCSI or RDM volumes by using iSCSI snapshots
RDM allows a special file in a vStorage VMFS datastore to act as a proxy for a raw device, the RDM volume. iSCSI snapshots can be used to create a logical point-in-time copy of the RDM volume, which can then be used to clone virtual machines. Multiple virtual machines cannot be cloned on the same RDM volume because only a single virtual machine can use an RDM volume. To clone a virtual machine that is stored on an RDM volume, create a snapshot of the iSCSI LUN that is mapped by the RDM volume.


Section 2.5.4, “Celerra iSCSI snapshots,” on page 77 provides more details about iSCSI snapshots. The procedure to create a TWS of the RDM volume is the same as the procedure to create a vStorage VMFS volume. To clone a virtual machine over RDM, create a virtual machine over the local datastore by using the Virtual Machine Creation wizard. After a virtual machine is created, select and edit the virtual machine settings by using the Edit Settings menu option. Using this option, remove the hard disk created on the local datastore. Add the newly promoted iSCSI LUN as the hard disk that contains the original virtual machine VMX files and power on the virtual machine. Section 3.8.4, “Create RDM volumes on ESX servers,” on page 182 provides detailed information about creating a virtual machine over an RDM volume.


4.4 Celerra-based cloning with Virtual Provisioning
To optimize the utilization of the file system, administrators can combine Celerra Virtual Provisioning technology with Celerra-based virtual machine cloning. Celerra Virtual Provisioning includes two technologies that are used together: automatic file system extension and file system/LUN virtual provisioning. Section 2.5.1, "Celerra Virtual Provisioning," on page 76 provides more information about Celerra Virtual Provisioning.

4.4.1 Clone virtual machines over NAS using SnapSure and Virtual Provisioning
Virtual Provisioning provides the advantage of presenting the maximum size of the file system to the ESX server while only a portion of it is actually allocated. To create a NAS datastore, a virtually provisioned file system must be selected. Cloning virtual machines on a virtually provisioned file system is similar to cloning virtual machines on a fully provisioned file system. The advantage is that administrators can initially allocate the minimum amount of storage space required for the virtual machines and, as the data grows, additional space is automatically allocated to the NAS datastore.

When using a virtually provisioned file system during virtual machine cloning, it is important to monitor the file system utilization to ensure that enough space is available. The storage utilization of the file system can be monitored by checking the size of the file system using Celerra Manager. Figure 288 on page 360 shows the file system usage in Celerra Manager. A sketch of a CLI alternative follows.
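The same check can be sketched from the Celerra CLI, assuming an illustrative file system name of vm_nfs_fs:

$ nas_fs -size vm_nfs_fs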


Figure 288 File system usage on Celerra Manager

Section 4.3.1, “Clone virtual machines over NAS datastores using Celerra SnapSure,” on page 354 explains the procedure to clone virtual machines from a file system.

4.4.2 Clone virtual machines over VMFS or RDM using iSCSI snapshot and Virtual Provisioning
To maximize overall storage utilization, ensure that virtually provisioned iSCSI LUNs are created to deploy the virtual machines. The iSCSI LUNs take advantage of automatic file system extension when cloning virtual machines. Virtually provisioned iSCSI LUNs can be created only through the CLI. When using a virtually provisioned iSCSI LUN during virtual machine cloning, it is crucial to monitor the file system space to ensure that enough space is available. Monitor the LUN utilization by using the following CLI command:

# server_iscsi <movername> -lun -info <lun_number>


A virtually provisioned LUN does not reserve space on the file system. To avoid data loss or corruption, ensure that file system space is available for allocation when data is added to the LUN. Setting a conservative high water mark provides an added advantage when enabling automatic file system extension. Cloning virtual machines on a virtually provisioned iSCSI LUN is the same as cloning virtual machines on normal iSCSI LUNs. Section 4.3.2, "Clone virtual machines over iSCSI/vStorage VMFS datastores using iSCSI snapshots," on page 355 explains the procedure to clone virtual machines from the iSCSI LUN. Section 3.12, "Virtually provisioned storage," on page 258 provides further information on deploying virtual machines over Celerra virtually provisioned file systems.

4.4.2.1 Celerra Data Mover parameter setting for TWS
To further maximize storage utilization, an extra step is required to ensure that the TWS is also virtually provisioned in all cases. Set the sparseTws Celerra Data Mover parameter to 1 so that the TWS of an iSCSI LUN is always virtually provisioned. The default value of sparseTws is 0, and its possible values are 0 and 1. The value 0 indicates that a fully provisioned TWS is created if the production LUN is not virtually provisioned. The sparseTws parameter can be modified by using Celerra Manager (Figure 289 on page 362) or the CLI.


Figure 289 Parameter setting using Celerra Manager

In the CLI, the following command updates the parameter:

$ server_param server_2 -facility nbs -modify sparseTws -value 1
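To confirm the change, the parameter can be read back; this is a sketch assuming the same Data Mover and facility as above:

$ server_param server_2 -facility nbs -info sparseTws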

Section 3.12, “Virtually provisioned storage,” on page 258 provides further information on deploying virtual machines over Celerra virtually provisioned iSCSI LUNs.


4.5 Conclusion
Celerra-based virtual machine cloning is an alternative to conventional VMware-based cloning. The advantage of cloning virtual machines using Celerra-based technologies is that the cloning can be performed at the storage layer in a single operation for multiple virtual machines. The Celerra methodologies used to clone virtual machines are Celerra SnapSure and iSCSI snapshot. Celerra SnapSure creates a checkpoint of the NAS file system; adding the checkpoint file system as storage to the ESX server makes it possible to create clones of the original virtual machines on the ESX server. The iSCSI snapshot creates an exact snap of the LUN that can be used as a datastore to clone the original virtual machine. Enabling Virtual Provisioning provides the additional advantage of efficiently managing the storage space used for virtual machine cloning on the file system and the LUN.

Table 4 summarizes when to consider VMware-based cloning and Celerra-based cloning.

Table 4  Virtual machine cloning methodology comparison

VMware-based cloning - consider when:
• The VMware administrator has limited access to the storage system.
• Only a few virtual machines from a datastore must be cloned.

Celerra-based cloning - consider when:
• Most of the virtual machines from a datastore must be cloned.
• Using VMware Infrastructure, the production virtual machines should not be shut down during the cloning process.



5 Backup and Restore of Virtual Machines

This chapter presents these topics:

◆ 5.1 Backup and recovery options .............................................. 366
◆ 5.2 Recoverable as compared to restartable copies of data .... 367
◆ 5.3 Virtual machines data consistency ...................................... 369
◆ 5.4 Backup and recovery of a NAS datastore ........................... 371
◆ 5.5 Backup and recovery of a vStorage VMFS datastore over iSCSI ... 382
◆ 5.6 Backup and recovery of an RDM volume over iSCSI ....... 388
◆ 5.7 Backup and recovery using VCB ......................................... 389
◆ 5.8 Backup and recovery using VCB and EMC Avamar ......... 395
◆ 5.9 Backup and recovery using VMware Data Recovery ....... 398
◆ 5.10 Virtual machine single file restore from a Celerra checkpoint ... 401
◆ 5.11 Other file-level backup and restore alternatives .............. 404
◆ 5.12 Summary ............................................................................... 406


5.1 Backup and recovery options
EMC Celerra combines with VMware vSphere or VMware Infrastructure to offer many possible ways to perform backup and recovery of virtual machines, regardless of whether an ESX server uses a NAS datastore, a vStorage VMFS datastore over iSCSI, or an RDM volume over iSCSI. It is critical to determine the customer RPO and RTO so that an appropriate method is used to meet the Service Level Agreements (SLAs) and minimize downtime.

At the storage layer, two types of backup are discussed in this chapter: logical backup and physical backup. A logical backup does not provide a physically independent copy of the production data. It offers a view of the file system or iSCSI LUN as of a certain point in time. A logical backup can be taken very rapidly and requires very little space to store, so it can be taken very frequently. Restoring from a logical backup can be quick as well, depending on the data changes, which dramatically reduces the mean time to recovery. However, a logical backup cannot replace a physical backup. A logical backup protects against logical corruption of the file system or iSCSI LUN, accidental deletion of files, and other similar human errors, but it does not protect the data from hardware failures. Also, loss of the PFS or iSCSI LUN renders the checkpoints or snapshots unusable.

A physical backup takes a full and complete copy of the file system or iSCSI LUN to different physical media. Although the backup and recovery time may be longer, a physical backup protects the data from hardware failure.


5.2 Recoverable as compared to restartable copies of data
The Celerra-based replication technologies can generate either a restartable or a recoverable copy of the data. The difference between the two types of copies can be confusing, and a clear understanding of it is critical to ensure that the recovery goals for a virtual infrastructure environment can be met.

5.2.1 Recoverable disk copies
A recoverable (also called application-consistent) copy of the data is one that allows the application to apply logs and roll the data forward to an arbitrary point in time after the copy was created. This is only possible if recoverable disk copies are supported by the application. The recoverable copy is most relevant in the database realm, where database administrators use it frequently to create backup copies of a database. It is critical to business applications that a database failure can be recovered to the last backup and that subsequent transactions can be rolled forward. Without this capability, a failure may cause an unacceptable loss of all transactions that occurred since the last backup.

To create a recoverable image of an application, either shut down the application or suspend writes when the data is copied. Most database vendors provide the functionality to suspend writes in their RDBMS engine. This functionality must be invoked inside the virtual machine when EMC technology is deployed to ensure that a recoverable copy of the data is generated on the target devices.

5.2.2 Restartable disk copies

When a copy of a running virtual machine is created by using EMC consistency technology, without taking any action inside the virtual machine, the copy is normally a restartable (also called crash-consistent) image of the virtual machine. This means that when the data is used on cloned virtual machines, the operating system or the application goes into crash recovery. The exact implications of crash recovery in a virtual machine depend on the application that the virtual machine supports. These implications include the following:

◆ If the source virtual machine is a file server or it runs an application that uses flat files, the operating system performs a file system check and fixes inconsistencies in the file system, if any. Modern file systems such as Microsoft NTFS use journals to accelerate the process.




◆ When the virtual machine is running a database or an application with a log-based recovery mechanism, the application uses the transaction logs to bring the database or application to a point of consistency. The deployed process varies depending on the database or application, and is beyond the scope of this document.

Most applications and databases cannot perform a roll-forward recovery from a restartable copy of the data. Therefore, it is inappropriate to use a restartable copy of data created from a virtual machine that is running a database engine for performing backups. However, applications that use flat files, or virtual machines that act as file servers, can be backed up from a restartable copy of the data. This is acceptable because the file systems themselves do not provide a logging mechanism that enables roll-forward recovery.

Note: Without additional steps, VCB creates a restartable copy of the virtual disks associated with virtual machines. The quiesced copy of the virtual disks created by VCB is similar to the copy created by using EMC consistency technology.


5.3 Virtual machines data consistency

In environments where EMC Celerra is deployed to provide storage to the ESX server, crash consistency is generally offered by the Celerra backup technologies that are described in this chapter. In a simplified configuration where a virtual machine's guest OS, application, application data, and application log are encapsulated together in one datastore, crash consistency is achieved by using one of the Celerra technologies. However, many applications, especially database applications, strongly recommend separating data and log files into different file systems or iSCSI LUNs. By following this best practice, a virtual machine will have multiple virtual disks (vmdk files) spread across several datastores. It is therefore critical to maintain data consistency across these datastores when backup or replication occurs. VMware snapshots can be leveraged together with the Celerra technologies to provide crash consistency in such scenarios.

A VMware snapshot is a software-based technology that operates on a per-virtual machine basis. When a VMware snapshot is taken, it quiesces all I/Os and captures the entire state of a virtual machine including its settings, virtual disks, and optionally the memory state, if the virtual machine is up and running. The virtual machine ceases to write to the existing virtual disks and subsequently writes changed blocks to newly created virtual disks, which essentially are the .vmdk delta files. Because I/Os are frozen to the original virtual disks, the virtual machine can revert to the snapshot by discarding the delta files. Conversely, the virtual disks merge together if the snapshot is deleted.

As soon as the VMware snapshot is taken, a virtual machine backup can be completed by initiating a SnapSure checkpoint if the virtual disk resides on a NAS datastore, or by taking an iSCSI snapshot if the virtual disk resides on vStorage VMFS/iSCSI or RDM/iSCSI. Snapshots of all datastores containing all virtual disks that belong to the virtual machine constitute the entire backup set. All the files related to a particular virtual machine must be restored together to revert to the previous state when the VMware snapshot was taken. Carefully consider the placement of .vmdk files when multiple virtual machines share the same datastore, so that a snapshot restore does not affect other virtual machines.


As long as the backup set is intact, crash consistency can be maintained even across protocols or storage types such as NAS, vStorage VMFS/iSCSI, and RDM/iSCSI, except RDM (physical mode), which is not supported by VMware snapshot technology.

To perform backup operations, do the following:

1. Initiate a VMware snapshot and capture the memory state if the virtual machine is up and running.

2. Take Celerra checkpoints or snapshots of all datastores that contain virtual disks that belong to the virtual machine.

Note: Optionally, replicate the datastores to a local or remote Celerra.

3. Delete the VMware snapshot to allow the virtual disks to merge after the deltas are applied to the original virtual disks.

To perform restore operations, do the following:

1. Power off the virtual machine.

2. Perform a checkpoint or snapshot restore of all datastores containing virtual disks that belong to the virtual machine.

3. Execute the service console command service mgmt-vmware restart to restart the ESX host agent, which updates the virtual machine status reported in the vSphere GUI.

Note: Wait for 30 seconds for the refresh and then proceed.

4. Open the VMware Snapshot Manager, revert to the VMware snapshot taken during the backup (step 1 of the backup procedure), and then delete the snapshot.

5. Power on the virtual machine.

Replication Manager, which is described later in this chapter, supports the creation of replicas of NAS and vStorage VMFS datastores containing virtual machines in a VMware ESX server environment. It also provides point-and-click backup and recovery of virtual machine-level images. It automates and simplifies the management of virtual machine backup and replication by leveraging VMware snapshots to create virtual machine-consistent replicas of vStorage VMFS and NAS datastores that are ideal for creating image-level backups and instant restores of virtual machines.
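As a rough illustration of the backup procedure above (steps 1 through 3) for a single virtual machine on a NAS datastore, the following sketch uses hypothetical virtual machine, file system, and checkpoint names. The vmware-cmd arguments follow the ESX service console convention of snapshot name, description, quiesce flag, and memory flag, and the checkpoint command is run on the Celerra Control Station, not on the ESX host:

On the ESX service console, create the VMware snapshot and capture the memory state:

# vmware-cmd /vmfs/volumes/nfs_ds01/vm01/vm01.vmx createsnapshot celerra_bkup "before Celerra checkpoint" 0 1

On the Celerra Control Station, take a SnapSure checkpoint of the underlying file system:

# /nas/sbin/fs_ckpt nfs_ds01_fs -name vm01_ckpt1 -Create -readonly y

Back on the ESX service console, delete the VMware snapshot so that the delta files merge into the original virtual disks:

# vmware-cmd /vmfs/volumes/nfs_ds01/vm01/vm01.vmx removesnapshots

An equivalent sequence applies to iSCSI-based datastores, with the checkpoint step replaced by an iSCSI snapshot of the LUN.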


5.4 Backup and recovery of a NAS datastore

The backup and recovery of virtual machines residing on NAS datastores can be performed in various ways. These are described in the following sections.

5.4.1 Logical backup and restore using Celerra SnapSure

Celerra SnapSure can be used to create and schedule logical backups of the file systems exported to an ESX server as NAS datastores. This is accomplished by using the Celerra Manager as shown in Figure 290.

Figure 290 Checkpoint creation in Celerra Manager GUI 5.6

Alternatively, this can also be accomplished by using the following two Celerra commands to create and restore checkpoints (the angle-bracket values are placeholders for the production file system and checkpoint names):

# /nas/sbin/fs_ckpt <PFS_name> -name <checkpoint_name> -Create -readonly y
# /nas/sbin/rootfs_ckpt <checkpoint_name> -Restore

For Celerra version 5.5, use the following commands to create and restore checkpoints:

# /nas/sbin/fs_ckpt <PFS_name> -name <checkpoint_name> -Create
# /nas/sbin/rootfs_ckpt <checkpoint_name> -name <new_checkpoint_name> -Restore
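For illustration only, with a hypothetical production file system named nfs_ds01_fs, the Celerra 5.6 commands above might be invoked as follows; the file system and checkpoint names must match the actual objects configured on the Celerra:

# /nas/sbin/fs_ckpt nfs_ds01_fs -name nfs_ds01_ckpt1 -Create -readonly y
# /nas/sbin/rootfs_ckpt nfs_ds01_ckpt1 -Restore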


In general, this method works on a per-datastore basis. If multiple virtual machines share the same datastore, they can be backed up and recovered simultaneously and consistently, in one operation.

To recover an individual virtual machine:

1. Change the Data Mover parameter cfs.showChildFsRoot from the default value of 0 to 1 as shown in Figure 291.

Figure 291 ShowChildFsRoot Server Parameter Properties in Celerra Manager

Note: A virtual directory is created for each checkpoint that is created with Celerra SnapSure. By default, these directories will be under a virtual directory named .ckpt. This virtual directory is located in the root of the file system. By default, the .ckpt directory is hidden. Therefore, the datastore viewer in vCenter Server will not be able to view the .ckpt directory. Changing the Data Mover parameter enables each mounted checkpoint of a PFS to be visible to clients as subdirectories of the root directory of the PFS as shown in Figure 292.

Figure 292 Datastore Browser view after checkpoints are visible

2. Power off the virtual machine.


3. Browse to the appropriate configuration and virtual disk files of the specific virtual machine as shown in Figure 292 on page 372.

4. Manually copy the files from the checkpoint and add them to the datastore under the directory /vmfs/volumes/<datastore_name>/<VM_dir>.

5. Power on the virtual machine.

5.4.2 Logical backup and restore using Replication Manager

Replication Manager can also be used to protect NAS datastores that reside on an ESX server managed by a VMware vCenter Server and attached to a Celerra system. Replication Manager uses Celerra SnapSure to create local replicas of VMware NAS datastores. VMware snapshots are taken for all the virtual machines that are online and that reside on the NAS datastore just prior to creating the local replicas to ensure operating system consistency of the resulting replica. Operations are sent from a Linux proxy host, which is either a physical host or a separate virtual host.

The Replication Manager Job Wizard (Figure 293) can be used to select the replica type and expiry options. Replication Manager version 5.2.2 must be installed for datastore support.

Figure 293 Job Wizard


Select the Restore option in Replication Manager (Figure 294) to restore the entire datastore.

Figure 294 Restoring the datastore replica from Replication Manager

Before restoring the replica, do the following:

1. Power off the virtual machines that are hosted within the datastore.

2. Remove those virtual machines from the vCenter Server inventory.

3. Restore the replica from Replication Manager.

4. After the restore is complete, add the virtual machines to the vCenter Server inventory.

5. Revert to the VMware snapshot taken by Replication Manager to obtain an operating system consistent replica, and then delete the snapshot.

6. Manually power on each virtual machine.

Note: Replication Manager creates a rollback snapshot for every Celerra file system that has been restored. The name of each rollback snapshot can be found in the restore details as shown in Figure 295 on page 375. The rollback snapshot may be deleted manually after the contents of the restore have been verified and the rollback snapshot is no longer needed. Retaining these snapshots beyond their useful life can cause resource issues.


Figure 295 Replica Properties in Replication Manager

A single virtual machine can be restored by using the Mount option in Replication Manager. Using this option, it is possible to mount a datastore replica to an ESX server as a read-only or read-write datastore.

To restore a single virtual machine, do the following:

1. Mount the read-only replica as a datastore in the ESX server as shown in Figure 296 on page 376.

2. Power off the virtual machine residing in the production datastore.

3. Remove the virtual machine from the vCenter Server inventory.

4. Browse to the mounted datastore.

5. Copy the virtual machine files to the production datastore.

6. Add the virtual machine to the inventory again.

7. Revert to the VMware snapshot taken by Replication Manager to obtain an operating system consistent replica, and then delete the snapshot.

8. Unmount the replica through Replication Manager.


9. Power on the virtual machine.

Figure 296 Read-only copy of the datastore view in the vSphere client

5.4.3 Physical backup and restore using the nas_copy command

The Celerra command /nas/bin/nas_copy can be used for a full or incremental physical backup. It is typically used to back up a file system to a volume that consists of ATA drives on the same Celerra, or to another Celerra. Although using nas_copy for backup is convenient, it has some limitations during recovery. The nas_copy command cannot be used to copy data back to the source file system directly. The destination must be mounted and the files must be copied back to the source file system manually, which could unnecessarily prolong the recovery time. Therefore, using nas_copy to back up datastores is not encouraged.

Note: Use the fs_copy command to perform a full physical backup in versions earlier than Celerra version 5.6.

5.4.4 Physical backup and restore using Celerra NDMP and NetWorker

One of the recommended methods for physical backup and recovery is to use the Network Data Management Protocol (NDMP) by utilizing Celerra Backup along with the Integrated Checkpoints feature and EMC NetWorker®, or any other compatible third-party backup software, in the following manner:

1. Create a Virtual Tape Library Unit (VTLU) on Celerra if the performance needs to be improved by backing up on disks instead of tapes.

2. Create a library in EMC NetWorker.

3. Configure NetWorker to create the bootstrap configuration, backup group, backup client, and so on.


4. Run NetWorker Backup.

5. Execute NetWorker Recover.

The entire datastore or an individual virtual machine can be selected for backup and recovery. Figure 297 shows NetWorker during the process.

Figure 297 NDMP recovery using EMC NetWorker

To utilize Celerra backup with integrated checkpoints, set the environment variable SNAPSURE=y in the qualified vendor backup software. This setting automates the checkpoint creation, management, and deletion activities. The setting of the SNAPSURE variable when creating a backup client with EMC NetWorker is illustrated in Figure 298.

Figure 298 Backup with integrated checkpoint


When the variable is set in the backup software, each time a particular job is run, a checkpoint of the file system is automatically created (and mounted as read-only) before the NDMP backup starts. The checkpoint is automatically used for the backup, allowing production activity to continue uninterrupted on the file system. During the backup process, the checkpoint is automatically managed (for example, the SavVol is auto-extended if needed and if space is available). When the backup completes, the checkpoint is automatically deleted, regardless of whether the backup succeeds or fails.

5.4.5 Physical backup and restore using Celerra Replicator

Celerra Replicator can be used for the physical backup of the file systems exported to ESX servers as datastores. This is accomplished by using the Celerra /nas/bin/nas_replicate command or by using the Celerra Manager. Multiple virtual machines can be backed up together if they reside in the same datastore. If further granularity is required at an image level for an individual virtual machine, place the virtual machine in its own datastore.

The backup can either be local or remote. After the file system is completely backed up, stop the replication to make the target file system a stand-alone copy. If required, this target file system can be made read-writeable. After the target file system is attached to an ESX server, an individual virtual machine can be restored by copying its folder from the target file system to the PFS.

If VMware snapshots already exist at the time of the backup, the Snapshot Manager in the VI client might not report all VMware snapshots correctly after a virtual machine restore. One way of updating the GUI information is to remove the virtual machine from the inventory and add it again.

If an entire file system is to be recovered, a replication session can be established in the reverse direction, from the target file system to the production file system, with the nas_replicate command.

Note: For versions earlier than Celerra version 5.6, use the /nas/bin/fs_replicate command for the physical backup of datastores.
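As a hedged sketch only (the session, file system, and interconnect names are hypothetical, and the exact option set should be confirmed against the nas_replicate man page for the installed DART release), a file system replication session for a NAS datastore might be created from the Control Station as follows:

# /nas/bin/nas_replicate -create nfs_ds01_rep -source -fs nfs_ds01_fs -destination -fs nfs_ds01_fs_replica -interconnect NYtoNJ

Stopping this session afterward leaves nfs_ds01_fs_replica as the stand-alone physical copy described above.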

5.4.6 Physical backup and restore using Replication Manager

Another method to take backups is to use Replication Manager to provide a physical backup of the datastores. Replication Manager uses Celerra Replicator technology to create remote replicas in this scenario. These replicas are actually snapshots that represent a crash-consistent replica of the entire datastore. Similar to a logical backup and restore, Replication Manager version 5.2.2 must be installed for datastore support.

Before creating replicas on a target Celerra, create a read-only file system on the target Celerra to which the data will be transferred, and create a Celerra Replicator session between the source and target file systems by using Celerra Manager. While creating a replication session, it is recommended to use a Time Out of Sync value of 1 minute. VMware snapshots are taken for all virtual machines that are online and reside on the datastore just prior to creating replicas to ensure the operating system consistency of the resulting replica.

The entire datastore can be restored by selecting the Restore option in Replication Manager. Replication Manager creates a rollback snapshot for a remote Celerra file system during the restore. Before restoring a crash-consistent remote replica, do the following:

1. Power off the virtual machines that are hosted within the datastore.

2. Remove those virtual machines from the vCenter Server inventory.

3. Restore the remote replica from Replication Manager.

4. After the restore is complete, add the virtual machines into the vCenter Server inventory.

5. Revert to the VMware snapshot taken by Replication Manager to obtain an operating system consistent replica, and then delete the snapshot.

6. Manually power on each virtual machine.


A single virtual machine can be restored by using the Mount option in Replication Manager. Using this option, it is possible to mount a datastore remote replica to an ESX server as a datastore, as shown in Figure 299.

Figure 299 Mount Wizard - Mount Options

To restore a single virtual machine:

1. Mount the read-only remote replica as a datastore in the ESX server.

2. Power off the virtual machine that resides in the production datastore.

3. Remove the virtual machine from the vCenter Server inventory.

4. Browse the mounted datastore.

5. Copy the virtual machine files to the production datastore.

6. Add the virtual machine to the inventory again so that the VMware snapshot taken by Replication Manager is reported.


7. Revert to the VMware snapshot taken by Replication Manager to obtain an operating system consistent replica, and then delete the snapshot.

8. Unmount the replica by using Replication Manager.

9. Power on the virtual machine.


5.5 Backup and recovery of a vStorage VMFS datastore over iSCSI

The backup and recovery of virtual machines residing on vStorage VMFS datastores over iSCSI can be done in many ways. A brief description of the methods is given here.

5.5.1 Logical backup and restore using Celerra iSCSI snapshots

When using vStorage VMFS over iSCSI, a Celerra iSCSI LUN is presented to the ESX server and formatted as type vStorage VMFS. In this case, users can create iSCSI snapshots on the Celerra to offer a point-in-time logical backup of the iSCSI LUN. Use the following commands to create and restore iSCSI snaps directly on the Celerra Control Station (the angle-bracket values are placeholders):

# server_iscsi <movername> -snap -create -target <target_alias_name> -lun <lun_number>
# server_iscsi <movername> -snap -restore <snap_name>

Note: To create and manage iSCSI snapshots in versions earlier than Celerra 5.6, a Linux host that contains the Celerra Block Management Command Line Interface (CBMCLI) package is required. The following commands are used to create snapshots and restore data on the Linux host:

# cbm_iscsi --snap <device> --create
# cbm_iscsi --snap <device> --restore

In general, this method works on a per-vStorage VMFS basis, unless the vStorage VMFS spans multiple LUNs. If multiple virtual machines share the same vStorage VMFS, back up and recover them together in one operation. When multiple snapshots are created from the PLU, restoring an earlier snapshot will delete all newer snapshots. Furthermore, ensure that the file system that stores the PLU and its snapshots has enough free space to create and restore from a snapshot.

An individual virtual machine can be restored from a snapshot when the snapshot is made read-writeable and attached to the ESX server. With VMware vSphere, as part of the Select VMFS Mount Options screen, select Assign a new signature (Figure 300 on page 383) to enable disk re-signature if the snapped LUN is attached to the same ESX server.


Figure 300 VMFS mount options to manage snapshots

With VMware Infrastructure, however, this step is somewhat more complex. To present the snapshot correctly to ESX, administrators must set the advanced configuration parameters LVM.DisallowSnapshotLun and LVM.EnableResignature, which control how the ESX server treats LUN copies. Use a proper combination of these LVM advanced configuration parameters to present the storage to ESX. To present the promoted snapshot LUN with its existing vStorage VMFS signature intact (typically when it is attached to an ESX server that does not see the original LUN), set LVM.EnableResignature to 0 and LVM.DisallowSnapshotLun to 0. To present the promoted LUN alongside the original LUN on the same ESX server, set LVM.EnableResignature to 1 so that the vStorage VMFS copy is resignatured; LVM.DisallowSnapshotLun can then remain at its default value of 1.
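On an ESX 3.x service console, these advanced parameters can typically be read and set with the esxcfg-advcfg command. The lines below are a sketch of that usage rather than a prescriptive procedure, and the values correspond to the two cases described above:

Check the current values:

# esxcfg-advcfg -g /LVM/EnableResignature
# esxcfg-advcfg -g /LVM/DisallowSnapshotLun

Present the copy without resignaturing (an ESX server that does not see the original LUN):

# esxcfg-advcfg -s 0 /LVM/EnableResignature
# esxcfg-advcfg -s 0 /LVM/DisallowSnapshotLun

Resignature the copy (same ESX server as the original LUN):

# esxcfg-advcfg -s 1 /LVM/EnableResignature

A rescan of the iSCSI adapter (for example, esxcfg-rescan vmhba32) is then required before the copy appears.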


When the snapped vStorage VMFS is accessible from the ESX server, the virtual machine files can be copied from the snapped vStorage VMFS to the original vStorage VMFS to recover the virtual machine.

5.5.2 Logical backup and restore using Replication Manager

Replication Manager protects the vStorage VMFS datastore over iSCSI that resides on an ESX server managed by a VMware vCenter Server and attached to a Celerra. It uses Celerra iSCSI snapshots to create replicas of vStorage VMFS datastores. VMware snapshots are taken for all virtual machines that are online and reside on the vStorage VMFS datastore, just prior to creating the local replicas, to ensure operating system consistency of the resulting replica. Operations are sent from a Windows proxy host, which is either a physical host or a separate virtual host.

The entire vStorage VMFS datastore can be restored by choosing the Restore option in Replication Manager. Before restoring a crash-consistent vStorage VMFS replica, do the following:

1. Power off the virtual machines that are hosted within the vStorage VMFS datastore.

2. Remove these virtual machines from the vCenter Server inventory.

3. Restore the replica from Replication Manager.

4. After the restore is completed, add the virtual machines to the vCenter Server inventory.

5. Revert to the VMware snapshot to obtain an operating system consistent replica, and delete the snapshots.

6. Manually power on each virtual machine.

A single virtual machine can be restored by using the Mount option in Replication Manager. Using this option, it is possible to mount a vStorage VMFS datastore replica to an ESX server as a vStorage VMFS datastore. To restore a single virtual machine:

1. Mount the replica as a vStorage VMFS datastore in the ESX server.

2. Power off the virtual machine residing in the production datastore.

3. Remove the virtual machine from the vCenter Server inventory.

4. Browse the mounted datastore.

5. Copy the virtual machine files to the production datastore.


6. Add the virtual machine to the inventory again so that the VMware snapshot taken by Replication Manager is reported.

7. Revert to the VMware snapshot taken by Replication Manager to obtain an operating system consistent replica, and then delete the snapshot.

8. Unmount the replica through Replication Manager.

9. Power on the virtual machine.

5.5.3 Physical backup and restore using Celerra Replicator

For a physical backup, use the following nas_replicate command to create and manage iSCSI clones by using Celerra Replicator V2, either from the CLI on the Control Station or from the Celerra Manager, in Celerra version 5.6 (the angle-bracket values are placeholders):

# nas_replicate -create <session_name> -source -lun <lun_number> -target <source_target_alias> -destination -lun <lun_number> -target <destination_target_alias> -interconnect <interconnect_name>
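For illustration only, with hypothetical session, LUN, target alias, and interconnect names that would need to match an actual configuration:

# nas_replicate -create lun5_clone -source -lun 5 -target celerra01_tgt1 -destination -lun 5 -target celerra02_tgt1 -interconnect NYtoNJ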

Figure 301 shows the new Replication Wizard in the Celerra Manager, which allows you to replicate an iSCSI LUN:

Figure 301 Celerra Manager Replication Wizard


Note: To create a physical backup in versions earlier than Celerra version 5.6, the Celerra iSCSI Replication-Based LUN Clone feature can be used. A target iSCSI LUN of the same size as the production LUN must be created on Fibre Channel or ATA disks to serve as the destination of a replication session initiated by the following command (the angle-bracket values are placeholders):

# cbm_replicate --dev <device> --session --create --alias <session_alias> --dest_ip <destination_IP> --dest_name <destination_target_name> --label <label>

The backup can be either local or remote. After the PLU is completely replicated, stop the replication session to make the target LUN a stand-alone copy. If required, this target LUN can be made read-writeable. The target LUN can be attached to the same or a different ESX server. If the target LUN is attached to the same server, disk re-signature must be enabled. After the target LUN is attached to an ESX server, an individual virtual machine can be restored by copying its folder from the target LUN to the PLU.

If VMware snapshots already exist at the time of backup and VMware snapshots are added or deleted later, the Snapshot Manager in the VI client might not report all VMware snapshots correctly after a virtual machine restore. One way to update the GUI information is to remove the virtual machine from the inventory and add it again.

If an entire vStorage VMFS must be recovered, a replication session can be established in the reverse direction, from the target LUN back to the PLU, with the cbm_replicate command or the nas_replicate command.

Storage operations, such as a snapshot restore, can cause the vSphere client GUI to be out of sync with the actual state of the ESX server. For example, if VMware snapshots already exist at the time of backup and VMware snapshots are added or deleted later, the Snapshot Manager in the vSphere client may not report all VMware snapshots correctly after a LUN restore. One way of updating the GUI information is to execute the following command in the service console to restart the ESX host agent:

# service mgmt-vmware restart

All VMware snapshots that existed prior to the backup are restored and refreshed when the Snapshot Manager is reopened. However, VMware snapshots taken after the backup are lost following an iSCSI LUN restore.


5.5.4 Physical backup and restore using Replication Manager

Replication Manager also provides a physical backup of the vStorage VMFS datastore over iSCSI that resides on an ESX server managed by VMware vCenter Server and attached to a Celerra. It uses Celerra Replicator to create remote replicas of vStorage VMFS datastores. For a single virtual machine recovery, the Mount option in Replication Manager can be used. To restore the entire vStorage VMFS datastore, use the Restore option as described in Section 5.5.2, "Logical backup and restore using Replication Manager," on page 384.


5.6 Backup and recovery of an RDM volume over iSCSI

The iSCSI LUNs presented to an ESX server as RDM are normal raw devices, just like they are in a non-virtualized environment. RDM provides some advantages of a virtual disk in the vStorage VMFS file system while retaining some advantages of direct access to physical devices. For example, administrators can take full advantage of storage array-based data protection technologies regardless of whether the RDM is in physical mode or virtual mode.

For logical backup and recovery, point-in-time Celerra-based iSCSI snapshots can be created. To back up an RDM volume physically, administrators can use the Celerra iSCSI Replication-Based LUN Clone feature to create clones for versions earlier than Celerra version 5.6. When using RDM, it is recommended that an RDM volume is not shared among different virtual machines or different applications, except when it is used as the quorum disk of a clustered application.

With RDM, administrators can create snapshots or clones in one of the following ways:

◆ Use the nas_replicate command or the Celerra Manager Replication Wizard. Alternatively, for Celerra version 5.5, administrators can install the CBMCLI package and use the cbm_iscsi and cbm_replicate commands as described in Section 5.5, "Backup and recovery of a vStorage VMFS datastore over iSCSI," on page 382.

◆ Install and use Replication Manager. Replication Manager offers customers a simple interface to manipulate and manage the disk-based snaps and replicas for Celerra and other platforms, and integrates with Windows applications to provide application-level consistency.

Note: Only RDM volumes in the physical compatibility mode are supported at this time. Only RDM volumes formatted as NTFS can be recognized by Replication Manager. Therefore, Microsoft Windows guest machines can be backed up this way. Virtual machines of other OS types still require CBMCLI for crash-consistent backup.


5.7 Backup and recovery using VCB

VCB allows a virtual machine backup at any time by providing a centralized backup facility that leverages a centralized proxy server and reduces the load on production ESX server hosts. VCB integrates with existing backup tools and technologies to perform full and incremental file backups of virtual machines. VCB can perform full image-level backups for virtual machines running any OS, as well as file-level backups for virtual machines running Microsoft Windows, without requiring a backup agent in the guests. Figure 302 on page 390 illustrates how VCB works.

In addition to the existing LAN and SAN modes, VMware introduced the Hot-Add mode in the VCB 1.5 release. This mode allows administrators to leverage VCB for any datastore by setting up one of the virtual machines as a VCB proxy and using it to back up other virtual machines residing on storage visible to the ESX server that hosts the VCB proxy. VCB creates a snapshot of the virtual disk to be protected and hot-adds the snapshot to the VCB proxy, allowing it to access the virtual machine disk data. The VCB proxy reads the data through the I/O stack of the ESX host. In contrast to the LAN mode, which uses the service console network to perform backups and can potentially saturate the IP network, the Hot-Add mode uses the hypervisor I/O stack. Testing has shown that the Hot-Add mode is more efficient than the LAN mode.


Figure 302 VCB

The Celerra array-based solutions for backup and recovery operate at the datastore level or, more granularly, at the virtual machine image level. If individual files residing inside a virtual machine must be backed up, other tools are required. VCB is a great tool for file-level and image-level backup.

A VCB proxy must be configured on a Windows system, and requires third-party backup software such as EMC NetWorker or EMC Avamar®, the VCB integration module for the backup software, and the VCB software itself. VMware provides the latter two components, which are downloadable at no cost. However, the VCB licenses must be purchased and enabled on the ESX or vCenter Server. After all three components are installed, the configuration file config.js, located in the <VCB installation directory>\config directory, must be modified before the first backup can be taken. This file contains comments that define each parameter.


It is recommended to follow the README file in the integration module, which contains step-by-step instructions to prepare and complete the first VCB backup successfully. When a backup is initiated through EMC NetWorker, it triggers the scripts provided in the integration module, which in turn start the executable vcbMounter.exe (included in the VCB software) to contact the vCenter Server or the ESX server directly and locate the virtual machine to be backed up. The arguments passed to vcbMounter.exe come from config.js and the Save set syntax in EMC NetWorker.

VCB image-level backup supports virtual machines that run any type of OS. For NetWorker versions earlier than 7.4.1, the Save set in EMC NetWorker must include the keyword FULL and the name or IP address of the target virtual machine. Starting with release 7.4.1, each virtual machine to be backed up must be added as a client to NetWorker. Specify FULL in the Save set for a full machine backup as shown in Figure 303 on page 392. VCB first retrieves the virtual machine configuration files as well as its virtual disks into a local directory before NetWorker takes a backup of the directory. During a restore, NetWorker restores the directory on the VCB proxy. The administrator must take the final step to restore the virtual machine onto an ESX server by using the vcbRestore command or the VMware vCenter Converter tool. Because the command vcbRestore is unavailable on the VCB proxy, it must be run directly from the ESX service console.

VCB file-level backup only supports the Windows guest OS. For versions earlier than NetWorker version 7.4.1, the Save set in EMC NetWorker must include the name or IP address of the target virtual machine and a colon-separated list of paths that must be backed up. Starting with the NetWorker release 7.4.1, each virtual machine that must be backed up must be added as a client to NetWorker. In the Save set, specify the colon-separated list of paths that must be backed up, or ALLVMFS to back up all the files and directories on all drives of the target machine. VCB first takes a VMware snapshot and uses mountvm.exe to mount the virtual disk on the VCB proxy before NetWorker backs up the list of paths provided in the Save set. During a restore, expose the target directory of the virtual machine as a CIFS share to the backup proxy. Use NetWorker User on the VCB proxy to restore the desired file to this network share.
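As a hedged illustration of what the integration module ultimately invokes (the host names, credentials, addresses, and paths below are hypothetical, and the exact flags should be verified against the documentation for the installed VCB release), an image-level backup run on the Windows VCB proxy and its restore from the ESX service console might look like:

vcbMounter -h vcenter.example.local -u vcbuser -p <password> -a ipaddr:10.6.119.55 -r D:\mnt\vm01-fullvm -t fullvm -m hotadd

# vcbRestore -h vcenter.example.local -u vcbuser -p <password> -s /vmimages/vm01-fullvm

The -t option selects a full image (fullvm) or a file-level backup, and -m selects the transport mode (san, nbd, nbdssl, or hotadd), matching the TRANSPORT_MODE parameter in config.js.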


Figure 303 NetWorker configuration settings for VCB

While planning to use VCB with vSphere, consider the following guidelines and best practices:




◆ Ensure that all virtual machines that must be used with VCB have the latest version of VMware tools installed. Without the latest version of VMware tools, the snapshots that VCB creates for backups are crash-consistent only. This means that no virtual machine-level file system consistency is performed.



◆ Image-level backup can be performed on virtual machines running any OS. File-level backup can be done only on Windows virtual machines.



◆ RDM physical mode is not supported for VCB.



◆ When an RDM disk in a virtual mode is backed up, it is converted to a standard virtual disk format. Hence, when it is restored, it will no longer be in the RDM format.



◆ When using the LAN mode, each virtual disk cannot exceed 1 TB.




◆ The default backup mode is SAN. To perform LAN-based backup, modify TRANSPORT_MODE to either nbd, nbdssl, or hotadd in the file config.js (a sample config.js fragment appears after these guidelines).



◆ Even though Hot-Add transport mode is efficient, it does not support the backup of virtual disks belonging to different datastores.



◆ vcbMounter and vcbRestore commands can be executed directly on the ESX server without the need for a VCB license. However, there will be a performance impact on the ESX server because additional resources are consumed during backup/restore.



◆ vcbRestore is not available on the VCB proxy. It has to be run directly on the ESX server, or VMware vCenter Converter must be installed to restore a VCB image backup.



◆ Mountvm.exe on the VCB proxy is a useful tool to mount a virtual disk that contains NTFS partitions.



◆ Before taking a file-level backup, VCB creates a virtual machine snapshot named _VCB-BACKUP_. An EMC NetWorker job will hang if a snapshot with the same name already exists. This default behavior can be modified by changing the parameter PREEXISTING_VCB_SNAPSHOT to delete in config.js.



◆ If a backup job fails, virtual machines can remain mounted in the snapshot mode. Run vcbCleanup to clean up snapshots and unmount virtual machines from the directory specified in BACKUPROOT of config.js.



◆ Because VCB by default searches for the target virtual machines by IP address, the virtual machine has to be powered on the first time it is backed up so that VMware tools can relay the information to the ESX or vCenter Server. This information is then cached locally on the VCB proxy after the first backup. A workaround is to switch to virtual machine lookup by name by setting VM_LOOKUP_METHOD="name" in config.js.

Note: The backup would fail if there are duplicate virtual machine names.



◆ Beginning with release 7.4.1 of NetWorker, each virtual machine to be backed up must be added as a client to NetWorker. However, installing the NetWorker client software on the virtual machine itself is not required. It is recommended that with NetWorker release 7.4.1 or later, the VCB method to find virtual machines should be based on the virtual machine IP address (the default method).

◆ If vcbMounter hangs, NetWorker will also hang waiting for it to complete. To troubleshoot this issue, download and run a copy of the Process Explorer utility from sysinternals.com, right-click the vcbMounter process, and select Properties. The Command line textbox on the Image tab displays the full syntax of the vcbMounter command. Copy the command, terminate the hung process, then paste and run the command manually in a DOS window to view the output and determine the cause.



◆ vcbRestore by default restores the image to its original location. An alternate location can be specified by editing the paths listed in the catalog file.

When using Security Support Provider Interface (SSPI) authentication, ensure that the HOST entry in the config.js configuration file points to the vCenter Server. The NetWorker integration module that calls the VCB framework must use user credentials that exist on both the VCB proxy and the vCenter Server with identical passwords, or must use a domain account. The user account must have administrator privileges on the VCB proxy and at least VCB user privileges in the vCenter Server.
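The config.js parameters referred to in this section are plain assignments in the file itself. The following fragment is hypothetical; the exact parameter set, syntax, and defaults should be taken from the comments inside the config.js shipped with the integration module:

HOST="vcenter.example.local";
BACKUPROOT="D:\\mnt";
TRANSPORT_MODE="hotadd";
VM_LOOKUP_METHOD="name";
PREEXISTING_VCB_SNAPSHOT="delete";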


5.8 Backup and recovery using VCB and EMC Avamar

EMC Avamar is a backup and recovery software product. Avamar's source-based global data deduplication technology eliminates unnecessary network traffic and data duplication. By identifying redundant data at the source, this deduplication minimizes backup data before it is sent over the network, thereby slowing the pace of data growth in the core data centers and at remote offices. Avamar is very effective in areas where traditional backup solutions are inadequate, such as virtual machines, remote offices, and large LAN-attached file servers. Avamar solves traditional backup challenges by:

◆ Reducing the size of backup data at the source.

◆ Storing only a single copy of sub-file data segments across all sites and servers.

◆ Performing full backups that can be recovered in just one step.

◆ Verifying backup data recoverability.

Avamar Virtual Edition for VMware integrates with VCB for virtual environments by using the Avamar VCB Interoperability Module (AVIM). The AVIM is a series of .bat wrapper scripts that leverage VCB scripts to snap/mount and unmount running virtual machines. These scripts are called before and after an Avamar backup job. There are scripts for full virtual machine backup (for all types of virtual machines) and scripts for file-level backup (for Windows virtual machines only). These scripts can be used regardless of whether an NFS datastore or vStorage VMFS over iSCSI is used. Figure 304 on page 396 illustrates the full virtual machine backup and file-level backup process.


Figure 304 VCB backup with EMC Avamar Virtual Edition

The Avamar agent, AVIM, and the VCB software must be installed on the VCB proxy server. After all three software components are installed, the VCB configuration file (config.js), which is located in the <VCB installation directory>\config directory, must be modified before the first backup can be taken. The VCB configuration file contains comments that define each parameter for Avamar backups.

After initiating a backup job from Avamar, VCB retrieves the configuration files as well as the virtual disks to its local directory. Then Avamar copies the files to the backup destination. After the job is successful, Avamar removes the duplicate copy on the VCB proxy server. This type of backup can be performed on any guest OS, and the deduplication occurs at the .vmdk level.

VCB file-level backup with Avamar is similar to VCB image-level backup with Avamar. When a backup is initiated through Avamar, it triggers the scripts provided in the integration module, which in turn start the executable vcbMounter.exe (included in the VCB software) to contact the vCenter Server or the ESX server directly and locate the virtual machine to be backed up. The arguments passed to vcbMounter.exe come from config.js and the Dataset syntax in EMC Avamar. In this case, data deduplication happens at the file level. However, presently, VCB file-level backup works only for virtual machines that run the Windows OS.


5.9 Backup and recovery using VMware Data Recovery

In the VMware vSphere 4 release, VMware introduced VMware Data Recovery, which is a disk-based backup and recovery solution. It is built on the VMware vStorage API for data protection and uses a virtual machine appliance and a client plug-in to manage and restore backups. VMware Data Recovery can be used to protect any kind of OS. It incorporates capabilities such as block-based data deduplication and performs only incremental backups after the first full backup to maximize storage efficiency. Celerra-based CIFS and iSCSI storage can be used as destination storage for VMware Data Recovery. Backed-up virtual machines are stored on a target disk in a deduplicated store.

Figure 305 VMware Data Recovery


During the backup, VMware Data Recovery takes a snapshot of the virtual machine and mounts the snapshot directly to the VMware Data Recovery virtual appliance. After the snapshot is mounted, VMware Data Recovery begins streaming the blocks of data to the destination storage as shown in Figure 305 on page 398. During this process, VMware Data Recovery deduplicates the stream of data blocks to ensure that redundant data is eliminated prior to the backup data being written to the destination disk. VMware Data Recovery uses the change tracking functionality on ESX hosts to obtain the changes since the last backup. The deduplicated store creates a virtual full backup based on the last backup image and applies the changes to it. When all the data is written, VMware Data Recovery dismounts the snapshot and takes the virtual disk out of the snapshot mode.

VMware Data Recovery supports only full and incremental backups at the virtual machine level and does not support backups at the file level. Figure 306 on page 399 shows a sample backup screenshot.

Figure 306 VDR backup process

When using VMware Data Recovery, adhere to the following guidelines:

◆ A VMware Data Recovery appliance can protect up to 100 virtual machines. It supports the use of only two backup destinations simultaneously. If more than two backup destinations must be used, configure them to be used at different times. It is recommended that the backup destination size does not exceed 1 TB.



◆ A VMware Data Recovery appliance is only supported if the mount is presented by an ESX server and the VMDK is assigned to the VDR appliance. Mounts cannot be mapped directly to the VDR appliance.



◆ VMware Data Recovery supports both RDM virtual and physical compatibility modes as backup destinations. When using RDM as a backup destination, it is recommended to use the virtual compatibility mode. Using this mode, a VMware snapshot can be taken, which can be leveraged together with the Celerra technologies to provide crash consistency and protection for the backed-up data.



◆ When creating a vStorage VMFS datastore over iSCSI as a backup destination, choose a block size that matches the storage requirements. Selecting the default 1 MB block size only allows for a maximum virtual disk size of 256 GB (a vmkfstools sketch follows this list).



◆ To realize increased space savings, ensure that similar virtual machines are backed up to the same destination. Because VMware Data Recovery performs data deduplication within and across virtual machines, virtual machines with the same OS will have only one copy of the OS data stored.



◆ The virtual machine must not have a snapshot named _data recovery_ prior to backup by using VMware Data Recovery. This is because VDR creates a snapshot named _data recovery_ as a part of its backup procedure. If a snapshot with the same name already exists, VDR will delete and re-create it.



◆ Backups of virtual machines with RDM can be performed only when the RDM is running in virtual compatibility mode.



◆ VMware Data Recovery provides an experimental capability called File Level Restore (FLR) that restores individual files from a backup of a Windows virtual machine without restoring the whole virtual machine.



◆ Because VMware Data Recovery only copies the state of the virtual machine at the time of backup, pre-existing snapshots are not part of the VMware Data Recovery backup process.
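Relating to the block-size guideline above, a vStorage VMFS backup destination with a larger block size can be created from the service console with vmkfstools. This is a sketch only; the volume label and device path are hypothetical and must be replaced with the actual values for the environment:

# vmkfstools -C vmfs3 -b 8m -S VDR_Dest /vmfs/devices/disks/naa.600601601234567890:1

An 8 MB block size allows virtual disks of up to 2 TB, whereas the default 1 MB block size caps them at 256 GB, as noted above.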


5.10 Virtual machine single file restore from a Celerra checkpoint

VMware has introduced the Virtual Disk Development Kit (VDDK) to create or access VMware virtual disk storage. The VMware website (http://communities.vmware.com/community/developer/forums/vddk) provides more information. The VDDK Disk Mount utility allows administrators to mount a virtual disk as a separate drive or partition without needing to connect to the virtual disk from within a virtual machine. Therefore, this tool provides a way to mount a Celerra checkpoint-based virtual disk or a Celerra iSCSI snapshot-based virtual disk from which specific files can be restored to production virtual machines. A virtual disk cannot be mounted if any of its vmdk files have read-only permissions. Change these attributes to read/write before mounting the virtual disk.

To restore a single file for a Windows virtual machine residing on a Celerra-based file system read-only checkpoint:

1. Install VDDK either on the vCenter Server or on a virtual machine where the file has to be restored.

2. Identify the appropriate read-only checkpoint from the Celerra Manager GUI.

3. Create a CIFS share on the read-only checkpoint file system identified in step 2.

4. Map that CIFS share on the vCenter Server or on the virtual machine mentioned in step 1.

5. Execute the following command syntax to mount the virtual disk from the mapped read-only checkpoint (the angle-bracket values are placeholders):

vmware-mount <driveletter> <path-to-vmdk> /m:n /v:N

• driveletter — Specifies the drive letter where the virtual disk must be mounted or unmounted.

• path-to-vmdk — Specifies the location of the virtual disk that must be mounted.

• /m:n — Allows mounting of a Celerra file system read-only checkpoint.

• /v:N — Mounts volume N of the virtual disk. N defaults to 1.

The following example shows how to mount a virtual disk when the read-only checkpoint is mapped to the U: drive of the vCenter Server as shown in Figure 307 on page 402.


Figure 307 Mapped CIFS share containing a virtual machine in the vCenter Server

From the command prompt, execute the following command to list the volume partitions:

vmware-mount "U:\DEMO\DEMO.vmdk" /p

From the command prompt, execute the following command to mount the virtual disk:

vmware-mount P: "U:\DEMO\DEMO.vmdk" /m:n

6. After the virtual disk has been mounted as the P: drive on the vCenter Server, the administrator must copy the individual files through CIFS to the corresponding production machine.

7. After the copy is completed, unmount the virtual disk by using the following command:

vmware-mount P: /d

To restore the Windows files from a vmdk residing on a vStorage VMFS datastore over iSCSI:

1. Identify the Celerra iSCSI snap from which the files have to be restored.


2. Execute the server_iscsi command from the Celerra Control Station to promote the identified snap.

3. Create a new datastore and add a copy of the virtual machine to the vCenter Server inventory.

4. Install VDDK on the vCenter Server and use the following syntax to mount the vmdk file (the angle-bracket values are placeholders):

vmware-mount <driveletter> /v:N /i:"<datacenter>/vm/<vm_name>" "[<datastore_name>] <vm_dir>/<vmdk_name>.vmdk" /h:<vCenter_Server_IP> /u:<username> /s:<password>

The following command mounts the vmdk of the testvm_copy machine on the Q: drive of the vCenter Server as shown in Figure 308:

vmware-mount Q: /v:1 /i:"EMC/vm/testvm_copy" "[snap-63ac0294-iscsidatastore] testvm/testvm.vmdk" /h:10.6.119.201 /u:administrator /s:nasadmin

Figure 308 Virtual machine view from the vSphere client

5. After it is mounted, copy the files back to the production machine.

6. After the restore has completed, demote the snap by using the server_iscsi command from the Celerra Control Station.

A virtual disk in the RDM format can also be mounted in a manner similar to the single file restore described in this procedure.


5.11 Other file-level backup and restore alternatives

There are other alternatives for virtual machine file-level backup and restore. A traditional file-level backup method is installing a backup agent on the guest operating system that runs in the virtual machine, in the same way as it is done on a physical machine. This is normally called guest-based backup.

Another method of file-level backup is to use a Linux host to mount the .vmdk file and access the files within the .vmdk directly. Do the following to achieve this:

1. Download the Linux NTFS driver located at http://linux-ntfs.org/doku.php, and install it on the Linux host.

2. Mount the file system being used as the datastore on the Linux host. Administrators can now access configuration and virtual disk files and can do an image-level backup of a virtual machine.

# mount <Data_Mover_interface>:/<file_system_export> /mnt/esxfs

3. Mount the virtual disk file of the virtual machine as a loopback mount. Specify the starting offset of 32,256 and the NTFS file system type in the mount command line.

# mount /mnt/esxfs/<VM_dir>/<VM_name>-flat.vmdk /mnt/vmdk -o ro,loop=/dev/loop2,offset=32256 -t ntfs

4. Browse the mounted .vmdk, which can be viewed as an NTFS file system. All the files in the virtual machine can be viewed.

5. Back up the necessary files.

Administrators must review the following carefully before implementing the Linux method (an end-to-end sketch follows the list):




◆ The Linux method has been verified to work only for datastores.



◆ VCB works only for Windows virtual machines. This alternative may work for any guest OS type whose file system can be loopback-mounted on a Linux host.



◆ The offset for the loopback mount is not always the same. Determining the correct value may not be straightforward depending on the OS, partition, and so on.



◆ This alternative works only when flat virtual disks are allocated as opposed to thin-provisioned. Testing has shown that thinly provisioned virtual disks cannot be mounted by using any offset. In contrast, VCB comes with a utility, mountvm.exe, that allows mounting both flat and thin-provisioned virtual disks that contain NTFS partitions.

◆ After a successful mount of the virtual disk file, the file backup is performed on a Linux system. Thus, the Windows ACL metadata is not maintained and will be lost after a restore.
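An end-to-end sketch of the Linux method follows. The export, directory, and virtual machine names are hypothetical, and the offset shown is the value discussed above for a partition starting at sector 63 (63 x 512 bytes = 32,256 bytes); it must be adjusted for other partition layouts:

# mount -t nfs <Data_Mover_interface>:/nfs_ds01_fs /mnt/esxfs
# mkdir -p /mnt/vmdk
# mount /mnt/esxfs/vm01/vm01-flat.vmdk /mnt/vmdk -o ro,loop=/dev/loop2,offset=32256 -t ntfs
# tar czf /backup/vm01-files.tar.gz -C /mnt/vmdk .
# umount /mnt/vmdk
# umount /mnt/esxfs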

File-level backup can also be performed for RDM devices, either in the physical compatibility mode or in the virtual compatibility mode, by using the CBMCLI package in the following manner:

1. Take an iSCSI snapshot of the RDM LUN.

2. Promote the snapshot and provide access to the backup server by using the following command (the mask value is a placeholder for the backup server initiator):

# cbm_iscsi --snap /dev/sdh --promote --mask <backup_server_initiator>

3. Connect the snapshot to the backup server. The files in the snapshot can now be backed up.

4. Demote and remove the snapshot when finished.


5.12 Summary

Table 5 summarizes the backup and recovery options of Celerra storage presented to VMware vSphere or VMware Infrastructure.

Table 5  Backup and recovery options

NFS datastore
  Image-level: Celerra SnapSure, Celerra NDMP, VCB, Replication Manager, VDR
  File-level: VCB (Windows), Loopback mount (all OS)

vStorage VMFS/iSCSI
  Image-level: Celerra iSCSI snapshot (CBMCLI or server_iscsi), Celerra iSCSI replication-based clone (CBMCLI, nas_replicate, or Celerra Manager), VCB, Replication Manager, VDR
  File-level: VCB (Windows)

RDM/iSCSI (physical)
  Image-level: Celerra iSCSI snapshot (CBMCLI, server_iscsi, or Replication Manager), Celerra iSCSI replication-based clone (CBMCLI, nas_replicate, Celerra Manager, or Replication Manager)
  File-level: Celerra iSCSI snapshot (CBMCLI or server_iscsi)

RDM/iSCSI (virtual)
  Image-level: Celerra iSCSI snapshot (CBMCLI or server_iscsi), Celerra iSCSI replication-based clone (CBMCLI, nas_replicate, or Celerra Manager), VDR
  File-level: Celerra iSCSI snapshot (CBMCLI or server_iscsi)

The best practices planning white papers on Powerlink provide more information and recommendations about protecting applications such as Microsoft Exchange and Microsoft SQL Server deployed on VMware vSphere or VMware Infrastructure. Access to Powerlink is based upon access privileges. If this information cannot be accessed, contact your local EMC representative.


6 Using VMware vSphere and VMware Virtual Infrastructure in Disaster Restart Solutions

This chapter presents these topics:

◆ 6.1 Overview ................................................................................................... 410
◆ 6.2 Definitions ................................................................................................. 411
◆ 6.3 Design considerations for disaster recovery and disaster restart ..... 413
◆ 6.4 Geographically distributed virtual infrastructure .............................. 419
◆ 6.5 Business continuity solutions ................................................................. 420
◆ 6.6 Summary ................................................................................................... 453

6.1 Overview VMware technology virtualizes the x86-based physical infrastructure into a pool of resources. Virtual machines are presented with a virtual hardware environment independent of the underlying physical hardware. This enables organizations to leverage different physical hardware in the environment and provide low total cost of ownership. The virtualization of the physical hardware can also be used to create disaster recovery and business continuity solutions that would have been impractical otherwise. These solutions normally involve a combination of virtual infrastructure at one or more geographically separated data centers and EMC remote replication technology. One example of such an architecture has physical servers running various business applications in their primary data center while the secondary data center has a limited number of virtualized physical servers. During normal operations, the physical servers in the secondary data center are used to support workloads such as QA and testing. In case of a disruption in services at the primary data center, the physical servers in the secondary data center run the business applications in a virtualized environment. The purpose of this chapter is to discuss:

◆ EMC Celerra Replicator configurations and their interaction with an ESX server
◆ EMC Celerra Replicator and ESX server application-specific considerations
◆ Integration of guest operating environments with EMC technologies and an ESX server
◆ The use of VMware vCenter Site Recovery Manager to manage and automate a site-to-site disaster recovery with EMC Celerra

6.2 Definitions

In the next sections, the terms dependent-write consistency, disaster restart, disaster recovery, and roll-forward recovery are used. A sound understanding of these terms is required to understand the context of this section.

6.2.1 Dependent-write consistency

A dependent-write I/O cannot be issued until a related predecessor I/O is completed. Dependent-write consistency is a state where data integrity is guaranteed by dependent-write I/Os embedded in an application logic. Database management systems are good examples of the practice of dependent-write consistency.

Database management systems must devise protection against abnormal termination to successfully recover from one. The most common technique used is to guarantee that a dependent-write cannot be issued until a predecessor write is complete. Typically, the dependent-write is a data or index write, while the predecessor write is a write to the log. Because the write to the log must be completed before issuing the dependent-write, the application thread is synchronous to the log write. The application thread waits for the write to complete before continuing. The result is a dependent-write consistent database.

6.2.2 Disaster restart

Disaster restart involves the implicit use of active logs by various databases and applications during their normal initialization process to ensure a transactionally consistent data state.

If a database or application is shut down normally, the process of getting to a point of consistency during restart requires minimal work. If a database or application terminates abnormally, the restart process takes longer, depending on the number and size of in-flight transactions at the time of termination. An image of the database or application created by using EMC consistency technology such as Replication Manager while it is running, without any conditioning of the database or application, is in a dependent-write consistent data state, which is similar to that created by a local power failure. This is also known as a restartable image. The restart of this image transforms it to a transactionally consistent data state by completing committed transactions and rolling back uncommitted transactions during the normal initialization process.

6.2.3 Disaster recovery

Disaster recovery is the process of rebuilding data from a backup image, and then explicitly applying subsequent logs to roll the data state forward to a designated point of consistency. The mechanism to create recoverable copies of data depends on the database and applications.

6.2.4 Roll-forward recovery

With some databases, it may be possible to take a Database Management System (DBMS) restartable image of the database and apply subsequent archive logs to roll forward the database to a point in time after the image was created. This means the image created can be used in a backup strategy in combination with archive logs.

6.3 Design considerations for disaster recovery and disaster restart

The effect of data loss or loss of application availability varies from one business type to another. For instance, the loss of transactions for a bank could cost millions of dollars, whereas system downtime may not have a major fiscal impact. In contrast, businesses primarily engaged in web commerce must have their applications available on a continual basis to survive in the market. The two factors, data loss and availability, are the business drivers that determine the baseline requirements for a disaster restart or disaster recovery solution. When quantified, loss of data is more frequently referred to as recovery point objective, while loss of uptime is known as recovery time objective.

When evaluating a solution, the recovery point objective (RPO) and recovery time objective (RTO) requirements of the business must be met. In addition, the solution's operational complexity, cost, and its ability to return the entire business to a point of consistency need to be considered. Each of these aspects is discussed in the following sections.

6.3.1 Recovery point objective

RPO is a point of consistency to which a user wants to recover or restart. It is measured by the difference between the time when the point of consistency was created or captured and the time when the disaster occurred. This time is the acceptable amount of data loss. Zero data loss (no loss of committed transactions from the time of the disaster) is the ideal goal, but the high cost of implementing such a solution must be weighed against the business impact and cost of a controlled data loss.

Some organizations, such as banks, have zero data loss requirements. The transactions entered at one location must be replicated immediately to another location. This can affect application performance when the two locations are far apart. On the other hand, keeping the two locations close to one another might not protect the data against a regional disaster. Defining the required RPO is usually a compromise between the needs of the business, the cost of the solution, and the probability of a particular event happening.

6.3.2 Recovery time objective

The RTO is the maximum amount of time allowed after the declaration of a disaster for recovery or restart to a specified point of consistency.

This includes the time taken to:
◆ Provision power and utilities
◆ Provision servers with the appropriate software
◆ Configure the network
◆ Restore the data at the new site
◆ Roll forward the data to a known point of consistency
◆ Validate the data

Some delays can be reduced or eliminated by choosing certain disaster recovery options such as having a hot site where servers are preconfigured and are on standby. Also, if storage-based replication is used, the time taken to restore the data to a usable state is completely eliminated. Like RPO, each solution with varying RTO has a different cost profile. Defining the RTO is usually a compromise between the cost of the solution and the cost to the business when applications are unavailable.
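As a rough illustration of how these elements combine into an overall RTO, the following fragment totals a set of hypothetical step durations. Every number is an assumption chosen for the example and is not taken from this document:

# Hypothetical RTO estimate; all durations are assumptions, in minutes
power_and_utilities=60
provision_servers=120
configure_network=45
restore_data=180        # largely avoided when storage-based replication is used
roll_forward=60
validate_data=30
total=$((power_and_utilities + provision_servers + configure_network + restore_data + roll_forward + validate_data))
echo "Estimated RTO: ${total} minutes"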

6.3.3 Operational complexity

The operational complexity of a disaster recovery solution may be the most critical factor that determines the success or failure of a disaster recovery activity. The complexity of a disaster recovery solution can be considered as three separate phases:
1. Initial setup of the implementation
2. Maintenance and management of the running solution
3. Execution of the disaster recovery plan in the event of a disaster

While initial configuration complexity and running complexity can be a demand on people resources, the third phase, that is, execution of the plan, is where automation and simplicity must be the focus. When a disaster is declared, key personnel may be unavailable in addition to loss of servers, storage, networks, and buildings. If the disaster recovery solution is so complex that it requires skilled personnel with an intimate knowledge of all systems involved to restore, recover, and validate application and database services, the solution has a high probability of failure.

Multiple database and application environments over time grow organically into complex federated database architectures. In these federated environments, reducing the complexity of disaster recovery is absolutely critical. Validation of transactional consistency within a business process is time-consuming, costly, and requires application and database familiarity. One of the reasons for this complexity is the heterogeneous applications, databases, and operating systems in these federated environments. Across multiple heterogeneous platforms, it is hard to establish time synchronization, and therefore hard to determine a business point of consistency across all platforms. This business point of consistency has to be created from intimate knowledge of the transactions and data flows.

6.3.4 Source server activity

Disaster recovery solutions may or may not require additional processing activity on the source servers. The extent of that activity can impact both the response time and throughput of the production application. This effect should be understood and quantified for any given solution to ensure that the impact to the business is minimized. The effect for some solutions is continuous while the production application is running. For other solutions, the impact is sporadic, where bursts of write activity are followed by periods of inactivity.

6.3.5 Production impact

Some disaster recovery solutions delay the host activity while taking actions to propagate the changed data to another location. This action only affects write activity. Although the introduced delay may only be for a few milliseconds, it can negatively impact response time in a high-write environment. Synchronous solutions introduce delay into write transactions at the source site; asynchronous solutions do not.

6.3.6 Target server activity

Some disaster recovery solutions require a target server at the remote location to perform disaster recovery operations. The server has both software and hardware costs and requires personnel with physical access to the server to perform basic operational functions such as power on and power off. Ideally, this server should have some other use, such as running development or test databases and applications. Some disaster recovery solutions require more target server activity and some require none.

6.3.7 Number of copies of data

Disaster recovery solutions require replication of data in one form or another. Replication of application data and associated files can be as simple as backing up data on a tape and shipping the tapes to a disaster recovery site, or as sophisticated as asynchronous array-based replication. Some solutions require multiple copies of the data to support disaster recovery functions. More copies of the data may be required to perform testing of the disaster recovery solution in addition to those that support the data replication process.

6.3.8 Distance for the solution

Disasters, when they occur, have differing ranges of impact. For instance, a fire may be isolated to a small area of the data center or a building; an earthquake may destroy a city; or a hurricane may devastate a region. The level of protection for a disaster recovery solution must address the probable disasters for a given location. This means for protection against an earthquake, the disaster recovery site should not be in the same locale as the production site. For regional protection, the two sites need to be in two different regions. The distance associated with the disaster recovery solution affects the kind of disaster recovery solution that can be implemented.

6.3.9 Bandwidth requirements

One of the largest costs for disaster recovery is provisioning bandwidth for the solution. Bandwidth costs are an operational expense; this makes solutions with reduced bandwidth requirements attractive to customers. It is important to recognize in advance the bandwidth consumption of a given solution to anticipate the running costs. Incorrect provisioning of bandwidth for disaster recovery solutions can adversely affect production performance and invalidate the overall solution.
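As a simple sizing sketch, sustained replication bandwidth can be approximated by dividing the amount of data changed per update cycle by the cycle length. The change rate and interval below are illustrative assumptions, not figures from this document:

# Hypothetical sizing: 20 GB of changed data replicated every 4 hours
changed_gb=20
interval_s=$((4 * 3600))
# Sustained rate in megabits per second (GB converted to Mb using 1 GB = 8 * 1024 Mb)
echo "scale=1; ($changed_gb * 8 * 1024) / $interval_s" | bc
# Prints approximately 11.3 Mb/s; add headroom for protocol overhead and write bursts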

6.3.10 Federated consistency

Databases are rarely isolated islands of information with no interaction or integration with other applications or databases. Most commonly, databases are loosely or tightly coupled to other databases and applications using triggers, database links, and stored procedures. Some databases provide information downstream for other databases and applications using information distribution middleware, and other applications and databases receive feeds and inbound data from message queues and Electronic Data Interchange (EDI) transactions. The result can be a complex, interwoven architecture with multiple interrelationships. This is referred to as a federated architecture.

With federated environments, making a disaster recovery copy of a single database regardless of other components results in consistency issues and creates logical data integrity problems. All components in a federated architecture need to be recovered or restarted to the same dependent-write consistent point in time to avoid data consistency problems. With this in mind, it is possible that point solutions for disaster recovery, like host-based replication software, do not provide the required business point of consistency in federated environments. Federated consistency solutions guarantee that all components (databases, applications, middleware, and flat files) are recovered or restarted to the same dependent-write consistent point in time.

6.3.11 Testing the solution

Tested, proven, and documented procedures are also required for a disaster recovery solution. Often, the disaster recovery test procedures are operationally different from a true disaster set of procedures. Operational procedures need to be clearly documented. In the best-case scenario, companies should periodically execute the actual set of procedures for disaster recovery. This could be costly to the business because of the application downtime required to perform such a test, but is necessary to ensure validity of the disaster recovery solution.

6.3.12 Cost

The cost of disaster recovery can be justified by comparing it with the cost of not having a solution. What does it cost the business when the database and application systems are unavailable to users? For some companies this is easily measurable, and revenue loss can be calculated per hour of downtime or data loss. For all businesses, the disaster recovery cost is going to be an additional expense item and, in many cases, with little in return. The costs include, but are not limited to:
◆ Hardware (storage, servers, and maintenance)
◆ Software licenses and maintenance
◆ Facility leasing or purchase
◆ Utilities
◆ Network infrastructure
◆ Personnel
◆ Training
◆ Creation and maintenance of processes

6.4 Geographically distributed virtual infrastructure

Currently, VMware does not provide any native tools to replicate data from the ESX server to a geographically separated location. Software-based replication technology can be used inside virtual machines or the service console. However, these techniques add significantly to the network and CPU resource requirements. Integrating ESX server and storage-array-based replication products adds a level of business data protection that is not easily attained otherwise. Using the SnapSure and Replicator families of Celerra products with VMware technologies enables customers to provide a cost-effective disaster recovery and business continuity solution. Some of these solutions are discussed in the following sections.

Note: Similar solutions are possible using host-based replication software such as RepliStor®. However, utilizing storage-array-based replication enables customers to provide a disaster restart solution that can provide a business-consistent view of the data that includes multiple hosts, operating systems, and applications.

6.5 Business continuity solutions

The business continuity solution for a production environment with VMware vSphere and VMware Infrastructure includes the use of EMC Celerra Replicator as the mechanism to replicate data from the production data center to the remote data center. The copy of the data in the remote data center can be presented to a VMware ESX server cluster group. The remote virtual data center thus provides a business continuity solution.

For disaster recovery purposes, a remote replica of the PFS or an iSCSI LUN that is used to provide ESX server storage is required. Celerra offers advanced data replication technologies to help protect a file system or an iSCSI LUN. In case of a disaster, failover to the destination side can be performed with minimal administrator intervention. The replication session has to be maintained and the snapshots need to be refreshed periodically. The update frequency is determined based on the WAN bandwidth and the RPO.

6.5.1 NAS datastore replication

Providing high availability to virtual machines is crucial in large VMware environments. This section explains how Celerra replication technology provides high availability for virtual machines hosted on NAS datastores. Celerra Replicator technology, along with Replication Manager, can be used to instantly create virtual machine-consistent replicas of NAS datastores containing virtual machines.

6.5.1.1 Replication using Celerra Replicator

Celerra Replicator can be used to replicate file systems exported to ESX servers as NAS datastores. This is done in one of the following ways:
◆ Using Celerra Manager: "Using Celerra Manager" on page 422 provides more details.
◆ Using the Celerra /nas/bin/nas_replicate command (or the /nas/bin/fs_replicate command for versions earlier than Celerra version 5.6). A hedged command sketch follows this list.
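The following sketch shows the general shape of creating a file system replication session from the Control Station. The session name, file system names, interconnect name, and update interval are placeholders, and the exact option set should be verified against the nas_replicate man page for the installed Celerra version:

$ /nas/bin/nas_replicate -create nfs_ds_rep \
    -source -fs src_nfs_datastore_fs \
    -destination -fs dst_nfs_datastore_fs \
    -interconnect dm2_dm2_interconnect \
    -max_time_out_of_sync 10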

The replication operates at a datastore level. Multiple virtual machines will be replicated together if they reside in the same datastore. If further granularity is required at an image level for an individual virtual machine, move the virtual machine to its own NAS datastore. However, consider that the maximum number of NFS mounts per ESX server is 64 for VMware vSphere, and 32 for VMware Infrastructure. Section 3.6.1.5, "ESX host timeout settings for NFS," on page 118 provides details on how to increase the number from a default value of 8.

After the failover operation to promote the replica, the destination file system can be mounted as a NAS datastore on the remote ESX server. When configuring the remote ESX server, the network must be configured such that the replicated virtual machines will be accessible. Virtual machines residing in the file system need to be registered with the new ESX server by using the vSphere Client for VMware vSphere, or the VI Client for VMware Infrastructure. While browsing the NAS datastore, right-click a .vmx configuration file and select Add to Inventory to complete the registration as shown in Figure 309.

Figure 309 Registration of a virtual machine with ESX

Alternatively, the ESX service console command vmware-cmd can be used to automate the process if a large number of virtual machines need to be registered. Run the following shell script to automate the process (the datastore name shown in angle brackets is a placeholder for the actual NAS datastore name):

for vm in `ls /vmfs/volumes/<datastore_name>`
do
    /usr/bin/vmware-cmd -s register /vmfs/volumes/<datastore_name>/$vm/*.vmx
done


After registration, the virtual machine can be powered on. This may take a while to complete. During power on, a pop-up message box regarding msg.uuid.altered appears. Select "I moved it" to complete the power-on procedure.

Using Celerra Manager

For remote replication using Celerra Manager, complete the following steps:

1. From the Celerra Manager, click Wizards in the left navigation pane. The Select a Wizard page opens in the right pane.

Figure 310 Select a Wizard


2. Click New Replication. The Replication Wizard - EMC Celerra Manager appears.

Figure 311 Select a Replication Type

3. Select the replication type as File System and click Next. The File System page appears.

Figure 312 File System

4. Select Ongoing File System Replication and click Next. The list of destination Celerra Network Servers appears.

Note: Ongoing file system replication creates a read-only, point-in-time copy of a source file system at a destination and periodically updates this copy, making it consistent with the source file system. The destination for this read-only copy can be the same Data Mover (loopback replication), another Data Mover in the same Celerra cabinet (local replication), or a Data Mover in a different Celerra cabinet (remote replication).

Figure 313 Specify Destination Celerra Network Server

5. Click New Destination Celerra. The Create Celerra Network Server page appears.

Figure 314 Create Celerra Network Server


6. Specify the name, IP address and passphrase of the destination Celerra Network Server and click Next. The Specify Destination Credentials page appears.

Figure 315 Specify Destination Credentials

Note: A trust relationship allows two Celerra systems to replicate data between them. This trust relationship is required for Celerra Replicator sessions that communicate between the separate file systems. The passphrase must be the same for both source and target Celerra systems.

7. Specify the username and password credentials of the Control Station on the destination Celerra to gain appropriate access and click Next. The Create Peer Celerra Network Server page appears.

Figure 316 Create Peer Celerra Network Server

Note: The system will also automatically create the reverse communication relationship on the destination side between the destination and source Celerra systems.


8. Specify the name by which the source Celerra system is known to the destination Celerra system and click Next. The time difference between the source and destination Control Stations must be within 10 minutes. The Overview/Results page appears.

Figure 317 Overview/Results

9. Review the result and click Next. The Specify Destination Celerra Network Server page appears.

Figure 318 Specify Destination Celerra Network Server

10. Select the destination Celerra and click Next. The Select Data Mover Interconnect page appears. Note: Replication requires a connection between source Data Mover and peer Data Mover. This connection is called an interconnect.


Figure 319 Select Data Mover Interconnect

11. Click New Interconnect. The Source Settings page appears.

Figure 320 Source Settings

Note: An interconnect supports the Celerra Replicator™ V2 sessions by defining the communication path between a given Data Mover pair located on the same cabinet or different cabinets. The interconnect configures a list of local (source) and peer (destination) interfaces for all v2 replication sessions using the interconnect.


12. Enter the Data Mover interconnect name, select the source Data Mover and click Next. The Specify Destination Credentials page appears.

Figure 321 Specify Destination Credentials

13. Specify the username and password of the Control Station on the destination Celerra and click Next. The Destination Settings page appears.

Figure 322 Destination Settings

14. Specify the name for the peer Data Mover interconnect and then select the Celerra Network Server Data Mover on the other (peer) side of the interconnect and click Next. The Overview/Results page appears.


Figure 323 Overview/Results

15. Review the results and click Next. The Select Data Mover Interconnect page appears.

Figure 324 Select Data Mover Interconnect


16. Select an already created interconnect and click Next. The Select Replication Session's Interface page appears.

Figure 325 Select Replication Session's Interface

Note: Only one interconnect per Data Mover pair can be available.

17. Specify a source interface and a destination interface for this replication session or use the default of any and click Next. The Select Source page appears.

Figure 326 Select Source

Note: By using the default, the system selects an interface from the source and destination interface lists for the interconnect.


18. Specify a name for this replication session and select an existing file system as the source for the session and click Next. The Select Destination page appears.

Figure 327 Select Destination

19. Use the existing file system at the destination or create a new destination file system and click Next. The Update Policy page appears.

Figure 328 Update Policy


Note: When replication creates a destination file system, it automatically assigns a name based on the source file system and ensures that the file system size is the same as the source. Administrators can select a storage pool for the destination file system, and can also select the storage pool used for future checkpoints.

20. Select the required update policy and click Next. The Select Tape Transport page appears.

Figure 329 Select Tape Transport

Note: Using this policy, replication can be used to respond only to an explicit request to update (refresh) the destination based on the source content or to specify a maximum time that the source and destination can be out of synchronization before an update occurs.

21. Click Next. The Overview/Results page appears.


Note: Select Use Tape Transport? if the initial copy (silvering) of the file system will be physically transported to the destination site by using a disk array or tape unit. This creates the replication session and then stops it so that the initial copy can be performed by using a physical tape.

Figure 330 Overview/Results

22. Review the result and click Finish. The job is submitted.

Figure 331 Command Successful

23. After the command is successful, click Close.


6.5.1.2 Replication using Replication Manager and Celerra Replicator

Replication Manager can replicate a Celerra-based NAS datastore that resides on an ESX server managed by the VMware vCenter Server. Replication Manager uses Celerra Replicator to create remote replicas of NAS datastores. Replication Manager version 5.2.2 supports NAS datastore replication. Because all operations are performed using the VMware vCenter Server, neither the Replication Manager nor its required software needs to be installed on a virtual machine or on the ESX server where the NAS datastore resides. Operations are sent from a proxy host that is either a physical Linux host or a separate virtual host. VMware snapshots are taken for all virtual machines that are online and residing on the NAS datastore, just before the remote replication, to ensure operating system consistency of the resulting replica. Figure 332 shows the NAS datastore replica in the Replication Manager.

Figure 332 NFS replication using Replication Manager

Administrators should ensure that the Linux proxy host is able to resolve the addresses of the Replication Manager server, the mount host, and the Celerra Control Station by using DNS. After performing a failover operation, the destination file system can be mounted as a NAS datastore on the remote ESX server. When a NAS datastore replica is mounted to an alternate ESX server, Replication Manager performs all tasks necessary to make the NAS datastore visible to the ESX server. After that is complete, further administrative tasks, such as restarting the virtual machines and the applications, must be either completed by scripts or by manual intervention.


6.5.2 VMFS datastore replication over iSCSI

Providing high availability to the virtual machines is crucial in large VMware environments. This section explains how Celerra replication technology provides high availability for virtual machines hosted on VMFS datastores over iSCSI. Celerra Replicator technology, along with Replication Manager, can be used to instantly create virtual machine-consistent replicas of VMFS datastores containing virtual machines.

6.5.2.1 Replication using Celerra Replicator

Celerra Replicator for iSCSI can be used to replicate the iSCSI LUNs exported to an ESX server as VMFS datastores. To configure the replication:

1. From the Celerra Manager, click Wizards in the left navigation pane. The Select a Wizard page opens in the right pane.

Figure 333 Select a Wizard


2. Click New Replication. The Replication Wizard - EMC Celerra Manager appears.

Figure 334 Select a Replication Type

3. Select the replication type as iSCSI LUN and click Next. The Specify Destination Celerra Network Server page appears.

Figure 335 Specify Destination Celerra Network Server


4. Select an existing destination Celerra. If the destination Celerra is not in the list, click New Destination Celerra. The Create Celerra Network Server page appears.

Figure 336 Create Celerra Network Server

5. Specify the name, IP address, and passphrase of the destination Celerra Network Server and click Next. The Specify Destination Credentials page appears.

Figure 337 Specify Destination Credentials

Note: A trust relationship allows two Celerra systems to replicate data between them. This trust relationship is required for Celerra Replicator sessions that communicate between the separate file systems. The passphrase must be the same for both the source and target.

6. Specify the username and password credentials of the Control Station on the destination Celerra to gain appropriate access and click Next. The Create Peer Celerra Network Server page appears.


Note: The system will also automatically create the reverse communication relationship on the destination side between the destination and local Celerra.

Figure 338 Create Peer Celerra Network Server

7. Specify the name by which the source Celerra will be known to the destination Celerra and click Next. The time difference between the local and destination Control Stations must be within 10 minutes. The Overview/Results page appears.

Figure 339 Overview/Results

8. Review the result and click Next. The Specify Destination Celerra Network Server page appears.

Figure 340 Specify Destination Celerra Network Server


9. Select an existing destination Celerra and click Next. The Select Data Mover Interconnect page appears.

Figure 341 Data Mover Interconnect

10. Click New Interconnect. The Source Settings page appears. Note: An interconnect supports the Celerra Replicator V2 sessions by defining the communication path between a given Data Mover pair located on the same cabinet or different cabinets. The interconnect configures a list of local (source) and peer (destination) interfaces for all V2 replication sessions using the interconnect.

Figure 342 Source Settings


11. Type the name of the Data Mover interconnect, select the Data Mover, and then click Next. The Specify Destination Credentials page appears.

Figure 343 Specify Destination Credentials

12. Type the username and password of the Control Station on the destination Celerra and click Next. The Destination Settings page appears.

Figure 344 Destination Settings


13. Type the name of the peer Data Mover interconnect and select the Celerra Network Server Data Mover on the other side (peer) of the interconnect and click Next. The Overview/Results page appears.

Figure 345 Overview/Results

14. Review the results of the changes and click Next. The Select Data Mover Interconnect page appears.

Figure 346 Select Data Mover Interconnect

15. Select an already created interconnect and click Next. The Select Replication Session's Interface page appears.


Note: Only one interconnect per Data Mover pair is available.

Figure 347 Select Replication Session's Interface

16. Specify a source interface and a destination interface for this replication session or use the default of any, which lets the system select an interface from the source and destination interface lists for the interconnect and click Next. The Select Source page appears.

Figure 348 Select Source


17. Specify a name for this replication session, select an available source iSCSI target and LUN for the source iSCSI LUN that needs to be replicated, and then click Next. The Select Destination page appears.

Note: The target iSCSI LUN needs to be set to read-only and has to be the same size as the source LUN.

Figure 349 Select Destination

18. Select an available iSCSI target and iSCSI LUN and click Next. The Update Policy page appears.

Figure 350 Update Policy

19. Select the Update policy and click Next. The Overview/Results page appears.


Note: Using this policy, replication can be configured to respond only to an explicit request to update (refresh) the destination based on the source content. The maximum time that the source and destination can be out of synchronization before an update occurs can also be specified.

Figure 351 Overview/Results

20. Review the changes and then click Finish.

Figure 352 Command Successful

Because the replication operates at a LUN level, multiple virtual machines will be replicated together if they reside on the same iSCSI LUN. If finer granularity is required at an image level for an individual virtual machine, place the virtual machine on its own iSCSI LUN. However, when using this design, consider that the maximum number of VMFS file systems per ESX server is 256. As in the case of a NAS datastore, virtual machines need to be registered with the remote ESX server after a failover. A virtual machine registration can be done either by using the datastore browser GUI or by scripting with the vmware-cmd command, as in the sketch that follows.
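A minimal registration sketch, assuming the failed-over VMFS datastore is labeled failover_vmfs_ds (a placeholder) and that each virtual machine folder on it contains one .vmx file:

for vmdir in /vmfs/volumes/failover_vmfs_ds/*/
do
    # Register each virtual machine configuration file with the local ESX server
    /usr/bin/vmware-cmd -s register "$vmdir"*.vmx
done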

6.5.2.2 Replication using Replication Manager and Celerra Replicator

Replication Manager can replicate a VMFS that resides on an ESX server managed by the VMware vCenter Server and is attached to a Celerra system. Replication Manager uses Celerra Replicator technology to create remote replicas. These replicas are actually snapshots that represent a crash-consistent replica of the entire VMFS. Because all operations are performed through the VMware vCenter Server, neither the Replication Manager nor its required software needs to be installed on a virtual machine or on the ESX server where the VMFS resides. Operations are sent from a proxy host that is either a physical Windows host or a separate virtual host. The Replication Manager proxy host can be the same physical or virtual host that serves as the Replication Manager server.

In Celerra environments, the VMFS data may reside on more than one LUN. However, all LUNs must be from the same Celerra and must share the same target iSCSI qualified name (IQN). VMware snapshots are taken for all virtual machines that are online and reside on the VMFS just prior to replication. When a disaster occurs, the user can fail over this replica, enabling Replication Manager to make the clone LUN of the original production host's VMFS datastores available on the remote ESX server. Failover also makes the production storage read-only. After performing a failover operation, the destination LUN can be mounted as a VMFS datastore on the remote ESX server. After that is complete, further administrative tasks, such as restarting the virtual machines and the applications, must be either completed by scripts or by manual intervention. Figure 353 on page 445 shows the VMFS datastore replica in Replication Manager.

Figure 353 VMFS replication using Replication Manager


6.5.3 RDM volume replication over iSCSI

The iSCSI LUNs presented to an ESX server as RDM volumes are normal raw devices, just as they are in a non-virtualized environment. RDM provides some advantages of a virtual disk in the VMFS file system while retaining some advantages of direct access to physical devices. For example, administrators can take full advantage of storage-array-based data protection technologies regardless of whether the RDM is in physical mode or virtual mode. Another example of such a use case is physical-to-virtual clustering between a virtual machine and a physical server.

Replication of RDM volumes is similar to the physical backup of RDM volumes. Celerra Replicator for iSCSI can be used to replicate iSCSI LUNs presented to the ESX server as RDM volumes by using the cbm_replicate command of the CBMCLI package, by using the Celerra nas_replicate command in Celerra version 5.6, or by using Replication Manager. Replication Manager can only be used with an RDM volume that is formatted as NTFS and is in physical compatibility mode.

6.5.4 Site failover over NFS and iSCSI using VMware SRM and Celerra

VMware vCenter SRM is an integrated component of VMware vSphere and VMware Infrastructure that is installed within a vCenter-controlled VMware data center. SRM leverages the data replication capability of the underlying storage array to create a workflow that will fail over selected virtual machines from a protected site to a recovery site and bring the virtual machines and their associated applications back into production at the recovery site, as shown in Figure 354 on page 447.

VMware vCenter SRM 4 supports both Celerra iSCSI and NFS-based replications in VMware vSphere. With VMware Infrastructure and versions earlier than VMware vCenter SRM 4, only Celerra iSCSI-based replications are supported. SRM accomplishes this by communicating with and controlling the underlying storage replication software through an SRM plug-in called the Storage Replication Adapter (SRA). The SRA is software provided by storage vendors that ensures integration of storage devices and replication with VMware vCenter SRM. These vendor-specific scripts support array discovery, replicated LUN discovery, test failover, and actual failover.


Figure 354 VMware vCenter SRM with VMware vSphere

The EMC Celerra Replicator SRA for VMware SRM is a software package that enables SRM to implement disaster recovery for virtual machines by using EMC Celerra systems running Celerra Replicator and Celerra SnapSure software. The SRA-specific scripts support array discovery, replicated LUN discovery, test failover, failback, and actual failover. Disaster recovery plans can be implemented for virtual machines running on NFS, VMFS, and RDM. Figure 355 on page 448 shows a sample screenshot of a VMware SRM configuration.


Figure 355 VMware vCenter SRM configuration

During the test failover process, the production virtual machines at the protected site continue to run and the replication connection remains active for all the replicated iSCSI LUNs or file systems. When the test failover command is run, SRM requests Celerra at the recovery site to take a writeable snap or checkpoint by using the local replication feature licensed at the recovery site. Based on the definitions in the recovery plan, these snaps or checkpoints are discovered and mounted, and pre-power-on scripts or callouts are executed. Virtual machines are powered up and the post-power-on scripts or callouts are executed.

The same recovery plan is used for the test as for the real failover so that users can be confident that the test process is as close to a real failover as possible without actually failing over the environment. Companies realize a greater level of confidence in knowing that their users are trained on the disaster recovery process and can execute the process consistently and correctly each time. Users have the ability to add a layer of test-specific customization to the workflow that is only executed during a test failover to handle scenarios where the test may have differences from the actual failover scenario.

If virtual machine power-on is successful, the SRM test process is complete. Users can start applications and perform tests, if required. Prior to cleaning up the test environment, SRM uses a system callout to pause the simulated failover. At this point, the user should verify that the test environment is consistent with the expected results. After verification, the user acknowledges the callout and the test failover process concludes: it powers down and unregisters virtual machines, demotes and deletes the Celerra writeable snaps or checkpoints, and restarts any suspended virtual machines at the recovery site.

The actual failover is similar to the test failover, except that rather than leveraging snaps or checkpoints at the recovery site while keeping the primary site running, the storage array is physically failed over to the remote location, and the actual recovery site LUNs or file systems are brought online and the virtual machines are powered up. VMware will attempt to power off the protected site virtual machines if they are active when the failover command is issued. However, if the protected site is destroyed, VMware will be unable to complete this task. SRM will not allow a virtual machine to be active on both sites. Celerra Replicator has an adaptive mechanism that attempts to ensure that RPOs are met, even with varying VMware workloads, so that users can be confident that the crash-consistent datastores recovered by SRM meet their predefined service-level specifications.

6.5.5 Site failback over NFS and iSCSI using VMware vCenter SRM 4 and EMC Celerra Failback Plug-in for VMware vCenter SRM

EMC Celerra Failback Plug-in for VMware vCenter SRM is a supplemental software package for VMware vCenter SRM 4. This plug-in enables users to fail back virtual machines and their associated datastores to the primary site after implementing and executing disaster recovery through VMware vCenter SRM for Celerra storage systems running Celerra Replicator V2 and Celerra SnapSure. The plug-in does the following:
◆ Provides the ability to input login information (hostname/IP, username, and password) for two vCenter systems and two Celerra systems
◆ Cross-references replication sessions with vCenter Server datastores and virtual machines
◆ Provides the ability to select one or more failed-over Celerra replication sessions for failback
◆ Supports both iSCSI and NAS datastores
◆ Manipulates vCenter Server at the primary site to rescan storage, unregister orphaned virtual machines, rename datastores, register failed-back virtual machines, reconfigure virtual machines, customize virtual machines, remove orphaned .vswp files for virtual machines, and power on failed-back virtual machines
◆ Manipulates vCenter Server at the secondary site to power off the orphaned virtual machines, unregister the virtual machines, and rescan storage
◆ Identifies failed-over sessions created by EMC Replication Manager and directs the user about how these sessions can be failed back

The Failback Plug-in version 4.0 introduces support for virtual machines on NAS datastores and support for virtual machine network reconfiguration before failback.

6.5.5.1 New features and changes

New features include:
◆ Support for virtual machines on NAS datastores
◆ Support for virtual machine network reconfiguration before failback

Changes include:
◆ Improved log file format for readability
◆ The installation utility automatically determines the IP address of the plug-in server

6.5.5.2 Environment and system requirements

The VMware infrastructure at both the protected (primary) and recovery (secondary) sites must meet the following minimum requirements:
◆ vCenter Server 2.5 or later
◆ VI Client
◆ SRM Server with the following installed:
  • SRM 1.0 or later
  • Celerra Replicator Adapter 1.X or later, available on the VMware website

This server can be the vCenter Server or a separate Windows host and should have one or more ESX 3.02, 3.5, 3i, or 4 servers connected to a Celerra storage system.

The EMC Celerra Failback Plug-in for VMware vCenter Site Recovery Manager Release Notes, available on Powerlink, provide information on specific system requirements.

6.5.5.3 Known problems and limitations

EMC Celerra Failback Plug-in for VMware vCenter SRM has the following known problems and limitations:
◆ Virtual machine dependencies are not checked.
◆ Fibre Channel LUNs are not supported.

6.5.5.4 Installing the EMC Celerra Failback Plug-in for VMware vCenter SRM

Before installing the EMC Celerra Failback Plug-in for VMware vCenter SRM, the following must be done:
◆ Install the VMware vCenter SRM on a supported Windows host (the SRM server) at both the protected and recovery sites.

Note: Install the EMC Celerra Replicator Adapter for VMware SRM on a supported Windows host (preferably the SRM server) at both the protected and recovery sites.

To install the EMC Celerra Failback Plug-in for VMware vCenter SRM, extract and run the executable EMC Celerra Failback Plug-in for VMware vCenter SRM.exe from the downloaded zip file. Follow the on-screen instructions and provide the username and password for the vCenter Server where the plug-in is registered.

6.5.5.5 Using the EMC Celerra Failback Plug-in for VMware vCenter SRM

To run the EMC Celerra Failback Plug-in for VMware vCenter SRM:
1. Open an instance of the VI Client or vSphere Client to connect to the protected site vCenter.
2. Click Celerra Failback Plug-in.
3. Follow the on-screen instructions to connect to the protected and recovery site Celerras and vCenters.
4. Click Discover.
5. Select the desired sessions for failback from the list in the Failed Over Datastores, Virtual Machines, and Replication Sessions areas.
6. Click Failback.


Note: The failback progress is displayed in the Status Messages area.

EMC Celerra Failback Plug-in for VMware vCenter Site Recovery Manager Release Notes available on Powerlink provide further information on troubleshooting and support when using the plug-in.


6.6 Summary

The following table provides the data replication solutions of Celerra storage presented to an ESX server.

Table 6  Data replication solutions

Type of virtual object: NAS datastore
  Replication:
  • Celerra Replicator
  • Replication Manager
  • VMware vCenter SRM

Type of virtual object: VMFS/iSCSI
  Replication:
  • Celerra Replicator (CBMCLI, nas_replicate, or Celerra Manager)
  • Replication Manager
  • VMware vCenter SRM

Type of virtual object: RDM/iSCSI (physical)
  Replication:
  • Celerra Replicator (CBMCLI, nas_replicate, Celerra Manager, or Replication Manager) and SRM

Type of virtual object: RDM/iSCSI (virtual)
  Replication:
  • Celerra Replicator (CBMCLI, nas_replicate, or Celerra Manager) and SRM


A CLARiiON Back-End Array Configuration for Celerra Unified Storage

This appendix presents these topics:
◆ A.1 Back-end CLARiiON storage configuration ............................................. 457
◆ A.2 Present the new CLARiiON back-end configuration to Celerra unified storage ... 468

Note: This appendix contains procedures to configure the captive back-end CLARiiON storage in the Celerra unified storage. As such, this procedure should only be performed by a skilled user who is experienced in CLARiiON configuration with Celerra. This appendix is only provided for completeness. Given the automation already included as part of the initial Celerra unified storage setup, a typical user will not need to perform this procedure.

The procedure in this appendix should be performed whenever there is a need to modify the configuration of the captive back-end CLARiiON storage of the Celerra unified storage. This procedure will include CLARiiON configuration and presenting this new configuration to Celerra in the form of new Celerra disk volumes that will be added to the existing Celerra storage pools.


A.1 Back-end CLARiiON storage configuration

To configure the back-end CLARiiON storage, create LUNs and add them to the storage group:
1. Create a RAID group.
2. Create LUNs from the RAID group.
3. Add LUNs to the storage group.

Create a RAID group

To create a RAID group:
1. In Navisphere Manager, right-click the RAID group, and then click Create RAID Group.

Figure 356 Create RAID Group option

The Create Storage Pool dialog box appears.


Figure 357 Create Storage Pool

2. Select the Storage Pool ID and RAID Type. Select Manual, and then click Select. The Disk Selection dialog box appears. 3. Select the disks for the RAID type from the Available Disks box, and then click OK. The selected disks appear in the Selected Disks box.


Figure 358 Disk Selection

4. Click Apply. The RAID group is created.

Create LUNs from the RAID group

After the RAID group is created, the LUNs must be created. With FC, SAS, and SATA disks, use the following RAID configuration: a RAID 5 (4+1) group in CLARiiON with two LUNs per RAID group. These LUNs should be load balanced between the CLARiiON storage processors (SPs). Section 3.5.3, "Storage considerations for using Celerra EFDs," on page 107 provides configuration details for EFDs.


To create LUNs from the RAID group: 1. In Navisphere Manager, right-click the RAID group, and then click Create LUN.

Figure 359 Create LUN option

The Create LUN dialog box appears.

Figure 360 Create LUN

2. Select the RAID Type, Storage Pool for new LUN, User Capacity, LUN ID, and Number of LUNs to create, and then click Apply. The Confirm: Create LUN dialog box appears.

Note: With FC disks, use a RAID 5 (4+1) group in CLARiiON. Create two LUNs per RAID group and load balance the LUNs between the CLARiiON SPs.


Figure 361 Confirm: Create LUN

3. Click Yes. The Message: Create LUN dialog box appears when the LUN is created successfully.


Figure 362 Message: Create LUN

4. Click OK.

Add LUNs to the storage group

The host can access the required LUNs only when the LUN is added to the storage group that is connected to the host. To add LUNs to the storage group:
1. In Navisphere Manager, right-click the storage group, and then click Select LUNs.


Figure 363 Select LUNs


The Storage Group Properties dialog box appears.

Figure 364 Storage Group Properties

2. Select the LUNs that need to be added, and then click Apply. The Confirm dialog box appears.


Figure 365 Confirm

3. Click Yes to confirm the operation. The Success dialog box appears when the LUNs are added successfully.


Figure 366 Success

4. Click OK.


A.2 Present the new CLARiiON back-end configuration to Celerra unified storage

After the back-end CLARiiON storage is configured, the new configuration should be presented to Celerra. To add the disk volumes to the default storage pool, a disk mark is required. To perform the disk mark, type the following command at the Celerra CLI prompt:

$ nas_diskmark -mark -all -discovery y -monitor y

Figure 367 Disk mark

New disk volumes are added to the default storage pool.
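As an optional verification step (not part of the documented procedure), the discovered disk volumes and storage pools can be listed from the Control Station; newly added disk volumes typically appear as not yet in use until file systems are created on them:

$ nas_disk -list
$ nas_pool -list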


B Windows Customization

This appendix presents these topics:
◆ B.1 Windows customization ............................................................... 470
◆ B.2 System Preparation tool ................................................................ 471
◆ B.3 Customization process for the cloned virtual machines .......... 472

B.1 Windows customization

Windows customization provides a mechanism to assign customized installations efficiently to different user groups. Windows Installer places all the information about the installation in a relational database. The installation of an application or product can be customized for particular user groups by applying transform operations to the package. Transforms can be used to encapsulate various customizations of a base package required by different workgroups.

When a virtual machine is cloned, an exact copy of the virtual machine is built with the same asset ID, product key details, IP address, system name, and other system details. This leads to software and network conflicts. The customization of a clone's guest OS is recommended to prevent possible network and software conflicts.


B.2 System Preparation tool

The System Preparation tool (Sysprep) can be used with other deployment tools to install Microsoft Windows operating systems with minimal intervention by an administrator. Sysprep is typically used during large-scale rollouts when it would be too slow and costly to have administrators or technicians interactively install the operating system on individual computers.


B.3 Customization process for the cloned virtual machines

Install Sysprep on the source virtual machine to avoid possible network and software conflicts. Running Sysprep re-signatures the existing software and network settings of the source virtual machine. To customize virtual machines:

1. Run Sysprep on the source virtual machine that is identified to be cloned. Figure 368 shows the welcome screen of the customization wizard when using Sysprep.

Figure 368 System Preparation tool

2. Click OK. The following screen appears.


Figure 369 Reseal option

3. Click Reseal. The following dialog box appears.


Figure 370 Generate new SID

4. Click OK. The virtual machine reboots and a new SID is created for the cloned system.
5. Clone the customized virtual machine by using the Celerra-based technologies:
   a. Create the checkpoint/snap in Celerra Manager.
   b. Add the checkpoint/snap to the storage of vCenter Server.
   c. Create the cloned virtual machine.
   d. Switch on the cloned virtual machine.
   e. Confirm the details of the new cloned virtual machine.

Any possible conflict between the cloned virtual machine and the source virtual machine is avoided.
