HOW STORAGE TECH IS CHANGING • VIRTUALIZE YOUR STORAGE

Managing the information that drives the enterprise

STORAGE Vol. 10 No. 6 August 2011

The State of Backup Dedupe

There are more choices than ever for deploying data deduplication for backup. See what will work best in your shop.

ALSO INSIDE
No more laptop backup excuses
The need for speedy storage
Backup options for ROBOs
Hybrid clouds looming
Remote backup under control

STORAGE inside | August 2011

No excuse for lax laptop backup

EDITORIAL Too expensive, too much extra work and not enough integration were legitimate complaints about laptop backup a few years ago. But those excuses just don’t cut it anymore. by RICH CASTAGNA

The need for speed

STORWARS Servers and networks have the pedal to the metal, but storage is struggling to keep up. With applications craving more and more performance, storage vendors have to figure out how they’re going to meet those needs. by TONY ASARO

The state of backup deduplication

In a relatively short time, data deduplication has revolutionized disk-based backup, but the technology is still evolving with new applications and more choices than ever. by LAUREN WHITEHOUSE

New trends in storage

Storage technologies may sometimes seem a little stodgy and out of date, but there’s plenty of technical development going on at both the big storage vendors and smaller upstarts. by STEPHEN FOSKETT

Storage virtualization: It’s ready, are you?

User adoption of storage virtualization has been picking up as some of the early obstacles to implementation have been overcome. There are plenty of mature products whether you opt to deploy storage virtualization at the array or in the network. by ERIC SLACK

Options for ROBOs: Choose a backup method for the ages

HOT SPOTS Satellite offices and workers are changing the look of companies of all sizes, and backup technology is changing to keep pace. by LAUREN WHITEHOUSE

Hybrid clouds on the horizon

READ/WRITE A few notable glitches have soured some users on cloud storage services, but a hybrid approach that integrates public and private storage may ultimately convince cloud skeptics. by JEFF BYRNE

Users get upper hand over remote site backup

SNAPSHOT Our latest survey finds that more companies are relying on automated processes to back up their remote offices, and more backup data is making it back to the main data center than ever before. by RICH CASTAGNA

From our sponsors


Useful links from our sponsors.

Cover image by Enrico Varrasso


editorial | rich castagna

No excuse for lax laptop backup



Too expensive, too much extra work and not enough integration were all legitimate complaints about laptop backup a few years ago. But with so many new products and alternatives, those excuses just don’t cut it anymore.


“TOMORROW, AND TOMORROW, AND TOMORROW” starts the second sentence of Macbeth’s soliloquy in which he laments Lady M’s untimely demise. And for fans of “Jeopardy,” it’s also the answer to the question “When will your storage shop implement some real data protection for laptop PCs?” That probably just tacked another violation onto my poetic license, but it’s hard to avoid quoting Shakespeare even when you’re talking about something as non-Elizabethan as data storage. And the “tomorrow” reference is pretty apt if the surveys I’ve seen lately are to be believed.

The most recent one to catch my eye is from Druva Software, which, as a laptop backup vendor, has just a wee bit of interest in the results. Nonetheless, some interesting numbers turned up in the survey. Among the survey’s 140 respondents, approximately one-third said that more than half of their users were issued laptops as their principal PCs. But a whopping 62% said a laptop backup policy wasn’t currently enforced even though most claimed they currently have something in place to do laptop backups. Those are a couple of pretty big gaps, but the survey goes on to report even more head-scratching results, like the 30% who said they don’t really see a need for a laptop backup policy. Even more perplexing are the 59% of respondents who considered themselves “satisfied” with their current laptop backup setup.

What’s going on here? Maybe we just have some major denial working here—good ol’ out of sight and out of mind, and keep your fingers crossed that the CEO’s laptop doesn’t give up the ghost cruising at 35,000 feet in a first-class cabin somewhere over the Atlantic. What about SOX and HIPAA and PCI and all those other acronyms that tell us to take care of our data just in case? File-based data is quickly overrunning our corporate data stores, and a growing portion of that is being created, modified and toted around the country on laptop computers.


You might be OK rolling the dice when it comes to complying with laws that say what data must be preserved, but think about all that corporate IP going unprotected. That’s gonna get somebody’s attention, right?

Intel recently described a study it commissioned from the Ponemon Institute in which the number of lost or stolen laptops was calculated for the 329 participating companies. Ponemon’s numbers are staggering—with an average of 263 laptops MIA for each company. Even if your company experiences just a quarter of that loss (let’s say 60 laptops with half-filled 200 GB disks), you might be kissing off 6 TB of corporate contracts, proposals, plans, projections and budgets each year.

The study goes on to put a price tag of $49,246 on a typical “disappeared” laptop; again, that seems on the high side as it’s based on just about every worst-case scenario imaginable. Unless your company’s laptop losers are writing patents, putting risky information in the hands of competitors and would-be litigants, and jotting down the passwords for your corporate bank accounts, your tab probably won’t be so high. But consider lost productivity, potential legal issues (and their resulting fines), compromised competitiveness and so on, and a lost laptop can easily run up a considerable bill.

So, what are you doing about laptop backup? Our surveys and other research show that the “other” backup problem—backing up remote and branch offices—finally seems to be under control (see our latest Snapshot survey, “Users get upper hand over remote site backup,” in this issue). But mobile computing is still an issue, and it’s gotten a little muddled lately with smartphones and tablets getting added to the mix of things to worry about. Not too long ago laptop backup might have been one of the toughest data protection nuts to crack, with few alternatives and little or no integration with other backup processes. Cloud backup services (and there are tons of them) now offer good alternatives, and there are a handful of new endpoint backup apps that also deserve some attention. Still, a lot of shops dismiss those alternatives as just another backup application to maintain. But if you haven’t had the time to check specs lately you might not know how much the mobile backup landscape has changed, and now the odds are that whatever your company is using right now to back up its data center and remote offices can also be used for laptops.


For example, if you use a backup app from CA, CommVault, EMC, HP, IBM, Microsoft or Symantec, it has a laptop backup option. And even if you’re using a slightly less popular backup app, it’s also likely to have laptop support these days. So you can have a fully integrated backup system—data center, remote offices and mobile users—using a single app with one management console.

Does adding laptop support to your backup application mean extra work for your overtaxed crew? Sure, and if you have a lot of laptops floating around, it could be a significant effort to protect them. But if you don’t think it’s really worth the time and effort, do you think it might be worth, say, $49,246?

Rich Castagna ([email protected]) is editorial director of the Storage Media Group.


StorWars | tony asaro

The need for speed

Servers and networks have the pedal to the metal, but storage is struggling to keep up. As applications crave more and more performance, data storage vendors will need to find new solutions.


THERE’S A LOT OF BUZZ around application performance and the direct connection it has with data storage performance. Server virtualization, virtual desktop infrastructure (VDI) and business intelligence/big data are some of the key forces driving this need for speed. Servers and networks are getting faster, but disk drives and the storage systems built around them aren’t keeping up. There’s also a price/performance imbalance that’s becoming alarming, with the cost per I/O per second (IOPS) climbing on the storage side of the data center.

Application performance isn’t just a “special case” requirement. There are certain applications that need high performance the majority of the time. However, we often have to engineer our environments for the 10% or 20% of the time when performance is critical, which brings a much larger group of applications into scope. IT professionals want to increase virtual-to-physical server ratios from 10:1 to 50:1, but storage is the limiting factor. Some organizations need to have hundreds or thousands of virtual desktops accessing a single pool of storage but they’re limited by boot storms. And big data analytics drive the need for speed through an enormous number of transactions per second; there are solutions optimized to handle these workloads but they come at a high price.

You could always increase the performance of storage, but just how much performance are you willing to pay for? To increase IOPS you add more disk drives, create wide stripes and implement short stroking. But that can be very expensive. Alternatively, you can just add lots and lots of solid-state drives (SSDs), but we’re talking big bucks again. And what’s the right balance of price, performance and capacity for your environment? If you don’t need lots of capacity, do you really want to buy lots of disk drives just to increase IOPS? However, if you require a substantial amount of capacity, then buying SSDs will be unattractive price-wise and may not be technically practical to implement.


By placing dense and fast memory inside servers, Fusion-io has been the big winner in terms of market buzz and IPO so far. Yet the Fusion-io solution is limited in capacity and high availability, and it’s an expensive, non-shareable resource. It may also be a concern that 90% of its revenue comes from just a handful of customers.

Storage system vendors have also seen the trend toward more performance, and nearly all have responded with SSD options. A few have automated tiering that can move data at a sub-LUN level between tiers, including Dell Compellent with Data Progression, EMC with FAST, Hitachi Data Systems with Hitachi Dynamic Tiering and Hewlett-Packard 3PAR with Adaptive Optimization. All these solutions typically have some page or extent of varying size they promote or demote based on activity or inactivity.

Xiotech has a unique approach with its Hybrid ISE product using Continuous Adaptive Data Placement (CADP) that creates a single pool of storage from SSDs and hard disk drives (HDDs). Instead of promoting and demoting data based on activity or inactivity, Xiotech monitors application performance and places data on SSD or HDD based on whether there will be an actual improvement perceivable to the user. The goal is to ensure that price, performance and capacity are in optimal balance.

There are also a number of notable startups, including Nimble Storage, which is taking the world by storm with an iSCSI solution that combines SSD and HDD and leverages inline data compression to optimize capacity. Additionally, there are pure-play SSD storage systems from companies like Nimbus Data Systems and Violin Memory. And solid-state stalwarts like Texas Memory Systems are revitalized because of the new attention to high-performance storage.

Potential customers are inundated with choices, and the various options come with incredible claims of IOPS and throughput performance. Hundreds of thousands and even millions of IOPS . . . and still affordable! But an old skeptic like me knows that performance depends on a number of factors.
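
Here’s a rough, back-of-the-envelope version of that price/performance question, as a Python sketch. Every figure in it (device IOPS, capacity and price) is an invented assumption for illustration, not a vendor number:

```python
import math

# Illustrative only: the device profiles below are assumptions, not specs.
# The point is the shape of the trade-off: spindles buy IOPS inefficiently,
# while SSDs buy capacity inefficiently.

def devices_needed(target_iops, target_tb, iops_per_dev, tb_per_dev):
    """Devices required to satisfy BOTH the IOPS and the capacity targets."""
    return max(math.ceil(target_iops / iops_per_dev),
               math.ceil(target_tb / tb_per_dev))

TARGET_IOPS, TARGET_TB = 50_000, 20

profiles = {
    "15K rpm HDD": (180, 0.3, 400),       # (IOPS, TB, $ each) -- assumed
    "SLC SSD":     (20_000, 0.2, 2_500),  # assumed
}

for name, (iops, tb, price) in profiles.items():
    n = devices_needed(TARGET_IOPS, TARGET_TB, iops, tb)
    cost = n * price
    print(f"{name:11s}: {n:3d} devices  ${cost:>9,}  "
          f"${cost / TARGET_IOPS:.2f}/IOPS  ${cost / TARGET_TB:>8,.0f}/TB")
```

With these made-up numbers, the HDD configuration is sized by the IOPS target and the SSD configuration by the capacity target, which is exactly the balancing act described above.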


And besides, all those marketing numbers you’re getting showered with are always based on best-case scenarios. What happens to performance when something goes wrong? What if a disk drive fails (and we’re not just talking HDDs; solid-state drives don’t spin but they can also fail)? What happens to performance when a controller fails? How is primary application performance impacted if there’s another operation such as mirroring running? How is performance impacted as capacity utilization increases? What is performance over time: one year, two years or three years after initial implementation? These are questions that are rarely asked, and when they are, they often trip up storage vendors.

Application performance is the hot new requirement and storage is the bottleneck. The imbalance in the data center is real and will only get worse if things continue as they are. Server and desktop virtualization as well as the emergence of big data analytics as a major application all highlight the performance disadvantage that’s inherent in disk-based storage systems. The good news is that there’s a ton of investment in trying to solve this problem. The bad news is that the number of options IT professionals will have to choose from will make their heads spin; and we all know how slow and error prone that can be!

Tony Asaro is senior analyst and founder of Voices of IT.


The State of Backup Dedupe


In a relatively short time, data deduplication has revolutionized disk-based backup, but the technology is still evolving with new applications and more choices than ever.


BY LAUREN WHITEHOUSE


DATA DEDUPLICATION TECHNOLOGY identifies and eliminates redundant data segments so that backups consume significantly less storage capacity. It lets organizations hold onto months of backup data to ensure rapid restores (better recovery time objective [RTO]) and lets them back up more frequently to create more recovery points (better recovery point objective [RPO]). Companies also save money by using less disk capacity and by optimizing network bandwidth.
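
The capacity arithmetic behind those claims is easy to sketch. The numbers below (backup size, change rate, retention) are assumptions chosen only to show the shape of the savings:

```python
# Why dedupe lets you keep more recovery points on the same disk:
# repeated full backups share most of their segments, so only changed
# data consumes new capacity. All inputs here are illustrative.

full_tb = 10           # size of one full backup (assumed)
weekly_change = 0.05   # fraction of segments that are new each week (assumed)
weeks = 12             # desired retention / number of recovery points

raw_tb = full_tb * weeks                                      # without dedupe
deduped_tb = full_tb + full_tb * weekly_change * (weeks - 1)  # first full + deltas

print(f"Raw capacity for {weeks} weekly fulls: {raw_tb:.1f} TB")
print(f"Deduplicated capacity (approx):    {deduped_tb:.1f} TB")
print(f"Effective reduction ratio:         {raw_tb / deduped_tb:.1f}:1")
```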


Deduplication was first adopted by companies with tight backup windows and those looking to reduce tape usage. The primary considerations were seamless integration with incumbent backup apps and processes, and ease of implementation.

In the next wave of adoption, concerns shifted to scaling capacity and performance. Vendors beefed up disk capacity, performance, network connectivity and system interfaces, and also improved deduplication processing. Recovery was improved with the use of optimized replication.

With ongoing data growth and highly distributed environments, organizations and data dedupe vendors have been driven to investigate other ways to optimize deduplication, including new architectures, packaging and deduplication techniques.

DEDUPLICATION IS DEFINITELY DESIRABLE

Research from Milford, Mass.-based Enterprise Strategy Group (ESG) reveals that the use of deduplication is increasing. Thirty-eight percent of survey respondents cited adoption of deduplication in 2010 vs. 13% in 2008. By 2012, another 40% plan to adopt deduplication (ESG Research Report, Data Protection Trends, January 2008 and ESG Research Report, Data Protection Trends, April 2010). In addition, according to the ESG Research Report 2011 IT Spending Intentions, data reduction ranked in the top one-third of all storage priorities for enterprise-scale organizations (those with 1,000 or more employees).

While debates continue about the nuances of deduplication products, such as file vs. virtual tape library (VTL) interface, source vs. target, hardware vs. software, inline vs. post-process, and fixed vs. variable block size, it’s important to remember that the goal of any deduplication approach is to store less data.

TARGET DEDUPLICATION SYSTEMS

Products that deduplicate at the end of the backup data path are called target deduplication systems. They’re often storage appliances with disk storage or gateways that can be paired with any disk. Target dedupe vendors include EMC Corp., ExaGrid Systems Inc., FalconStor Software Inc., Fujitsu, GreenBytes Inc., Hewlett-Packard (HP) Co., IBM, NEC Corp., Quantum Corp., Sepaton Inc. and Symantec Corp. What often distinguishes these products is their underlying architecture. Aside from appliance vs. gateway differences (FalconStor and IBM offer gateways), another key factor is whether they’re single- or multi-node configurations.


With a single-node architecture, performance and capacity scaling is limited to an upper threshold for the configuration. While some of these products can be sized to handle tremendous scale, you may have to overpurchase now to accommodate future growth. When the upper limit is hit, a “forklift” upgrade is required to move up in performance or capacity, or another deduplication unit must be added. The latter option results in deduplication “islands” because backup data isn’t compared for redundancy across systems.

APIs and open standards

SYMANTEC CORP.’S OpenStorage Technology (OST) is an API for NetBackup (Versions 6.5 and higher) and Backup Exec 2010.


Target deduplication system vendors leverage the API to write a software plug-in module that’s installed on the backup media server to communicate with the storage device, creating tighter integration between the backup software and target storage. It enables features such as intelligent capacity management, media server load balancing, reporting and lifecycle policies. It also delivers optimized duplication—network-efficient replication and direct disk-to-tape duplication that’s monitored and cataloged by the backup software. EMC Corp. offers similar functionality for EMC NetWorker; however, to date, the benefits are only extended to EMC Data Domain deduplication systems.

APIs facilitate interoperability, but could the industry take it one step further with a deduplication standard? A standard algorithm, similar to compression today, could emerge, and open-source software could be the vehicle for it to develop and gain a following. The lobby for a standard is fueled by the need to seamlessly, efficiently and rapidly move data between disk and tape (without having to un-deduplicate or rehydrate the data), as well as to improve recovery operations. Any of the dedupe technologies added to open-source backup apps—such as Bacula and Amanda—and the open-source ZFS and SDFS file systems could one day emerge as a standard.


Vendors with a single-node architecture include EMC, Fujitsu, GreenBytes and Quantum. EMC does offer the Data Domain Global Deduplication Array (GDA), a composite system consisting of two DD890 devices that appear as a single system to the backup application. EMC might argue that GDA meets the criteria to be considered a multi-node configuration with global deduplication, but it has two controllers, two deduplication indexes and two storage silos. The devices also aren’t in a high-availability configuration; in fact, if one DD890 goes down, then neither DD890 is available.

EMC distributes a portion of deduplication processing upstream from its appliance, but only for EMC backup apps and backup apps that support Symantec OpenStorage Technology (OST). For example, at the media server, EMC performs pre-processing, creating 1 MB chunks to compare with the deduplication index. If the pattern of the content contained in the large chunks has redundancy, the data is broken down into the more traditional 8 KB chunks, compressed, and transferred to one DD890 controller or the other for further processing, depending on where there’s a better chance of eliminating redundant data.

In a multi-node architecture, a product can manage multiple dedupe systems as one. This approach also provides linear throughput and capacity scaling, high availability and load balancing. There’s a reduction in administrative overhead and, importantly, global deduplication is typical. ExaGrid EX Series, FalconStor File-interface Deduplication System (FDS), HP’s Virtual Library Systems (VLS), IBM ProtecTier, NEC Hydrastor, Sepaton DeltaStor and Symantec NetBackup 5000 Series all have multi-node configurations and support global deduplication.
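
To make the two-stage idea concrete, here’s a deliberately simplified sketch (it is not EMC’s GDA logic): coarse chunks are routed to a controller by content hash, so duplicate data always arrives at the same node’s index, and finer chunks are deduplicated there:

```python
import hashlib

COARSE, FINE = 1 << 20, 8 << 10   # 1 MB pre-chunks, 8 KB dedupe chunks
node_indexes = [set(), set()]     # one fingerprint index per controller

def store(stream: bytes) -> int:
    """Route coarse chunks to a node, dedupe fine chunks against that
    node's index. Returns how many bytes actually had to be stored."""
    stored = 0
    for i in range(0, len(stream), COARSE):
        coarse = stream[i:i + COARSE]
        # Identical coarse content always hashes to the same node, so its
        # duplicates can be found no matter which backup they arrive in.
        node = hashlib.sha1(coarse).digest()[0] % len(node_indexes)
        index = node_indexes[node]
        for j in range(0, len(coarse), FINE):
            fine = coarse[j:j + FINE]
            fingerprint = hashlib.sha1(fine).digest()
            if fingerprint not in index:   # unique chunk: keep it
                index.add(fingerprint)
                stored += len(fine)
    return stored

data = bytes(1024) * 4096   # 4 MB of highly redundant data
print("stored", store(data), "of", len(data), "bytes")
```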

Global deduplication

GLOBAL REFERS TO the domain of comparison for deduplication. Identification of duplicates occurs in two ways. Within a single domain, backup data passes through an individual system and is compared with data passing through the same system. With deduplication across domains, backup data passes through an individual system and is compared with data passing through the same system as well as other systems in the domain. Global deduplication can result in higher deduplication ratios because there are more comparisons and, therefore, more chances to find replicate data.


The modular architectures of these products deliver impressive aggregate performance and let you grow the systems seamlessly.

Symantec’s appliance is a new entrant in the target deduplication system field through a joint venture with Huawei. Symantec maintains a unique position in the data protection market as the only vendor to offer integrated deduplication in its own backup software- and hardware-based products as well as catalog-level integration with backup target devices of third-party vendors via its OST interface.
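
A toy model makes the sidebar’s local-vs.-global distinction concrete (chunk size and data are contrived): two systems protecting largely identical data store far more with per-system indexes than with one shared index:

```python
import hashlib

CHUNK = 4096

def dedupe(streams, shared_index=False):
    """Return total unique bytes stored across all backup systems."""
    indexes = {}   # one fingerprint index per system, or one shared index
    stored = 0
    for system, data in streams.items():
        index = indexes.setdefault("global" if shared_index else system, set())
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            fingerprint = hashlib.sha256(chunk).digest()
            if fingerprint not in index:
                index.add(fingerprint)
                stored += len(chunk)
    return stored

# Two systems protecting near-identical data (think cloned OS images).
common = b"base-image" * 100_000
streams = {"node-a": common + b"a" * CHUNK,
           "node-b": common + b"b" * CHUNK}

print("per-system domains:", dedupe(streams), "bytes stored")
print("one global domain: ", dedupe(streams, shared_index=True), "bytes stored")
```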

DEDUPLICATION IN BACKUP SOFTWARE

While originally limited to so-called “next-generation” backup apps like EMC’s Avamar, deduplication in backup software is now pervasive. Backup software products with deduplication include Arkeia Network Backup, Asigra Cloud Backup, Atempo Time Navigator, CA ARCserve, Cofio Software AIMstor, CommVault Simpana, Druva InSync and Phoenix, EMC Avamar, i365 EVault, IBM Tivoli Storage Manager (TSM), Quest Software NetVault Backup, Symantec Backup Exec and NetBackup, and Veeam Backup & Replication.

In software, client agents running on application servers identify and transfer unique data to the backup media server and target storage device, reducing network traffic. Other software solutions deduplicate the backup stream at the backup server, removing any potential performance burden from production application servers. The deduplication domain is limited to data protected by the backup application; multiple backup applications in the same environment create deduplication silos.

Global deduplication isn’t a given with software approaches either. First of all, not all vendors employ the same techniques for identifying duplicates. Some deduplicate by employing delta differencing (e.g., Asigra), which compares data segments for the same backup set over time. Deltas identify unique blocks for the current set vs. the previous backup of that set and only transfer unique blocks. It doesn’t make comparisons across different sets (i.e., no global deduplication).


Another approach is to use a hash algorithm. Some vendors segment the backup stream into fixed blocks (anywhere from 8 KB to 256 KB), generate a hash value and compare it to a central index of hashes calculated for previously seen blocks. A unique hash indicates unique data that should be stored. A repeated hash signals redundant data, so a pointer to the unique data is stored instead. Other vendors rely on variable block sizes that help increase the odds that a common segment will be detected even after a file is modified. This approach finds natural patterns or break points that might occur in a file and then segments the data accordingly. Even if blocks shift when a file is changed, this approach is more likely to find repeated segments. The trade-off? A variable-length approach may require a vendor to track and compare more than just one unique ID for a segment, which could affect index size and computational time.

Arkeia Software uses another approach it calls progressive deduplication. This method optimizes deduplication with a sliding-window block size and a two-phase progressive-matching deduplication technique. Files are divided into fixed blocks, but the blocks can overlap so that when a file is changed, the block boundaries accommodate the insertion of bytes. Arkeia adds another level of optimization by automatically assigning fixed block sizes (from 1 KB to 32 KB) based on file type. The technique also uses a sliding window to determine duplicate blocks at every byte location in a file. Progressive deduplication is designed to achieve high reduction ratios and to minimize false positives while accelerating processing.

LTFS

WITH THE INTRODUCTION of IBM Linear Tape File System (LTFS), a data format that provides a file system interface to data stored on LTO-5 tape media, tape can be treated more like an external disk device. With LTFS, data doesn’t have to be written in a tape format, so the data is independent of the application that wrote it. It may also be a more appropriate long-term storage medium for uncompressible data types, such as medical images and video files. Does LTFS offer an opportunity for dedupe vendors to integrate tape as a long-term storage tier for deduplicated data? The jury’s still out on that one, as we’ll have to see if vendors adopt it.
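
A small runnable sketch shows why the variable, content-defined boundaries described above survive edits that defeat fixed blocks. The windowed-checksum cut rule below is a generic stand-in for this class of technique, not Arkeia’s or any other vendor’s algorithm:

```python
import hashlib

def fixed_chunks(data, size=64):
    """Cut at fixed offsets: an insert shifts every later boundary."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data, window=16, mask=0x1F):
    """Cut wherever a checksum of the preceding `window` bytes matches a
    pattern; boundaries follow content, so they survive inserts."""
    cuts = [0]
    for i in range(window, len(data)):
        if hashlib.sha1(data[i - window:i]).digest()[0] & mask == 0:
            cuts.append(i)
    cuts.append(len(data))
    return [data[a:b] for a, b in zip(cuts, cuts[1:]) if b > a]

def reused(chunker, old, new):
    """How many of new's chunks are already in old's chunk index."""
    index = {hashlib.sha1(c).digest() for c in chunker(old)}
    return sum(hashlib.sha1(c).digest() in index for c in chunker(new))

# ~10 KB of deterministic, non-repeating test data.
old = b"".join(hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(320))
new = b"INSERT" + old          # a 6-byte insert shifts every offset

for name, chunker in (("fixed   ", fixed_chunks), ("variable", variable_chunks)):
    print(f"{name}: {reused(chunker, old, new)} of {len(chunker(new))} chunks reused")
```

With fixed blocks essentially nothing matches after the insert, while the content-defined boundaries realign almost immediately, which is the shift tolerance the variable-block vendors are after.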


DEDUPLICATION’S GROWING PAINS


As deduplication technology has matured, users have experienced most of the growing pains. Growing data volumes that tax backup and recovery have been a catalyst for performance and scale improvements, and have shifted attention to scale-out architectures for deduplication solutions. And replacing tape devices at remote and branch offices created requirements for optimized site-to-site replication, as well as a way to track those duplicate copies in the backup catalog.

In its most recent Data Protection Trends research report, ESG surveyed end users regarding their deduplication selection criteria, and cost was the top purchase consideration. Some of the issues affecting cost include the following:

• Some backup software vendors add deduplication as a no-cost feature (CA and IBM TSM), while others charge for it.

• There are hidden costs, such as the added fee to enable replication between deduplication systems. And the recovery site has to be a duplicate (or nearly so) of the system at the primary location, which can double fees. There are exceptions, such as Symantec 5000 Series appliances, which include device-to-device replication at no charge. Symantec also licenses its product based on the front-end capacity of the data being protected vs. the back-end capacity of the data being stored, so replicated copies don’t incur additional costs.

• Target deduplication system vendors bundle their storage hardware with the deduplication software, so refreshing the hardware platform means the software is repurchased. Again, Symantec takes a different approach, licensing software and hardware separately.


USERS DRIVE NEW DEDUPE DEVELOPMENTS


In addition to Arkeia’s progressive deduplication approach, other developments have been pushing the dedupe envelope. CommVault’s ability to deduplicate on physical tape media is one such example. In spite of the initial hype regarding disk-only data protection and the potential to eliminate tape, for most companies the reality is that tape is still an obvious, low-cost choice for long-term data retention. Dedupe has been considered only a disk-centric process due to the need for the deduplication index and all unique data to be available and accessible to “rehydrate” what’s stored. That means when deduplicated data is copied or moved from the deduplication store to tape media, it must be reconstituted, reversing all the benefits of data reduction.


But CommVault’s Simpana software enables archival copies of deduplicated data without rehydration, requiring less tape media. Importantly, data can be recovered from tape media without having to first recover the entire tape to disk.

When source deduplication approaches gained traction, the key benefits touted were the end-to-end efficiency of backing up closer to the data source (content-awareness, network bandwidth savings and faster backups) and distributing deduplication processing across the environment (vs. having the proverbial four-lane highway hit the one-lane bridge downstream at the target deduplication system). These two themes are evident in HP’s StoreOnce deduplication strategy and EMC Data Domain’s Boost approach.

While HP Data Protector software doesn’t have deduplication built into its backup architecture today, users can benefit from HP’s StoreOnce deduplication strategy. StoreOnce is a modular component that runs as a service in a file system. It can be integrated with HP Data Protector backup software and HP’s scale-out file system or embedded in HP infrastructure components. The StoreOnce algorithm involves two steps: sampling large data sequences (approximately 10 MB) to determine the likelihood of duplicates and routing them to the best node for deduplication, and then doing a hash compare on smaller chunks. HP’s dedupe strategy is differentiated because it’s portable, scalable and global. The implication is that dedupe deployments can extend across a LAN or WAN and among storage systems without flip-flopping data between rehydrated and deduplicated states.

EMC Data Domain’s Boost option enables Data Domain to perform deduplication pre-processing earlier in the backup flow with NetBackup, Backup Exec, EMC Avamar or EMC NetWorker. A Data Domain software component is installed on the backup server or application client. The tasks performed there help improve deduplication performance by distributing the workload while introducing network efficiency between the backup server or application client and the Data Domain system.
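
The network-efficiency half of that story is easy to sketch. The fingerprint-negotiation flow below is invented for illustration (it is not DD Boost or StoreOnce code): the source hashes chunks first, asks the target which fingerprints are new and ships only those payloads:

```python
import hashlib

CHUNK = 8192

class Target:
    def __init__(self):
        self.store = {}                      # fingerprint -> chunk payload

    def missing(self, fingerprints):
        """Phase 1: tell the client which fingerprints are new to us."""
        return [fp for fp in fingerprints if fp not in self.store]

    def put(self, chunks):
        """Phase 2: accept only the unique chunk payloads."""
        for fp, chunk in chunks.items():
            self.store[fp] = chunk

def backup(client_data: bytes, target: Target) -> int:
    """Return how many payload bytes crossed the 'network'."""
    chunks = {hashlib.sha256(client_data[i:i + CHUNK]).digest():
              client_data[i:i + CHUNK]
              for i in range(0, len(client_data), CHUNK)}
    needed = target.missing(list(chunks))    # fingerprints only: cheap
    target.put({fp: chunks[fp] for fp in needed})
    return sum(len(chunks[fp]) for fp in needed)

target = Target()
monday = b"corporate data " * 200_000        # ~3 MB, highly redundant
print("first backup sent :", backup(monday, target), "bytes")
print("repeat backup sent:", backup(monday, target), "bytes")  # near zero
```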


WHAT’S IN STORE FOR DEDUPLICATION?


Disk-based data protection addresses backup window issues and deduplication addresses the cost of disk used in backup configurations. But new capture techniques, such as array-based snapshots, are emerging to meet high-performance requirements for those organizations with little or no backup window and minimal downtime tolerance. In many cases, block-level incremental capture and deduplication are baked into snapshot products.

NetApp’s Integrated Data Protection products (SnapMirror, SnapProtect and SnapVault), coupled with NetApp FAS-based deduplication, eliminate the need for deduplication in backup software or target deduplication systems. Similarly, Actifio Virtual Data Pipeline (VDP) takes a full image-level backup and continuous block-level incrementals thereafter, and deduplicates and compresses the data so a third-party data reduction application isn’t needed. Nimble Storage takes a similar approach. It combines primary and secondary storage in a single solution, leverages snapshot- and replication-style data protection, and employs capacity optimization techniques to reduce the footprint of backup data. These approaches undermine traditional-style backup and, therefore, traditional deduplication techniques.

Lauren Whitehouse is a senior analyst focusing on data protection software and systems at Enterprise Strategy Group, Milford, Mass.



NEW TRENDS in STORAGE

It may seem as if storage technologies are a little stodgy and out of date, but there’s plenty of technical development going on at both big storage vendors and smaller upstarts.

BY STEPHEN FOSKETT


THE ENTERPRISE DATA storage industry doesn’t have a reputation as a hotbed of innovation, but that characterization may be unfair.


Although bedrock technologies like RAID and SCSI have soldiered along for more than two decades, new ideas have flourished as well. Today, technologies like solid-state storage, capacity optimization and automatic tiering are gaining prominence, and specialized storage systems for virtual servers are being developed. Although the enterprise arrays of tomorrow will still be quite recognizable, they’ll adopt and advance these new concepts.


SOLID-STATE CACHE


Spinning magnetic disks have been the foundation for enterprise data storage since the 1950s, and for just about as long there’s been talk of how solid-state storage will displace them. Today’s NAND flash storage is just a decade old, yet it has already gained significant traction thanks to its performance and mechanical characteristics. Hard disk drives (HDDs) won’t go away anytime soon, but NAND flash will likely become a familiar and dependable component across the spectrum of enterprise storage.

Hard disks excel at delivering capacity and sequential read and write performance, but modern workloads have changed. Today’s hypervisors and database-driven applications demand quick random access that’s difficult to achieve with mechanical arms, heads and platters. The best enterprise storage arrays use RAM as a cache to accelerate random I/O, but RAM chips are generally too expensive to deploy in bulk. NAND flash memory, in contrast, is just as quick at servicing random read and write requests as it is with those that occur close together, and the fastest enterprise NAND flash parts challenge DRAM for read performance. Although less expensive, flash memory (especially the enterprise-grade single-level cell [SLC] variety) remains an order of magnitude more costly than hard disk capacity. Growth in the deployment of solid-state drives (SSDs) has slowed, and they aren’t likely to displace magnetic media in capacity-oriented applications anytime soon.

Flash memory has found a niche as a cache for hard disk drive-based storage systems. Caching differs from tiered storage (see the section on “Automated tiered storage”) in that it doesn’t use solid-state memory as a permanent location for data storage. Rather, this technology redirects read and write requests from disk to cache on-demand to accelerate performance, especially random I/O, but commits all writes to disk eventually. Major vendors like EMC Corp. and NetApp Inc. have placed flash memory in their storage arrays and designed controller software to use it as a cache rather than a tier. NetApp’s Flash Cache cards use the internal PCI bus in their filers, while EMC’s Clariion FAST Cache relies on SATA-connected SSDs. But both leverage their existing controllers and expand on the algorithms already in place for RAM caching.
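
The cache-not-tier distinction comes down to one rule: flash holds copies for speed, but disk remains the permanent home of every block. A minimal sketch, with the eviction policy and sizes as assumptions:

```python
from collections import OrderedDict

class FlashCache:
    """Flash absorbs hot reads and writes; disk stays the system of record."""

    def __init__(self, disk, capacity=4):
        self.disk, self.capacity = disk, capacity
        self.cache = OrderedDict()           # block -> (data, dirty)

    def read(self, block):
        if block in self.cache:              # cache hit: no disk I/O
            self.cache.move_to_end(block)
            return self.cache[block][0]
        data = self.disk[block]              # miss: promote a copy from disk
        self._insert(block, data, dirty=False)
        return data

    def write(self, block, data):
        self._insert(block, data, dirty=True)   # absorb the write in flash

    def _insert(self, block, data, dirty):
        self.cache[block] = (data, dirty)
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:  # LRU eviction (assumed policy)
            old, (old_data, old_dirty) = self.cache.popitem(last=False)
            if old_dirty:                        # commit to disk on eviction
                self.disk[old] = old_data

    def flush(self):
        """Commit all dirty blocks; flash holds no unique data afterward."""
        for block, (data, dirty) in self.cache.items():
            if dirty:
                self.disk[block] = data
                self.cache[block] = (data, False)

disk = {n: f"v0-{n}" for n in range(10)}
cache = FlashCache(disk)
cache.write(3, "v1-3")            # random write lands in flash first
print(cache.read(3), disk[3])     # flash serves v1-3; disk still holds v0-3
cache.flush()
print(disk[3])                    # after flush, disk has v1-3 too
```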


Avere Systems Inc. and Marvell Technology Group Ltd., a couple of relative newcomers, take a different tack. With a history in the scale-out network-attached storage (NAS) space, Avere’s team developed an appliance that sits in-band between existing NAS arrays and clients. “No single technology is best for all workloads,” said Ron Bianchini, Avere’s founder and CEO, “so we built a device that integrates the best of RAM, flash and disk.” Bianchini claims Avere’s FXT appliance delivers 50 times lower access latency using a customer’s existing NAS devices.

Marvell’s upcoming DragonFly Virtual Storage Accelerator (VSA) card is designed for placement inside the server itself. The DragonFly uses speedy non-volatile RAM (NVRAM) as well as SATA-connected SSDs for cache capacity, but all data is committed to the storage array eventually. “This is focused on random writes, and it’s a new product category,” claims Shawn Kung, director of product marketing at Marvell. “DragonFly can yield an up to 10x higher virtual machine I/O per second, while lowering overhead cost by 50% or more.”

All-Flash storage

ALTHOUGH FLASH is expensive on a capacity basis compared to hard disk technology, many applications can be run completely in flash.


iSCSI pioneer Nimbus Data Systems Inc. transitioned to an all-flash offering last year and has seen good results. “Our S-Class enterprise storage arrays deliver 90% lower energy costs and 24x better I/O performance,” said CEO Thomas Isakovich. “And since we include inline deduplication and thin provisioning, we’re competitive on a cost-per-used-capacity basis as well.”

All-flash storage in a PCI card form factor is popular in high-performance applications as well. Fusion-io has gained traction with its ioDrive cards, and LSI, OCZ Technology Group Inc., Texas Memory Systems Inc. and Virident Systems Inc. have also found enterprise success with solid-state systems. Flash maker Micron Technology Inc. recently jumped into this market with a PCI Express flash storage card priced 25% lower than its competition.


The company plans to deliver production products in the fourth quarter.

EMC, famous for its large enterprise storage arrays, is also moving into server-side caching. Barry Burke, chief strategy officer for EMC Symmetrix, said EMC’s Lightning project “will integrate with the automated tiering capabilities already delivered to VMAX and VNX customers.” EMC previewed the project at the recent EMC World conference and plans to ship it later this year.

VIRTUALIZATION-OPTIMIZED STORAGE


One common driver for the adoption of high-performance storage arrays is the expanding use of server virtualization. Hypervisors allow multiple virtual machines (VMs) to share a single hardware platform, which can have serious side effects when it comes to storage I/O. Rather than a slow and predictable stream of mostly sequential data, a busy virtual server environment is a fire hose torrent of random reads and writes. This “I/O blender” challenges the basic assumptions used to develop storage system controllers and caching strategies, and vendors are rapidly adapting to the new rules. The deployment of SSD and flash caches helps, but virtual servers are demanding in other ways as well. Virtual environments require extreme flexibility, with rapid storage provisioning and dynamic movement of workloads from machine to machine. Vendors like VMware Inc. are quickly rolling out technologies to integrate hypervisor and server management, including VMware’s popular vStorage API for Array Integration (VAAI).

Virtual server environments are an opportunity for innovation and new ideas, and startups are jumping into the fray. One such company, Tintri Inc., has developed a “VM-aware” storage system that combines SATA HDDs, NAND flash and inline data deduplication to meet the performance and flexibility needs of virtual servers.


“Traditional storage systems manage LUNs, volumes or tiers, which have no intrinsic meaning for VMs,” said Tintri CEO Kieran Harty. “Tintri VMstore is managed in terms of VMs and virtual disks, and we were built from scratch to meet the demands of a VM environment.”

Tintri’s VM-aware storage target isn’t the only option. IO Turbine Inc. leverages PCIe-based flash cards or SSDs in server hardware with Accelio, its VM-aware storage acceleration software. “Accelio enables more applications to be deployed on virtual machines without the I/O performance limitations of conventional storage,” claims Rich Boberg, IO Turbine’s CEO. The Accelio driver transparently redirects I/O requests to the flash as needed to reduce the load on existing storage arrays.
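
A toy model shows how the I/O blender described above works: four VMs, each perfectly sequential within its own extent, interleave into an arrival stream the array can only treat as random. Extent sizes and the round-robin interleave are assumptions:

```python
# Each VM issues sequential I/O within its own region of the datastore,
# but the hypervisor interleaves the streams, so the array sees jumps
# all over the address space.

def vm_stream(vm, requests=8, region=1_000_000):
    base = vm * region                               # each VM owns an extent
    return [base + i * 8 for i in range(requests)]   # sequential 8-block I/Os

streams = [vm_stream(vm) for vm in range(4)]
blended = [lba for batch in zip(*streams) for lba in batch]   # interleaved

def sequential_fraction(lbas):
    """Share of requests that land right after the previous one."""
    steps = [abs(b - a) for a, b in zip(lbas, lbas[1:])]
    return sum(step <= 8 for step in steps) / len(steps)

print("single VM  :", sequential_fraction(streams[0]))   # 1.0 -- easy for disks
print("blended VMs:", sequential_fraction(blended))      # 0.0 -- looks random
```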


CAPACITY OPTIMIZATION

Not all data storage innovations are focused on performance. The growth of data has been a major challenge in many environments, and deleting data isn’t always an acceptable answer. Startups like Ocarina and Storwize updated existing technologies like compression and single-instance storage (SIS) for modern storage applications.


The end of the SAN?

ALTHOUGH SCSI IS still the dominant enterprise data storage protocol in the form of Fibre Channel and iSCSI, that might change in the future. The rise of PCI Express storage suggests that centralized networked storage might not always dominate. Internal cards dramatically reduce access latency, and the performance of these solutions is an order of magnitude better than traditional SCSI-based technology.

The rise of virtual machine-specific and cloud storage suggests that other changes are imminent. In both cases, some products eschew traditional block or file access in favor of an application programming interface (API). These devices are designed to be integrated, automated components of a larger environment, application platform or hypervisor, and would no longer require storage architects and managers.


Now that these companies are in the hands of major vendors (Dell Inc. and IBM, respectively), users are beginning to give capacity optimization a serious look. Reducing storage has ripple effects, requiring less capacity for replication, backup and disaster recovery (DR) as well as primary data storage. “The Ocarina technology is flexible enough to be optimized for the platforms we’re embedding the technology into,” said Mike Davis, marketing manager for Dell’s file system and optimization technologies. “This is an end-to-end strategy, so we’re looking closely at how we can extend these benefits beyond the storage platforms to the cloud as well as the server tier.”

Data deduplication is also moving to the primary storage space. Once only used for backup and archiving applications, NetApp, Nexenta Systems Inc., Nimbus Data Systems Inc., Permabit Technology Corp. and others are applying deduplication technology in arrays and appliances. “NetApp’s deduplication technology [formerly known as A-SIS] is optimized for both primary [performance and availability] as well as secondary [capacity-optimized backup, archive and DR] storage requirements,” said Val Bercovici, NetApp’s cloud czar. NetApp integrated deduplication into its storage software and claims no latency overhead on I/O traffic.


AUTOMATED TIERED STORAGE


One hot area of innovation for the largest enterprise storage vendors is the transformation of their arrays from fixed RAID systems to granular, automatically tiered storage devices. Smaller companies like 3PAR and Compellent (now part of Hewlett-Packard Co. and Dell, respectively) kicked off this trend, but EMC, Hitachi Data Systems and IBM are delivering this technology as well. A new crop of startups, including Nexenta, is also active in this area. “NexentaStor leverages SSDs for hybrid storage pools, which automatically tier frequently accessed blocks to the SSDs,” noted Evan Powell, Nexenta’s CEO. Powell also said that his firm’s software platform allows users to supply their own SSDs, which he claims reduces the cost of entry for this technology.


EMC has added virtual provisioning and automated tiering across its product line. “EMC took a new storage technology [flash] and used it to deliver both greater performance as well as cost savings,” said Chuck Hollis, EMC’s global marketing chief technology officer. “Best of all, it’s far simpler to set up and manage.”

Like caching, automated tiered storage improves data storage system performance as much as it attacks the cost of capacity. By moving “hot” data to faster storage devices (10K or 15K rpm disks or SSD), tiered storage systems can perform faster than similar devices without the expense of widely deploying these faster devices. Conversely, automated tiering can be more energy- and space-efficient because it moves “bulk” data to slower but larger-capacity drives.
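
In skeleton form the promote/demote loop is simple; the extent granularity, measurement window and tier size below are all invented for illustration:

```python
from collections import Counter

# Activity-based tiering sketch: count accesses per extent over a window,
# keep the hottest extents on the fast tier, demote the rest.

SSD_EXTENTS = 2                  # the fast tier only fits two extents (assumed)

def rebalance(extents, access_counts):
    """Promote the most-accessed extents to SSD, demote the others."""
    hot = {name for name, _ in access_counts.most_common(SSD_EXTENTS)}
    return {name: ("ssd" if name in hot else "hdd") for name in extents}

extents = ["db-index", "db-log", "archive", "home-dirs"]
window = Counter({"db-index": 950, "db-log": 400, "home-dirs": 12, "archive": 3})

for name, tier in rebalance(extents, window).items():
    print(f"{name:9s} -> {tier}")
```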

INNOVATION IN STORAGE

Enterprise storage vendors must maintain compatibility, stability and performance while advancing the state of the art in technology—goals that may sometimes seem at odds. Although smaller companies have been a little more nimble at introducing new innovations like capacity optimization and virtualization-aware storage access, the large vendors are also moving quickly. They’ve put into service solid-state caching and automated tiered storage, and are moving forward in other areas. Whether through invention or acquisition, innovation is alive and well in enterprise storage.

Stephen Foskett is an independent consultant and author specializing in enterprise storage and cloud computing. He is responsible for Gestalt IT, a community of independent IT thought leaders, and organizes their Tech Field Day events. He can be found online at GestaltIT.com, FoskettS.net and on Twitter at @SFoskett.



STORAGE VIRTUALIZATION: IT'S READY, ARE YOU?

Adoption of storage virtualization has been accelerating as some of the early obstacles to implementation have fallen by the wayside. There's a wide choice of mature products whether you decide to deploy storage virtualization at the array or in the network.

BY ERIC SLACK

WHILE THERE MAY BE some dispute over an exact definition, storage virtualization is generally considered technology that provides a flexible, logical arrangement of data storage capacity to users while abstracting the physical location from them. It's a software layer that intercepts I/O requests to the logical capacity and maps them to the correct physical locations.

The most basic implementation of storage virtualization is at the host level, where a logical volume manager allows the simple provisioning of storage capacity to apps and users. Though it's also implemented in file storage systems, virtualization is more commonly applied to block storage due to the complexity of LUN management and the requirements for flexibility in storage provisioning, especially in multi-user environments. This article covers storage virtualization technologies at the network and storage device level, not at the host level.
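At its core, that software layer is an indirection table consulted on every I/O. Here's a minimal sketch, with hypothetical names, of how a logical block address might be translated to an array, LUN and physical offset; real products typically map fixed-size extents and keep the table persistent and redundant.

```python
# Minimal sketch of block storage virtualization as an indirection table.
# Names are hypothetical; a real product journals this mapping so it
# survives controller failures.

class VirtualVolume:
    EXTENT_BLOCKS = 2048   # blocks per mapped extent (arbitrary here)

    def __init__(self):
        # logical extent number -> (array_id, lun_id, physical extent)
        self.extent_map = {}

    def map_extent(self, logical_extent, array_id, lun_id, phys_extent):
        self.extent_map[logical_extent] = (array_id, lun_id, phys_extent)

    def translate(self, logical_block):
        """Intercept an I/O request and return its physical location."""
        extent, offset = divmod(logical_block, self.EXTENT_BLOCKS)
        array_id, lun_id, phys_extent = self.extent_map[extent]
        return array_id, lun_id, phys_extent * self.EXTENT_BLOCKS + offset

vol = VirtualVolume()
vol.map_extent(0, array_id="arrayA", lun_id=7, phys_extent=42)
print(vol.translate(100))   # -> ('arrayA', 7, 86116)
```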

GOODBYE TO GROUPS, LUNs AND PARTITIONING


The legacy process of creating array groups, allocating LUNs and partitioning volumes is a complicated and inefficient way to provision storage, particularly when it involves balancing performance and reliability of physical disks across drive shelves. Similarly, expanding an existing host's volume can be a time-consuming process of concatenating LUNs and copying data. Storage virtualization provides a better way to keep up with the demands of provisioning storage to applications and servers while reducing the time and resources expended, by allowing the "brains" of the storage system to make most of the decisions. It can also improve utilization by replacing the guesswork of manual allocation while supporting technologies like thin provisioning.

Initially, virtualization was simply a tool used to provision and manage storage efficiently. But by isolating the host from physical storage, the technology also enabled storage capacity in different physical chassis (even from different manufacturers) to be logically combined into common pools that could be managed more easily. While some of these heterogeneous systems were used to create larger volumes than were physically present on any one disk array, most use cases employed storage virtualization as a common management platform. This enabled existing storage systems to be repurposed and reduced the overhead associated with managing multiple silos of storage, although the physical disk systems still needed to be maintained.

Virtualization can improve performance, as host volumes are easily spread across larger numbers of disk drives, though this can negatively affect capacity utilization. Virtualization also allows storage tiering and data migrations between devices, such as moving older data to an archiving appliance or hot database indexes to a solid-state drive (SSD) cache. These activities are typically carried out based on policies set at the host, application or file level, and the same data movement mechanism can be used to migrate data offsite for disaster recovery (DR) purposes.

DEVICE-BASED VIRTUALIZATION


In the traditional scale-up architecture, where the controllers are separate from the disk shelves, virtualization at the storage device level is typically built into the controller operating system. As a standard feature, it provides a workable solution for provisioning the tens or hundreds of terabytes that modern storage arrays can contain.

Most systems include the ability to create tiers of storage within a single virtualized system or among discrete systems, using different storage types (performance drives, capacity drives or SSDs) and different RAID levels. Some also include a policy engine and the ability to move file or sub-file data blocks among the tiers based on activity, application and so on. Most systems allow data to be copied to a second chassis for high availability or moved to a second system at a remote site for DR.

While the majority of storage systems include virtualization, most don't support storage from other vendors. For a heterogeneous virtualization solution, one that can consolidate different vendors' storage systems, most options are network based.


NETWORK-BASED VIRTUALIZATION


A number of years ago, the conventional storage wisdom was that storage services like virtualization, and to an extent storage control, would eventually reside in "smart switches" on the storage-area network (SAN). While at least one storage virtualization product is moving in that direction, the network implementation of storage virtualization technology has commonly taken the form of appliances. These appliances are essentially storage controllers that connect to disk arrays or storage systems from certified vendors, or they're software that's installed on user-supplied servers or virtual machines (VMs).

Storage virtualization appliances connect to heterogeneous storage arrays directly, or via Fibre Channel (FC) or iSCSI SANs, but most provide the option of using their own disk capacity as well. Most solutions include some storage services, like file sharing, snapshots, data deduplication, thin provisioning, replication, continuous data protection (CDP) and so on.
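Thin provisioning, to take one of those services, is essentially lazy allocation behind the mapping layer: a volume advertises its full logical size but consumes physical blocks only when they're first written. A minimal sketch with hypothetical names:

```python
# Minimal sketch of thin provisioning: physical blocks are allocated on
# first write, and unwritten blocks read back as zeros.

class ThinVolume:
    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks   # size promised to the host
        self.allocated = {}                    # logical block -> data

    def write(self, block, data):
        if not 0 <= block < self.logical_blocks:
            raise IndexError(block)
        self.allocated[block] = data           # allocate on first write

    def read(self, block):
        return self.allocated.get(block, b"\x00")

    @property
    def physical_blocks_used(self):
        return len(self.allocated)

vol = ThinVolume(logical_blocks=1_000_000)     # looks fully provisioned
vol.write(0, b"hello")
print(vol.physical_blocks_used)                # 1 block actually consumed
```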


IN-BAND AND OUT-OF-BAND VIRTUALIZATION


Early on in the lifecycle of storage virtualization technology, two primary architectures emerged: in-band and out-of-band virtualization. In-band implementations placed a controller between users and physical storage or the SAN, and passed all storage requests and data through that controller. Out-of-band products placed a metadata controller on the network that remapped storage requests to physical locations but didn't handle the actual data. That added complexity to the process but reduced the CPU load compared to in-band virtualization. Out-of-band storage virtualization also removed the potential disruption associated with decommissioning an in-band device, where users are disconnected from their data while storage is remapped.

Most network-based virtualization solutions today use the in-band architecture, probably because CPU power is relatively plentiful compared to when storage virtualization first appeared. Another reason for the popularity of in-band solutions is that they're easier to implement, which means faster time to market and fewer problems.
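The difference between the two data paths is easy to see in code form. Here's a minimal sketch with invented interfaces (no vendor's actual API): in-band, every byte funnels through the controller; out-of-band, the host asks where the data lives and then reads it directly.

```python
# Minimal sketch contrasting the two architectures; all interfaces are
# invented for illustration.

def read_in_band(controller, logical_block):
    # In-band: the controller sits in the data path, so both the address
    # lookup and the data transfer consume its CPU and bandwidth.
    array_id, phys = controller.translate(logical_block)
    return controller.forward_read(array_id, phys)   # data passes through

def read_out_of_band(metadata_ctrl, arrays, logical_block):
    # Out-of-band: the controller only answers "where is it?"; the host
    # then reads directly from the physical array, bypassing the controller.
    array_id, phys = metadata_ctrl.translate(logical_block)
    return arrays[array_id].read(phys)
```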

WHAT IS SCALE-OUT STORAGE?

"Scale-out" storage refers to modular systems that combine processors and storage capacity into discrete physical nodes. This clustered architecture lets processing power expand with capacity as nodes are added, and provides for more incremental, albeit non-heterogeneous, growth. While it could be called "device based," virtualization in the scale-out space is more than a standard feature; it's required. It enables these systems to scale non-disruptively while user volumes span nodes in the cluster.
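To illustrate why (a sketch with hypothetical names, not any product's placement scheme): every extent of a volume must land deterministically on some node, so that any node can locate data without a central lookup.

```python
# Minimal sketch of deterministic extent placement in a scale-out cluster.
# Real systems use consistent hashing or a directory service so adding a
# node relocates only a fraction of extents; the naive modulo below would
# reshuffle nearly everything.

import hashlib

def place_extent(volume_id, extent_no, nodes):
    """Pick the node that owns one extent of a volume."""
    key = f"{volume_id}:{extent_no}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return nodes[digest % len(nodes)]

nodes = ["node1", "node2", "node3"]
print(place_extent("vol7", 0, nodes))   # same inputs always map the same way
```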


STORAGE VIRTUALIZATION PRODUCTS

Virtualization has become an essential function for storage provisioning and is included in some form with most midsized and larger storage systems. While there are many differences among arrays and their virtualization technologies, the majority of these device-based implementations don't support disk capacity from other manufacturers. Rather than list that large number of storage systems, we'll focus on the smaller category of heterogeneous products. The following are examples of heterogeneous storage virtualization as implemented in hardware and software products available from a variety of vendors.


DataCore Software Corp.'s SANsymphony is a network-based, in-band software product that runs on commodity x86 servers. It supports heterogeneous storage devices via FC, Fibre Channel over Ethernet (FCoE) or iSCSI, and connects to hosts as FC or iSCSI storage. Multiple-node clusters can be created to scale capacity and provide high availability. The system provides remote replication and storage services like synchronous mirroring, CDP, thin provisioning and tiered storage.

EMC Corp.'s Invista is an out-of-band software solution that runs on a pair of servers (called a Control Path Cluster or CPC) and interacts with "intelligent switches" from Brocade or Cisco. It can virtualize storage from most major vendors, connecting to storage and host servers via Fibre Channel. Invista provides mirroring, replication and point-in-time clones between storage arrays.

FalconStor Software Inc.'s Network Storage Server (NSS) is a network-based, in-band appliance that connects to heterogeneous storage systems via iSCSI, FC or InfiniBand, and supports host connectivity with Fibre Channel or iSCSI. Expansion and high availability are provided by connecting multiple controller modules. Besides WAN-optimized replication, NSS also provides synchronous mirroring, thin provisioning, snapshots and clones.

Hitachi Data Systems' Universal Storage Platform V (USP V) is a tier 1 storage array that also provides in-band heterogeneous connectivity to most major storage vendors' arrays. It includes the kinds of features and services expected from a tier 1 solution, including thin provisioning of internal and externally attached storage.

IBM's SAN Volume Controller (SVC) is a network-based, in-band virtualization controller that sits on the SAN and connects to heterogeneous storage systems via iSCSI or FC. Pairs of SVC units provide high availability, and up to eight nodes can be clustered to scale bandwidth and capacity. Each SVC module features replication between storage systems and a mirroring function between local or remote SVC units.

NetApp Inc.'s V-Series Open Storage Controller is an in-band virtualization solution that's very similar to a NetApp filer controller, but configured to support heterogeneous storage arrays. It connects to an FC SAN on the back end to consolidate as much storage as desired from existing LUNs, and pools them into NetApp LUNs for block or file provisioning as would a regular NetApp filer. NetApp recently acquired the Engenio Storage Virtualization Manager (SVM), a network-based, in-band virtualization controller that supports heterogeneous storage systems. Details of how NetApp will market this solution have yet to be announced.


HANDLE WITH CARE


Because most storage virtualization products are in-band, care should be taken to understand the effective performance of the virtualization appliance or cluster, as this will be the gating factor to capacity expansion. In addition, storage services or features will also consume CPU cycles, further reducing effective capacity.

Storage virtualization is a powerful tool for reducing Capex by improving capacity utilization or performance, but its biggest benefit may be on the Opex side. It can simplify storage management, even across platforms, and reduce administrative overhead. Virtualization can also make storage expansion a relatively simple operation, often done without taking storage systems down or disrupting users.
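As a back-of-the-envelope illustration of the in-band performance ceiling described above (all figures invented):

```python
# Hypothetical sizing: an in-band appliance that sustains 200,000 IOPS,
# workloads averaging 150 IOPS per usable TB, and 20% of controller CPU
# reserved for services like snapshots and deduplication.

appliance_iops = 200_000
iops_per_tb = 150
services_overhead = 0.20

usable_iops = appliance_iops * (1 - services_overhead)
max_capacity_tb = usable_iops / iops_per_tb
print(f"controller gates out near {max_capacity_tb:,.0f} TB")   # ~1,067 TB
```

Past that point, adding shelves adds capacity the controller can no longer drive at full speed.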

FILE STORAGE VIRTUALIZATION

WHILE MANY STORAGE systems include file services, they virtualize data at the block level. However, there are network-attached products that can consolidate standalone network-attached storage (NAS) systems. These appliances provide a global namespace to users on the front end and map file requests to the right physical NAS on the back end. These systems can also provide file storage tiering and migration, some even to cloud storage providers. Examples of file virtualization products include the following:

AutoVirt Inc. markets an out-of-band file storage virtualization software product that runs on a pair of Windows servers or virtual machines (VMs). It provides a global namespace and a policy engine for data tiering, migration and archiving. Because it's out-of-band, it can be removed from the environment without disruption.

Avere Systems Inc.'s FXT is a heterogeneous, scale-out NAS appliance implemented in clusters of up to 25 2U modules, each containing primarily solid-state (DRAM and solid-state drive) storage. The FXT cluster supports a global, tiered file system, typically encompassing NAS systems from other manufacturers; it also provides file virtualization across platforms.

F5 Networks Inc.'s ARX products are a series of in-band file virtualization appliances that can consolidate multiple heterogeneous NAS devices behind a global namespace, supporting CIFS and NFS protocols. They also provide a policy engine that can automatically move files between NAS systems, locally or to the cloud, based on file attributes, activity or other criteria.
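Here's a minimal sketch of the global-namespace mechanism these appliances share, with hypothetical mount points: the client sees one directory tree, and the appliance rewrites each path to the back-end NAS that actually holds it.

```python
# Minimal sketch of a file-virtualization global namespace. Mount points
# and back-end names are hypothetical; real appliances also track
# per-file placement so policy-driven moves don't change client paths.

NAMESPACE = {
    "/corp/finance":     "nas1:/vol/finance",
    "/corp/engineering": "nas2:/vol/eng",
    "/corp/archive":     "nas3:/vol/cold",
}

def resolve(client_path):
    """Map a client-visible path to its physical NAS location."""
    # Longest-prefix match so nested mount points win over parents.
    for prefix in sorted(NAMESPACE, key=len, reverse=True):
        if client_path.startswith(prefix):
            return NAMESPACE[prefix] + client_path[len(prefix):]
    raise FileNotFoundError(client_path)

print(resolve("/corp/finance/2011/q2.xls"))   # nas1:/vol/finance/2011/q2.xls
```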


Eric Slack is a senior analyst at Storage Switzerland.



hot spots | lauren whitehouse

Options for ROBOs: Choose a backup method for the ages


Satellite offices and workers are changing the look of companies of all sizes, and backup technology is changing to keep pace. Learn which strategy is best for your remote office, and whether remote copies and tape are necessary or not.

DUE TO THE wide distribution of corporate data across sites, organizations with remote offices/branch offices (ROBOs) are often challenged by the demands associated with backup and recovery. Enterprise Strategy Group (ESG) recently surveyed more than 450 IT professionals regarding people, process and technology at ROBO locations ("2011 Remote Office and Branch Office Technology Trends," June 2011) and found that 59% of firms with fewer than 10 employees at ROBOs function without any local IT staff, even though 71% indicated that on-site storage is leveraged at some point in the backup process at these locations. Both disk and tape storage systems remain the go-to components of most ROBO data protection strategies, but newer wide-area/remote backup technologies are garnering more serious consideration as a primary means of data backup. Specifically, 26% of organizations currently back up data from these locations over the WAN directly to a centralized corporate site, vs. a mere 7% employing this methodology back in 2007.

Those with more storage capacity at ROBOs cited improving backup and recovery processes as a top IT priority. For example, ROBOs with more than 25 TB of storage capacity ranked this as their No. 1 priority, those with 1 TB to 25 TB ranked it second and ROBOs with less than 1 TB ranked it fourth. Data growth is a contributing factor: the top ROBO data storage challenges include keeping pace with overall data growth, the need to improve backup and recovery processes, and storage system costs.
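Several of the strategies described below hinge on source-side deduplication to make WAN transfers practical, and the core idea takes surprisingly little machinery. A minimal sketch (fixed-size chunks and an in-memory index are simplifications; shipping products use variable-length chunking and persistent indexes):

```python
# Minimal sketch of source-side deduplication for WAN backup: hash each
# chunk and send only chunks the central site hasn't already stored.

import hashlib

CHUNK_SIZE = 4096
central_index = set()   # stand-in for the index kept at the central site

def backup(path):
    sent = skipped = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest in central_index:
                skipped += 1        # only a tiny reference crosses the WAN
            else:
                central_index.add(digest)
                sent += 1           # new data crosses the WAN exactly once
    print(f"sent {sent} chunks, deduplicated {skipped}")
```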

ROBO DATA PROTECTION STRATEGIES


There are many options available when planning and configuring a data protection strategy for ROBOs. Choices will depend on the availability of on-site staff, the volume of data to protect, corporate policies regarding retention and privacy/security, available bandwidth and the capabilities of the backup infrastructure.

Centralized backup with no ROBO-based copy: With this option, data is backed up directly to an off-site corporate location, such as a corporate headquarters (HQ) data center, with no on-site copy. All backup data is centralized and under the direct control of the IT organization. This ensures the security of the backup copies, and the ability to enforce requirements for corporate or regulatory mandates. It also eliminates the need for local backup infrastructure and personnel. The downside is that the bandwidth required between sites to transfer daily backup streams could be costly and/or it could take considerable time to transmit backup data to/from the central site—unless source deduplication is employed to reduce the volume of data transferred between sites. That's probably why ESG research found this to be the top method for companies with 1 TB or less of data to protect.

Software as a Service (SaaS) with no ROBO-based copy: Data is backed up to a third-party service provider's cloud storage directly over the WAN, with no on-site copy. Similar to a centralized backup strategy, this approach maintains only a remote copy of data for recovery. After the initial configuration via a Web-based application, data is automatically backed up over a WAN connection at scheduled intervals to the service provider. Because data is transmitted over the WAN and there's no on-premises copy, the pros and cons of the SaaS model are similar to the HQ centralized approach; however, backup data custody is with a third party, so you have to be comfortable with everything that accompanies that strategy. The most important thing here is to make sure you understand your service-level agreements (SLAs) and that they work for you.

Local-only backup: Data is backed up to on-site storage with no off-site copy. This approach ensures a duplicate copy of data is made, but doesn't provide contingencies for a possible outage at the site.


In the event data can't be recovered locally or a catastrophe destroys the local copies and the original, it may not be possible to recoup your losses.

Local backup with an off-site copy via tape media: Data is initially backed up to on-site storage and a copy is sent off site via removable media (i.e., tape). This approach is the most traditional and still one of the more popular ways to ensure a two-site copy strategy. The on-site copy can be disk- or tape-based (D2D2T or D2T2T), with backup to disk providing a few benefits: speed, the ability to deduplicate data and ease of remote management. Copy to tape, however, requires local tape equipment, media, and a mechanism to transport copies to the central HQ or third-party storage facility. It also typically requires a local operator, especially if tape device or media error troubleshooting is required. Even with all the constant talk about eliminating tape and the adoption of disk in backup processes, this approach is the most popular overall as reported by ESG research respondents.

Local backup with an off-site copy sent over the WAN to HQ: Data is backed up to on-site storage and a copy is transmitted to a central corporate location over the WAN. With a disk-to-disk-to-disk (D2D2D) configuration, IT organizations can more easily manage backup operations from a remote location and reduce or eliminate ROBO-based staff. This method has gained in popularity over the last few years, mainly driven by lower disk costs, data deduplication and optimized replication between backup disk targets. The optimization introduced through deduplication delivers more efficient use of bandwidth and storage. The only downside is bulk recovery from the HQ copy: in the unlikely event a recovery is required from it, it may be faster to ship a portable disk to the ROBO site than to recover the data over existing bandwidth. This is an approach more often adopted by organizations with higher volumes of data to protect.

Local backup with an off-site copy sent over the WAN to the cloud: Data is initially backed up to on-site storage and a copy is then sent to a third-party cloud storage provider. The disk-to-disk-to-cloud (D2D2C) scenario uses local disk for most recoveries, while public cloud storage provides the repository for long-term data retention. Organizations get faster operational recovery from disk; however, rapid recovery from the cloud may prove to be a challenge for larger data sets.

Lauren Whitehouse is a senior analyst focusing on backup and recovery software and replication solutions at Enterprise Strategy Group, Milford, Mass.



read/write | jeff byrne

Hybrid clouds on the horizon

A few notable glitches have soured some users on cloud storage services, but a hybrid approach that integrates public and private storage may convince cloud skeptics.


THE FIRST HALF OF 2011 won't be remembered as the best of times for the cloud. Despite optimistic predictions, it's been a stormy few months for cloud storage services. An Amazon Web Services (AWS) networking glitch in April caused a multi-day interruption in service for some news-sharing and social networking sites. Earlier that month, Iron Mountain Digital announced it would be exiting the commodity-oriented, public cloud storage business over the next couple of years (although the company will continue to provide enterprise-class cloud storage services to business customers through an agreement with Autonomy). Finally, Cirtas Systems withdrew its cloud storage offering in April and laid off much of its engineering staff. That was the big news, but we've also noted that some small vendors are struggling to gain traction for their cloud storage and compute offerings.

These developments may not be surprising in what's still a fledgling market, but they've shaken data storage managers' confidence in the public cloud. To hedge their bets, some users are now considering alternative strategies, including hybrid clouds, which enable storage and associated apps to be deployed across both public and private cloud infrastructures. In fact, true hybrid cloud storage will span public and private clouds and be optimized for a user's specific applications and service-level requirements.

Granted, not many companies are running hybrid clouds today. But while the technology that will power hybrid clouds is still developing, the potential benefits are already coming into focus. Hybrid clouds provide the advantages users already expect from public cloud storage deployments, like pay-as-you-go flexibility and self-service. They also promise to provide the enterprise-level capabilities typically found only in a private cloud, such as secure multi-tenancy and the ability to deliver quality-of-service levels for availability and performance.

Major storage, systems and virtualization vendors are all working on hybrid cloud strategies and roadmaps they hope will give them a leg up in what's expected to be a fast-growing market.


Dell, Hewlett-Packard and IBM have hybrid cloud plans that encompass servers and storage. EMC, Hitachi Data Systems and NetApp have hybrid storage stories and even some concrete offerings.

Before hybrid clouds can enter the mainstream, some fundamental technical issues must be resolved. Security of data in transit and at rest is a paramount concern of users, particularly in light of recent data breaches. Storage vendors and cloud security startups are developing new encryption, firewall, identity management and associated technologies. Performance of critical applications is another key issue, and several vendors now offer innovative on-premises caching products that reduce data access latency and speed up data recovery.

Business issues are another concern for storage managers considering cloud deployments. Some industry regulations dictate how and where critical data can be stored, which might, for example, prevent users from using public clouds that have data centers spanning multiple geographies. The prospect of getting locked into a particular provider's public cloud is another worry. It's easy to upload data into most public clouds, but moving that data months or years later to a different provider can be difficult and costly.

While 2011 may not be the "knee-of-the-curve" year when hybrid cloud storage takes off, a number of interesting applications are catching on. Hybrid clouds may not yet have the capabilities to support primary storage for critical applications, but several vendors offer cloud-based disaster recovery, backup and gateway solutions. TwinStrata is building a strong cloud storage gateway business that enables on-demand expansion of storage capacity as well as data protection capabilities, linking into several different cloud providers. Another startup, StorSimple, helps users control large sets of distributed, unstructured data by surrounding it with a full complement of data lifecycle services. Many of these solutions aren't just ready for prime time; they're already satisfying growing numbers of early adopters.
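In outline, the gateway pattern behind products like these is simple: a local cache absorbs latency-sensitive reads while writes pass through to public cloud storage for capacity and protection. A minimal sketch with hypothetical interfaces, omitting the encryption, compression and smarter eviction a real gateway needs:

```python
# Minimal sketch of a cloud storage gateway: a local cache fronting a
# public cloud object store. The `cloud` client (put/get) is hypothetical.

class CloudGateway:
    def __init__(self, cloud, cache_limit=1000):
        self.cloud = cloud          # object store client with put()/get()
        self.cache = {}             # hot objects kept on local disk
        self.cache_limit = cache_limit

    def write(self, key, data):
        self.cloud.put(key, data)   # write through for durability
        self._cache(key, data)

    def read(self, key):
        if key in self.cache:       # hit: served at LAN latency
            return self.cache[key]
        data = self.cloud.get(key)  # miss: fetched over the WAN
        self._cache(key, data)
        return data

    def _cache(self, key, data):
        if len(self.cache) >= self.cache_limit:
            self.cache.pop(next(iter(self.cache)))   # naive FIFO eviction
        self.cache[key] = data
```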


At least one provider—Nirvanix—is delivering on the vision of hybrid cloud storage for the enterprise. Nirvanix hNode provides private cloud storage services that front-end the company's Storage Delivery Network (SDN) public cloud storage offering. The company's Cloud Sideloader technology lets users migrate files directly from providers such as AWS and Iron Mountain into Nirvanix data centers.

Beyond storage, hybrid clouds require a networking infrastructure that enables high availability and performance for a diverse set of workloads moving between public and private clouds, along with the monitoring and management tools to ensure it all works. As most IT managers are well aware, bigger pipes alone aren't enough to solve this problem. Rather, it takes optimizing data services across the scattered locations where apps may move, regardless of where the data is coming from, while providing visibility into the data passing through the network at an application, user and server level. Riverbed Technology, as one example, provides these enabling capabilities for a hybrid cloud today through its Steelhead and Cascade product families, and with its Akamai partnership looks likely to deliver new ways to optimize all manner of data and content no matter where the endpoints may reside.

While offerings such as these suggest that mainstream adoption of hybrid clouds may be fast approaching, we're not there yet. Clouds are still in their "wild west" growth phase, and the hybrid model is still evolving. But we see hybrids as a stabilizing force in the cloud market, bringing together the best of private and public clouds to address the demands of midsized and enterprise users. As we assess some of the early hybrid cloud storage solutions and look forward to the innovations that lie ahead, we're optimistic that the dark storms of this past spring are behind us.

Jeff Byrne is a senior analyst and consultant at Taneja Group. He can be reached at [email protected].


snapshot

Users get upper hand over remote site backup


Three years ago, nearly 25% of the firms we polled entrusted data backup at their remote sites to non-IT staff members. That number has now plummeted to only 6%. At the same time, the number of companies using automated processes to back up remote offices grew from 33% to 46%, so it looks like many firms are no longer relying on "civilian" backup jockeys. And two-thirds report that backup data is shipped to the main data center from an average of 28 remote locations. Thirty percent back up directly to disk at remote sites and then replicate to the data center, while 25% dedupe backup data first and then replicate. Thirty percent of firms looking to centralize their backup are considering a WAN optimization device and 29% expect to add a dedupe appliance that can replicate to the data center. The biggest gripe about remote site backups is throughput and packet loss issues when sending data over the WAN. —Rich Castagna

Currently, what's the most common problem you encounter when backing up remote or branch office data?
23% Sending data across the WAN to corporate results in intolerable packet loss/throughput levels
16% Backing up to tape at each site is costly and/or unreliable
12% Remote site backup tapes (or other media) aren't sent offsite
10% Remote site backups are done by inexperienced staff and/or aren't performed regularly
7% The added backup data from remote sites makes backup at the main data center difficult
7% None
3% Cloud (online) backup services are inadequate
22% Other

Average number of remote offices: 28

Who performs the remote backups?
46% An automated process
22% A general IT staffer
21% A dedicated storage admin
6% Non-IT staff member
5% Other

How do you back up data at remote offices?
24% To a deduplication appliance or storage device
23% To disk arrays and then to tape
23% Directly to tape
12% Other
9% To a cloud backup service
6% Directly to disk
3% No backup/No data stored remotely

“I think the biggest problem with cloud backup is that the bandwidth required for efficient backup is never available.” —Survey respondent


TechTarget Storage Media Group

STORAGE
Vice President of Editorial Mark Schlack
Editorial Director Rich Castagna
Senior Managing Editor Kim Hefner
Executive Editor Ellen O'Brien
Creative Director Maureen Joyce
Contributing Editors Tony Asaro, James Damoulakis, Steve Duplessie, Jacob Gsoedl, W. Curtis Preston

Senior News Director Dave Raffo
Senior News Writer Sonia Lelii
Features Writer Carol Sliwa
Features Writer Todd Erickson
Senior Site Editor Andrew Burton
Senior Site Editor Sue Troy
Managing Editor Heather Darcy
Associate Site Editor Megan Kellett
Assistant Site Editor Rachel Kossman
Assistant Site Editor John Hilliard
Assistant Site Editor Francesca Sales
UK Bureau Chief Antony Adshead

TechTarget Conferences
Director of Editorial Events Lindsay Jeanloz
Editorial Events Associate Jacquelyn Hinds

Storage magazine subscriptions: www.SearchStorage.com
Storage magazine, 275 Grove Street, Newton, MA 02466
[email protected]

COMING IN SEPTEMBER

The pros/cons of FC, iSCSI and NAS for virtual server storage
Matching virtualized servers to the right type of storage can be a critical decision, but there's no single type of networked storage that's hands-down the best for virtual servers. We review the pros and cons of each array alternative and suggest where each would fit best.

Quality Awards VI: Midrange arrays
Our Quality Awards program surveys midrange systems users for the sixth time. Three vendors that won four out of the first five Quality Awards in this category (StorageTek, EqualLogic and Compellent) have been acquired by larger firms. We'll see if last year's winner (Compellent) can repeat as part of Dell's stable.

Backing up the cloud
Backup was one of the first services offered by cloud storage vendors, and it's still the most popular way of using cloud storage. We'll cover how the technology works, when it's a viable alternative, how it can integrate with traditional backup, how much it costs, and other pros and cons of cloud backup services.

And don't miss our monthly columns and commentary, or the results of our Snapshot reader survey.
