Moving on with mobile - the challenge of success - Connect-World

North America I 2014

Nicholas Ilyadis

VP and CTO, Infrastructure & Networking Group (ING) Broadcom Corporation

Moving on with mobile - the challenge of success

DELIVER THE BEST EXPERIENCE ON EVERY PHONE, TABLET AND GADGET (EVEN ONES THAT HAVEN’T BEEN INVENTED YET). Mobile customers get savvier—and more demanding—every day. So the network has never mattered more. With device numbers set to nearly double in four years, Cisco is helping carriers offer better plans, more services and, always, a top-tier experience. The Cisco® Intelligent Network masters any device, anywhere, anytime. Regardless of operating system, communications standard, apps or hardware. Now, offering customers more is an easy call. Use the device of your choice to learn more at cisco.com/go/yourway

© 2012 Cisco Systems, Inc. All rights reserved.

All articles are available for download at www.connect-world.com

CONTENTS

Size, Spectrum, Speed and No Latency
Tomorrow's telecom: Preparing for a 5G World, by Lauri Oksanen, VP, Research and Technology, Nokia Solutions and Networks  4
Balancing security and speed in LTE: Can vendors keep pace?, by Vikash Varma, President & CEO, Stoke, Inc.  7

Relentless Growth: Video, VoLTE and IoT
Enabling the smarthome: Networks in the Zettabyte Era, by Todd Antes, VP, Product Management, Qualcomm Atheros, Inc.  9
Mobile video in the Zettabyte Era, by Chris Goswami, VP of Marketing, Openwave Mobility  12
The Cloud crushes the barriers to true OTT TV Everywhere, by Louis C. Schwartz, CEO, UUX  14

SDN, NFV and Cloud - Culprits or Saviours?
Five key trends for network transformation in the Zettabyte era, by Nicholas Ilyadis, VP and CTO, Infrastructure & Networking Group (ING), Broadcom Corporation  16
Innovation for a new era of information technology, by Ariel Efrati, CEO, Telco Systems, and COO, BATM Advanced Communications  19

Boosting the Transport Network
Captains of disruption in the telecom world: Carrier Ethernet, by Paul Pierron, CEO, FiberLight, LLC.  21
ZetaMan, by Cheri Beranek, President and CEO, Clearfield, Inc.  24
Optical wireless broadband: The need for bandwidth in the Zettabyte era, by Gerardo Gonzalez, President and CEO, Skyfiber  26

Managing Zettabyte Capacity and Performance
Improve performance management while increasing network data, by Anand Gonuguntla, President & CEO, Centina Systems  29
Mastering virtualization challenges, by Alastair Waite, Head of Enterprise Data Centre Business, EMEA, TE Connectivity  32
Feeding the two-headed broadband monster: Scaling the Internet's foundation to keep pace with demand, by Geoff Bennett, Director of Solutions & Technology, Infinera  34

Fast Data and Asset Analytics
Big Data in the Zettabyte Era, by Mike Hummel, Co-founder and CEO, ParStream  36
Explosive growth of network data: Are operators ready to control their network CAPEX?, by Vinod Kumar, Chief Operating Officer, Subex  38

Connections
From the Editor-in-Chief's desk, by Rebecca Copeland  2
Imprint  2

Advertisements
Cisco  IFC
Globecast  3
WCIT  6
Cartes Asia  11
MW Asia  18
NAB  23
Connect-World  28
CommunicAsia  31
AT&T  IBC
EMC  OBC

CONNECTIONS

Managing the Data Crisis

The relentless growth in data consumption is driven by growing demand for voluminous video-based big data, for real-time and time-critical fast data, and for remotely stored anywhere data that is transported to wherever the user is. Broadband is a 'two-headed monster'. One head faces ever-faster access networks, with the rollout of LTE, multi-user WLAN and long-reach fibre. The other head faces the core, with cloud-hosted functions, OTT-TV broadcasting and streaming video distribution that generate even more heavy traffic.

Everything in the network must gear up to the inevitable Zettabyte future, and there is a sense of a looming crisis. It is a crisis of spectrum crunching, so unused spectrum must be reclaimed and new spectrum ranges must be licensed. It is a crisis that demands increasing capacity at an alarming rate, so every last byte should be squeezed from existing infrastructure via performance and asset management. Most importantly, this crisis means finding ways of combining heterogeneous network technologies, such as Carrier Ethernet and Optical Wireless Broadband, and utilizing legacy as well as advanced networks, such as DWDM and the Optical Transport Network.

These pressing requirements explain, perhaps, the great interest in Carrier SDN and NFV as well as the Cloud. SDN and NFV are complementary technologies that usher in greater automation via virtualization, which is essential for complex networks and massive traffic. They decouple hardware from networked functions, allowing more cost-effective deployment and multi-vendor integration. The Cloud enables distribution of data closer to the users, for faster access on any user device and a shorter network journey. However, these Zettabyte 'saviours' are also the culprits - SDN/NFV send data to different server locations, wherever the virtualized functions have been assigned to a real server, and remotely stored data has to be transmitted to the device every time instead of being used locally. Cloud data is also sent across to multiple data centres for safekeeping. Hence, the monster remains hungry and data volumes continue to spiral up.

At the same time, users' expectations of uninterrupted, high-quality service are growing too, as both consumers and enterprises increasingly depend on the network for essential daily functions. Availability of faster data, not just bigger data, encourages the unprecedented demand for video streaming and real-time transmission. What LTE provides in increased speed, VoLTE and on-demand OTT-TV will take. Smarter ways of satisfying customers' preferences are required, e.g. lower quality but a fast start with no stutter is preferable to a clearer picture but slow, halting playback. Perceptions are all-important also for SLA management, with real-time response to network events, pre-empting problems before users notice them. Analysing what happens in the network now means processing huge amounts of network information. Therefore, more powerful, dynamic network management and analytics tools are required, which can be run, for example, at the cell tower, not just at the core, and provide a dynamic view of how customers' services actually perform.

The next generation of 5G standards should focus on methods of overcoming the Zettabyte issues. You may see 'ZettaMen', armed with a pioneering spirit, who will build DIY broadband connections to remote co-operative communities and find novel ways of re-engineering fibre conduits. More likely, though, 5G will bring enhanced methods of utilising the spectrum, methods of increasing density with smaller cells, more powerful handover and collaborative outdoor WiFi, compression-as-you-need-it that responds to congestion situations, and general performance improvements. No one knows how fast and how far this data monster will travel in future, so all we can do is keep just a few steps ahead.

Rebecca Copeland, Editor, Connect-World [email protected]

Editor-in-Chief: Fredric J. Morris [email protected] Publisher: David Nunes [email protected] Editorial Department: [email protected] Connect-World is published under licence by INFOCOMMS MEDIA LTD email: [email protected] URL: www.connect-world.com

Production Department: [email protected] Sales Department: [email protected] Administration Department: [email protected]

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means - electronic, mechanical, photocopying, recording or otherwise - without prior permission from the publishers. The content of this publication is based on the best knowledge and information available at the time of publication. No responsibility for any injury, death, loss, damage or delay, however caused, resulting from the use of the material can be accepted by the publishers or others associated with its preparation. The publishers neither accept responsibility for, nor necessarily agree with, the views expressed by contributors.

ISSN 1468-0203


Taking content further. Making great content is tough. Distributing it shouldn't be.

Size, Spectrum, Speed and No Latency

Tomorrow’s telecom: Preparing for a 5G World by Lauri Oksanen, VP, Research and Technology, Nokia Solutions and Networks

The new era of bigger and faster data drives not only 4G technology, whose rollout is now in progress, but also 5G, which is yet to be specified. It is clear that three areas in particular need to evolve: cell size, spectrum utilization and performance. Reduced cell size must overcome interference and installation costs. More spectrum will be licensed in the millimeter-band ranges, but the higher ranges of 70-85 GHz will need costly changes to equipment. Performance must also be improved significantly. Higher density of cells will require automated management, and new precision applications, such as autonomous vehicles, will require ever-decreasing latency. In fact, SDN and virtualization are already paving the way for 5G.

Lauri Oksanen is the Vice President of Research and Technology at Nokia Solutions and Networks. With over 25 years of extensive experience in the telecommunications industry, he is responsible for the development of new technologies at NSN. Lauri started his telecoms career at Nokia Cables in 1988, where he developed the quality system for the unit and managed research and development in the area of fiber optics. In 1995, Lauri moved to Nokia Networks, where he first worked in GSM and OFDM research, and in the company's 3G WCDMA program with responsibility for the development of planning, optimization and performance of the system, including the creation of the world's first 3G Radio Planning Tool. Later, Lauri led Network Systems Research covering radio access, core and services. Since 2006 he has headed technology in Nokia Networks, also covering software and hardware research. His group developed Nokia Networks' LTE radio and architecture proposal. This included the power-efficient uplink concept and flat architecture, which formed the core of the final standard. Lauri has a Diploma (MSc) in Engineering and a Licentiate of Technology, both from the Helsinki University of Technology.

Introduction

Looking into the future of technology is something like looking at a new baby and trying to predict what her life will be like. There are few certainties, but some patterns are likely to emerge over the coming years. While it's impossible to prepare for everything that will take place, making plans will make it easier to adapt to what will come.

One of the most dynamic areas of technical development over the past two decades is that of wireless mobility. We have seen mobile networks grow from the most basic voice functionality to full-scale providers of multimedia and cutting-edge entertainment. The first generation of large-scale mobile technology was referred to as 2G. This was the initial radio access technology (RAT) which made it possible for the masses to enjoy the benefits of cell phones and global roaming.


Over time a new RAT was developed, with 3G first providing functionality beyond voice communication. But 3G was a compromise that combined different technical solutions, with IT and telecom solutions competing and creating challenges for the network provider. As it became clear that the ideal use case for the immediate future was Internet access, a clear vision of what 4G needed to be emerged. As a result, the creation of large-scale 4G networks has been quick and massively successful, with many carriers providing excellent network performance and high speeds for most subscribers. This is particularly true in North America, where the major operators, including AT&T, Bell Canada, Sprint, TELUS, T-Mobile, U.S. Cellular and Verizon, have 4G/LTE network deployments.

As user needs continue to evolve, mobile operators are beginning to look forward to the next generation of network technology. However, there are more questions than

answers as to what 5G will become, and what applications will need to be supported. Yet, while the specific use case remains uncertain, there are three key pillars that will characterize successful 5G telecommunications, which network developers will have to address to make 5G a reality: cell size, spectrum and performance.

The reduction in cell size

One of the most certain trends to continue in the coming years is that of increasing traffic. But how much will it truly increase? Experts are uncertain, but some estimates place this growth on the order of 1,000 times during this decade, at least in terms of traffic volume. The other side of the coin is data speed, which will require high network performance. While it's unlikely that the average network will need to support this much traffic on a daily basis, it also seems certain that some will.


As a result, telecom providers should prepare for this kind of growth in order to be certain they are fielding sufficiently robust networks. Key to this growth in traffic support will be an increase in cell densification. While today's most traffic-intense networks require base stations or towers every few hundred meters, 5G will require ultra-dense deployments. The limitations of 4G are already becoming apparent in the most crowded 4G environments, such as a crowded stadium or railway station. It is also becoming increasingly clear that 5G will require cells that can operate every few tens of meters in order to maximize network performance. This approach is already seen in certain Wi-Fi applications, such as homes with more than one router. There are problems, however, with interference from other devices such as neighbors' routers. The 5G environment will have to successfully overcome the challenges inherent in the use of such small cell deployments. This includes coordination between heterogeneous connecting technologies and connections between local nodes and access points higher in the network hierarchy.

The use of more spectrum

Another integral part of realizing 5G will be the use of larger parts of the communications spectrum. Without this increase, communities that have no wired broadband will be unable to take advantage of new services. As the amount of traffic everywhere increases, current systems operating on centimeter-band frequencies will need to be used more efficiently. This is already happening in today's 4G environment with carrier aggregation. Today's networks operate primarily in the range of 700 MHz to 3.5 GHz and require good area spectral efficiency. The difference in a 5G network, however, will be the licensing and use of much wider bands of millimeter-band frequencies, which will change system requirements. In the 70-85 GHz range, there is potentially a lot of new spectrum available, but these areas will require significant adaptation to current equipment for feasibility of use. Today's radios are not generally sensitive enough to make effective use of these higher frequencies. This means that the hardware will need to continue evolving to support 5G systems and frequency bands. In addition,

these frequencies are also not well suited to providing wide area coverage, and may be used for dense local cells instead. Telecom providers will be working in the coming years to address these challenges in anticipation of increasing traffic.
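As a rough illustration of why the 70-85 GHz range favours dense local cells, the standard free-space path loss formula can be compared across bands. This is only a sketch: the 200 m distance and the 3.5 GHz reference carrier are illustrative assumptions, and real links are also shaped by antenna gains, blockage and rain fade rather than free-space loss alone.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f / c)."""
    c = 3.0e8  # speed of light in m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Compare a mid-band 4G carrier with a 70-85 GHz candidate band at 200 m.
for f_ghz in (3.5, 73.0):
    print(f"{f_ghz:5.1f} GHz at 200 m: {fspl_db(200, f_ghz * 1e9):5.1f} dB")

# The gap of roughly 20*log10(73/3.5), about 26 dB, is one reason millimeter-wave
# spectrum is expected to serve small, dense cells rather than wide-area coverage.
```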

The need for improved network performance

The third significant area that must be addressed in a successful 5G telecom network is that of performance. Not only will the amount of traffic be increasing, but so will the demand for speed and flexibility in the face of continued application development. We are already seeing this in today's 4G deployments, with users now streaming high-definition videos and games to tablets and smartphones. This trend is likely to continue throughout the coming decade. While telecom providers often advertise their maximum network speeds, the reality is that the most significant need in terms of bitrates is an improvement in the minimum average data rate that is provided. While users in areas with densely spaced access points will in the future be able to see speeds in the order of several hundreds of megabits per second, those in other areas that are further from the base station or tower will not reach such high speeds. All users are still competing for bandwidth against other users in the same radio cell. Providers must support a consistently higher base rate for all users that is sufficient to support their growing needs.

Automation

Another consequence of smaller cells will be a vast increase in the number of access points that must be maintained cost-effectively. Even in current multi-system 2G+3G+4G networks, this is becoming difficult to do optimally by manual means, and it will be much more challenging in 5G. In order to prevent skyrocketing operational costs, network providers will have to implement effective automation systems to maintain an optimal network environment for the user. Another challenge in installing numerous new access points is to connect them to the core network. Many access points will be connecting wirelessly to core networks simply because a dedicated fiber network will be unfeasible in their cases.

Latency

More and more devices are connecting to the Internet, from home thermostats to Internet-enabled refrigerators. Many of these devices require only a small amount of bandwidth, sending and receiving information only occasionally. While their individual requirements are low, there will be many more such devices connecting to the networks.

Tomorrow's high-performance networks will demand greatly reduced latency for certain applications to create a satisfactory experience. The first goal will be to reduce latency to a point below the threshold of human tactile perception, i.e. roughly ten milliseconds. This is a good start, but there will be the need for even lower latency for certain highly sensitive control applications. If, for example, autonomous vehicle systems begin to see widespread deployment, this will require extremely low latency as vehicles detect each other's position at high speeds, and send and receive information about their location and road conditions.

Latency reduction can only go so far, which is where other technology, such as the distributed processing cloud, comes in. When network delays are squeezed down to the millisecond level, the speed of light itself becomes the limiting factor. Telecom providers may find it necessary to distribute data centers to maximize proximity to large groups of users, rather than just maintaining a smaller number of large centers.
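To make the speed-of-light point concrete, a back-of-the-envelope sketch of the distance a signal can cover within a given one-way delay budget is shown below. The fibre refractive index of roughly 1.47 is an assumed typical value, and processing, queuing and radio-access delays are ignored, so real budgets are tighter still.

```python
# Rough propagation-delay budget: how far away can a data center be for a
# given one-way network delay? Assumes light in fibre travels at about c/1.47;
# processing, queuing and radio delays are ignored.
C_KM_PER_MS = 299_792.458 / 1000          # ~300 km of vacuum per millisecond
FIBRE_KM_PER_MS = C_KM_PER_MS / 1.47      # ~204 km of fibre per millisecond

for budget_ms in (10, 1):
    print(f"{budget_ms:>2} ms budget -> at most ~{budget_ms * FIBRE_KM_PER_MS:,.0f} km of fibre")
```

At a one-millisecond budget the reachable radius shrinks to a couple of hundred kilometres of fibre, which is why distributing data centers closer to users follows directly from the physics.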

Conclusion

We are already beginning to see certain networking technologies that are paving the way for 5G communications. Software-Defined Networking (SDN), for example, is enabling the development of new networks that are simpler to control. This will greatly enhance the flexibility of tomorrow's networks and improve providers' ability to respond to changing user demands. While there is no single answer as yet to what 5G will be, it will most likely make use of both current and emerging radio access technologies. Blending the use of different spectrum bands; dealing with small, ultra-dense cells; and ensuring consistent, high-performance networks will ensure that mobile carriers in the next few years will be able to keep up with the ever-growing need for mobility. Carriers should begin preparing now to make certain that the transition to 5G after 2020 will involve as few growing pains as possible.


Size, Spectrum, Speed and No Latency

Balancing security and speed in LTE: Can vendors keep pace? by Vikash Varma, President & CEO, Stoke, Inc.

The rapid growth in data continues with the implementation of LTE - not just more speed, but also more dependable, high-quality services, and in particular Voice over LTE (VoLTE). This calls for balancing the requirement for security with the imperative of low latency, which is far more stringent for packet-based voice than for data. Security was largely ignored in early-bird US LTE implementations, but is now regarded as a differentiator, especially as cybercrime is rising fast. With managed quality and low latency, VoLTE has a chance of strengthening the hand of mobile carriers against OTT VoIP over the unmanaged Internet.

Vikash Varma is President and CEO of Stoke, Inc. Vikash Varma brings more than 20 years of multi-disciplined, international experience to Stoke. Vikash has a successful track record of building businesses from the early stages to market leadership and maximizing value for investors. He was most recently the President, Worldwide Sales, Marketing and Field Operations at CloudShield Technologies. Previously he was President of Worldwide Sales and Field Operations at P-Cube Inc. with overall responsibility for Sales, Channels, Partners, Professional Services, where he grew the business with top tier carriers worldwide before its acquisition in 2004 by Cisco Systems. Prior to P-Cube, Vikash was General Manager at Hewlett-Packard Co. of their Internet Usage Manager (IP Mediation) Software business. Vikash holds a BSc in Engineering from the Birla Institute, one of India’s premier academic establishments.

Mobile data has reached the crossover point where it represents a bigger revenue source for operators than voice traffic, according to figures issued by AT&T, Verizon and multiple industry analysts. In 2013 data traffic revenues grew by a healthy 15% year on year. At the same time, operators have been rolling out LTE networks at a brisk pace - 180 LTE networks were deployed by the end of last year to meet demand - without giving up too much of that revenue in infrastructure costs. After lagging behind the rest of the world in 3G, the U.S. has stormed ahead to lead the LTE market. Though most carriers around the world have committed to LTE and it is heavily deployed in some countries, in most areas, for example in some European countries where spectrum regulations have delayed LTE rollouts, it is just getting off the ground. Despite the early deployment, the

U.S. is behind the market when it comes to LTE security. Security matters a whole lot more in LTE because these networks are all-IP and therefore as vulnerable to security breaches as any unprotected device in the network. Backhaul traffic in 3G networks was encrypted, but in LTE networks, backhaul traffic from the radio network to the operator core is not, and is subject to all the same massive vulnerabilities that have dogged our network communications for decades. U.S. mobile operators have been slow to admit to these flaws, arguing that their ‘closed’ networks prevented any possibility of hacking or other threats. That has changed over the last 18 months, largely because of public awareness. We can probably thank the NSA (National Security Agency) and Edward

Snowden for demonstrating the disadvantages of an unprotected communications network to the entire world. In part, this is driven by factors including a more intense cybercrime focus on mobile networks (the fact that Telecoms Tech hosted an 'M2M & Internet of Things Hackathon' in London last November is just one indicator) and a rising incidence of breaches such as eavesdropping, man-in-the-middle attacks, denial of service, and packet insertion. Perhaps the chief influence, though, is operators' growing recognition that if their LTE networks aren't secured, they will be at a competitive disadvantage. This is borne out by an increasing body of formal and informal user data.


In a recent U.S. online survey, more than 20% of respondents said that their major LTE security concern was criminals stealing sensitive information. Chief anxiety for 17.5% was the possibility of attackers triggering network outages and degradations, and for 16.5% it was the fear of insecure mobile operating systems and apps. Another 14.6% worried about human error - customers unknowingly downloading malware that might affect the larger network.

The profit motive

LTE security is now at the heart of discussions - especially when it comes to customer retention. Most of the U.S. mobile carriers have adopted aggressive customer acquisition strategies, with pricing and with network availability, as in the case of Verizon, or with better LTE speeds, as in the case of their close competitor AT&T. However, every operator looking for long-term success, according to Heavy Reading's Patrick Donegan, must play the security game, and U.S. operators can no longer hang back. In a 2013 report, Donegan asserts that a financial analysis of LTE operators four years from now is likely to show a close relationship between support for end-to-end network security and superior financial performance. The argument that IPSec is costly has been knocked down as well: results of recent European deployments have proven conclusively that the introduction of security does not result in higher operating expenses.

Reading the signals: Security and performance

As LTE networks mature, user numbers grow, data traffic increases, and operators begin to introduce new latency-intolerant services, can security keep pace? Latency - even infinitesimal traffic processing delays - must be kept well within the 20-30 millisecond targets if operators are to deliver on LTE's core value proposition: faster, better, and far more traffic. Traffic patterns in LTE are vastly different from 3G networks. Consumers use far more data, and expect higher and higher levels of performance from their mobile devices. LTE now accounts for about 50% of all wireless connections in the U.S. According to Ericsson's June 2013 report, video traffic on mobile networks grew by 60% year on year. Operators must secure traffic while handling the more complex and unpredictable LTE traffic flows and levels of network-wide signaling traffic.


The US (Verizon) and Canada (Rogers) have already seen embarrassingly high-profile outages due to signaling. What causes signaling surges? There's an army of potential culprits, including network software updates; malware introduced by applications; heavier use of real-time applications such as video and audio streaming, gaming and advertisements; and Voice over LTE (VoLTE). Because smartphones and devices can host an increasingly higher number of applications, background signaling activity per device rises significantly because of the frequent update requests from applications - especially those for 'chatty' social apps. So it's not surprising that signaling capacity and mechanisms to prevent signaling overloads from causing outages are receiving greater attention from operators when provisioning core infrastructure elements. The role of LTE security has evolved - it must also protect against sudden and unexpected surges in signaling and user data traffic. The increase in the volume of signaling traffic makes it harder for operators to identify threats and effectively control them in real time.

Voice over LTE: Balancing security and performance

Infonetics Research logged 12 commercial VoLTE networks with 8 million subscribers by year-end 2013. The ramp in 2014 will be faster, principally because of economics: VoLTE allows operators to re-farm spectrum away from 2G/3G to LTE, which will significantly lower voice infrastructure costs. Research firm iGR expects the number of VoLTE subscribers to grow significantly - at a CAGR of 187% between 2012 and 2017. But subscribers need to see a value in VoLTE. There's a general expectation that calls will have superior audio quality over 3G or other calls, and this quality of voice will be the key differentiator as operators roll out VoLTE services in competition with over-the-top VoIP service providers. However, subscribers also expect their VoLTE calls to be secured. It's not yet clear how Sprint, T-Mobile, Verizon and AT&T Mobility or other U.S. carriers will market VoLTE this year. The likelihood is that, rather than going straight for groundbreaking new services like real-time language translation or video voicemails, carriers will use VoLTE to reach parity with current communication innovations like FaceTime, Google Hangouts, WhatsApp and other over-the-top (OTT) messaging services. To make this move, operators will need to balance the requirement for security with the imperative of low latency. The U.S. Department of Defense won't allow a carrier to operate a network without encryption.

The latency factor

VoLTE commands a significant shift in how mobile operators think about success in mobile broadband networks. So far, operators have defined quality primarily in terms of speed and throughput, or bandwidth. With VoLTE, the focus on speed has been reversed completely. Network performance will be measured by how well operators can deliver on consumers' expectations for improved voice quality. That means no jitter, no latency and no dropped calls. Latency is defined as the time it takes for a source to send a packet of data to a receiver, and is typically measured in milliseconds. The lower the latency (the fewer the milliseconds), the better the network performance and the more 'real time' the voice will sound to the subscriber. The acceptable level of latency for LTE traffic delivery is pegged at about 20 to 30 milliseconds - a delay that is virtually unnoticeable in many applications, but beyond which a voice call is seriously degraded. This means that the introduction of security to LTE must be achieved in such a way that it doesn't contribute to latency. This is not the case with all security solutions, and definitely not the case once the smaller packets that are typical of voice traffic begin to dominate the pattern of network traffic.
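The small-packet point can be made concrete with a back-of-the-envelope calculation of encryption overhead. The byte counts below are illustrative assumptions (an AMR 12.2 kbps voice frame, IPv4, and ESP tunnel mode with AES-CBC and HMAC-SHA1-96), not measurements from any particular operator's network.

```python
# Why per-packet security overhead bites harder on small VoLTE packets than
# on large data packets. All byte counts are typical, illustrative values.
RTP_UDP_IP = 12 + 8 + 20                        # headers already on the voice packet
ESP_TUNNEL_OVERHEAD = 20 + 8 + 16 + 2 + 12      # outer IP + ESP header + IV + trailer + ICV

def overhead_pct(payload_bytes: int) -> float:
    plain = payload_bytes + RTP_UDP_IP
    return 100.0 * ESP_TUNNEL_OVERHEAD / plain   # block-cipher padding ignored

print(f"~33-byte voice frame:  {overhead_pct(33):4.0f}% extra bytes per packet")
print(f"1,400-byte data chunk: {overhead_pct(1400):4.0f}% extra bytes per packet")
```

Roughly 80% extra bytes per voice packet, against about 4% for a bulk data packet, is why a security gateway that merely keeps up with data throughput can still struggle once VoLTE traffic dominates.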

With several of the major U.S. mobile operators poised to make VoLTE widely available by the end of 2014, and new VoLTE phones about to hit the market in the first quarter of this year, there is pressure on the traditional vendor ecosystem to deliver technology that will allow operators to retain and build their subscriber base. As a result, we can expect to see new players coming to the table with innovative solutions, and operators opening their doors to newer players who can help them get the balancing act right: secure, high-performing LTE voice services that will allow the US carrier market to take on their OTT competition and win.

Relentless Growth: Video, VoLTE and IoT

Enabling the smarthome: Networks in the Zettabyte Era by Todd Antes, VP, Product Management Qualcomm Atheros, Inc.

Contributing to the Zettabyte era will be billions of communicating Internet-of-Things devices for home appliances, home security, home health and more. The smarthome can change our lives. However, to enable efficient interaction between devices, they all must have a common platform and a common communication language. The breaking of the walled garden and the spread of web technologies have accelerated the creation of innovative mobile applications. Similarly, compatibility of devices with an open smarthome platform and a standard language (AllJoyn™) will allow for the development of even more convenient and collaborative services, and allow a community of developers to flourish.

Todd D. Antes is Vice President of Product Management, Wired/Wireless Infrastructure Networking Business Unit in Qualcomm Atheros. Todd Antes joined Qualcomm in 2011, following the acquisition of Atheros. He has extensive experience and has held several senior management roles in the wireless semiconductor and cellular industries. Prior to joining Qualcomm, Mr Antes held various senior marketing and product management roles at Philips Semiconductor, including Director of Marketing for the company’s Wireless LAN, Bluetooth, ultra-wideband, and cellular chipset businesses. He was also a member of the founding executive team of AirPrime, a 3G/ cellular data products company acquired by Sierra Wireless in 2003. Mr Antes holds Bachelor of Science in electrical engineering and Master of Business Administration degrees from Santa Clara University.

Many people in the developed world already pay for some combination of the so-called 'triple play' of services - video, voice and data - to their home. Broadband service providers have invested billions of dollars into networking infrastructure to deliver these services. The same service providers are now investing more money into delivering even more value-add applications and services to consumers as part of an emerging smarthome environment. In addition, new over-the-top service providers can leverage the same broadband IP pipe to deliver applications like home security, energy management, health care, etc. What would make a consumer willing to pay more? For some, it would be feeling more secure. For others, it would be saving time and hassle by enabling them to be virtually in two places at one time, or it would be the satisfaction of saving money or reducing energy to protect the environment. Imagine being able to set up your home to turn on your lights when you're away.


The same system automatically sends you a text message every day when your son and daughter come in the door from school. It also records a video of the person who broke into your car in the driveway, which you can give to the police. While you're on vacation, it lets you know that a pipe has burst so you can have it repaired before you return, to minimize the damage. Of course, much of this is possible today - using a number of different devices. In the U.S. alone, there were roughly seven connected devices per household in 2012. That number is expected to increase to approximately 22 per household by 2020 [1]. Everything around us is becoming intelligent and increasingly connected, forever changing the way we interact with the world. The opportunity before us is to deliver connected devices that can collaborate to solve real-life problems, and robust networks to support them - all while hiding the complexity of the underlying technology.

The future is multi-Zettabyte networks

A recently published study by a major network equipment vendor claims that worldwide cloud traffic - 1.2 Zettabytes in 2012 - will grow to 5.3 Zettabytes by 2017. Global data centre traffic during the same period will reach the astounding total of 7.7 Zettabytes by 2017. One Zettabyte = 1000^7 bytes = 1,000 exabytes = 1 billion terabytes. That is huge. It is the equivalent of 100 trillion hours of music (for context, there are roughly 114,077,116 years in just one trillion hours). A single Zettabyte is enormous, but broadband service providers are already working to build, connect, and manage multi-Zettabyte networks. It's critical that they build these networks to support the new applications and services that they are delivering as part of an emerging smarthome environment. Service providers are preparing their networks for the Zettabyte era.
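As a quick sanity check on the unit arithmetic quoted above, the decimal prefixes and the trillion-hours aside can be verified in a few lines; the 365.25-day average year is the only assumption.

```python
# Decimal (SI) unit check for the figures quoted above.
ZETTABYTE = 1000 ** 7        # 1e21 bytes
EXABYTE = 1000 ** 6
TERABYTE = 1000 ** 4

print(ZETTABYTE // EXABYTE)      # 1000 exabytes per zettabyte
print(ZETTABYTE // TERABYTE)     # 1,000,000,000 terabytes per zettabyte

HOURS_PER_YEAR = 24 * 365.25     # assumed average year length
print(1e12 / HOURS_PER_YEAR)     # ~114,077,116 years in one trillion hours
```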

[1] Machina Research on behalf of GSMA, Apr.

Home is where you are

As new applications and services come online, there are three main areas on which we need to collectively focus attention to enable and optimize the smarthome in the Zettabyte era.

First, we need to redefine the boundaries of 'home'. Home is no longer a building at an address. Instead, it's where the user is. It is always at their side. Even today, a person spends their day in the connected world, either at home, at work or on-the-go. They are connected via a platform that spans three networks: public, private and proximal.

Second, we need to seamlessly and securely connect new applications and services. We need to create a horizontal platform that not only coordinates activities of devices and people, but also acts as a conduit to new resources and services that enhance user experiences or improve operation of the connected home.

Third, we need to build a highly collaborative ecosystem to develop those applications and services, APIs, and infrastructure needed to turn homes into smarthomes. Only then can we unlock the potential of the smarthome.

Breaking down walls, building a community

What stands between us and this opportunity is the fragmentation of the ecosystem. We need to break down the walls and develop a burgeoning ecosystem of developers who work from a common framework to deliver applications for the smarthome. The connected home is a point of convergence for multiple verticals, each having their own interests and legacy technologies. How can we get the garage door opener, doorbell, home audio, HVAC, mobile phone and other systems on the same network? Wi-Fi®, Bluetooth® or Powerline can be the physical link, but those systems are speaking different languages. We need a common language like the AllJoyn™ software framework hosted by the AllSeen Alliance, which allows a variety of devices to discover, connect and communicate directly with other products enabled by AllJoyn. AllJoyn and its service frameworks enable interoperability across operating systems, platforms and products - from any brand. The AllSeen Alliance, a cross-industry consortium, was formed in December 2013 by a group of companies - Haier, LG Electronics, Panasonic, Qualcomm, Sharp, Silicon Image, TP-LINK and more - that is dedicated to driving widespread adoption of products, systems and services to accelerate the Internet of Everything. Through this alliance, these organizations will work to enable interoperability across multiple devices, systems and services and support the broadest cross-industry effort to accelerate the Internet of Everything.

The connected home is bigger than the sum of its parts. If we collaborate intelligently, allowing each of us to focus on what we do best, everyone will benefit.

Connecting the smarthome

While many homes already have broadband access, service providers are continuing to look for new ways to take better advantage of public, private and proximal clouds to deliver more personal and intuitive experiences to their users.

The public cloud offers vast resources that the connected home can tap into. It delivers resources for communication, and enables value-added services such as monitored security, cloud storage, entertainment, energy management and many more that transform the home experience. The issues then become: connecting private and public clouds; making sure data is shared, but shared securely; and exposing the capabilities of both clouds to applications. For that, we need a smart gateway. A smart gateway is an always-on device that sits in the home and bridges private and public clouds by leveraging the consumer broadband Internet connection. The smart gateway is a connectivity engine that helps manage bandwidth to optimize the user experience. From the gateway, you can see and touch potentially all of the devices or things in your home.

A new class of home networking platform

The mobile phone world already dealt with fragmentation. Walled gardens - with proprietary technologies, proprietary APIs, incompatible architectures - are gone. Now smartphones offer a platform for innovation. The smarthome is next. When applying the same performance, power, networking, and mobile technology leadership that are used to enable the smartphone as a platform, we are enabling the smarthome as a platform. We are building solutions that enable horizontal connected home platforms across networks and across devices. One of the first challenges is to transform home networking equipment into something that provides more than a shared broadband connection - into a platform to accommodate the influx of connected devices and services. This can be achieved by creating a new class of Internet processors that provide the control

point from which the Internet (IP-based) applications are served up from the cloud. In doing so, these Internet processors prepare the home network for the next generation of Internet content, applications, and services. A horizontal platform like this is critical for traditional and new Zettabyte-era service providers, who are already looking at how they can increase average revenue per user (ARPU), reduce subscriber churn, increase loyalty, and enhance their brand by adding more customer value. Service providers have been working hard to evaluate which products and services will open subscribers' wallets. Does it make good business sense for the service provider to add things like home security, energy management systems, healthcare and fitness to the "triple play" they already offer? Can a home appliance manufacturer offer convenience and support services via the home network in order to differentiate their products?

Finally, as more services are delivered through the gateway and distributed over more powerful Wi-Fi technologies like 802.11ac, there is another factor that is becoming important to consumers and service providers - power usage. While the per-household cost of a gateway is not exorbitant, many are looking to reduce energy costs and environmental impact. Service providers are also keen to minimize the power consumption of their gateways, in response to many regional and international regulations calling for lower energy usage.

Smarthomes and multi-Zettabyte networks are close

The reality is that the industry is already building point technologies and products that allow companies to build, control, fill and pay for multi-Zettabyte networks. It's up to us to collectively build an ecosystem to ensure everything will seamlessly and securely work together. The opportunity is tremendous, as these new networks will open still more opportunity. They will enable a new line of in-home applications and services that enhance our daily lives. With these networks, service providers will be able to find new ways to reach in and expand their footprint in users' homes and lives. Networks for the Zettabyte era will be here before you know it.

CARTES ASIA 2014 EXHIBITION & CONFERENCE
SECURE SOLUTIONS FOR PAYMENT, IDENTIFICATION AND MOBILITY
19-20 MARCH 2014, Hong Kong Convention and Exhibition Centre, HONG KONG
48 Speakers | 100+ Exhibitors | 58 Countries | 2,800 Visitors
NEW Venue, NEW Focused themes: ADVANCED PAYMENT & MOBILE MONEY - CARD MANUFACTURING & PERSONALIZATION - SECURE IDENTITY
Register on www.cartes-asia.com

Relentless Growth: Video, VoLTE and IoT

Mobile video in the Zettabyte Era by Chris Goswami, VP of Marketing, Openwave Mobility

Video, at long last, is now in demand. Video is also the top culprit in bringing on the Zettabyte era. Everyone is a movie director, and everyone is reporting on live events. To prepare for the video era, carriers must balance the cost of optimizing the network, the bandwidth savings achieved and Quality of Experience. Most importantly, there must be changes in attitudes, especially in net-neutrality regulations and in appropriate monetization. Users are willing to pay for video, but would prefer lower quality with faster download and no stutter over HD quality with slow download and halting viewing, so managing quality of experience is more complex than just 'big pipe'.

Chris Goswami is VP of Marketing at Openwave Mobility. Chris has worked for 25 years in the telecoms industry. Within Openwave Mobility, Chris heads up Product Marketing, providing tactical support and business cases for areas of strategic interest. In his previous role in Openwave's Global Solutions Team, he was responsible for coordinating solutions for complex customer problems in the mobile-data space. Formerly Chris was with Magic4 - providers of Messaging clients to numerous blue-chip handset manufacturers - and played a key role in coordinating Magic4's new Messaging technologies. Prior to that, he led the mobile-data testing business at NCC Group, and was a member of key management staff leading to the company's successful sale to Barclays Private Equity. In the early eighties, Chris was involved in some of the earliest ever research into CDMA systems. He is an experienced speaker, having regularly spoken at and chaired many events on such topics as Emerging Economies, Mobile Search, UGC, Mobile Advertising, Messaging, User Experience, and IMS. Chris holds a First Class Degree, MSc, and PhD in Electronic Engineering and Telecommunications.

When it came to mobile video and 3G, it was always a clear case of "video killed the radio star", with the surge in demand for video delivered to smart devices completely overtaking the ability of networks and operators to manage. It is expected that by 2017 worldwide cloud traffic will grow to 5.3 Zettabytes (more zeroes than you can fit on a line of text). According to the Cisco VNI, two thirds of this will be mobile video. Video already accounts for more data by volume than all other types of traffic added together, and at a CAGR of 75% it is growing faster than any other type of traffic.

In this article we take a look at what is driving this unprecedented growth from the perspective of people as well as technology, in developing as well as

developed markets, and lastly we draw out some of the lessons the industry needs to take from this story so far.

People are changing faster than technology

Mobile video is quickly moving from niche interest to the most dominant form of communication, and it's not hard to see the shift in people's behaviour. The industry failure of ten years ago that we used to call 'video calling' is now all the rage - but it takes the form of FaceTime or Skype, driven in part by front-facing cameras on our handsets. However, it is not just video calling pushing operators' networks to the limit - video has become an integral part of everyday life. Clips and snippets are being uploaded and downloaded at sporting events, concerts

and holiday destinations as a new 'wish you were here' video trend becomes increasingly easy on every platform. Social networking is also propelling mobile video usage - with 'must see' viral clips from YouTube and Vine plastered across your Facebook and Twitter feeds. These trends are causing even macro-level changes in the business of film and journalism. At a recent conference on Video Optimization in Berlin, John Nolan from North-One TV, lamenting the TV industry's inability to monetise mobile video, said: "In the good old days, to make a movie you had to be rich, clever, and connected… Today, everyone is a producer and everyone is a distributor!"


The trend for citizen journalism has seen a huge rise in 'everyday' people taking to the streets and reporting on events first-hand and as they happen. Bigger screens, front-facing cameras, HD technology, plus the fact that people no longer need a PhD in gadgets to work the camera on a mobile - all of these are factors causing a seismic shift in the way people are interacting and collaborating through video.

Technology is changing too

4G/LTE is called 'LTE' for a reason - it is a long-term evolution and it is taking time for rollout and adoption. Strategy Analytics stated recently that there are now 320 million 4G/LTE connections. This may sound like a large figure, but when you consider that there are around 4.7 billion unique connected users in the world, it's actually a relatively small number. If you spin this statistic around, over 90% of people in the world are not using 4G/LTE. As well as coverage, there remain serious issues of interoperability between handsets, which will persist for some time. That said, there are significant developments in LTE that do help advance mobile video delivery and consumption. First there are the obvious ones: lower latency, higher throughput and the spectral flexibility to aggregate carriers together. There is also the prospect of guaranteed Quality of Service (QoS) provided by LTE's management of bandwidth, although that remains to be proven. Best of all, there is the new upcoming technology of LTE Broadcast. For mass consumption, this provides a point-to-multipoint broadcast of video content in a limited space, for example a sporting event or shopping mall, or during a limited period of time, for example a breaking news item. LTE Broadcast provides a colossal step-up in efficiency since it delivers rich video content to hundreds of users while using the bandwidth of only a single user. LTE Broadcast could certainly lead to new business models in the way mobile video is priced and distributed, and act as an enabler of location-based video ads. So, LTE is certainly a helpful stop along the way, even if it is not the final destination.

Developing markets are not excluded

The Zettabyte video phenomenon is not limited to developed markets. We have seen in developing markets previously that areas lacking fixed-line infrastructure will often leapfrog to newer technologies. This is helping to shape a mobile-friendly and mobile-focused attitude in emerging markets. Video usage on mobile devices has

the potential to grow faster here because smartphones are often the user's only access to the internet - individuals tend to use their phones for everything, and mobile video can play a vital role in day-to-day life. Mobile video is starting to provide huge opportunities in developing markets, and to open up a whole world of entertainment and education. For example, individuals in remote locations who are suffering from medical issues can use mobile video for a 'face-to-face' consultation with a doctor or a nurse, showing the actual ailments and problems without having to make long and potentially dangerous journeys. Delivering education to remote locations is another example, e.g. in sub-Saharan Africa. Users in developing markets are typically on much lower incomes - ARPUs are often an order of magnitude lower. As a result, operators need to align their offerings to meet that lower income, meaning that margins can be incredibly low. As a result, every byte of mobile data is important and needs to be accounted for. By contrast, around major conurbations and capital cities there will often be hotspots of wealth and high-intensity usage, making these very specific 'island' areas more similar to a developed market.

3G to 4G - lessons to learn

Coming back to the more technologically advanced markets, it is worth asking the question - what have we learnt about bandwidth consumption in 3G? Firstly, optimization of mobile video does reduce the volume of data in the network, but that does not lead to automatic benefits. So, the argument that 'size matters' is proven false once again. Often, compressing data in the network can simply lead to more users, more video, more time - and inevitably more congestion. Video optimization makes complete sense whenever and wherever congestion occurs, but not otherwise. Solutions such as congestion-triggered optimization are the key to achieving optimum use of bandwidth, not just minimum use of bandwidth. Operators have to learn to perform a three-way balancing act between the cost of optimizing, the bandwidth savings achieved and Quality of Experience (QoE). Video QoE has become the number one factor in customer dissatisfaction. In commercial trials, users overwhelmingly prefer reduced-quality video that starts quickly and does not stutter over a greater-definition video that takes long to begin and falters during playback.
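A minimal sketch of what a congestion-triggered policy along these lines might look like is shown below. It is illustrative only: the load threshold, the bitrate ladder and the idea of budgeting a share of capacity per flow are assumptions made for the example, not details of any vendor's product.

```python
# Sketch of congestion-triggered video optimization: leave flows untouched on an
# uncongested cell, and only step the delivered bitrate down when load is high,
# trading picture quality for fast start-up and stall-free playback.
# Threshold and ladder values are illustrative assumptions.

BITRATE_LADDER_KBPS = [4000, 2500, 1200, 700, 400]
CONGESTION_THRESHOLD = 0.7          # fraction of cell capacity in use

def select_bitrate(cell_load: float, requested_kbps: int) -> int:
    """cell_load ranges from 0.0 (idle) to 1.0 (saturated)."""
    if cell_load < CONGESTION_THRESHOLD:
        return requested_kbps                        # no congestion: do not optimize
    budget = max(requested_kbps * (1.0 - cell_load), BITRATE_LADDER_KBPS[-1])
    for rung in BITRATE_LADDER_KBPS:                 # highest rung that fits the budget
        if rung <= budget:
            return min(rung, requested_kbps)
    return BITRATE_LADDER_KBPS[-1]

print(select_bitrate(0.5, 4000))    # 4000: uncongested, flow left untouched
print(select_bitrate(0.9, 4000))    # 400: heavily congested cell, lowest rung
```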

This is a far more complex formula than the 'size matters' idea of one to two years ago.

Secondly, net neutrality in mobile is going to run into trouble. A few countries, such as Chile and the Netherlands, have adopted stringent net neutrality regulation for mobile, and there are some discussions about bringing in some measure of Europe-wide regulation. Rigid net neutrality can only hold back deployment and adoption of mobile services. It is one thing on fixed broadband to say that Internet Explorer should not be the only browser, and that traffic in big pipes should all be treated the same, but mobile networks are different. Mobile networks are inherently non-neutral. How can we say mobile networks are impartial when on the one side mobile operators bear the burden of paying - while on the other side OTT players take the revenue? This inherent bias that has been built into mobile from day one cannot be defused by adding a veneer of 'net neutrality'. Video services bring a sharpened focus to this debate, since they are the dominant form of traffic and potentially the most lucrative.

Lastly, to state the obvious, monetization of video really matters. The low price per GB of video is causing margin erosion worldwide and contributing to declining revenues for European operators year-on-year. Informa said recently: "Video streaming will account for a third of mobile data traffic on handsets in 2016; but money paid by mobile users for streaming will only amount to 0.6% of mobile data revenue". This game has to change, and it is the right time to change. Today, with the right technology in place, operators have the ability to treat every video flow individually, to decide in real-time how to charge for it, and how to achieve the best QoE. Some mobile operators - most notably US regional carrier C Spire - have started to experiment with 'video-as-a-service'. This operator offers packages such as two-hour and five-hour video streaming plans, as well as day-passes, as add-ons to existing, low-cost plans. This separating out of video as a premium offering is being met with a popular response. Unsurprisingly, it seems that people will pay for what they want - as long as they understand what they are getting.

So changes in culture, yes; changes in technology, definitely; changes in revenue - well, that's just beginning!


Relentless Growth: Video, VoLTE and IoT

The Cloud crushes the barriers to true OTT TV Everywhere by Louis C. Schwartz, CEO, UUX

TV Everywhere is experiencing huge growth, which poses three main challenges: broadband access, scalability and billing. Combining multiscreen OTT TV with Cloud technology has considerable advantages for both mobile carriers and content providers, leveraging mobility, cost-effective storage, local distribution and, most importantly, mobile-based billing, especially where users are wary of credit facilities. A cloud-based OTT TV service can cope with the tremendous growth in traffic, and mobile carriers can optimize the transport in places where broadband capacity is not sufficient, thus ensuring a quality user experience.

Louis C. Schwartz is the CEO of UUX. He has over 20 years of experience as an entrepreneur, operating executive and attorney in the digital media space. In 2000, Lou co-founded and served as the Chairman and CEO of Multicast Media Technologies, Inc. ("Multicast"), one of the first global online video platform companies to reshape the way organizations create, manage and leverage video to communicate with their target audiences. Lou orchestrated the sale of Multicast to Piksel, Inc. (formerly KIT digital, Inc.) in 2010 and assumed the role of CEO of the Americas and then Global General Counsel of Piksel. Lou's latest venture, UUX, is the result of the merger of two companies (Totalmovie and OTT Networks) to create one of the first white-labelled Pay TV-as-a-service offerings for mobile carriers around the world. The service is intended to provide mobile carriers with an end-to-end service platform for delivering linear and VOD content to all IP-connected devices. As CEO of UUX, Lou is driving the company's corporate strategy to increase distribution, subscriber adoption and monetization of next-generation Pay TV services over wireless networks. Lou has won several entrepreneurship awards, and is an internationally recognized expert in new media and IP video convergence.

Part of the huge increase in data held in the cloud can be attributed to the launch of cloud-powered TV services. Online and mobile video consumption are growing at an especially fast pace in Africa, APAC, LATAM and the Middle East: 2013 saw 42 million online video viewers in APAC, and OTT penetration is already estimated at over a million in a number of countries in LATAM. OTT growth is not limited to emerging markets. A report carried out by ABI Research, titled OTT & Multiscreen Services Research and published in April 2013, found OTT video revenue exceeded US$8 billion in 2012, with North America accounting for 57% of global revenue. It also predicted that revenue will reach the US$20 billion mark by 2015.
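For readers who want the growth rate implied by those end-point figures, a quick calculation is sketched below; the 2012 and 2015 values are simply the ones quoted above, so the result is only as good as those forecasts.

```python
# Implied compound annual growth rate (CAGR) from the ABI Research figures above:
# roughly US$8bn of OTT video revenue in 2012 growing to ~US$20bn in 2015.
def cagr(start_value: float, end_value: float, years: int) -> float:
    return (end_value / start_value) ** (1.0 / years) - 1.0

print(f"{cagr(8e9, 20e9, 2015 - 2012):.1%}")   # about 36% per year
```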

OTT growth of over 50% in the past few years, and it appears this momentum shows no signs of slowing down. TV Everywhere is also becoming ever more popular in developing regions, which requires operators to make radical transformations to their networks. As end users increasingly seek a single platform catering to all their video consumption needs, multiscreen service providers have to deploy a full end-to-end offering that is compatible with mobile-enabled devices. The rising demand for true TV Everywhere services in these markets is a revolution in the connected entertainment industry, which needs to offer services that solve three main challenges: broadband access, scalability and billing.

The emergence of the cloud offers the chance to address these issues and improve interactions between content providers and mobile operators, who are pivotal in the adoption of multi-screen in emerging markets. As these markets become crucial for broadcasters and OTT service providers, adapting services to end users' most deep-seated habits will be key in delivering the connected entertainment they desire. Leveraging the cloud enables operators to target these customers wherever they are with a video service optimized for mobile, removing the barriers to service adoption and integrating with telecom operators' existing billing systems. As mobile devices come to be regarded as the primary screen, stronger bonds will form

between the connected entertainment industry and the mobile carriers, enabling new business models. Cloud-based platforms offer mobile network operators an agnostic, market-ready solution which meets the challenges of regions where broadband infrastructure is not yet developed. As more and more multi-screen services become dependent on the cloud, content owners, broadcasters and service providers need to update the tools with which they access a network. This is particularly crucial for VoD, which is set to grow from US$21 billion in 2013 to over US$45 billion in 2018, a Compound Annual Growth Rate (CAGR) of 16.5%, according to research by MarketsandMarkets. Consumer demand for high-quality content across any connected device is considered the main factor leading to such tremendous growth. The study also finds that the consumer's TV experience is veering away from a linear one to a model which demands high-quality content across any connected device. OTT viewers especially are surpassing IPTV viewers in the realm of VOD, with North America expected to be the biggest market in terms of revenue contribution, while APAC and LATAM are expected to experience increased market traction during the forecast period. Moving forward, the consumer craving for a more personalised television experience will have a significant impact on the amount of data stored in the cloud. As the cloud becomes more and more prevalent, it will see OTT providers go beyond merely distributing content to offering additional services to providers in the form of marketing tools and expertise, which will result in new forms of monetization. OTT operators are placing more of their workload into the cloud as it offers advantages such as rapid access to multi-screen content and services.

Cloud and OTT convergence will see new business models emerge

Some of the more established OTT providers have failed to make a mark in developing markets. For example, in LATAM, a large-scale OTT provider ran into billing issues after launching with a direct debit payment method in Brazil and through debit card payments in Mexico. The provider ultimately failed due to offering North American payment methods rather than tailoring for the region, where consumers prefer mobile phone payment instead of direct debit or credit/debit card billing. The lesson is that OTT service providers must be

careful to adapt to local customs and habits when launching in new territories. The emergence of cloud-based OTT TV platforms has seen a surge in multiscreen across the globe. This continued convergence between the cloud and OTT will mean that business models will undoubtedly shift as service providers create models which allow them to handle content, metadata, processing and billing over the cloud. Carriers are utilizing the cloud in order to enhance multiscreen services, migrate IP operations and lower the costs of geographical expansion. The cloud also provides the opportunity to deliver a high-calibre OTT service where there is poor broadband infrastructure by leveraging the telco network, thus enabling those in developing nations to benefit from multiscreen services. The growing demand for content, 24/7 viewing hours and high-quality viewing experiences across a multitude of devices means that current technologies cannot compete, as the process becomes more expensive and complex. The cloud negates these problems by allowing broadcasters more scalability and control over their environment than hardware ever could.

OTT becomes more scalable via the cloud

The cloud is also natively scalable: storage can be expanded through virtual servers, enabling operators to store larger files, such as 4K content, in the cloud. It prompts brands to assess their platforms and their delivery models, to enable streamlined delivery. For example, in-house solutions for OTT delivery of live sports, which is becoming ever more popular, find it difficult to cope with massive increases in traffic in a cost-effective way, whereas the cloud can streamline that delivery through the virtualization of hardware. As more and more services turn to the cloud in this Zettabyte era, in which ever larger amounts of data flow freely, it is imperative that providers pick their multiscreen partners wisely, to ensure seamless content delivery. A solution which encompasses preparation, rights and security, together with the right multiscreen partner, will allow for greater agility in the cloud. The most fundamental part of ensuring a successful OTT service is to provide a high-quality user experience. This is achieved through proper workflows, transcoding,

security and so on, which can all be hosted and delivered via the cloud, ensuring a consistently good user experience.

iTaaS solutions offer a unique opportunity

Service providers have a unique opportunity to make the most of the move from physical to IP networks by developing strong OTT partnership strategies, taking advantage of emerging technologies such as Infrastructure-as-a-Service (IaaS) cloud hosting solutions. From a video perspective, a cloud-based Internet TV-as-a-Service (iTaaS) platform offers mobile network operators a fully agnostic, market-ready solution that meets the challenges of markets where broadband infrastructure is not mature. This end-to-end platform can be specifically optimized for mobile billing, thus allowing the mobile operator to utilize its existing billing system. It can even provide the operator with its own branded service which can be brought to market swiftly. The cloud is part of the seismic shift that is occurring in the Pay-TV world, where operators are adding OTT elements to their service portfolio. The iTaaS model can deliver a wide range of services, including Live, Catch-Up, Cloud DVR and VOD, consumer applications, personalization and management, to name but a few. iTaaS has the additional benefit of being able to slash CAPEX and to reach smartphones and tablets regardless of where they are in the world. The universal appeal of an iTaaS platform is perfectly suited to developing markets where it's hard for new market entrants to develop a billing relationship with customers. OTT technology is shaking up the traditional forms of communications in the telco industry. By adopting an expansive IaaS cloud hosting strategy, service providers use high-end network assets to formulate interdependent relationships with developers. This new collaboration will ultimately see telcos and OTT providers take more market share as they monetize these innovative top-level approaches. The cloud is synonymous with mobility, which places telcos in a unique position to leverage partnerships and offer agile systems. By leveraging solutions which support billing requirements, integrated platforms and content expertise, the cloud provides compelling OTT opportunities. l


SDN, NFV and Cloud - Culprits or Saviours?

Five key trends for network transformation in the Zettabyte era by Nicholas Ilyadis, VP and CTO, Infrastructure & Networking Group (ING), Broadcom Corporation

As the Zettabyte era approaches, five technology areas are developing to cope with it: SDN, NFV, Cloud, Big Data, and Green Data Centers. SDN enables network complexity to be handled by an automated, software-based system; NFV hosts network functions on virtual machines, decoupling the functions from the hardware; Cloud serves both data and applications from a distributed, networked environment to any device, anywhere; Big Data techniques make it possible to perform real-time analytics on the large volumes of accumulated data; and Green Data Centers apply methods that minimize the power consumed.

Nicholas Ilyadis is VP and CTO of the Infrastructure & Networking Group (ING) at Broadcom Corporation. In his role as VP and CTO for ING, he leads the product strategy and cross-portfolio initiatives for a broad portfolio of Ethernet chip products including network switches, high-speed controllers, PHYs, enterprise WLAN, SerDes, silicon photonics, processors and security. Prior to Broadcom, Ilyadis served as Vice President of Engineering for Enterprise Data Products at Nortel Networks and held various engineering positions at Digital Equipment Corporation and Itek Optical Systems. He holds an MSEE from the University of New Hampshire and a BTEE from the Rochester Institute of Technology. Ilyadis is a senior member of the IEEE and contributes to both the IEEE Computer and Communications Societies.

Times are changing. Computer networking is no longer the patchwork construction it once was. Today, computer networks are the foundation for all modern communication. Businesses large and small, and individuals, are now all connected in one form or another to a modern network system. As a result, these systems and the data centers used to house them are being scaled up to accommodate the load. They are getting bigger, more pervasive and, increasingly, more complex. The amount of data carried by networks has exploded, ushering in what many now call the Zettabyte era (1 ZB = 10²¹ bytes of digital information storage capacity). According to the Cisco Global Cloud Index, by the end of 2017 annual global data center IP traffic will reach 7.7 ZB. That is where the problem begins. How do companies and service providers manage

the growing complexity and ever-increasing need for bandwidth, while still enabling easy network configurability? A number of emerging trends now offer a way to transform the IT infrastructure, while also making it more manageable and configurable.

Software-Defined Networking (SDN)

Consumers want increased access to high-bandwidth, multimedia-rich applications and content, and much more of it. This presents a host of problems for today's networks and data centers - problems that are getting more complex with every passing day. SDN provides an answer to this dilemma. It manages this complexity via a network-wide software platform that enables centralized network coordination, control and programmability.

For IT professionals, turning the complex task of provisioning, optimizing and monitoring network traffic over to software presents a number of potential benefits, namely that it gives them a programmable and customizable interface for controlling and orchestrating the operation of a collection of devices at different levels of abstraction. This makes data center processes more agile and increases their performance. It also allows data centers to use their assets more effectively - all of which reduces cost and improves both efficiency and productivity. While the technology is still in its infancy, there is little denying the impact of SDN on the manual network configuration and management process. Being able to replace this effort with a software platform would certainly allow network administrators to roll out

new services and functionality faster and with fewer errors, and more easily balance network loads. Such capabilities make SDN an enabler of other emerging trends, like network virtualization and cloud computing.
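
To make that programmability concrete, here is a minimal sketch, in Python, of how an application might ask an SDN controller to install a forwarding rule through a northbound REST interface. The controller address, endpoint path and JSON fields are illustrative assumptions, not any particular controller's API.

    # Illustrative only: ask a (hypothetical) SDN controller to install a
    # forwarding rule, instead of configuring each switch by hand.
    import json
    import urllib.request

    CONTROLLER = "http://sdn-controller.example.net:8181/flows"   # assumed endpoint

    flow_rule = {
        "switch": "openflow:1",                  # device the rule targets
        "priority": 100,
        "match": {"ipv4_dst": "10.0.2.0/24"},    # traffic headed for this subnet...
        "action": {"output_port": 3},            # ...is sent out of port 3
    }

    request = urllib.request.Request(
        CONTROLLER,
        data=json.dumps(flow_rule).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:   # fails without a real controller
        print("Controller replied with status", response.status)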

Network Functions Virtualization (NFV)

Whereas SDN promises a way to manage and control increasingly complex networks, Network Functions Virtualization (NFV) grants users the flexibility to relocate network functions from dedicated appliances to general-purpose servers, making networks more scalable, agile and efficient. Rather than buying a hard asset, such as a router or box for a single purpose, service providers can now take the function associated with the box and instantiate it as a virtual machine on a server. By using standard IT virtualization technology to consolidate many network equipment types onto industry-standard, high-volume servers, switches and storage, NFV aims to transform the way network operators architect and operate networks and network services (Figure 1).

Figure 1. The vision of NFV (Courtesy ETSI NFV ISG)

The vision of NFV is to consolidate many network equipment types onto industry-standard, high-volume servers, switches and storage. While still in the early stages of development - the first draft of the NFV architecture document isn't expected until January 2015 - the technology promises a host of benefits. For example, because the network functions will be implemented in software, they can be easily moved to, or instantiated in, various locations in the network without having to install new equipment. Also, network operators and service providers won't need to deploy as many hard assets. Instead, inexpensive, high-volume server infrastructure would be deployed with virtual machines running on top. Additionally, use of virtualization would eliminate the dependency between a network function and its hardware, allowing physical hardware to be shared by multiple virtualized network functions. Some proofs of concept are currently underway for NFV; however, SDN must first be in place for the technology to really take hold. As the industry begins its march toward virtualization, there will likely be a blend of soft and hard assets within the network, with many of the hard assets eventually being replaced by virtual machines.
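
The 'function as software' idea can be sketched in a few lines of Python: a network function described as data, then placed onto whichever general-purpose server has spare capacity. The field names and the placement rule are illustrative, not the ETSI descriptor format.

    # Toy illustration of NFV: a virtual firewall described as data, then
    # 'instantiated' on the first server with room for it. Names and fields
    # are made up for illustration; this is not the ETSI descriptor schema.
    virtual_firewall = {"name": "vFirewall", "vcpus": 4, "memory_gb": 8}

    servers = [
        {"name": "server-a", "free_vcpus": 2, "free_memory_gb": 16},
        {"name": "server-b", "free_vcpus": 8, "free_memory_gb": 32},
    ]

    def place(vnf, hosts):
        """Return the first host with enough spare capacity for the function."""
        for host in hosts:
            if (host["free_vcpus"] >= vnf["vcpus"]
                    and host["free_memory_gb"] >= vnf["memory_gb"]):
                return host["name"]
        return None

    print(place(virtual_firewall, servers))   # -> server-b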

Cloud Computing

When it comes to getting power for an electric device these days, it's as simple as plugging it into a power outlet. We don't think about where the power comes from or even how it gets to us; we just know it's there in the outlet waiting for us to use it. Now, with cloud computing, this same concept is being re-envisioned for computing. Many of us have a laptop for work that's carried home each night. What if computing became much like a plug-in utility? That's the promise of cloud computing, with specific applications and data files available when and where they are needed, without having to be attached to any specific device.

Cloud computing has a number of benefits for the scientific community, business and society. It provides scientists with easy access to thousands of servers and the processing power they need to follow scientific pursuits. That same computing power can be leveraged by companies to test their designs and discoveries. Cloud computing even improves collaboration and makes education available to the masses regardless of location.

Big Data

Everything we do these days leaves a digital trace, and there's no shortage of people looking to use and analyze this information. 'Big Data' is the term many use to describe the enormous amounts of ever-expanding data - both structured and unstructured - that we generate on a daily basis. Collecting and analyzing this data has significant implications for businesses looking to monitor buying patterns, extract specific business information or make sound strategic decisions. It can also help to significantly improve our ability to understand the world around us, whether that means monitoring trends in social networks, foiling an act of terrorism or even finding a cure for cancer. Organizations have been capturing and analyzing data for some time; however, the rate at which it is generated today has grown by leaps and bounds and continues to increase. Big Data analytics now allow those organizations to analyze very large, complex forms of data by breaking the task into smaller ones that can run in parallel on tens, hundreds or even thousands of computers within the cloud. By doing so, companies are now uncovering hidden patterns, unknown correlations and other useful information.
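
A toy version of that divide-and-analyse pattern, using a handful of worker processes in place of thousands of cloud servers, looks like this (the 'log' dataset is invented for illustration):

    # Split a large dataset into chunks, analyse the chunks in parallel,
    # then combine the partial results - a miniature of the Big Data pattern.
    from multiprocessing import Pool

    def count_errors(chunk):
        """Partial analysis of one slice of the data."""
        return sum(1 for line in chunk if "ERROR" in line)

    if __name__ == "__main__":
        log = ["INFO ok", "ERROR disk", "INFO ok", "ERROR link"] * 250_000   # stand-in data
        chunk_size = len(log) // 4
        chunks = [log[i:i + chunk_size] for i in range(0, len(log), chunk_size)]

        with Pool(processes=4) as workers:            # four workers instead of a cloud
            partial_counts = workers.map(count_errors, chunks)

        print("Total errors:", sum(partial_counts))   # the combine ('reduce') step
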
Green data center

According to the New York Times, data centers can waste 90% or more of the electricity they pull off the grid. With energy costs spiraling out of control and demand for computing power continuing to increase, the 'greening' of the data center, i.e. using different techniques to reduce operational power requirements, has become increasingly important. One possible green technique is usage-based power versus constant power. In the past, networks were always on, even if no information was flowing. They were powered up at maximum just to maintain link integrity between devices. In the energy-efficient networking world, however, links are now being powered up and down based on their loading. If a piece of equipment is idle, it powers down portions of its circuitry, and when needed, it quickly powers up again. Using techniques like this, green data centers can realize lower energy costs and overall operating costs. Lowering data center power use also results in a potential CO2 emissions savings of up to 3.5 million metric tons (based on ~5 TWh/year). That estimate assumes the full deployment of Energy-Efficient Ethernet on a population of links equal to those in use today, with all 10/100 links becoming 1G and some migration of data center links to 10G speeds.

Conclusions

With bandwidth demands exploding and data centers scaling up to meet the demand, finding ways to make networks scalable, manageable and configurable has become absolutely essential. The development of technologies like SDN and NFV to address this problem signals a paradigm shift in networking design and development that will increasingly be defined by software. Cloud computing will also play a role, redefining the way people use and access networks and data, while Big Data analytics works to put the data to good use. For their part, green technologies will reduce the adverse impact of rapidly escalating power consumption. In all, these trends promise to transform not only the future of networks and the data center, but every aspect of society as well. l


SDN, NFV and Cloud - Culprits or Saviours?

Innovation for a new era of information technology by Ariel Efrati, CEO, Telco Systems, and COO, BATM Advanced Communications

As the demand for communicating more data grows, the technologies that support such growth are evolving. In particular, SDN and NFV are complementary technologies that optimize inter-connectivity and service delivery by decoupling software and data from their platforms. However, in hosting more data in the Cloud and in utilizing hosted network facilities to access it, more control traffic is generated, and even more data traverses the network. The new Zettabyte era affects all corners of the industry, from access and transport capacity to enhanced network management and SLA monitoring.

Ariel Efrati is the Chief Operating Officer of BATM Advanced Communications (LSE: BVC; TASE: BATM), where he oversees BATM's global business operations and telecommunications practices, and acts as Chief Executive Officer of Telco Systems, a wholly owned BATM subsidiary. Previously, Ariel held several senior positions in the telecom industry. As Senior Vice President, Ariel led Amdocs' (NYSE: DOX) Venture Investments, Open Innovation and Product M&A unit. He headed Amdocs' corporate competitive strategy unit, and served as the General Manager of the Service Fulfillment Product Business Division. Ariel also led Amdocs' strategic and substantial entry into the OSS markets through organic growth and M&A activities. His past senior positions include leading an advanced technological unit in the Israel Defense Forces Intelligence Corps. He was also the CEO of CallmyName, a startup in the mobile sphere. He is a member of several successful startups' boards of directors, and is highly experienced in technological and market innovation. Ariel is a software engineer and an EMBA graduate of the Kellogg School of Management (Northwestern University).

Today's network capacities are like pushing a group of elephants into a Mini Cooper - it's time to 'trade up' to make room for the Zettabyte era.

Information technology is well embedded in our daily lives - we are constantly sharing data across networks via a range of devices - yet IT, networking and data exchange is still a rapidly evolving concept guided by a number of innovation-driven industries. Technology and innovation are two areas well known to go hand-in-hand. As the information we share across networks, as well as the complexity of the networks themselves, continues to evolve and expand, the software and processes used to

control the networks need to keep pace with this evolution. People expect more from their networks - we want our information fast, accurate, anytime, anywhere. We're also putting a record-high amount of content out into the world, and seeking to distribute it more often, and in wider circles. This, in turn, has generated a data and traffic explosion, causing network limitations to wear thin. It forces the industry to seek new concepts and innovations to remedy the congestion… leading us to the Zettabyte era of information technology. Some internet traffic researchers believe that global internet traffic will amount to

1.4 Zettabytes (ZB) in 2017 - larger than the total traffic in the whole history of the internet from 1984 to 2012 (1.2 ZB). The drivers behind this growth are more users coming online, users connecting with more devices, growing network and broadband speeds, and more media-rich content being shared. A good example of network capacities needing to accommodate the impending Zettabyte era is the past year's advances in medical science for genome mapping - the creation of a genetic map assigning DNA fragments to chromosomes. Today, this information has become much more accessible due to falling network costs.
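
As a rough sense of scale, the sketch below converts that 1.4 ZB annual forecast into an average, sustained bit rate; the arithmetic assumes decimal units and traffic spread evenly across the year.

    # Back-of-the-envelope: 1.4 Zettabytes per year expressed as an average rate.
    ZETTABYTE = 10**21                     # bytes, decimal definition
    SECONDS_PER_YEAR = 365 * 24 * 3600

    bits_per_second = 1.4 * ZETTABYTE * 8 / SECONDS_PER_YEAR
    print(f"{bits_per_second / 10**12:.0f} Tbps averaged around the clock")
    # -> roughly 355 Tbps, before allowing for busy-hour peaks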


With the increase in production comes an increase in sharing. One human genome map requires around 200GB of data. Doctors and scientists would like to get these files in real time and, in some cases, share them with their colleagues. This immense scale of data sharing will have a dramatic impact on current networks, which are presently not designed to support such high volumes of traffic. Today's traffic is estimated at over 1,000 petabytes of information per day - and rising - thus the next generation of telecom networks will have to support Zettabyte traffic: that's a billion terabytes! Moreover, the telecommunications industry has recently experienced fantastic growth in mobile internet traffic, which is expected to reach 11.2 exabytes per month (an exbibyte is 2⁶⁰ bytes) by 2017. New services and applications, such as 'catch-up TV' and social video sites like YouTube, raise demand for mobile video and high-definition (HD) content, which further increases bandwidth requirements. Improvements in smartphones that drive the desire for HD content are expected to be some of the main contributors towards the necessity of Zettabyte networking capacities. The explosion of data traffic has forced network architects to form new networking concepts and designs in anticipation of the Zettabyte era. Virtualization and Cloud technologies are becoming an integrated part of IT, meaning more and more information will be stored and processed outside the core devices. The externalization of currently internal enterprise IT functions onto the cloud and into datacenters has been dictated by the need for performance assurance. SDN and NFV are two good examples of this evolution. Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) have been hot topics of late in the telecommunications and IT industries. They are complementary approaches, offering new ways to design, deploy and manage networks and services. Simply put, SDN focuses on virtualizing the inter-connection with other networks and NFV focuses on the virtualized services - both working to ensure optimal quality of service and experience for the user. Therefore, the future success of NFV and SDN within the service provider's environment has to be tightly coupled with the underlying transport technologies. It is thanks to these technologies that software can finally be decoupled from the hardware, removing the constraints imposed by the actual networking device that delivers the service. In other words, network

administrators are able to manage network services separately from the devices that implement them, and can deploy multi-vendor solutions across a network. With NFV, the network hardware infrastructure allows for the option to scale the network elements. Similar to adding more capacity to a datacenter through the use of an additional blade, operators can add more standard, open hardware to a network as traffic grows. There are certain requirements imposed on transport networks for running networked functions remotely. The first is higher capacity. With the adoption of SDN & NFV, many functions will relocate from customer premises to datacenters. The result will be that more traffic will have to traverse the network, either as control traffic between the end device and the virtual service, or as data payload traffic, where it was previously handled by dedicated customer platforms and is now handled deeper in the network (depending on where the NFV service is allocated). Another requirement is an increase in demand for service-level agreement assurance and measurement. When extracting an inline function from the enterprise and pushing it into the service provider network, the delay and packet loss of the network have to be monitored in real time, as major changes in these parameters may drastically impact the networking functions' processes. Monitoring loss and delay will help assure the application's health and identify degradation of these service parameters. While more and more services are moving out from the enterprise, the availability and QoE (Quality of Experience) still have to remain the same - otherwise customers will reject this new technology. There is a general consensus in the industry that network virtualization through SDN & NFV will be implemented by mobile operators and vendors within the next few years, and this is already happening in North America. Tier 1 wireless operators appear to be leading the way for virtualization in the U.S. In the fall of 2013, AT&T announced plans to virtualize their networks by implementing SDN & NFV. This reconfigured network architecture is expected to accelerate time to market for new products and services, while simplifying the network and improving functionality. This trend appears to be catching on, as the U.S. Government also has plans to move to cloud computing - an

infrastructure complementary to SDN & NFV implementation. In 2013, the U.S. Government adopted the 'Cloud First' policy, driving wider adoption of cloud computing in the public sector. SDN and NFV mark a new era in the number and type of services that carriers and service providers will be able to provide, and in the time to market in which these services will be launched, mainly because of the shift from hardware-based services to software-based services. These services will pose new quality requirements for the networks, some of them more complex, forcing the network to provide highly granular hierarchical quality of service (HQoS). While these innovative technologies have brought new spirit to the telecom market, and carriers are starting to explore how they can take advantage of them, we're still some years away from major commercial deployments. So the short-term concern is how to make current network investments future-proof, while preparing for the Zettabyte era. The Zettabyte era will have great effects on networking and information technology in many ways: networks will need to become much faster, while allowing for a higher level of decision making, policing and optimization; network processes will become more automated, as an increasing amount of information goes online and more content is stored off-site in big datacenter locations; and, most crucially, networks will evolve to be more resilient, measurable, self-aware, self-healing and cyber-aware, capable of detecting and isolating networking attacks. It is clear that a lot of thought and innovation will have to go into the preparation for the Zettabyte era. It affects everyone and everything, from the architects that design the networks to the enterprises and people sharing information and services over the networks. SDN & NFV are just the beginning of a data exchange revolution, as the world prepares to innovate for a new era of information technology. l
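
The real-time delay and loss monitoring called for above is normally done with standards such as ITU-T Y.1731 or TWAMP; as a crude stand-in, the sketch below simply times TCP connection setup to a placeholder service endpoint and treats failures as a loss indicator.

    # Crude stand-in for real-time delay/loss monitoring (production networks
    # would use ITU-T Y.1731 or TWAMP probes). The endpoint is a placeholder.
    import socket
    import time

    TARGET = ("service.example.net", 443)
    PROBES = 10

    delays_ms, failures = [], 0
    for _ in range(PROBES):
        start = time.monotonic()
        try:
            with socket.create_connection(TARGET, timeout=1.0):
                delays_ms.append((time.monotonic() - start) * 1000)
        except OSError:
            failures += 1                  # counted as a loss indicator
        time.sleep(0.5)

    if delays_ms:
        print(f"average setup delay: {sum(delays_ms) / len(delays_ms):.1f} ms")
    print(f"failed probes: {failures}/{PROBES}")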

Boosting the Transport Network

Captains of disruption in the telecom world: Carrier Ethernet by Paul Pierron, CEO, FiberLight, LLC.

Carrier Ethernet is gaining a much broader market foothold, thanks to its versatility, flexibility and cost-efficiency. While originally conceived for connecting LANs, it is now used by mobile carriers to carry mobile internet traffic, so recent demand for it is driven by the roll-out of 4G/LTE in the US. Ethernet standards have evolved and can now run over copper and fiber, legacy and advanced networks. It allows operators to pay for capacity as and when it is needed, and to introduce services promptly, when they are required. Corporate demand is also pushed up by the new WLAN standards that allow higher multi-user throughput, and by the uptake of cloud services. The scalability of connecting to the cloud is now addressed by the CloudEthernet Forum as well as the MEF.

Paul Pierron became FiberLight’s Chief Executive Officer in 2014. As CEO, Mr. Pierron oversees the day-to-day activities of FiberLight’s Sales, Marketing, Operations and Finance departments and is responsible for the company’s overall growth strategy and corporate culture. Under his direction, FiberLight has secured a number of major sales wins, enhanced internal systems and procedures and implemented aggressive growth plans for each of their markets. Mr. Pierron brings 39 years of telecommunications and business experience to his position with FiberLight. He previously served in leadership roles with AT&T, SBC, Sprint, Nuvox, LightCore and Xspedius. Prior to becoming FiberLight’s CEO, he also served as Chief Operating Officer for the company. Throughout his tenure, Mr. Pierron witnessed firsthand the growth and evolution of Carrier Ethernet across the globe. He maintains in-depth expertise, skills and knowledge of Carrier Ethernet technologies, services, applications and standards. Mr. Pierron plays an instrumental role in driving the growth of FiberLight’s Ethernet portfolio, which offers best-in-breed Carrier Ethernet technology delivered over a US$1 billion diversely constructed optical ring topology network.

Introduction

Carrier Ethernet continues to evolve into the network platform of choice for organizations requiring high-capacity bandwidth to respond to exploding rates of data consumption. A Carrier Ethernet solution is better at delivering streaming voice-data-video without delays than legacy platforms such as SONET-based T1s, with their 1.54Mbps maximum transmission capacity. Ethernet offers scalable circuits, beginning at 10Mbps to 100Mbps, for faster access. As more businesses clamor for bandwidth for streaming content and access to the cloud, Carrier Ethernet is positioned to meet demand in the Zettabyte era.

The rise of Carrier Ethernet

Originally, Carrier Ethernet was designed to connect Local Area Networks (LANs)

and was deployed over Synchronous Digital Hierarchy (SDH) or Multiprotocol Label Switching (MPLS) - neither of which offered much flexibility. Now, Carrier Ethernet service is growing to support 4G/LTE deployment and faster content delivery. While capital expenditures for overall network expansion have remained somewhat flat, investment in U.S. carrier networks for backhaul support of 4G/LTE technology (such as fiber-to-the-tower, or FTTT) has been on the upswing, resulting in US$30 billion in new expenditures. The U.S. leads the world with 1.38 Gigabytes of mobile data consumption per month, representing 5% of the world's wireless connections and 50% of all LTE deployments. This trend is likely to continue due to the rise in cellular device sales, with ten billion new devices expected by 2018, well beyond the seven billion devices sold last year.
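
Reading the 1.38 GB-per-month figure as average consumption per subscriber, a back-of-the-envelope calculation shows why fiber-to-the-tower spending is climbing; the subscriber count and busy-hour multiplier below are illustrative assumptions, not figures from this article.

    # Rough backhaul sizing for one cell site. Subscriber count and peak
    # factor are assumed for illustration only.
    GB_PER_SUB_PER_MONTH = 1.38
    SUBSCRIBERS_PER_SITE = 1000        # assumed
    PEAK_TO_AVERAGE = 8                # assumed busy-hour multiplier
    SECONDS_PER_MONTH = 30 * 24 * 3600

    avg_mbps = GB_PER_SUB_PER_MONTH * SUBSCRIBERS_PER_SITE * 8000 / SECONDS_PER_MONTH
    print(f"average load  : {avg_mbps:5.1f} Mbps")
    print(f"busy-hour load: {avg_mbps * PEAK_TO_AVERAGE:5.1f} Mbps")
    # Roughly 4 Mbps on average and about 34 Mbps at the busy hour - already
    # more than twenty 1.544 Mbps T1 circuits for a single site.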

Individuals and businesses alike are ditching telephone landlines in favor of Voice over Internet Protocol (VoIP) and wireless communications, creating a greater need for a Carrier Ethernet solution.

Carrier Ethernet for the Zettabyte era

All factors point to the need for a better solution to meet heavier IP demand. It is estimated that global IP traffic will reach 1.4 Zettabytes per year by 2017, based on two wireless devices per user. It is expected that IP networks will carry 11.2 exabytes of mobile traffic per month by 2017. To prepare for this growing demand, telecom carriers spent US$70 billion on Carrier Ethernet equipment and services in 2013 and expect to spend US$100 billion by 2017. The ongoing effort is paying dividends. Today, Carrier Ethernet is being delivered to business premises more often than all other legacy technologies combined.


Carrier Ethernet may be best positioned to become the new standard for several reasons. It offers a simplified design. It scales as an organization grows, enabling bandwidth-on-demand for peak periods. Operational expenses are lower because organizations only use bandwidth as and when it is required, so Carrier Ethernet has better gross margins and ROI. The network infrastructure is flexible enough to perform on multiple platforms, including Ethernet over Copper (EoC), DSL, fiber, SONET, DWDM or SDH. Carrier Ethernet is granular, which makes it easier to manage and control, and new applications can be added as needed.
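
That granularity is usually enforced with MEF-style bandwidth profiles, whose meters are built on token buckets; the sketch below is a simplified single-rate version (real profiles use two rates and three colours), with illustrative numbers.

    # Simplified single-rate token-bucket meter, the building block behind
    # Carrier Ethernet bandwidth profiles. Real MEF profiles add a second
    # rate and a third colour; the figures here are illustrative.
    class TokenBucket:
        def __init__(self, cir_mbps, burst_kbytes):
            self.rate = cir_mbps * 125_000        # committed rate in bytes/second
            self.capacity = burst_kbytes * 1000   # committed burst size in bytes
            self.tokens = self.capacity
            self.last = 0.0

        def conforms(self, frame_bytes, now):
            """Return True if the frame fits the profile, False otherwise."""
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if frame_bytes <= self.tokens:
                self.tokens -= frame_bytes
                return True
            return False

    meter = TokenBucket(cir_mbps=50, burst_kbytes=64)
    print(meter.conforms(1500, now=0.001))        # an early 1500-byte frame conforms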

Carrier Ethernet architecture is more reliable because of redundant equipment, reducing the risk of network failure. It is more secure than the public Internet, since it is delivered over a private, non-shared network. Unlike other network solutions, Carrier Ethernet delivers information at wire speed, the fastest bandwidth available today, and there is less risk of jitter and delay due to propagation, data protocols, or problems with routing and switching, where data can become congested and information can be lost.

Network design considerations

Carrier Ethernet network design is flexible and can be customized, selecting between point-to-point (building-to-building or building-to-data center); multi-point to multi-point bridging (a multi-point service connecting a group of customer endpoints, known as an E-LAN); or a multi-point service connecting one or more businesses while preventing one group from communicating directly with another (called an E-Tree). It can be a virtual network, providing multiple connections to subscribers via an Ethernet Virtual Private Line (E-VPL); Dedicated Internet Access (DIA); Ethernet access to the Internet over a privately connected network (IP/MPLS VPN); or Wide Area Network (WAN) VPLS. Connections can extend to a LAN, WAN, Metropolitan Area Network (MAN), data center, or the cloud. Bandwidth can be dedicated and provisioned to serve single or multiple locations and subscribers, at native LAN speeds. While Carrier Ethernet users have access to circuit speeds of 10Mbps, 100Mbps and 1000Mbps (GbE), transport capacities of 10G, 40G and 100G Ethernet are becoming increasingly common. About 50% of all companies using Ethernet customer ports today are riding on fiber. More buildings are coming on-net and greater demand for 10G Ethernet has brought in more competitive pricing. Although pricing may vary from location to location, all prices are based on the type of service, Ethernet ports required, port speed, contract terms, and the facility's location. U.S. customers pay around US$1.47 per megabit of broadband service compared to customers in Western Europe, who spend about US$0.49 per megabit.


Some sectors would like to see bandwidth sold as a commodity. Recently, the U.S. Patent Office approved an application by a Colorado-based telecom company to patent a bandwidth trading tool. The service platform will permit the company to sell unused blocks of bandwidth in real time at a reduced rate.

Carrier Ethernet standards

Carrier Ethernet's surge in popularity is driven by greater cellular demand for 4G/LTE transport infrastructure; the greater capacity requirements of data-centric, cloud-based applications; and ongoing demand for greater flexibility and speed when accessing the cloud. However, it must also be attributed to the new standards set by the IEEE (Institute of Electrical and Electronics Engineers) and the MEF (Metro Ethernet Forum). The MEF, a consortium formed to promote the adoption of Metro Ethernet, has recently created a committee to look at operations management for multi-carrier networks. The goal is to create more service diversity, interoperability, standardized contractual arrangements and further clarity regarding national regulatory standards. In 2012, the IEEE also released an updated wireless LAN standard in the 802.11 family, which enables higher multi-user throughput at data rates of up to 7 Gbps - roughly ten times faster than the previous generation - and allows differing link speeds to be negotiated before information is transmitted. The end result is a better multi-user experience.

In 2013, the CloudEthernet Forum was established to address scaling and to define the most appropriate ways to meet cloud service demands. In collaboration with the MEF, the CloudEthernet Forum addresses performance and technical challenges associated with VLAN (Virtual LAN) scaling, deliverability, regulatory requirements and cost efficiency across complex virtual networks, data centers, large domains, and consolidated storage networks. While all of these factors have made a positive impact on Carrier Ethernet's establishment as the industry standard, growth may be an inevitable part of the Internet's evolution, consistent with Moore's Law. Formulated by Intel co-founder Gordon E. Moore and highlighted in a 1965 paper, Moore's Law observes that the number of transistors on an integrated circuit doubles roughly every two years. Other factors such

as processing speed and memory capacity also double. Today, it is not uncommon to see demand triple on a daily basis, requiring a technology capable of dynamically scaling to provide the necessary capacity as well as cost-efficiencies.

Summary

Carrier Ethernet is gaining broader acceptance across market sectors. Education, manufacturing, the arts, travel, finance, healthcare, agriculture and government are all seeking the reliability and cost efficiency Carrier Ethernet delivers. For example, major strides are being taken in the education sector to provide rural schools and universities with high-speed Web access. Ethernet offers solutions for education's growing demand for data and technology, delivering robust, school- and campus-wide connectivity to support initiatives such as virtual classrooms for distance learning, campus security systems, SMART Board technology, video streaming, and more. Ethernet is also enabling the delivery of high-speed data transport services, as well as high-bandwidth applications, to carrier, enterprise and government customers in underserved areas. Traditionally supported by local cable and telephone companies, these critical connections to data centers and major touch points across the globe will now be facilitated via Ethernet. Other factors, such as availability and low latency for streaming audio and video, secure access to off-site data storage and retrieval, disaster recovery, business continuity, VoIP, and flexible access to virtualized service offerings such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS) applications, also position Carrier Ethernet in the forefront as the best delivery choice. What sets Carrier Ethernet apart from other platforms, however, is its fiber core. Multiple services can be delivered simultaneously with 99.999% reliability and improved uptime due to its redundant architecture, which can reroute traffic in the event of a network failure. As Carrier Ethernet continues to improve, it is further differentiated from legacy solutions. Offering seamless, integrated Ethernet WAN connectivity between LANs, fast throughput and multi-user access to public, private and hybrid cloud solutions, Carrier Ethernet continues to set the stage for greater adoption. In the meantime, the medium is well positioned as the best Layer 2 transport mechanism for 4G/LTE Ethernet backhaul moving forward. Its cost efficiency and scalability make it the number one choice for cellular providers who are seeking the best solution for 4G/LTE deployment. l

The advances at play in media and entertainment have created unprecedented opportunity for you to deliver innovation to the connected consumer. The digital insight you need to accomplish your goals — and play to win — is here. Global to mobile, live to archive, sound and picture — from previs to post, big data or small market, NAB Show® is your channel. And this is your opportunity.

Conferences: April 5-10, 2014 | Exhibits: April 7-10 | Las Vegas Convention Center, Las Vegas, Nevada, USA

Join Us! #nabshow | www.nabshow.com

FREE

Exhibits-only Pass code PA02.

Boosting the Transport Network

ZetaMan by Cheri Beranek, President and CEO, Clearfield, Inc.

In the Zettabyte era, ZetaMen will roam the earth. They are pioneers, thinking for themselves. They will have to, if they wish to connect broadband to far-out places or impossible-to-reach, highly urban locations. If broadband installation is not commercially viable, a rural co-operative of locals will make it happen, own it and manage it themselves - it will be simple enough to maintain by then. Perhaps we are all going to own the wire that comes to our door…

Clearfield, Inc. President and CEO Cheri Beranek is a founding member of the company. She is considered a visionary in the telecommunications networking industry, and has extensive leadership experience in the field of emerging high-tech growth companies. In July 2003, she joined APA Enterprises as President of its subsidiary APA Cables & Networks. In June 2007, she was appointed President and CEO. Previously, Ms. Beranek served as President of Americable, where she had previously been Chief Operating Officer. Throughout her career, Ms. Beranek has held a variety of leadership positions with emerging high-growth technology companies, including Transition Networks, Tricord Systems and Digi International. She also has extensive non-profit experience, including with the City of Fargo, the Metropolitan Planning Commission of Fargo/Moorhead, and North Dakota State University (NDSU). Ms. Beranek has been recognized with numerous awards, including the 2012 Women in Wireline award from Fierce Telecom, recognition as a 2012 regional finalist for Ernst & Young's Entrepreneur of the Year, Stevie Awards for Business in 2011 as a finalist for Best Executive for non-services businesses and for the 2009 Turn-Around of the Year, as well as the Twin Cities Business Journal's Industry Leader Award in 2009 and Women to Watch award in 2004. Cheri has a Bachelor of Science from Southwest Minnesota State University and a Masters from North Dakota State University.


The Zetabyte era - it sounds high tech and low touch, but in reality, it will be anything but. Rather, the Zetabyte era will be more like the time of the Mountain Man - a time of ruggedized individualism and selfreliance. Growing up, we all saw movies of the Grizzly Adams types - the early pioneer who went out without the “burden” of a family, but expected to do it all himself. While Grizzly Adams was a fictional character, the principles taught through his character hold true.

The Zetabyte Man (or woman) won't wait for someone to do it for him but, thanks not only to the speed of communication but also to its ease, will move the mountain to keep progress moving forward. Although not requiring superhuman strength, nor other superhero capabilities, our ZetaMan will carry the same confidence as the Mountain Man of old.

ZetaMan begins at the fundamentals - the network itself. The Zetabyte era will be dependent upon optical fiber, as it is the future of all communication and the only truly limitless transport medium. Today there is still frustration about getting optical fiber to reach the home, the business or the top of the cell tower.

We've worked with the early 'mountain men' of optical fiber. These pioneers knew what they wanted and weren't afraid to go figure it out as they went along. Back in 2001, a telephone firm in Bemidji, a northern Minnesota town, appropriately named "Paul Bunyan Telephone", wanted to extend some of the fiber that was delivered in the hey-day of the .com craze. We placed large cabinets, stuffed with central office panels that were never meant for the outside plant, and 'ruggedized' the contents. While there were some sleepless nights during that first winter, when temperatures plummeted to -20 degrees Fahrenheit (and colder), those boxes are still running today, with high bandwidth delivered to the customer. Later, other mountain men, from the south, wanted to do the same thing. By then, we had figured out how to downsize the cabinets, but weren't yet prepared for the environmental challenges of sea salt. Nothing lost - nothing gained. A few tweaks on the paint and the metal selection,

and all was well. Mountain men thrive on trial and error. There are excuses about the cost and the environmental challenges. Not unlike the opponents of the first transcontinental railroad, opponents of ubiquitous broadband put up barriers with their excuses. We've lost our ruggedness and independence and cry out for someone to do it for us. However, technology companies will ease this burden and deliver products that enable fiber to be installed in projects previously not viable, economically or environmentally. As a result, ZetaMan will own his own fiber. Just as today you might own the sprinkler system that waters your lawn, ZetaMan will own the optical transport into his home. Wait, you say - the city (or other government entity) owns the water lines that come to your property for your sprinkler system to access that water. Similarly, ZetaMan will see the re-emergence of cooperative ownership. Quoting Grizzly Adams, "Well, when you come to think about it, all that a person has is other folks. I reckon there's a lot of folks in these parts that could use a helping hand". It was that helping hand that brought electricity and telephone service. Groups of people with a capitalist heart formed an alliance to work together to get what they wanted. Rather than wait for the government to offer a hand-out, they made it happen for themselves. We're working with a very rural county today to bring broadband to every farm. This is an area where there isn't a single stoplight in the community identified as the county seat. These farmers and local businessmen have created the business plan to develop a telecom cooperative that will allow them to take responsibility for their futures and own their own broadband network. In pockets like this today, early ZetaMen are forming cooperative alliances to build their networks. This business plan has proved viable for hundreds of years and will grow in importance through the Zetabyte era. Achieving this level of independence will happen because technology companies in the Zetabyte era will deliver products that don't require high skill to install. Tomorrow's more technology-driven appliances - the internet of things, as it is called - will in some cases raise the technical prowess of the professionals that service them, but tomorrow's infrastructure will be served by the ZetaMan. ZetaMan will be a do-it-yourselfer and a handyman. Technology infrastructure will not be a

commodity, but it will be transparent. You will not notice when you move from WiFi to cell to even landline, because networks must also be heterogeneous, meeting the needs of the application. Since the ZetaMan is in charge, he will dictate aesthetics. Today, environmental groups are getting their voices heard. They don't want to see telecommunications cabinets and overhead wires littering the landscape. In some locations, including large metropolitan markets in California and elsewhere, these protests have prevented the construction of broadband services. For the Zeta era, infrastructure will be small, buried, and reliable. The Distributed Antenna System (DAS) is an exciting technology that's opening the door to the Zeta era, as it allows for greater usage and bandwidth transfer in heavily populated metro areas. Early on, the challenge was getting fiber to the antenna sites, as they are commonly situated in or on structures that are encased in concrete. Early pioneers in the process knew traditional trenching as an expensive and labor-intensive process. Their traditional method of digging a one-foot-wide trench, placing a two-to-four-inch conduit, backfilling with concrete and then repaving wouldn't make it in the Zeta era. Imagine closing a lane of traffic in downtown Chicago for a week. In the Zeta era, micro-trenching (slot cutting) will become the norm. In these environments, a microduct is run from an existing manhole, and fiber is placed from the manhole splice into the DAS equipment on streetlight poles. Bringing fiber to a cell tower is similar. While new construction is common for a free-standing tower, when a cell tower is on the rooftop of an existing structure, new construction wouldn't be possible. The ZetaMan looks at problems differently: why not develop technology that reinforces what's already there? On a recent job, the cell site was on top of a ten-story building with a one-inch conduit running down from the rooftop to an equipment room in the basement. The connection point for the local telco was an additional 450 feet away in a manhole. This installation had been on hold for three years because of routing problems and the costs associated with core drilling 10 floors and installing a new conduit from which to pull fiber. Using pushable fiber and ruggedized microduct as an alternative, we were able to accomplish placing both the

ruggedized microduct and pushing the fiber in just about 8 hrs. ZetaMan will think for himself and do for himself. The future is usually best understood through the past. For as Eleanor Roosevelt said, “Remember always that you not only have the right to be an individual, you have an obligation to be one.” l

Connect-World now on Facebook & Twitter

Connect-World, the world's foremost discussion forum for leaders in the ICT industry, is now available on Facebook and Twitter. The world's top ICT decision makers express their opinions in Connect-World. They use clear, non-technical English to discuss how ICT helps shape regional and global development. The articles essentially examine the influence that ICT products and services have on the way people live and do business. With separate editions for each of the world's regions, the reports highlight the most important ICT trends and issues influencing socio-economic growth. Connect-World is now available to follow on Twitter (http://twitter.com/#!/ConnectWorldICT) and Facebook (http://www.facebook.com/connectworld.ict). Also, it is still possible, for FREE, to directly access all past and present Connect-World articles, ICT Industry press releases, eLetters, ICT News and more at www.connect-world.com.


Boosting the Transport Network

Optical wireless broadband: The need for bandwidth in the Zettabyte era by Gerardo Gonzalez, President and CEO, Skyfiber

The massive growth in data traffic is, by all accounts, creating an impending bandwidth crunch, which requires disruptive combinations of backhaul technologies. The pressure on radio frequency spectrum means that limitless optical fiber gets a second look, despite the logistical challenges and the high installation costs. Microwave and Millimeter Wave (60-80GHz) are also subject to spectrum availability. Optical Wireless Broadband can only operate where there is a line of sight. Hence, to meet Zettabyte-era demand, Mobile Network Operators must make use of a balanced mix of all four main backhaul options: Fiber, Microwave, Millimeter Wave and Optical Wireless Broadband.

Gerardo Gonzalez is President and Chief Executive Officer of Skyfiber. A native of Mexico City, he has over 40 years' experience in all aspects of the global telecommunications industry. For 10 years, he was responsible for manufacturing and sales at Northern Telecom's subsidiary and for the establishment of its production facility. At Movitel del Noroeste, he oversaw sales approaching US$80 million in cellular systems, and later served as VP of Operations and Managing Director. In 1994, he joined Grupo Televisa as Managing Director for Comunicaciones Mtel (Skytel), the largest paging company in México, and as Operations Director for Cablevisión, the largest cable company in México. In 1996 he became Regional Sales Director Latin America at Advanced Techcom Inc, providing microwave turn-key solutions, and Managing Director for sales in México. In 1998, Mr. Gonzalez joined Grupo Salinas as General Director for Audits, and later became General Director for RadioCel, CTO of Grupo Salinas, and CEO of Telecosmo, the first wireless Internet service provider in México. Mr. Gonzalez then took on the role of General Director for Damovo Mexico, Mexico's main Ericsson distributor, in 2005. Mr. Gonzalez is also the founder and President of the telecommunications consulting firm Especialistas en Comunicaciones y Servicios, established in 1983, which provides consulting services to key companies such as Grupo Iusacell, Avantel, Grupo Domos, Investcom-Protel, Grupo Acumen and Rockwell Switching. In June 2006, Mr. Gonzalez was appointed by the President of Mexico as a Commissioner at Cofetel (the Mexican telecommunications regulator), a role he held until September 2008. He has received multiple nominations and awards, the most recent granted in March 2009 by the National Polytechnic Institute Alumni for his professional contributions.

In 2010, David Achim, then CIO of Skyfiber, Inc., stated that better access to faster wireless broadband networks is absolutely critical for today's enterprises. That statement is still on target as we approach the Zettabyte era of data storage. This era will require a tremendous amount of bandwidth to handle the storage and retrieval of all this data. Optical Wireless Broadband (OWB) technology requires no RF spectrum. This also means that OWB does not have the interference and security issues of Microwave and Millimeter Wave, and it is virtually impossible to intercept.


A report by iGR Research Company in 2012 confirmed that demand for mobile backhaul in the U.S. market will increase 9.7 times between 2011 and 2016. The unavoidable fact is that demand is increasing far faster than Mobile Network Operators can keep up with. Cisco is forecasting a fourfold increase in traffic traversing the Internet by 2014. At that time, it estimates, traffic on the web will reach 63.9 exabytes per month - or more than three quarters of a Zettabyte per year - according to its annual Visual Networking Index.

“Managing, storing and securing data is great, and you can do the best job in the world, but if users can’t easily access and use the data, is it valuable? If it’s not correct, is it useful?” - Steve Jones Interfaces April 23, 2009. According to the Cisco VNI Forecast and Methodology 2012-2017 report, “Annual global IP traffic will pass the Zettabyte threshold by the end of 2015, and will reach 1.4 Zettabytes per year by 2017. In 2015, global IP traffic will reach 1.0 Zettabytes per year or 83.8 exabytes per month, and by 2017, global IP traffic will reach 1.4


Zettabytes per year, or 120.6 exabytes per month”. Cisco also predicts that traffic from wireless and mobile devices will exceed traffic from wired devices by 2016. By 2017, wired devices will account for 45% of IP traffic, while Wi-Fi and mobile devices will account for 55% of IP traffic. In 2012, wired devices accounted for the majority of IP traffic, at 59%.

The T1s and E1s of our legacy voice networks simply cannot handle the bandwidth demands that the data- and video-intensive businesses of today require. Fast Ethernet speeds have become a minimum requirement for an enterprise to truly operate competitively. Operators and network owners everywhere are dealing with the same critical problems: exponentially increasing capacity demand, a simultaneous decrease in revenue-per-bit, and a spectrum crunch. Across the industry, the quest to find a gigabit-capacity, low-cost backhaul solution for mobile network expansion continues at a frantic pace. To truly differentiate from competitors, today's Wireless Network Operators need to deploy a disruptive combination of backhaul technologies. Providers need a solution that can transform their network capacity in a fraction of the time and cost of current solutions, and can completely disarm the threat of the impending bandwidth crunch.

The wireless industry's capacity crisis is looming, and mobile backhaul is a key component in that crisis. Though not always considered the most interesting portion of a successful wireless network, backhaul is these days becoming arguably the most important. The onslaught of capacity demand being driven by wireless devices and the rollout of 4G/LTE requires Service Providers to deal with a level of capacity demand far greater than could have been imagined in the days of 2G/3G mobility. The advent of urban Small Cells, although specifically intended to address this rapid growth, will actually have the effect of accelerating capacity demand, as users explore new capabilities on wireless devices facilitated by faster networks.

Mobile Network Operators face a double-edged dilemma, because at the same time that demand is increasing exponentially, revenue-per-user and revenue-per-bit are flattening out. Global data traffic surpassed global voice traffic in 2007, marking the beginning of the "Data Era", according to Unstrung Insider Targeted Analysis 2007 and 2011. Ever since that crossover, the network
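The per-month and per-year figures quoted from the Cisco forecasts above are mutually consistent; the short conversion below is a reader aid using the decimal convention of 1,000 exabytes per Zettabyte, not part of the VNI methodology:

```python
# Convert the Cisco VNI monthly figures (exabytes per month) into annual
# totals (Zettabytes per year), using 1 Zettabyte = 1,000 exabytes.
EB_PER_ZB = 1000.0

def zettabytes_per_year(exabytes_per_month: float) -> float:
    """Annual traffic in ZB for a given monthly figure in EB."""
    return exabytes_per_month * 12 / EB_PER_ZB

for year, eb_month in [(2014, 63.9), (2015, 83.8), (2017, 120.6)]:
    print(f"{year}: {eb_month} EB/month -> {zettabytes_per_year(eb_month):.2f} ZB/year")

# 2014: 63.9 EB/month -> 0.77 ZB/year  (the "more than three-quarters of a Zettabyte")
# 2015: 83.8 EB/month -> 1.01 ZB/year  (crossing the Zettabyte threshold)
# 2017: 120.6 EB/month -> 1.45 ZB/year (roughly the 1.4 ZB per year the forecast cites)
```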

operators have faced a major challenge in generating revenue from new higher-capacity services. As backhaul capacities transform from Megabits-per-second to Gigabits-persecond, the technologies and solutions chosen to provide this capacity must be capable of delivering significant economies of scale. The other key challenge impacting Wireless Operators is the rapidly diminishing availability of spectrum. Wireless spectrum is a scarce and precious resource for Service Providers. In the FCC OBI Technical Paper No. 6, October 2010 Mobile Broadband: The Benefits of Additional Spectrum it predicts a deficit of 275 MHz by 2014. Network designers go to great lengths to maximize the aggregate network throughput in their network’s limited amount of wireless spectrum. As a result, there is a substantial need for cost-effective non-RF (Radio Frequency) solutions that can move backhaul traffic off overworked wireless frequencies. According to the ITU, 4G mobile networks are defined as providing at least 100 Mbps peak capacity for high mobility applications, and 1 Gbps for stationary applications (‘Mobile Broadband Explosion’, Rysavy Research/4G Americas, August 2012). This massive jump in performance definitions from 3G to 4G is one of the key drivers for backhaul capacity needs, and is the main reason that the multiple T1/E1 copper circuits and Microwave links, which were previously sufficient for yesterday’s 3G backhaul, will be forced to transition to Fiber or highercapacity wireless backhaul solutions. Fiber continues to hold its place in the hearts and minds of the Telecommunications Industry as the preferred solution for backhaul. It is widely viewed as a timeproven technology that offers almost limitless capacity and scalability. Yet, Fiber is also one of the most expensive, difficult, and time-intensive capital expenditures a carrier can make. The reality is that pulling Fiber to every cell site is simply not feasible, not only due to cost issues, but also due to excessive logistical challenges. Time delays due to acquiring permits and lengthy construction for trenching also add to the complexities of Fiber installation.

Although traditional Microwave has played an important role in backhaul for 2G and 3G, the demands of 4G will exceed its capability. Microwave capacity tops out at around 400 Mbps full-duplex, due to the permitted limits placed on RF channel bandwidth. The term Millimeter Wave applies to any RF technology operating in the 30-300 GHz range, but is generally used to refer to 60-80 GHz equipment. Like other RF technologies, Millimeter Wave must also contend with spectrum availability issues.

Optical Wireless Broadband technology is the next generation of the Free Space Optics technology that originated in the 1960s and was first commercialized in the late 1990s. Optical Wireless Broadband is a point-to-point, line-of-sight technology that delivers over 1 Gbps of bandwidth across a 1.6 km distance, for a fraction of the cost of traditional solutions. It is an infrared technology whose features and capabilities - including forward error correction, alignment tracking, integrated packet processing, and advanced optical laser and lensing techniques - make its reliability and capabilities much greater than those of traditional Free Space Optics.

The Wireless Industry is facing a major challenge to determine how best to augment existing network capacity to accommodate ever-increasing mobile data demand. The capacity gap that we see today will continue to grow exponentially over the next several years. To meet this demand, Mobile Network Operators must combine the four main backhaul options - Fiber, Microwave, Millimeter Wave and Optical Wireless Broadband - as four parts of a complete backhaul solution set. No single solution will address all needs, and yet each one has its own particular area of highest value. l


Managing Zettabyte Capacity and Performance

Improve performance management while increasing network data by Anand Gonuguntla, President & CEO, Centina Systems

Ultra-broadband needs ultra-management tools that can cope with the volumes of data involved and still provide timely performance reports. Customer experience is increasingly dependent on network performance, as enterprises rely on cloud services and real-time applications. Advanced network performance systems analyze information in real time and provide pre-emptive alarms when SLA thresholds are approached, before customers notice a problem. They can also provide a visual presentation of customers' own service performance, accessible through a portal, thus shaping the perception of the service delivery level.

Anand Gonuguntla is President & CEO of Centina Systems. With over 15 years’ experience in the telecom industry, Anand co-founded Centina Systems. Prior to his current role, Anand was the Director of Systems and Software Engineering at Xtera. Anand also held management positions in software and program management at Fujitsu where he worked on FLM and Flashwave product lines. Anand holds a master’s degree in Electrical Engineering from the University of North Dakota and a bachelor’s degree in Electronics and Communications Engineering from Jawaharlal Nehru Technological University, India. He has published in Proceedings of ACM and holds a patent in network management.

Ultra-broadband is a term that has been bandied about for a few years, but only now is the communications industry finally starting to see its deployment. Operators are ramping up 100Gbps in the core of the network; on the access front there is LTE, along with 1Gbps broadband into the home; and enabling technologies like Carrier Ethernet are on the rise. Ultra-broadband supports the kinds of bandwidth-intensive and Cloud-based services that businesses and consumers are looking for. However, as networks, services and technologies continue to evolve, operators face greater challenges than ever when it comes to managing the customer experience via network performance management. With the breadth of options and service provider competition, North American customers are some of the most demanding in the world, making customer experience

more important than ever. Successful and differentiated network performance management is critical to ensuring customer experience. As networks and services continue to evolve and services migrate to cloud-based offerings, depending on both public and private infrastructure, network reliability will ultimately drive success and subscriber growth.

End user communications requirements escalate

Communications networks are increasingly the lifeblood of businesses in many ways, yet ironically, the rise of cloud services, true mobile broadband and bring-yourown-device (BYOD) means that businesses are increasingly free from traditional network borders. Companies today are looking to efficiently stay competitive and bolster innovation, as well as effectively

accommodate the realities of the always on, always-mobile, borderless aspects of today’s work reality. New applications are driving a multinetwork landscape as well. According to TRAC Research’s 2013 Network Performance Monitoring report, unified communications applications are making a major impact on network quality and visibility, and many organizations reported that VoIP is the No. 1 IT initiative in terms of impact on network performance. In general, organizations said that these and other real-time applications are of greatest concern when it comes to network performance. Customers expect a steady quality of network performance to support their strategic applications, no matter what the underlying technology may be - Service


Level Agreements (SLAs) don't become more flexible simply because a business has migrated from private line to Ethernet. Quite the opposite: TRAC's report showed that 42% of organizations reported that improving the quality of user experience is one of their top strategic goals. TRAC's report shows that end users see the network as a strategic asset, with network performance having a major impact on all key business processes and on the effectiveness of business users. According to Forrester Research, a key factor impacting churn rates is how the customer perceives the quality of service delivery. Operators that can't deliver the network quality needed to support businesses' key strategic initiatives will risk losing subscribers, brand equity and revenue.

The rise of the Big Data challenge

There is, simply put, a data explosion at work. In an ultra-broadband world, the amount of data that devices generate about the health of the network and the performance of devices and services grows exponentially. The issue is not simply one of scale, but also complexity. Networks are increasingly dynamic and heterogeneous, with both legacy and IP components and a variety of access technologies. There are peak application usage periods and multiple locations within a business to take into account. Yet operators need real-time visibility across the breadth of the network infrastructure to ensure an appropriate customer experience. Many service providers are saddled with legacy performance management systems that are just not up to the task of capturing, processing and analyzing this volume of information. Old approaches tend to be static. They are built for understanding siloed data sources from network elements that support hardwired services. Forrester noted that if individual components in the network layer are running effectively, an operator may believe that, say, an 80% level of QoS is being delivered to customers. However, the customer may feel that they are only getting 8% of the agreed-upon QoS. Legacy approaches cannot resolve this discrepancy between perceived performance levels, leading to declining loyalty and loss of business.

A new dawn for performance management

In North America, poor customer experience and network performance can be a company's demise, as negative reviews travel at the speed of light today. With unlocked mobile phones, no-contract, month-to-month services and race-to-zero-price competition, performance management can be the


differentiator that allows service providers to maintain and justify margins and grow their subscriber base through guaranteed SLAs. Fortunately, there is a whole new set of functionality for network and performance management systems. For one, operators can make use of solutions that allow them to quickly gather and process the data they need in near real-time. That includes systems that will proactively and automatically perform notifications and remediation of network issues to address problems before customers notice that there is a problem. This kind of proactive monitoring allows for early detection of problematic congestion and usage levels, as well as monitoring of the key performance indicators (KPIs) and their impact on the service availability for end users or wholesale customers. Progressive alarming offers operators a real-time view of network performance, as set thresholds of bandwidth usage are met. The ‘root cause analysis’ remains one of the key challenges in managing network performance, with TRAC noting that 64% of organizations reported that their network landscape has become more complex over the last 12 months. Additionally, 46% of organizations reported that the inability to identify a root cause of performance issues in a timely manner is the key challenge for network performance. Next-gen network performance solutions offer the ability to isolate where in the network a problem may lie: at the customer premise, the access network, or in the core. This allows operators to offer better customer service overall. Trending and capacity planning have also become critical to effective operations and performance management. Flexible, customizable reports that make use of real-time, holistic visibility across the network give operators the ability to apply network intelligence to any number of business processes, from the NOC (Network Operations Center) to the CFO’s office. Being able to proactively manage SLAs, process and report on the volumes of data and network performance, and monitor real-time SLA conformance will become more important for service providers to differentiate their service offerings as time goes on. In an ideal implementation, a CSP (Communication Service Provider) has, through the implementation of service assurance, the proper visibility, alarming and reporting capability in place to efficiently offer SLAs in the first place, to drive increased market share while preserving revenue streams and profit margins.
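As an illustration of the progressive alarming described above, here is a minimal sketch of threshold-based escalation as a KPI approaches its contractual SLA ceiling. The class names, the KPI and the threshold fractions are hypothetical choices for illustration, not taken from any vendor's product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SlaPolicy:
    name: str             # e.g. a latency SLA on an Ethernet service
    ceiling: float        # contractual limit (here: round-trip latency in ms)
    warn: float = 0.70    # progressive thresholds as fractions of the ceiling
    major: float = 0.85
    critical: float = 0.95

def progressive_alarm(policy: SlaPolicy, measured: float) -> Optional[str]:
    """Return an escalating alarm level as the measured KPI nears the SLA ceiling."""
    ratio = measured / policy.ceiling
    if ratio >= policy.critical:
        return "CRITICAL"   # imminent breach: trigger automated remediation
    if ratio >= policy.major:
        return "MAJOR"      # notify the NOC so remedial action can be prioritised
    if ratio >= policy.warn:
        return "WARNING"    # early detection: watch the trend
    return None             # within the normal operating range

latency_sla = SlaPolicy(name="ethernet-latency", ceiling=10.0)
for sample_ms in (5.2, 7.4, 8.9, 9.7):
    print(sample_ms, "ms ->", progressive_alarm(latency_sla, sample_ms))
# 5.2 ms -> None, 7.4 ms -> WARNING, 8.9 ms -> MAJOR, 9.7 ms -> CRITICAL
```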

Once customer SLA contracts are defined, the CSP should be able to track SLA parameters across systems and networks, and monitor the quality and availability of services. One way to do that is the automation of notifications of impending SLA threshold breaches. The progressive alarming allows CSPs to automatically determine where poor performance is impacting business as metrics approach SLA thresholds. That helps NOC personnel to better prioritize remedial actions and carry them out. Next-gen network performance management offers the ability to visually map operational performance to customer services and clearly show the impact of network degradation and process faults on the agreed SLAs. This provides a barometric view into customer satisfaction levels. This visual GUI can be turned over to the enterprise IT personnel, so they can see through a portal how their services are performing. Using this kind of subscriber intelligence is the next frontier of ensuring the customer experience. “We’ve long maintained that operators would begin taking a more strategic approach to their subscriber data as they looked to better manage the customer experience and generate more revenue per user,” said Shira Levine, Directing Analyst for service enablement and subscriber intelligence at Infonetics Research. “Subscriber intelligence solutions will grow in popularity because they enable operators to better pull together and analyze subscriber data, to gain a unified, holistic view of their customers and how they’re using the network.” Rob Rich, managing director of TM Forum Insights Research, added: “With the explosion of digital services increasingly impacting the communication landscape, operators are taking a closer look at their subscriber data, to better understand their customers, upsell new services, combat churn and deliver a more relevant customer experience. The…use of subscriber data is gaining much-needed traction, and suppliers have a real opportunity to partner and develop analytics solutions to improve monetization and increase customer loyalty.” Faced with an explosion in network capacity, applications and smart devices, service providers need highly scalable solutions that can integrate and capture all the necessary performance information, as well as process, analyze and present the data in a meaningful way to the NOCs and other stakeholders throughout the enterprise. By implementing a next-generation performance management solution, service providers can proactively assure the customer experience, compete more effectively and increase revenues. l

Managing Zettabyte Capacity and Performance

Mastering virtualization challenges by Alastair Waite, Head of Enterprise Data Centre Business, EMEA, TE Connectivity

Networks are as strong as their weakest point, and that point is the physical cabling. To enhance the ability of the physical layer to support virtualization and cope with the massive growth of data traffic, fibre or copper connectors can be fitted with EEPROMs that allow the physical layer to communicate with the management layer. This granular view at the cabling level offers a whole new perspective on how traffic can be controlled, achieving virtualization at the lowest network layer.

Alastair Waite is the Head of Data Centre Business Line, EMEA. Alastair joined TE Connectivity in September 2003 as a Product Manager for the company’s Enterprise Fibre Optic division. Since that time he has held a number of key roles in the business, including Head of Enterprise Product Management, for EMEA, and Head of Market Management. Since May 2011, Alastair has responsibility for the Data Centre business in EMEA, ensuring that TE Connectivity has strategic alignment with its customers in this market segment. Prior to joining TE Connectivity, Alastair was a Senior Product Line Manager for Optical Silicon at Conexant Semiconductor, where he had global responsibility for all of the company’s optical interface products. Alastair has a BSc in Electronic Engineering from UC Wales.

Data is growing exponentially as consumers and businesses adopt feature-rich platforms and applications, with an expectation that content will be readily available 24/7. This desire for ubiquitous data is driving huge demand for data centres and data centre networking equipment, which come with high price tags and short shelf lives and consume large amounts of (costly) energy. By virtualizing their existing hardware, companies can increase workload capacity dramatically and keep up with data demand without a matching increase in physical resources or cost. However, despite the many obvious benefits of virtualization, it can also present challenges regarding the actual location


of data, which in turn raises security, traceability and potential disaster recovery concerns for users who have mission-critical, or highly sensitive, data needs. Deploying innovative solutions within the physical layer can bridge the gap between the benefits of the virtual world and the security of the physical world.

An accepted standard

Why has virtualization become such an accepted standard in the data centre? Apart from the obvious user and business benefits mentioned above, virtualization has been adopted by businesses to support two key initiatives:

1. Agility - The ability to dynamically control the resources that a physical server offers and make it part of a "pool" of computing power that can be easily harnessed to work on the processes that businesses need at any given time.

2. Efficiency - Instead of having many physical servers dedicated to a single business unit or process, fixed physical servers are virtualized, meaning a single server can take on the workload of many servers. This triggers a reduction in energy costs (power and cooling) and frees up expensive floor space.

High availability

In addition to these two initiatives, maintaining high availability and fast


response times from a virtualized network is critical to keeping consumers and internal stakeholders happy. It also helps disguise the fact that they are using a ‘pooled’ resource. From Layers 2 and up in the Open Systems Interconnection (OSI) stack, control and flexibility can be easily achieved. However, a network is only as strong as its weakest link, and in many cases this is the physical layer (also known as the cabling), which happens to be the foundation that all data centre operations are built on. Since it is passive, the physical layer presents problems to network architectures that need to understand how and where things are connected. Today, this can happen logically, but logical data gives no indication of the physical routes packets take between two points. Did the data flow between servers in adjacent racks, or did it flow via different buildings, or even via different countries? Was the path taken declared ‘secure’ by the data owner, or was the path shared with other unknown users? Both of these questions are becoming increasingly pertinent as virtualization becomes more prevalent in data centre networks around the globe. Being able to monitor and communicate with each connection point in the physical layer is critical to answering these complex questions and solving audit/compliance challenges. One way of monitoring the physical layer cabling is Connection Point Identification (CPID) technology, where an Electrically Erasable Programmable Read-Only Memory (EEPROM), housed in the body of a fibre or copper connector, can allow the physical layer cabling to communicate with the management layers of the network when inserted into a CPID-enabled patch panel. In this scenario, automatic associations can be made between the connected devices in the path that the packets are flowing through. The interconnecting points, which are supported by an EEPROM, allow management software to interrogate the physical layer cabling to understand facts about its length, datacarrying capacity, or to even use CPID data to allocate a physical route to high priority/ high value traffic, while another route can be dedicated to low priority/low value traffic. Similarly, operations will be able to distinguish between fibre and copper channels within the physical layer, a critical factor in identifying suitable routes for supporting future growth paths for higher data rates. Having this granular view of cabling offers a

whole new perspective to the physical layer and allows the network owner to consider it as a value-adding asset, as opposed to a simple device to interconnect servers, switches and storage devices. Previously, we discussed how virtualized networks aggregate resources into ‘pools,’ which means that next-generation networks will have to be built ready to transmit data packets at 40 and 100Gb/s. Today, the medium of choice for architects and engineers to achieve those throughput rates would be parallel optics, via either 4 x 10 Gb/s or 10 x 10 Gb/s channels, due to the lower cost of the optical modules at those data rates. Remember that fibre is not a full duplex technology. Unlike its copper cousin, fibre requires eight lanes (4 Tx/4Rx) for 40Gb/s communication and 20 lanes (10 Tx/ 10Rx) for 100Gb/s. These data rates can be achieved via a pre-terminated fibre network based on a 24 Multiple Fibre PushOn (MPO) connector technology. 100GbE optical modules that are currently available on the market already incorporate a 24 fibre MPO connector interface, making it simple and easy to build out a future proofed network ready for throughput-hungry virtualization activities today. Not planning to build a network ready for 40 or 100Gb/s may prove to be costly in the long run, both in terms of the CAPEX required to re-configure the existing physical layer to meet demand, and in terms of lost revenue through network downtime while this activity is being conducted. A recent report published by the IEEE in North America has revealed that servers with 100GbE I/O ports will begin to ship in 2015, and by 2020 will make up more than 15% of all server port speeds shipped. When one considers that all these 100GbE enabled servers will aggregate at the core of the data centre, the next generation of switching platforms will need to be prepared to accept all this data leading to the possibility of Zettabyte “highways”. Virtualization makes sense Virtualization is an enabling technology that makes sense in so many ways, supporting IT initiatives while keeping financials costs in balance. It allows a business to scale to meet customer demands without requiring the same linear increase in physical resources. However, as in life, achieving great things

and stepping to the next level of performance and reward requires focused control and ability. Controlling the physical layer with CPID and enabling it for 40 and 100Gb/s throughput will help support network virtualization that delivers the capacity and bandwidth required to stay ahead in the Zettabyte era. l
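To make the CPID idea discussed in this article more concrete, the sketch below models the kind of record a management layer might read from an EEPROM-equipped connector, together with a simple rule for allocating a physical route to high-priority traffic. The field names and the selection logic are illustrative assumptions; real CPID implementations and their interfaces are vendor-specific:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CpidRecord:
    """Information a management system might read from a CPID-enabled connector."""
    port_id: str
    medium: str           # "fibre" or "copper"
    length_m: float       # cable length reported for this connection point
    max_rate_gbps: int    # data-carrying capacity of the channel

def pick_route(records: List[CpidRecord], min_rate_gbps: int) -> Optional[CpidRecord]:
    """Choose the shortest physical path that meets the required data rate,
    e.g. to reserve a route for high-priority, high-value traffic."""
    candidates = [r for r in records if r.max_rate_gbps >= min_rate_gbps]
    return min(candidates, key=lambda r: r.length_m) if candidates else None

panel = [
    CpidRecord("patch-01", "copper", 35.0, 10),
    CpidRecord("patch-02", "fibre", 80.0, 100),
    CpidRecord("patch-03", "fibre", 45.0, 40),
]
print(pick_route(panel, min_rate_gbps=40))   # -> patch-03, the shortest 40G-capable path
```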



Managing Zettabyte Capacity and Performance

Feeding the two-headed broadband monster: Scaling the Internet’s foundation to keep pace with demand by Geoff Bennett, Director of Solutions & Technology, Infinera

Broadband is a monster that keeps growing - from the access end, driven by user demand for content and streaming video, and from the centre, driven by Cloud-based data centres that duplicate volumes of data and transport them between distributed locations. The transport network must cope with this massive increase in traffic through timely scalability, with instantly activated additional DWDM capacity, and through converged DWDM and OTN that allow multiplexing at the core as well as at the edge. Further into the future, Carrier SDN may displace the popular GMPLS, enabling end-to-end, multi-vendor service provisioning.

Geoff Bennett is the Director of Solutions & Technology for Infinera, a leading manufacturer of Intelligent Transport Network solutions. He has over 20 years of experience in the data communications industry, including IP routing with Proteon and Wellfleet, ATM and MPLS experience with FORE Systems, and optical transmission and switching experience with Marconi, where he held the position of Distinguished Engineer in the CTO Office. Geoff is a frequent conference speaker, and is the author of “Designing TCP/IP Internetworks”, published by VNR. Geoff has a BSc in Polymer Engineering.

Internet growth is proving to be a two-headed monster. First is the growth we see in access network technologies - like ever-faster residential broadband and the move from 2G to 3G and now to 4G cellular networks. However, the second aspect is hidden from most Internet users - the move to Cloud-based data centres and storage architectures. Ask most service providers today what their biggest broadband headache is, and it is how to make their networks pay their way in the face of dramatically increasing demands.

The growth of access technology

Most of us experience the Internet through an access technology. That could be an Asymmetric Digital Subscriber Line (ADSL), cable TV, or Fibre to the Home (FTTH) technology where we live, or some kind of Carrier Ethernet technology where we work, and there's a variety of wireless and cellular


technologies when we’re on the move. As those access technologies ramp up in capability, they pour more and more traffic into the network core. Surveys of broadband usually indicate that we’re moving towards saturation points for the number of broadband households, especially in countries with long-established Internet usage such as the USA, or countries with highly organised Internet adoption policies such as Japan or South Korea. What these surveys don’t reflect is the increasing use of those broadband connections by the individual - exemplified by the everincreasing use and quality of streaming video as an application. Neither do those surveys show that each user is no longer a single household PC, but is communicating via multiple computers, as well as smartphones, tablets and remotely connected sensors which have been dubbed ‘the Internet of Things’.

Like most marketing terms, the Internet of Things (IoT) is deliberately vague in both composition and quantity. In terms of composition, the IoT could be exemplified by fixed telemetry devices such as home security systems, or by mobile telemetry in the form of wearable devices. In terms of quantity or bandwidth consumption, IoT is also vague. A home security system may simply transmit status information back to a security monitoring centre, such as: Is the alarm working? Is it going off? Which room is signalling the intrusion? Are other alarms in the area going off too? All these status conditions could be signalled with a very small amount of network capacity. At the other end of the bandwidth consumption scale the security system could be transmitting real time, high-definition (HD) security video from multiple cameras to a server in the security centre, so that the video


is safely stored and can be used as evidence in a legal action. This level of traffic would fill a typical residential broadband connection continuously several times over.
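A rough back-of-the-envelope calculation illustrates the point; the camera bitrate and uplink speed below are assumed, illustrative values rather than figures from the article:

```python
# Illustrative figures only: real camera bitrates and broadband upload
# speeds vary widely by codec, resolution and service tier.
HD_CAMERA_MBPS = 4.0        # assumed bitrate of one continuous 1080p stream
CAMERAS = 4                 # a modest multi-camera home installation
UPLINK_MBPS = 5.0           # assumed residential broadband upload speed

required = HD_CAMERA_MBPS * CAMERAS
print(f"Required upstream: {required:.0f} Mbps, available: {UPLINK_MBPS:.0f} Mbps")
print(f"Oversubscription: {required / UPLINK_MBPS:.1f}x the uplink capacity")
# -> Required upstream: 16 Mbps, available: 5 Mbps
# -> Oversubscription: 3.2x the uplink capacity
```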

over the Internet. So, we’re seeing an equal drain on broadband resources by two data hungry sources: consumer technology and cloud data centres.

Wearable technology has a similar range of bandwidth consumption. At one end, a fitness device worn on the wrist may record and transmit the Global Positioning System (GPS) coordinates, instantaneous heart rate and cumulative steps taken for an individual that day. All this would require a few handfuls of bits per second (bps) for a useful update rate. Google Glass has pointed the way to wearable devices that (one day) may transmit continuous HD video of what the user is seeing to be stored in the cloud, effectively forever.

How do we feed the Broadband Monster?

Growth in the cloud The other head of the broadband monster is cloud data centres. Most people are now users of cloud-based storage services and whether it’s SkyDrive, DropBox, iCloud for work or personal use, there are many ways to store limited amounts of information free of charge in a secure data centre. There are many forecasts that discuss the growth projections for cloud-based storage and, like any hot technology, it’s important not to fall into the trap of believing the hype. A report last year by Gartner predicted that the amount of data that consumers will store is set to rise from about 7% in 2011 to 36% in 2016. There will also be a significant rise in the absolute volume of data stored: a report commissioned by IEEE Study Group 802.3 predicts that data storage will increase from 130 Exabytes in 2005 to 7910 Exabytes in 2015. While this is data storage and not data that is 100% transmitted over the Internet backbone, there is an increasing trend towards Cloud-based data centre storage, which does have a significant impact on network traffic. Cloud data centres are carefully virtualised. Every function, whether it’s storage or processing, is spread over more than one physical resource. This virtualisation allows data centre service providers to ensure that storage and processing resources are fully protected against individual failures, as well as moving data closer to the end user in order to reduce latency effects for timecritical applications. A clear consequence of this copying of information is that, not only does the data have to get to the cloud data centre in the first place, but once it’s there it will be replicated to multiple other physical data centres, and all this traffic has to pass

Whilst most users experience the Internet through access technologies, many people assume that the core of the Internet is based solely on very expensive, high capacity core routing platforms. This is partly true, but in order to communicate over long distances, these core routers need to be connected by a powerful, long haul transport network based on switches that are capable of filling a single pair of fibres with eight terabits per second (Tb/s) or more of data, and sending it thousands of kilometres between cities, countries, or even continents. In order to feed the monster in a scalable, yet cost-effective way, these switches have to have certain key characteristics, namely: scalability, converged functionality, and delivering high levels of automation. Scalability is an obvious requirement. Service providers have to be able to turn up huge amounts of capacity in as short a time as possible, but in a way that is extremely cost-effective. Historically, with Dense Wavelength Division Multiplexing (DWDM) transmission technology, this has been a challenge, because each time a wavelength is ‘lit up’ on an optical fibre (i.e. activated), there is a process of designing that wavelength (which may take hours to days), ordering the equipment for each end and waiting for delivery (which may take weeks to months), installing this equipment (which may take hours to days), and turning up the service (typically tens of minutes to hours). However, this process has evolved. In order to be ‘cash flow-effective’, service providers only buy the line cards for DWDM capacity, as and when they need this capacity. So, if their forecasting process is anything less than perfect, they may be left waiting for several months for delivery of equipment. A more recent approach is to create high capacity super-channel line cards with, for example, 500 gigabit per second (Gb/s) of bandwidth that is all brought into service on day one. However, service providers only pay for this capacity as they need it - in 100Gb/s chunks. This ‘instant bandwidth’ capability is now one of the most popular ways to address the 100Gb/s and beyond marketplace, because it allows service providers to use time as a

weapon in order to become more competitive.

Converged network platforms are now essential; in fact, a recent report by Infonetics indicates that 90% of the service providers it questioned plan to deploy integrated DWDM/Optical Transport Network (OTN) platforms. The reason for this is that the services that use the core network will always run at relatively lower data rates than the long haul super-channels. So, in order to aggregate these lower data rate services, it is essential to have some kind of multiplexing capability, not only at the edge of the network, but in the core too. Integrated OTN switching is the ideal solution for service providers because it is a technology designed for their own needs, and it offers a wealth of new functionality - for example, Fast Shared Mesh Protection capabilities that will drive down the cost of service protection.

Automation inherently reduces the operational costs of transport networks, and with the right underlying network technology it's possible to use carrier-grade control planes to minimise manual operations, such as truck rolls. Generalised Multi-Protocol Label Switching (GMPLS) has been a highly successful control plane for many years, and is an example of a distributed control plane (i.e. the intelligence is distributed into each network element). Today, however, there is an interesting trend to adapt Software Defined Networking (SDN) protocols and architectures for the carrier network. 'Carrier SDN' will allow service providers to solve problems in end-to-end, multi-vendor service provisioning that are difficult to solve with distributed protocols such as GMPLS. In the future, we may even see Carrier SDN allowing service providers to build networks without expensive core routers altogether.

Conclusion

The biggest challenge today for service providers is being able to feed the broadband monster in two different ways: coping with the dramatic increases in broadband access traffic, and dealing with the impact of Cloud-based storage architectures. An intelligent transport network delivers key benefits for scale, convergence and automation that will help to reconcile these problems and allow service providers to gain a distinct competitive advantage. l


Fast Data and Asset Analytics

Big Data in the Zettabyte Era by Mike Hummel, Co-founder and CEO, ParStream

“Forget size, speed is what matters” - fast data means not just swift downloads, but also timely information based on dynamic data. Analysis of big data has been hampered by slow tools and procedures that transfer data to central repositories instead of performing analysis locally, in real time. As an example, performing analytics at each cell tower avoids transporting the vast volume of accumulated records and provides fast responses to current network events. Such timely analytics offer fresh insights into users' behaviour and needs, enabling communication service providers to compete for every customer by improving their experience.

Mike Hummel is the co-founder and CEO of ParStream. He previously co-founded Empulse, a portal solutions and software consulting company now specializing in Web 2.0 projects. Mike began his career in managing large-scale software integration projects serving logistics organizations at Accenture, Germany. He holds a degree in Electrical and Electronic Engineering from University of Hatfield, UK; a degree in Computer Integrated Manufacturing from Cranfield University, UK; and earned a diploma in Technical Computer Science in Esslingen, Germany.


Benjamin Franklin said that there are only two certainties in life: death and taxes. That famous quote may need to be updated for the 21st century. One sure-fire certainty in today’s ‘always connected’ world is that there’s always data being generated. And lots of it. Harnessing the power of this data can result in amazing opportunity for businesses, but it can also be their hardest challenge yet.

The rise and rise (and rise) of data

Here’s a quick snapshot of what happens in 60 seconds: 2,000,000 Google searches; 680,000 Facebook updates; 300,000 Tweets and American consumers are estimated to have spent around US$272,000 in shopping. If this all happens within the span of a minute, imagine the amount of data being generated over weeks, months and years.

The plethora of devices and gadgets connected to the Internet means that every click, swipe and tap is producing data. No wonder, then, that some estimate the world to possess about 1.8 Zettabytes of data. To put it into perspective, that's about 250 billion DVDs. Happy viewing!

It has also been estimated that around 90% of the data in existence today was generated in the last two years, so the amount of data generated over the next couple of years will dwarf today's volume. In fact, it is predicted that the amount of global data could rise by as much as 50% year-over-year. Needless to say, all this provides organisations with major opportunities if

they know the proper method of leveraging the power of that data. Along with the opportunities, there are also challenges. The good news is that a number of businesses appreciate the mountain of data they are sitting on and are looking to analyse it in order to gain a competitive advantage.

Telecommunications in Zettabytes

One industry that certainly has vast amounts of data is telecommunications. From texting and phone calls to people's online shopping habits, and from how we interact with social media to watching streaming videos and downloading music, mobile devices have provided communications service providers (CSPs) with a wealth of readily available data from their networks. Mining


this information can help CSPs drive new revenue streams, reduce churn and maximise operational efficiencies.

CSPs have to manage the hundreds of terabytes of data being generated each day in the network, so analysing all this data in real time is easier said than done. It is a known fact that operators collect vast amounts of data at cell towers. However, most operators currently transfer that data to a central data warehouse for analysis, which is both time-consuming and resource-consuming. Because traditional databases provide limited import bandwidth, operators can only "sample" the data: they can only access and analyse a fraction of it at any one time. Traditional database platforms are only able to analyse a finite amount of data within a specific period of time, making real-time response and analysis of newly imported data virtually impossible. That's because traditional database tools were not designed to manage data in the Zettabyte era.

When every millisecond counts

CSPs have multiple sources of structured and unstructured data. Structured data refers to data that is organized in a pre-defined manner; there is a systematic method for how this data is recorded and accessed, and it has the advantage of being easily queried and analysed. Unstructured data refers to information that doesn't reside in a row-column format. Most organizations naturally have semi-structured data - a combination of structured and unstructured data - which is the main source of actionable intelligence. CSPs are able to derive incremental value by gaining insights from analysing massive amounts of seemingly unrelated information. From customer activity such as churn and cross-sell, to merchant activities such as mobile marketing campaigns, and infrastructure events such as dynamic bandwidth control and network monitoring, gathering insights from the data drives revenue, decreases costs, and ultimately improves profit. It might sound cliché, but in telecommunications, time is money. Today's CSPs operate in an ultra-competitive marketplace where every customer and every second matters. That's why it is more important than ever for CSPs to have access to the latest data to support their decision-making.

Thankfully, it’s no longer the norm to make queries and receive results in a few hours. Data scientists can now analyse data while continuously importing new data to produce real-time results. It is worth bearing in mind that most existing big data analytic platforms cannot import and analyse data at the same time. Platforms in the future need to combine what they are doing now - analysing stored data - and combine it with real-time analytics to really have a clear picture of what the data is saying. One area where CSPs can learn how to manage and analyse data is science - fast data is crucial as data can be generated in vast amounts very quickly. An example of where science is generating vast amounts of data can be found at CERN (European Organization for Nuclear Research). In the last 20 years CERN has generated over 100 petabytes, 75% of which has come from the last three years. So uncovering the mysteries of the universe is going to require big data analytics and fast data analytics. The tools that are required for CERN will eventually find themselves in other areas of science, for instance imagine how cancer treatment can benefit from analysing the data from thousands of patients, and the break through this can offer. It is therefore not only CSPs that can benefit from fast data. Forget size, speed is what matters Big Data, as fast data, can be seen to be of great importance for science, it also offers many advantages that have a day to day impact on people’s lives. As Google has predicted that more people have access to a mobile phone than a toothbrush, our mobile lives are extremely important. So, how can CSPs utilise Fast Data to drive their revenues and improve user’s experience? Fact is, if operators turned their data into analytical insights faster, they could generate significant premium revenues and optimize cost-savings. For example, Fast Data can provide a telecom operator OTT (Over-the-Top) revenues through premium analytical services for business partners used in geo-fencing, re-targeting, etc. Analysing data at each cell tower, or radio access controller, locally rather than sending it all to a central data warehouses for analytics, makes data available earlier and frees up network bandwidth

Many operators choose to analyse technical network data, such as performance and capacity. Adding customer data, correlated with network data, can enable imperative insights. Customer data also lets operators provide customer service in real time. An operator that uses Fast Data to analyse customer behaviour will have instant information about each customer's current needs and the service quality being provided, and can react immediately if there is a misalignment. Customers demand instant satisfaction - Fast Data gives operators the means to provide it.

The advantages for network users can be seen in improved service, with networks able to learn their habits and be tailored to match their needs. This could mean discovering that one area needs more cells than another to deal with an increase in usage, or that at certain times the network needs to be optimized because it is under greater strain. Anyone who has visited a major city and tried to use their data allowance has come across painfully slow download and upload speeds. Big data and fast data can help solve this by highlighting areas that need attention and giving the operator the best possible picture of how their network is working. In the future, network users will benefit from this information through an improved service, delivered at the lowest possible cost.

In data we trust

No wonder some people say that data never sleeps. Interestingly, it is not just organisations that place their faith in data to find answers to complicated questions. The famous statistician W. Edwards Deming was quoted as saying: "In God we trust. All others must bring data". In the 21st century we will have to update Benjamin Franklin's observation - not only death and taxes, but also data is a certainty in life. It is time for CSPs to harness the full potential of data to power their futures. l


Fast Data and Asset Analytics

Explosive growth of network data: Are operators ready to control their network CAPEX? by Vinod Kumar, Chief Operating Officer, Subex

The paradigm shift from collecting to connecting data brings a deluge of traffic volumes and prompts ever-greater investment in infrastructure. Smart spending is the need of the hour, when a new wave of technology arrives before the benefits of the previous one have been reaped. Network assets may become stranded, under-utilized or ignored while budgets are spent on (perhaps) ineffective additional equipment. An appropriate asset assurance management system will enable operators to optimize the redeployment or disposal of retired assets, thus saving CAPEX and OPEX, and to forecast when new investment should be made, thus optimizing spending.

Vinod Kumar is the Chief Operating Officer and overall responsible for managing the portfolio development & innovation, client acquisition & relationships and fulfillment teams. Prior to this, he worked in the capacity of Group President of the Company, and has previously handled the role of President and Senior Vice President - Sales at the Company, where he was directly responsible for the worldwide revenue generation efforts as well as the day-to-day operations of the Company’s sales organization, including sales, sales operations, alliances, and channels. Mr. Kumar joined the Company in October 1997 to develop and implement the Company’s sales strategy. Prior to joining the Company, he spent five years as a marketing executive with Crompton Greaves, and also worked at Ashok Leyland Limited. Mr. Kumar holds a bachelor of technology degree in electrical and electronics from CET, University of Kerala.

Production of data is expanding at a feverish pace. Experts believe that there will be a substantial increase in annual data generation by 2020, mainly due to the rapid increase in data generated by individuals and corporations across the globe, from healthcare to the gaming industry. As the paradigm shifts from collecting to connecting data, businesses are searching for relationships between these data sets to reveal valuable new insights. The dramatic growth in data volumes has compelled industries across the world to start redefining their business strategies to deal with the huge data flow. The telecom industry is no exception in this matter. Traffic volumes are being driven by the ever-growing number of connected people and connectable devices.


The trend toward multiple device ownership, an abundance of highly diversified and mostly free online content, and increasingly widespread consumer access to fixed and mobile broadband networks capable of supporting high-bandwidth services like streaming video, music and gaming all contribute to the surge of data volumes. The total number of people connected to the Internet is expected to surpass 2.7 billion in 2013, while the total number of applications downloaded over all types of devices will exceed 50 billion.

A recent ITU report states that network data continues to generate 90% of all consumer traffic, with the largest volumes coming from mobile file sharing, video streaming, video calls and online gaming. New smartphones and mobile devices providing a higher quality user experience are driving faster uptake of gaming and video calling, both of which are expected to see over 40% year-on-year growth between 2010 and 2015. Multiple players are now operating in the same markets, but under different regimes - for example, traditional voice providers in competition not just with players in adjacent markets, such as ISPs and cable operators, but also with content and application providers, such as OTTs - opening up multiple channels for network data and information to enter the systems.

These multiple data sources from new services and devices are creating new usage patterns and revenue models, which in turn are forcing telecom operators to spend huge sums of their budgets on installing newer


technology network elements and the latest infrastructure. Today operators are investing heavily in new network infrastructure for advanced telecom services like LTE/4G, IPX, etc. without adequate visibility on revenue growth. Much of the Telecom industry’s recent focus has been placed on CEM (Customer Experience Management) and related analytics that are derived from the network data. Certainly, customer acquisition and retention programs are critical to driving revenue. Network augments and migrations to new technologies are an unavoidable “price to pay” and the lion’s share of management’s attention is placed on squeezing as much revenue traffic onto pipes and spectrum as possible. Continually shrinking margin along with the fact that telecom generates only 6% profit on investment that costs 9% of the budget, put great strain on the capital available for spending. Given the rate at which technology is changing, with the transition towards advanced services like 4G LTE and IPX, operators today don’t have enough time to reap the rewards of their investments before the next technology arrives. However, data quality within technical OSS’s is notoriously poor. As a result, network assets can become stranded, under-utilized and/or lost. Smart spending therefore is the need of the hour. One of the main objectives of big data is developing the best insights from the source data available. Unfortunately, poor data quality in sources, particularly in the telecom industry, is an unavoidable fact of life. Operators typically struggle with poor data integrity in BSS and OSS applications. It is widely understood that data integrity issues dramatically increase OPEX and CAPEX. Hence, it is necessary to take proactive approaches to manage data integrity with the goals of improving data quality both for the purposes of analytics and for enhancing operational efficiency. It is essential to fully understand the network data sources, both structurally and semantically. When drawing analytical conclusions based on alignment and comparison of large data sets, a very common misstep is drawing the wrong conclusions due to “false positives”. The higher the number of false positives, the less trust there is in your conclusions and the more cost is entailed in separating the “wheat from the chaff”. A recent survey points out that 20% of the assets fail to return cost of capital and 5-15%

of these network assets are “stranded”. At the crux of the problem is the unfortunate reality that operators don’t have an accurate picture of what assets and inventory they already own, let alone how these assets are being used. Effective capital expenditure and network asset lifecycle management are hence rapidly becoming a big boardroom issue for telecoms operators. The ability to understand what capital is stranded in the network is based on visibility. ERP (Enterprise Resource Planning) systems consistently lack views into deployed assets. Similarly, inventory platforms have a good (yet almost always incomplete) view into what is deployed. What isn’t known are factors around capacity and utilization rates, lost or vacant assets, or status of all ‘tagged’ assets. This, coupled with a clearly orchestrated and managed retirement and resale process, positions the operator to not only “connect” data from ERP and Network sources, but to also act on that data in a way that is poised to save the average operator tens of millions of dollars in capital expense and increase free cash. Due to the increasing growth of data volumes coming from advanced systems and services, there is a growing recognition that network costs must be better managed, but there is also a frustration that lack of visibility and insights undermine the ability to do so. Hence a solution that provides intelligence throughout the asset lifecycle to both finance and operations managers, with the analytics they need to promote more efficient use of network capital expenditure is a “must have”. In most operators today, attempts are being made to manage these challenges. For instance, Network Planning typically has significant traffic data and statistics which are used for planning and budgeting. Similarly, Supply Chain commonly has systems that manage ordering, receiving, stocking, and overall management of assets prior to deployment. What the operators lack, however, are monitoring and controls to help optimize the complete end-end asset lifecycle. Network analytics applied at each stage of the asset lifecycle can result in significant annual capital savings for the operator. The CAPEX problem requires complete, holistic views into current assets as well as the consumption and placement of those assets. This problem also requires comprehensive analytics that are not only descriptive (show current states, trending etc.), but also predictive, to accurately forecast

asset exhaustion, procurement triggering, necessary asset warehouse levels, the impacts of failure and growth rates on sparing levels, and retirement strategies.

Asset Assurance is a discipline which is gathering significant interest as operators turn their attention to managing and reducing CAPEX. Asset Assurance solutions are designed to fully manage and provide detailed visibility into network data obtained from multiple sources, which in turn facilitates asset optimization and supports capital spending practices, complete with workflow, dashboards and embedded analytics. With Asset Assurance, the operator has complete end-to-end asset lifecycle visibility and control over asset disposition. Proactive asset management services affect workflow and analytics elements, e.g. by initiating workflow to ensure that all the applicable network data and assets are procured and deployed when needed. Asset Assurance provides various stakeholders with a holistic, enterprise-wide view of network assets, clearly bringing out stranded and under-utilized assets. It helps to reconcile and make assets current, and to record and track them in the various financial and network operational systems. It also provides up-to-date tracking of capital spend versus budget and realized "avoidance", and establishes predicted capital needs based on network analytics, enabling operators to track, manage and understand time-to-value, i.e. when assets will produce revenue. Basically, it enables operators to optimize the redeployment or disposal of retired assets. Asset Assurance will benefit operators through:

• significant CAPEX optimization
• enhanced network resource utilization
• improved network planning & operations
• accurate financial reporting
• optimal capital expenditures
• the right capital investments for maximum return.
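As a simple illustration of the kind of analysis described above, the sketch below joins a finance-side asset register with network-side utilization data and flags assets that appear stranded or under-utilized. The data model and the 20% threshold are hypothetical, chosen only to show the shape of the analysis rather than any specific Asset Assurance product:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str
    book_value_usd: float
    deployed: bool
    utilization_pct: float   # from network inventory / performance systems

def classify(asset: Asset, under_threshold: float = 20.0) -> str:
    """Label each asset for a capital-planning view."""
    if not asset.deployed:
        return "in warehouse"          # candidate for redeployment
    if asset.utilization_pct == 0:
        return "stranded"              # deployed, but carrying no traffic
    if asset.utilization_pct < under_threshold:
        return "under-utilized"        # review before buying new capacity
    return "productive"

register = [
    Asset("OLT-0042", 180_000, True, 0.0),
    Asset("RTR-0913",  95_000, True, 12.5),
    Asset("MW-0270",   40_000, False, 0.0),
    Asset("RTR-1102", 120_000, True, 68.0),
]

for a in register:
    print(a.asset_id, "->", classify(a))

# Summing book value over the "stranded" and "under-utilized" buckets gives a
# first estimate of capital that could be redeployed instead of newly spent.
```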

In summary, an effective asset assurance program will provide operators complete confidence that their network will grow to meet market demands while also guaranteeing optimal value for every dollar of capital budget spent. Asset Assurance has the potential to help operators to translate insights that are derived from various network data sources into actionable intelligence, and make better business decisions. l


