
White paper

An overlooked pillar of NFV/SDN: future-proofing the virtual network infrastructure (VNI) layer

August 2015

Glen Ragoonanan and Gorkem Yigit



Contents

1. Executive summary
2. CSPs' data centre hardware must be refreshed and future-proofed to deliver baseline NFV/SDN benefits
   2.1 VNI and data centre standardisation is important for future-proofing vNGNs
   2.2 A platform-migration deployment strategy lends itself to VNI future-proofing
3. Hardware pricing transparency and open collaborations are critical to keep VNI TCO down
   3.1 Hardware and software price bundling can obscure NFV/SDN business cases
   3.2 Ecosystems and partnerships will be crucial for NFV/SDN success
4. Lenovo Flex System Carrier-Grade Chassis Profile
5. Conclusion and recommendations
About the authors
About Analysys Mason

List of figures

Figure 1: Key industry VNI recommendations [Source: Analysys Mason, 2015]
Figure 2: NFV revenue by type, worldwide, 2013–2018 [Source: Analysys Mason, 2014]
Figure 3: SDN revenue by type, worldwide, 2013–2018 [Source: Analysys Mason, 2014]
Figure 4: Summary of main network virtualisation deployment strategies [Source: Analysys Mason, 2015]
Figure 5: Evolution of traditional physical network architecture to a vNGN architecture [Source: Analysys Mason, 2015]
Figure 6: Illustrative comparison of cumulative costs of various approaches to network virtualisation [Source: Analysys Mason, 2015]
Figure 7: Intel® ONP Server Reference Architecture environment enables innovative third-party solution development [Source: Analysys Mason, 2015]
Figure 8: Overview of Lenovo Flex System Carrier-Grade Chassis and Lenovo XClarity Administrator [Source: Lenovo, 2015]


1. Executive summary

This white paper outlines the growing importance of the design, implementation and operational investment considerations of the virtual network infrastructure (VNI) layer for communications service providers (CSPs) adopting network functions virtualisation (NFV) and software-defined networking (SDN) technologies. CSPs that are active in NFV/SDN, such as Telefónica with its UNICA platform, already recognise the value of VNI. Figure 1 summarises the key recommendations the industry must consider before investing in the VNI layer for NFV/SDN.

Figure 1: Key industry VNI recommendations [Source: Analysys Mason, 2015]

• CSPs need new, future-proofed VNI data centre builds for NFV/SDN
• Hardware and software price bundling can negatively impact VNI total cost of ownership (TCO)
• Industry collaboration is critical for commercial NFV/SDN success, starting with the crucial VNI layer

CSPs need new, future-proofed VNI data centre builds for NFV/SDN

Existing legacy proprietary hardware cannot be virtualised, so CSPs need new VNI-compliant and NEBS (network equipment building system)-certified hardware (server, storage and networking) that can provide high performance and reliability, and process various low-latency workloads such as computing, networking, signalling, analytics and even media processing. As such, we forecast that NFV and SDN hardware sales will outpace software sales over the next five years because of hardware's pivotal role in NFV/SDN (see Section 2 for the detailed analysis).

Hardware and software price bundling can negatively impact VNI TCO

Vendors may seek to bundle hardware and software solution stacks to compensate for 'lower' hardware sales with higher software opex, thereby increasing their margins and compromising CSPs' expected VNI TCO. Moreover, vendors that develop hardware and software solution stacks tend to focus their R&D and integration efforts primarily on their own solutions, which can foster VNI vendor lock-in. Both practices contradict the ETSI NFV industry specification group (ISG) objectives of better infrastructure TCO and open multi-vendor interoperability. Section 3 provides further details.

Industry collaboration is critical for commercial NFV/SDN success, starting with the crucial VNI layer

Industry collaboration between CSPs and vendors is essential for creating common network virtualisation frameworks and open, standard interfaces to accelerate the advancement of NFV/SDN. However, these efforts remain scattered despite the collaboration and convergence efforts of standards bodies, open-source communities and vendor ecosystems. Intel has the most successful ecosystem, with over 160 members to date, largely because it is the only ecosystem with a strong focus on VNI. Section 3.2 provides further details.


2. CSPs' data centre hardware must be refreshed and future-proofed to deliver baseline NFV/SDN benefits

CSPs are increasingly looking to transform their network infrastructure, as well as their operational processes and culture, with network virtualisation technologies – namely NFV, SDN and cloud computing – driven by the need for greater service agility, cost optimisation and operational flexibility in the highly competitive, communications-enabled digital economy. NFV and SDN are steadily moving towards commercialisation, leading to an IP-centric hybrid virtualised next-generation network (vNGN) – a combination of virtualised and traditional physical network assets – that bridges the gap between IT and telecoms.

Traditionally, network functions have been locked into proprietary, vendor-specific hardware appliances. This vendor-embedded software on dedicated hardware was a form of vendor lock-in that led to complex, siloed CSP networks that are costly and time-consuming to design, integrate, maintain, scale and develop new services on. With NFV/SDN technologies, however, CSPs can decouple network functions from hardware as virtual network functions (VNFs), standardise their hardware on carrier-grade commercial off-the-shelf (COTS) servers and realise the IT cloud virtualisation benefits of agility, cost-efficiency and a flat development and operations (DevOps) organisation.

To adopt NFV/SDN, CSPs will need to transform their existing hardware infrastructure into a new virtualised network infrastructure (VNI). VNI is a foundational component of the vNGN architecture (see Section 3) because:

• Existing legacy proprietary hardware and obsolete servers cannot be virtualised to host VNFs, so CSPs will need new VNI-compliant and NEBS (network equipment building system)-certified hardware (server, storage and networking) that can provide high performance and reliability, and process various low-latency workloads such as computing, networking, signalling, analytics and even media processing.

• VNI must be future-proofed so that CSPs can defer infrastructure capex by having standardised hardware that can be reused for different workloads over an operational lifespan of at least seven years, and that provides IT data centre standardisation benefits, as discussed in Section 2.1.

Figure 2 and Figure 3 below show Analysys Mason's forecasts of the hardware refresh in NFV and SDN over the five years from 2013 to 2018, before the upstream software layers of the NFV/SDN architectures are fully adopted – underlining the trend towards a bottom-up implementation approach in which the VNI layer is implemented first. We forecast that NFV hardware revenue will grow to USD786 million by 2018, driven by large CSP data centre infrastructure investments in future-proof, flexible, virtualisation-ready and robust VNI platforms to underpin CSPs' vNGN environments. Similarly, SDN hardware spending by CSPs and data centre providers (DCPs) will reach USD1.4 billion in 2018, as SDN requires a hardware refresh for OpenFlow and OpenDaylight support and to host virtual switches and routers (vSwitches, vRouters) on NPU-centric COTS hardware. NFV activity among CSPs is expected to be stronger than SDN activity over the forecast period, but it will require more systems integration because its software is less mature than that of SDN, which was introduced for DCPs in 2010. As a result, we forecast that NFV hardware will account for 32% of the total NFV market in 2018, compared with services (51%) and software (17%); for SDN, including both CSPs and DCPs, the shares will be 47% hardware, 20% software and 33% services, with a higher overall spend by DCPs.


Figure 2: NFV revenue by type, worldwide, 2013–2018, USD million [Source: Analysys Mason, 2014]

                       2013   2014   2015   2016   2017   2018
Hardware (CAGR 69%)      58     86    139    244    453    786
Software (CAGR 72%)      28     43     68    117    218    418
Services (CAGR 67%)      95    144    231    407    687   1229

Figure 3: SDN revenue by type, worldwide, 2013–2018, USD million [Source: Analysys Mason, 2014]

                       2013   2014   2015   2016   2017   2018
Hardware (CAGR 68%)     104    156    242    457    864   1405
Software (CAGR 66%)      47     71    106    193    362    593
Services (CAGR 43%)     168    250    349    490    709   1003
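As a cross-check, the CAGRs quoted in Figures 2 and 3 follow from the 2013 and 2018 endpoints over the five-year span; a minimal sketch (small differences, such as NFV hardware computing to 68% rather than 69%, reflect rounding in the published endpoints):

```python
# Verify the quoted CAGRs from the 2013 and 2018 endpoints in Figures 2 and 3.
# CAGR = (end / start) ** (1 / years) - 1, with years = 5 (2013 to 2018).
series = {
    "NFV hardware": (58, 786),    # quoted CAGR 69%
    "NFV software": (28, 418),    # quoted CAGR 72%
    "NFV services": (95, 1229),   # quoted CAGR 67%
    "SDN hardware": (104, 1405),  # quoted CAGR 68%
    "SDN software": (47, 593),    # quoted CAGR 66%
    "SDN services": (168, 1003),  # quoted CAGR 43%
}

for name, (start, end) in series.items():
    cagr = (end / start) ** (1 / 5) - 1
    print(f"{name}: {cagr:.0%}")
```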

2.1 VNI and data centre standardisation is important for future-proofing vNGNs

CSP networks are heavily regulated, critical infrastructures with strict service-level agreements (SLAs); NFV/SDN VNI must therefore deliver telco-standard scalability, performance and reliability. As such, VNI will play an important role in ensuring service continuity and performance in vNGNs by providing:

• carrier-grade availability of at least five 9s (99.999%), and up to seven 9s (99.99999%) – see the downtime arithmetic after this list

• real-time/low-latency processing throughput to support intense, mission-critical CSP workloads such as computing, networking, signalling, analytics and even media processing


• highly reliable, managed and robust hardware that provides disruption-free support, online plug-and-play (discovery, validation and self-management), hot swapping, upgrades and repair, and remote monitoring and control features

• NEBS Level 3 certification, which ensures that VNI hardware and the data centre environment are future-proofed for at least seven years – NEBS-compliant hardware typically has a ten-year lifespan and can handle telecoms workloads in non-stop processing mode. NEBS Level 3 has strict specifications for fire suppression, thermal margin testing, vibration resistance (earthquakes), airflow patterns, acoustic limits, failover and partial operational requirements (such as chassis fan failures), failure severity levels, radio frequency emissions and tolerances, and testing/certification requirements.
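To make the availability targets concrete, a minimal sketch converting each target into permitted downtime per year:

```python
# Convert the availability targets above into permitted downtime per year.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds (ignoring leap years)

for label, availability in [("five 9s", 0.99999), ("seven 9s", 0.9999999)]:
    downtime_s = SECONDS_PER_YEAR * (1 - availability)
    print(f"{label}: {downtime_s:.2f} s/year ({downtime_s / 60:.2f} min/year)")
```

Five 9s permits roughly 5.3 minutes of downtime per year; seven 9s permits only about 3 seconds.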

In addition to these carrier-grade requirements, CSPs must also focus on their data centres to further improve the total cost of ownership (TCO) of their future-proofed VNI by adopting the following IT data centre best practices:

• Increase hardware and rack standardisation: standardised hardware and racks provide uniformity, seamless scaling and easier calculation of floor space. Most CSP data centres today consist of fragmented and incompatible hardware platforms, such as legacy servers, purpose-built equipment, vendor-specific ATCA and standard rack-mounted servers. This lack of standardisation increases the cost of hardware procurement, upgrades and maintenance, and aggravates the standard data centre opex pain points, e.g. power, cooling and floor-space inefficiencies.

• Densify VNI (compute, storage and networking) with blade architecture: densification will be needed to optimise CSPs' power and cooling per square metre of floor space. The IT world migrated to standard COTS blade chassis architectures to improve processing density and opex efficiency; rack servers led to higher power, cooling and floor-space consumption, as well as increased rack-space consumption, because dedicated top-of-rack switches were needed to interconnect the multiple disconnected rack servers. Similarly, high-density blade chassis form factors with open interfaces should be implemented consistently across CSP VNI. However, many network equipment providers (NEPs) are building rack servers for VNI because this is simpler and faster to market than building a commercial NEBS Level 3 blade chassis architecture (a density sketch follows this list).

• Use modular and reusable hardware: this will provide CSPs with a reusable shared pool of homogeneous virtualised computing, storage and networking resources that can be scaled up/down and (de-)commissioned to VNFs autonomously, based on network and/or customer demand, thus maximising resource utilisation to defer capex and reduce opex. This can lay the foundation for a successful platform-migration network virtualisation deployment strategy, as detailed in Section 2.2.

• Apply EIA/TIA-942 data centre best-practice standards: CSPs have used this standard in their IT data centre builds and can extend and standardise it for their VNI data centre builds. The EIA/TIA-942 data centre build and management specifications can further improve CSP data centre standardisation and reduce expensive data centre builds, expansions and maintenance, as numerous global and local suppliers are trained in these specifications, and this competition will continue to increase pricing pressure on data centre services.
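To illustrate the densification argument, the sketch below compares processor sockets per rack unit for a blade chassis (using the 11U, 14-blade, two-socket configuration cited in Section 4) against equivalent 1U two-socket rack servers; the 2U allowance for top-of-rack switches is an assumption for illustration only:

```python
# Illustrative density comparison: blade chassis vs rack servers.
# Chassis dimensions follow the 11U, 14 half-wide two-socket configuration
# cited in Section 4; the 2U allowance for top-of-rack switches alongside
# 14 rack servers is an assumption for illustration, not a measured figure.

# Blade chassis: 14 two-socket blades in 11U, with switching integrated in-chassis.
blade_sockets_per_u = (14 * 2) / 11

# Rack servers: 14 x 1U two-socket servers plus an assumed 2U of ToR switches.
rack_sockets_per_u = (14 * 2) / (14 * 1 + 2)

print(f"Blade chassis: {blade_sockets_per_u:.2f} sockets/U")
print(f"Rack servers:  {rack_sockets_per_u:.2f} sockets/U")
```

On these assumptions the blade chassis delivers roughly 45% more sockets per rack unit, before counting the power and cooling gains.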

Telefónica has recognised the cost benefits of refreshing its global data centres to virtualisation-capable standardised platforms, and has spent 76% of its capex on network future-proofing. As a result, Telefónica achieved the following year-on-year results as of the first quarter of 2015: it reduced its total physical servers by 12%, closed five data centres and increased virtualisation (IT and network) by 11%. Similarly, other Tier 1 CSPs are moving to future-proof their VNI over the next three years, as per our NFV and SDN forecasts (see Figure 2 and Figure 3 above).


2.2 A platform-migration deployment strategy lends itself to VNI future-proofing

Figure 4 summarises the three main NFV/SDN deployment strategies under consideration, as identified by Analysys Mason's primary research with CSPs; of these, the platform-migration strategy has the largest number of long-term benefits.

Figure 4: Summary of main network virtualisation deployment strategies [Source: Analysys Mason, 2015]

• Service-led strategy (80% of CSPs considering): investment for a new service allows the introduction of NFV for that service in a greenfield-like implementation, isolated from existing network complexities. One CSP deployment driving this strategy is Korea Telecom's VoLTE service, built on a virtualised IMS and launched in 2014.

• Lifecycle-upgrade strategy (15%): investing in upgrading end-of-life infrastructure with VNFs to replace physical infrastructure. This approach does not require the CSP to have an explicit business case for network virtualisation if it can complete the upgrade with the same or a lower budget.

• Platform-migration strategy (5%): an IT-like approach in which the CSP defines a scalable virtualisation platform reference architecture and migrates infrastructure (VNFs) and services onto the platform over time. Migration is triggered by a combination of infrastructure and service investments, as in the two strategies above, but avoids the past practice of creating deployment silos. Telefónica's UNICA platform and AT&T's Universal Services Platform, a convergent digital voice platform, are examples of this strategy.

We expect that CSPs will use a combination of the deployment strategies in Figure 4 to meet their specific business and market needs in a timely manner, but so far CSPs are gravitating towards the service-led approach to seize the benefits of a new service launch quickly. However, the platform-migration strategy avoids the pitfall of recreating network silos and will thus reduce the duplication of data centre infrastructure and operations costs. A successful platform-migration strategy will require significant investment to design and build a future-proof, multi-vendor, multi-domain reference vNGN platform architecture that can evolve and adapt to future network and service requirements. A sound VNI is the necessary foundation of this reference vNGN platform architecture: it should support rapid deployment of different NFV/SDN use cases and applications by allowing CSPs to use different combinations of hypervisors, VNFs and MANO solutions from different vendors without incurring significant integration costs, and while avoiding vendor lock-in, as detailed in Section 3.

3. Hardware pricing transparency and open collaborations are critical to keep VNI TCO down

Today, the CSP network environment typically consists of monolithic, hardware-based network silos from multiple NEPs, each with its own dedicated element/network management systems (E/NMS), managed by legacy operational support systems (OSS). NFV/SDN technologies give CSPs an opportunity to build cross-domain VNI platforms and remove the pains of these traditional physical network silo architectures in vNGNs. However, as Figure 5 shows, the transition to vNGN works against the ETSI NFV ISG's original driver of capex savings from lower-cost VNI hardware, because multiple new software layers will be introduced – namely orchestration (service and network orchestrators), VNF managers, virtual infrastructure managers (VIMs) and hypervisors. Moreover, new vNGN OSS solutions will also be needed to support the co-existence of VNFs and physical network functions in vNGNs. As a result, increasing software opex is a major concern for the NFV/SDN business case, and vendors are uncertain how to price these new software layers, as NEPs will seek to recover hardware revenue lost to COTS hardware through hardware and software bundling (see Section 3.1).

Figure 5: Evolution of traditional physical network architecture to a vNGN architecture [Source: Analysys Mason, 2015]. The traditional stack comprises OSS, E/NMS and hardware. The vNGN stack comprises vNGN OSS (policy control, automation, VNI adaptors, OSS re-architecting); management and orchestration (MANO) – orchestrators and controllers, VNF manager(s) and NG-NMS; and the NFV infrastructure (NFVI) – VNFs, VIMs, hypervisors/NOS, infrastructure manager and hardware.

3.1 Hardware and software price bundling can obscure NFV/SDN business cases

As identified in our primary research, CSPs are uncertain about the benefits of network virtualisation in terms of TCO – in particular, whether an expected 33% VNI capex saving would be lost in the longer term to the greater opex of virtualisation software. CSPs believe that the cumulative cost of traditional physical network hardware and software, with its lower O&M costs, could potentially be less than the TCO of a vNGN, which offers an initial capex saving but incurs higher O&M costs. Leading NEPs and IT vendors that provide telecoms hardware and software have the most to lose in terms of hardware infrastructure revenue, and are building their own server hardware portfolios to protect their hardware businesses. Hardware and software price bundling is therefore a major concern for CSPs, as vendors can increase overall costs to recover 'lower' hardware sales through higher software opex. This increases vendors' margins and compromises CSPs' expected TCO from a COTS, cross-domain VNI platform architecture. To date, little in-depth analysis of the costs and benefits of network virtualisation has been done by the industry. Our high-level analysis, illustrated in Figure 6 below, suggests that retaining the existing physical network and systems while also investing in 'silo' network virtualisation could lead to overspending; in contrast, a holistic service-agility approach to vNGN migration can realise the maximum benefits of network virtualisation. As such, for conclusive NFV/SDN business cases, each hardware and software component cost must be transparent so that CSPs can clearly evaluate and justify their vNGN investments.1

1 Analysys Mason will be conducting NFV/SDN business-case analysis in 2015, thanks to better availability of data and demand for it from CSPs.


Figure 6: Illustrative comparison of cumulative costs of various approaches to network virtualisation [Source: Analysys Mason, 2015]. The figure plots cumulative costs over time for three approaches – a traditional physical network, a silo virtualised network and a service-agility approach for vNGNs – with the service-agility approach showing the largest cost savings; service-agility revenue benefits would be additional to the cost savings highlighted.
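Figure 6 is deliberately illustrative; the sketch below reproduces its shape with hypothetical cost figures – all numbers are assumptions for illustration, not Analysys Mason data – to show how the 33% capex saving discussed above can be eroded by a silo approach but preserved by a service-agility approach:

```python
# Hypothetical cumulative-cost comparison in the spirit of Figure 6.
# All figures are illustrative assumptions, not Analysys Mason forecast data.
YEARS = 7

scenarios = {
    # (initial capex, annual opex): silo virtualisation saves capex up front
    # but duplicates infrastructure and operations, so opex stays high;
    # a service-agility vNGN approach pairs the capex saving with lower opex.
    "Traditional physical network": (100, 20),
    "Silo virtualised network": (67, 26),       # ~33% capex saving, higher opex
    "Service-agility vNGN approach": (67, 14),  # same saving, shared platform opex
}

for name, (capex, opex) in scenarios.items():
    cumulative = [capex + opex * year for year in range(1, YEARS + 1)]
    print(f"{name}: year 1 = {cumulative[0]}, year {YEARS} = {cumulative[-1]}")
```

On these assumed figures, the silo approach overtakes the traditional network's cumulative cost within the period, while the service-agility approach stays cheapest throughout – the pattern Figure 6 illustrates.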

Traditionally, vendors that develop hardware and software solution stacks have concentrated their R&D and integration primarily on their own solutions, and on those of other major vendors only where dictated by CSPs. If CSPs continue to procure single-vendor, monolithic hardware and software solutions for their VNI, they will be encouraging vendor lock-in and repeating the errors of the past.

3.2 Ecosystems and partnerships will be crucial for NFV/SDN success

Most Tier 1 and Tier 2 CSPs are currently trialling NFV/SDN with proofs of concept (POCs) and a few limited launches, and are steadily progressing towards the commercialisation stage with scalable, operational, multi-vendor vNGNs. Industry collaboration between CSPs and vendors is key to creating common network virtualisation frameworks and open, standard interfaces to accelerate the advancement of NFV/SDN; however, these efforts remain scattered despite the collaboration and convergence efforts of standards bodies (ETSI NFV ISG, TM Forum) and open-source communities (OPNFV, OpenDaylight, OpenStack, Open Networking Foundation). To fully achieve the benefits of network virtualisation, CSPs need pre-integrated and interchangeable hardware and software components across the layers of the vNGN (VNI, VNF, MANO and vNGN OSS) that can be procured independently from a large variety of suppliers. A number of vendors are developing their own ecosystems to accelerate their solutions and the development of hardware and software solution stacks, such as HP's OpenNFV, Alcatel-Lucent's CloudBand Ecosystem Program, Amdocs' Network Cloud Ecosystem, Cyan's Blue Orbit (now part of Ciena's Agility Matrix ecosystem) and Huawei's NFV Open Labs.

The most successful vendor ecosystem to date – owing to its lack of ecosystem conflicts and the virtualisation capabilities of the x86 Intel architecture, which is at least 12 months ahead of the ARM architecture for NFV and SDN technologies – is the Intel Network Builders Program. The programme was created to accelerate vNGN network transformation by making it easier to build, enhance and operate VNI through cross-domain collaboration: fostering innovation, reducing development effort, advancing open standards, increasing interoperability and ultimately increasing service agility. To date, the Intel Network Builders Program is the most comprehensive initiative, with more than 160 members including cloud, enterprise, IT, NEP, BSS/OSS, systems integration, hardware, VNF, hypervisor, DPI and service assurance vendors. It is also the only collaboration effort with a strong VNI focus. The main 2015 milestone of the Intel Network Builders Program is the Intel Open Network Platform Server (Intel® ONP Server) Reference Architecture (RA) (see Figure 7), which provides a hardware/software template for creating and showcasing SDN and NFV server solutions. The Intel® ONP Server Reference Architecture Guide provides instructions for building the ONP Server system and software, test scripts and a set of benchmark performance test results.

Figure 7: Intel® ONP Server Reference Architecture environment enables innovative third-party solution development [Source: Analysys Mason, 2015]

4. Lenovo Flex System Carrier-Grade Chassis Profile

The Lenovo Flex System Carrier-Grade Chassis provides a compelling VNI solution that ensures maximum interoperability and easy integration with industry-standard hypervisors and MANO solutions, and provides VNI hardware pricing transparency. The chassis combines the design, efficiency, modularity and density benefits of the enterprise blade chassis with the carrier-grade performance, reliability and availability that are essential to realising the business and operational goals of NFV/SDN. Lenovo also provides Lenovo XClarity Administrator, which helps CSPs simplify, automate and speed up the provisioning of Lenovo infrastructure while reducing the opex required for ongoing administration. XClarity is a virtualised IT application that integrates easily with Lenovo servers and the Flex System, providing network-based auto-discovery of systems, real-time monitoring and alerts, updates and configuration, and deployment of industry-standard hypervisors and operating systems onto bare-metal compute nodes and servers. Figure 8 below details the key components of the Lenovo Flex System Carrier-Grade Chassis solution.


Figure 8: Overview of Lenovo Flex System Carrier-Grade Chassis and Lenovo XClarity Administrator [Source: Lenovo, 2015]

Lenovo XClarity Administrator (centralised resource management, with integrators and REST APIs):
• Carrier-grade/enterprise modes
• Discovery and inventory
• Monitoring
• Firmware updates and compliance
• Configuration management
• Deployment of OS to bare-metal servers

Flex System Carrier-Grade Chassis:
• 11U chassis, 14 bays
• 4 switch modules
• Redundant CMMs, power and cooling
• -48V DC power
• NEBS Level 3, ETSI Zone 4 earthquake rating
• Additional dust filtering
• 2 additional air flow slots in the bottom of the chassis
• Additional air ducting to direct air
• Recessed bottom to utilise current Flex Rail kits and fit into an industry-standard rack
• High physical density
• Positional awareness
• Virtual FC and NIC addresses
• Mixed processor support
• Hot add, hot swap
• Zoned cooling
• Ease of cabling
• Ease of insertion and removal
• High-bandwidth, low-latency networking
• Integrated management Ethernet
• Designed to support the future
• Additional filtering

The Lenovo Flex System Carrier-Grade Chassis and Lenovo XClarity Administrator provide the following benefits to CSPs that are planning to transform their networks into vNGNs:

• Ensure carrier-grade performance and availability in the vNGN: the chassis is ruggedised and future-proofed with NEBS Level 3 and ETSI certifications, and is operable within Earthquake Zone 4 areas. The chassis also provides online add/swap features and supports -48V DC power operation, as required in many CSP data centre environments.

• Reduce opex through virtualisation (cloud) densification: Flex System is optimised for network virtualisation, providing greater memory capacity for virtualisation density as well as high physical density (14 half-wide two-socket blades, 7 full-wide four-socket blades or 1 scalable eight-socket blade) to keep power, space and cooling costs down by reducing the physical footprint. The chassis also allows for a multitude of server (compute), storage and networking combinations to meet the varied needs of CSPs.


• Defer capex with a future-proof VNI: Flex System is a VMware- and Red Hat-certified chassis that gives CSPs the flexibility to repurpose it for future applications and service environments: it fully supports Intel's ten-year silicon roadmap, which Lenovo has executed against for more than ten years, despite not yet being part of the Intel Network Builders Program. Lenovo actively participates in the OPNFV, OpenDaylight and Open Networking Foundation (ONF) industry groups today, with plans to expand these partnerships in future, ensuring compliance with industry standards and interoperability with large third-party ecosystems.

• Standardised hardware and rack dimensions: Flex System allows CSPs to obtain IT data centre standardisation benefits from standardised hardware dimensions that fit in commodity data centre racks and can even be collocated with CSPs' IT data centre racks.

• Open management interfaces to XClarity: Lenovo provides open northbound REST APIs and Integrator software plugins for XClarity to extend existing automation, orchestration and service management solutions to Lenovo hardware (a hypothetical sketch follows this list). XClarity integration into VIMs such as VMware vCenter and Microsoft System Center enables systems management from the VIM, and increases service uptime by eliminating host maintenance windows through automated rolling updates and reboots. Service uptime can be enhanced further by triggering workload evacuation from affected hosts based on user-defined platform-level events in ESXi and Hyper-V clusters.

• SDN-enabled high-bandwidth, low-latency networking: Flex System provides integrated, pluggable network switches with embedded OpenFlow support and OpenDaylight compatibility that enable node-to-node (east–west) communications, eliminating the need for additional top-of-rack Ethernet switches.
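As an indication of how such northbound REST APIs are typically consumed, the sketch below polls a management appliance for its node inventory. It is a hypothetical sketch only: the endpoint path, field names and credentials are assumptions for illustration, not taken from the documented XClarity API.

```python
# Hypothetical sketch of querying a northbound management REST API such as
# the one XClarity exposes. The endpoint path, field names and credentials
# below are illustrative assumptions, not the documented XClarity API.
import requests

BASE_URL = "https://xclarity.example.net"  # assumed appliance address


def list_nodes(session):
    """Fetch the inventory of discovered compute nodes (assumed endpoint)."""
    resp = session.get(f"{BASE_URL}/nodes", timeout=30)
    resp.raise_for_status()
    return resp.json()


def main():
    with requests.Session() as session:
        session.auth = ("admin", "change-me")  # placeholder credentials
        for node in list_nodes(session):
            print(node.get("name"), node.get("powerState"))


if __name__ == "__main__":
    main()
```

The value of an open interface of this kind is that the same calls can be issued from an existing orchestrator or OSS, rather than through a vendor-specific console.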

Lenovo is well positioned to provide a future-proofed VNI platform that can facilitate a successful platform-migration NFV/SDN deployment for CSPs: it has no hardware and software bundling agenda, is open to collaboration with multiple partners, and is experienced in NEBS compliance, virtualisation and hardware densification (blade architectures).

5. Conclusion and recommendations

The key findings of this white paper are:

• NFV/SDN data centre builds need to be based on a future-proof VNI that is modular, standardised, resilient, robust (NEBS Level 3 certified) and of high virtualisation density (able to host many virtual instances, e.g. VNFs and VMs, per rack unit), which can reduce power, cooling and floor-space opex and ease data centre design, deployment, expansion/upgrade and operations (see Section 2).

• A platform-migration deployment strategy is the most beneficial to CSPs moving to a vNGN; this strategy requires a VNI platform architecture that is future-proof, multi-vendor and multi-domain.

• Hardware and software price bundling can obscure the NFV/SDN business case and result in higher TCO. As such, for conclusive NFV/SDN business cases, each hardware and software component cost must be transparent so that CSPs can clearly evaluate and justify their vNGN investments.

• Ecosystems and partnerships will be crucial for NFV/SDN success. Collaboration between CSPs, vendors, standards bodies and open-source communities is needed to realise scalable, operational, multi-vendor vNGNs.


About the authors

Glen Ragoonanan (Principal Analyst) is the lead analyst for Analysys Mason's Infrastructure Solutions, Service Delivery Platforms and Software-Controlled Networking research programmes. He joined Analysys Mason in 2008 and has worked as a consultant on projects covering next-generation IT and telecoms networks, systems and technologies for incumbents, new entrants, private companies, regulators and public-sector clients. His primary areas of specialisation include operations and business support systems (OSS/BSS) solution architecture and integration for business process re-engineering, business process optimisation, business continuity planning, and procurement and outsourcing operations and strategies. Before joining Analysys Mason, Glen worked for Fujitsu, designing, delivering and managing integrated solutions. Glen is a Chartered Engineer and project-management professional with an MSc from Coventry University.

Gorkem Yigit (Research Analyst) is part of the Telecoms Software research team, contributing to the Service Delivery Platforms, Software-Controlled Networking, Infrastructure Solutions and Service Assurance research programmes. He started his career in the telecoms industry in a graduate role at a leading telecoms operator, before joining Analysys Mason in late 2013. He has also written an academic paper on market acceptance of cloud enterprise resource planning (ERP) software, and earned a cum laude MSc in Economics and Management of Innovation and Technology from Bocconi University (Milan, Italy).

Analysys Mason does not endorse any of the vendors' products or services discussed in this white paper.


Published by Analysys Mason Limited • Bush House • North West Wing • Aldwych • London • WC2B 4PJ • UK
Tel: +44 (0)20 7395 9000 • Fax: +44 (0)20 7395 9001 • Email: [email protected] • www.analysysmason.com/research
Registered in England No. 5177472

© Analysys Mason Limited 2015. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means – electronic, mechanical, photocopying, recording or otherwise – without the prior written permission of the publisher.

Figures and projections contained in this report are based on publicly available information only and are produced by the Research Division of Analysys Mason Limited independently of any client-specific work within Analysys Mason Limited. The opinions expressed are those of the stated authors only.

Analysys Mason Limited recognises that many terms appearing in this report are proprietary; all such trademarks are acknowledged and every effort has been made to indicate them by the normal UK publishing practice of capitalisation. However, the presence of a term, in whatever form, does not affect its legal status as a trademark.

Analysys Mason Limited maintains that all reasonable care and skill have been used in the compilation of this publication. However, Analysys Mason Limited shall not be under any liability for loss or damage (including consequential loss) whatsoever or howsoever arising as a result of the use of this publication by the customer, his servants, agents or any third party.


About Analysys Mason

Analysys Mason is a trusted adviser on telecoms, media and technology (TMT). We work with our clients, including communications service providers (CSPs), regulators and end users, to:

• design winning strategies that deliver measurable results
• make informed decisions based on market intelligence and analytical rigour
• develop innovative propositions to gain competitive advantage.

We have more than 220 staff in 12 offices and are respected worldwide for exceptional quality of work, independence and flexibility in responding to client needs. For 30 years, we have been helping clients in more than 100 countries to maximise their opportunities.

Consulting

Our focus is exclusively on TMT. We support multi-billion-dollar investments, advise clients on regulatory matters, provide spectrum valuation and auction support, and advise on operational performance, business planning and strategy. We have developed rigorous methodologies that deliver tangible results for clients around the world. For more information, please visit www.analysysmason.com/consulting.

Research

We analyse, track and forecast the different services accessed by consumers and enterprises, as well as the software, infrastructure and technology delivering those services. Research clients benefit from regular and timely intelligence in addition to direct access to our team of expert analysts. Our dedicated Custom Research team undertakes specialised and bespoke projects for clients. For more information, please visit www.analysysmason.com/research.
