4 Steps to Shorter Time-to-Revenue

New network architectures based on NFV/SDN aim to increase agility and innovation, while reducing cost and complexity for computing at the network edge. This calls for NFV infrastructure software optimized for higher compute density and networking performance, leveraging Commercial Off-The-Shelf (COTS) hardware.

By Fredrik Ehrenstråle

So what should operators, system integrators, and service providers focus on to achieve shorter time-to-revenue, and how?

These steps will facilitate their journey:

Step 1: Ensure flexible service deployment

There are three typical deployment scenarios.

  • Customer premises only: all services virtualized on the customer premises, using the Central Office for control.
  • PoP/Central Office only: all services at the Central Office, making the Customer Premises Equipment (CPE) as simple as possible.

These two scenarios are typical starting points for building virtual CPE (vCPE); over time, services normally migrate into the Central Office.

  • Distributed: services can run at either location, providing maximum deployment flexibility. The system chooses the location of each VNF based on its characteristics and function.
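The placement logic of the distributed model can be sketched as a simple policy function. The VNF attributes and the threshold below are illustrative assumptions, not part of any standard or product:

```python
# Illustrative sketch: choose where to place a VNF in a distributed vCPE
# deployment. The attribute names and the core-count threshold are
# hypothetical examples, not a real orchestrator API.

def choose_location(vnf):
    """Return 'customer-premises' or 'central-office' for a VNF
    described as a dict of characteristics."""
    # Latency-critical functions (e.g. a local firewall) stay close
    # to the subscriber.
    if vnf.get("latency_sensitive"):
        return "customer-premises"
    # Compute-heavy functions (e.g. DPI) go to the Central Office,
    # where pooled COTS capacity is available.
    if vnf.get("cpu_cores", 1) > 2:
        return "central-office"
    # Default: centralize, to keep the CPE as simple as possible.
    return "central-office"

print(choose_location({"name": "firewall", "latency_sensitive": True}))
print(choose_location({"name": "dpi", "cpu_cores": 8}))
```

A real system would weigh far more inputs (link bandwidth, licensing, available CPE capacity), but the shape of the decision is the same.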

Step 2: Avoid vendor lock-in with standard APIs and protocols

NFV drives requirements for optimal resource use and for the freedom to choose, and later change, vendors without jeopardizing existing investments. Standard APIs make it easier to build systems that support multiple vendors and reduce the risk of vendor lock-in.

  • DPDK provides architecture-independent, optimized networking performance.
  • OPNFV is key to building NFV on open source and to integrating the Central Office NFVI platform with the orchestration layer.
  • NETCONF is a standard network management protocol and a good option for VNF lifecycle management and for CPE-to-Central Office integration.
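To make the NETCONF option concrete, the sketch below builds the `<config>` subtree carried inside a NETCONF `edit-config` RPC for starting a VNF. The `urn:example:vnf-manager` namespace and element names are hypothetical; a real deployment would use the YANG model published by the CPE vendor:

```python
# Sketch of a NETCONF <edit-config> payload for VNF lifecycle management.
# Only the payload construction is shown; transport (SSH, hello exchange)
# is handled by a NETCONF client library.
import xml.etree.ElementTree as ET

NS = "urn:example:vnf-manager"  # hypothetical YANG module namespace

def build_edit_config(vnf_name, state):
    """Build the <config> subtree for a NETCONF edit-config operation."""
    config = ET.Element("config",
                        xmlns="urn:ietf:params:xml:ns:netconf:base:1.0")
    vnf = ET.SubElement(config, "vnf", xmlns=NS)
    ET.SubElement(vnf, "name").text = vnf_name
    ET.SubElement(vnf, "admin-state").text = state
    return ET.tostring(config, encoding="unicode")

payload = build_edit_config("firewall-1", "started")
print(payload)
```

Because the payload is plain XML against a published model, the same request works against any vendor's CPE that implements the model, which is exactly the lock-in protection standard protocols buy you.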

Step 3: Use centralized control and remote management to lower OpEx

Orchestrating the edge involves multiple integration points. The option with the smallest footprint is to integrate through standard APIs and protocols such as NETCONF and REST, directly in the orchestrator or in the NFV platform. The right choice depends on how the orchestration and NFV platform are implemented, so a standalone CPE should provide multiple ways of integrating with a Central Office to support a broad range of implementations.

As an alternative, an OpenStack-based Central Office typically integrates with the customer premises using OpenStack's internal APIs and services.
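A REST integration point can be sketched with nothing but the standard library. The endpoint path and JSON fields below are hypothetical placeholders for whatever the orchestrator and CPE actually expose; the request is built but not sent:

```python
# Sketch of a REST call from a central orchestrator to a remote CPE.
# The URL path and JSON body are illustrative assumptions, not a real API.
import json
import urllib.request

def build_vnf_request(base_url, vnf_name, action):
    """Build (but do not send) an HTTP request asking the CPE to change
    a VNF's lifecycle state."""
    body = json.dumps({"vnf": vnf_name, "action": action}).encode()
    return urllib.request.Request(
        url=f"{base_url}/vnfs/{vnf_name}/actions",  # hypothetical path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_vnf_request("https://cpe.example.net", "firewall-1", "restart")
print(req.method, req.full_url)
```

Centralizing this kind of call in the orchestrator is what lets one operations team manage thousands of remote CPEs without truck rolls, which is where the OpEx saving comes from.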

Step 4: Leverage white box and COTS-based implementations to lower CapEx

Providing on-demand deployment of network functions previously required purpose-built appliances. This equipment is now being replaced by cost-effective Commercial Off-The-Shelf (COTS) hardware, with highly integrated SoCs (on both ARM and x86 architectures), high-performance processing, exceptional power efficiency, and hardware accelerators abstracted by standard APIs.

Virtualized networking performance depends on the hardware capacity at the customer premises, and there are several ways to ensure that the software makes optimal use of that hardware:

  • Minimize resource utilization. Decrease the kernel size and remove unnecessary services to reduce RAM and CPU footprint. Using containers reduces memory utilization in the VNFs.
  • Use optimized open source implementations to build the data path. DPDK, ODP, OVS, and OFP all provide a solid foundation for a data path.
  • Use software partitioning and core isolation to optimize system performance. Isolating specific tasks removes system-level bottlenecks and lets the platform deliver its full performance.
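Core isolation can be demonstrated from user space in a few lines (Linux only). A production data path would additionally boot with kernel parameters such as `isolcpus=` and pin each worker thread individually; this sketch just pins the current process:

```python
# Minimal sketch of core isolation on Linux: restrict a process to one
# dedicated CPU so data-path work is not disturbed by other tasks.
# A real system would combine this with isolcpus= at boot and per-thread
# pinning in the data-path framework.
import os

def isolate_on_cpu(cpu):
    """Pin the calling process to a single CPU core and return the
    resulting affinity set."""
    os.sched_setaffinity(0, {cpu})   # 0 = the current process
    return os.sched_getaffinity(0)

print(isolate_on_cpu(0))  # the process now runs only on CPU 0
```

Keeping the data-path cores free of interrupts, housekeeping daemons, and other VNFs is what turns raw COTS capacity into predictable packet-processing performance.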