3 good (technical) reasons why you need an IP fast path
By Tomas Hedqvist
Why have Nokia, ARM, Cavium, Marvell and others invested in Open Fast Path (OFP), an open source project aiming to provide an accelerated IP stack implemented in Linux user space?
Major hardware and network equipment vendors engaging in open source projects is hardly a new phenomenon, but why is there a need for a separate IP stack?
First of all there is, obviously, a need for speed. Networked applications in areas such as mobile infrastructure (RAN backhaul, BBU, RNC, EPC nodes etc.), networking infrastructure (routers, virtual switches etc.), and network functions (edge caches, session border controllers, firewalls, web servers, load balancers etc.) all depend on a high-performance data plane. Growing volumes of data, higher line rates, and an increased per-packet workload introduced, for example, by IPsec drive a need for solutions with higher throughput and lower latency.
But technical challenges prevent the Linux kernel stack from forwarding and terminating IP traffic as efficiently as the next generation of network applications requires. Here are three reasons, from a technical perspective, why OFP is needed and which challenges it solves.
The Linux kernel stack, as a general-purpose IP stack, is designed to handle all possible cases and protocols, including the odd corner cases. This makes it very versatile, but it also slows it down. For the control plane this is normally fine, since the control plane needs to be flexible and support a variety of protocols while throughput is generally of less concern. The data plane, on the other hand, is all about high throughput and low latency.
OFP deliberately limits its support for protocols and use cases to allow stronger optimizations. This reduces flexibility, but a data plane terminating UDP traffic, for example, does not need that flexibility; it just needs to terminate as many UDP packets as quickly as possible. Whatever falls outside the scope of the fast path is handled by the slow path, but this is typically traffic that is less sensitive to latency and throughput.
OFP is implemented in user space. This is important because it avoids the latencies incurred at the kernel/user-space boundary and by internal handling in the kernel: every system call into the kernel stack requires a context switch, and frequent locking in the kernel stack adds latency and reduces throughput. OFP uses ODP or DPDK to access IP packets directly from user space; by bypassing the kernel and passing packets straight from the network hardware to the application, it improves throughput and latency considerably.
Another characteristic of OFP is that it scales linearly over an increasing number of cores, connections, or endpoints. Already a 10Gb line rate equals a maximum of 14.88 Mpps (with minimum-size 64-byte frames), which corresponds to a total processing budget of roughly 67 ns per packet. It is not difficult to see that it is impossible to process packets at line rate on a single core, or even a fair number of cores. Scaling packet processing in the kernel stack is not a trivial issue though, and while there are techniques with different pros and cons, they all require very complex configuration, and sometimes also hardwired hardware dependencies that limit portability.
So what makes Nokia, Marvell, Cavium, ARM, Linaro and Enea join forces in the OFP project? A specialized fast path that scales linearly solves many of the shortcomings of the kernel IP stack, and OFP addresses just that. It has proven to deliver performance many times that of the slow path and is now a stable project used in high-profile applications. For example, Nokia and Cavium demonstrated a 5G backhaul based on OFP at Linaro Connect last year. And Enea has provided an optimized, commercial version of OFP, "Enea Fast Path", to several customers integrating it in their solutions for 5G access and core networks.
More about Open Fast Path here
More about Enea Fast Path and benchmark reports here