Kernel bypass networking with FD.io and VPP

By Andree Toonk on 17 Apr 2020

Category: Tech matters




Over the last few years, I have experimented with various flavours of userland, kernel-bypass networking. In this post, we’ll take FD.io for a spin.

I will compare the results with those in my last blog post, in which I looked at how much a vanilla Linux kernel could do in terms of forwarding (routing) packets. I observed that on Linux, to achieve 14Mpps, I needed roughly 16 cores for a unidirectional test and 26 cores for a bidirectional test. In this post, I’ll look at what I need to accomplish the same with FD.io.

Userland networking

The principle of userland networking is that the networking stack is no longer handled by the kernel, but instead by a userland program. The Linux kernel is incredibly feature-rich, but for fast networking, it also requires a lot of cores to deal with all the (soft) interrupts.

Several of the userland networking projects rely on Data Plane Development Kit (DPDK) to achieve incredible numbers. One reason why DPDK is so fast is that it doesn’t rely on interrupts. Instead, it’s a poll mode driver, meaning it’s continuously spinning at 100%, picking up packets from the network interface controller (NIC).

A typical server nowadays comes with quite a few CPU cores, and dedicating one or more cores to picking packets off the NIC is, in some cases, entirely worth it, especially if the server needs to process lots of network traffic.

So, DPDK provides us with the ability to send and receive packets efficiently and extremely quickly. But that’s it! Since we’re not using the kernel, we now need a program that takes the packets from DPDK and does something with them, such as a virtual switch or router.

FD.io

FD.io is an open-source software data plane developed by Cisco. At the heart of FD.io is something called Vector Packet Processing (VPP).

The VPP platform is an extensible framework that provides switching and routing functionality. VPP is built on a ‘packet processing graph’. This modular approach means that anyone can ‘plug in’ new graph nodes, which makes extensibility rather simple and means that plugins can be customized for specific purposes.

FD.io can use DPDK as the driver layer for the NICs and can then process packets at high rates on a commodity CPU. It’s important to remember that it is not a fully-featured router; that is, it doesn’t really have a control plane. Instead, it’s a forwarding engine. Think of it as a router line card, with the NICs and the DPDK drivers as the ports. VPP allows us to take a packet from one NIC to another, transform it if needed, do table lookups, and send it out again. There are APIs that allow you to manipulate the forwarding tables, or you can use the Command Line Interface (CLI) to, for example, configure static routes, VLANs, VRFs and so forth.
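For example, adding a static route or moving an interface into its own VRF is a one-liner in vppctl. As a rough sketch (the prefix, table ID, and interface name below are just placeholders, and the exact syntax can differ slightly between VPP releases):

ip route add 192.0.2.0/24 via 10.10.10.3 TenGigabitEthernet19/0/1
ip table add 10
set interface ip table TenGigabitEthernet19/0/1 10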

Test setup

I’ll use mostly the same setup as in my previous test. Again, I’ll use two n2.xlarge.x86 servers from packet.com and our DPDK traffic generator. The setup is shown below.

Figure 1 — Test setup.

I’m using the VPP code from the FD.io master branch and have installed it on a vanilla Ubuntu 18.04 system following these steps.

Test results — packet forwarding using VPP

Now that I have my test setup ready to go, it’s time to start testing!

To start, I configured VPP with ‘vppctl’ like this:

set int ip address TenGigabitEthernet19/0/1 10.10.10.2/24
set int ip address TenGigabitEthernet19/0/3 10.10.11.2/24
set int state TenGigabitEthernet19/0/1 up
set int state TenGigabitEthernet19/0/3 up
set ip neighbor TenGigabitEthernet19/0/1 10.10.10.3 e4:43:4b:2e:b1:d1
set ip neighbor TenGigabitEthernet19/0/3 10.10.11.3 e4:43:4b:2e:b1:d3 

(Note, I need to set static ARP entries since the packet generator doesn’t respond to ARP.)

That’s it! Pretty simple right?
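Before sending any traffic, it’s worth sanity-checking the configuration from vppctl. On my build, something along these lines shows the interface state, the assigned addresses, the static neighbour entries, and the resulting FIB (exact command names and output vary a little between VPP releases):

show interface
show interface address
show ip neighbors
show ip fib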

OK, time to look at the results, just like we did before: a single-flow test, both unidirectional and bidirectional, as well as a 10,000-flow test.

Figure 2 — VPP forwarding test results.

Those are some remarkable numbers! With a single flow, VPP can process and forward about 8Mpps; not bad. The more realistic test, with 10,000 flows, shows that it can handle 14Mpps with just two cores. To get to a full bidirectional scenario, where both NICs are sending and receiving at line rate (28Mpps per NIC in total), we need three cores and three receive queues on the NIC. To achieve this last scenario with Linux, we needed approximately 26 cores. Not bad, not bad at all!

Figure 3 — Traffic generator on the left, VPP server on the right. This shows the full line-rate bidirectional test: 14Mpps per NIC, while VPP uses three cores.
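To see where those cores are going, vppctl can also report the worker threads and per-graph-node statistics. Roughly the following shows which core each worker runs on and how many vectors (packets) each node processes per call; I’m treating the exact output format as release-dependent:

show threads
show runtime

In a forwarding test like this, you would expect most of the time to show up in graph nodes such as dpdk-input, ip4-lookup, and ip4-rewrite.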

Test results — NAT using VPP

In my previous post, I noted that when doing SNAT on Linux with iptables, I got as high as 3Mpps per direction, needing about 29 cores per direction. This showed me that packet rewriting is significantly more expensive than plain forwarding. Let’s take a look at how VPP does Network Address Translation (NAT).

To enable NAT on VPP, I used the following commands:

nat44 add interface address TenGigabitEthernet19/0/3
nat addr-port-assignment-alg default
set interface nat44 in TenGigabitEthernet19/0/1 out TenGigabitEthernet19/0/3 output-feature
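Once NAT44 is enabled, its state can be inspected from vppctl too. Commands roughly like these (names may differ slightly between releases) list the NAT pool addresses, the inside/outside interfaces, and the active translations:

show nat44 addresses
show nat44 interfaces
show nat44 sessions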

My first test is with a single flow in one direction. With that, I’m able to get 4.3Mpps, roughly half of what we saw in the forwarding test without NAT. It’s no surprise this is slower, given the additional work needed. (For comparison, with Linux iptables I was seeing about 1.1Mpps.)

A single flow isn’t very representative of a real-life NAT scenario, where you’d be translating many sources. So for the next measurements, I’m using 255 different source IP addresses and 255 different destination IP addresses, as well as varying port numbers; with this setup, the NAT code sees about 16k sessions.

I now see the number drop to 3.2Mpps; more flows mean more NAT work. Interestingly, this number is exactly the same as I saw with iptables. There is, however, one big difference: with iptables the system was using about 29 cores, while in this test I’m only using two. That low number of workers is also the reason I’m capped. To remove that cap, I add more cores and validate that the VPP code scales horizontally. Eventually, I need 12 cores to sustain 14Mpps reliably.

Figure 4 — VPP forwarding with NAT test results.

Below is the relevant VPP configuration controlling the number of cores used by VPP. Note that I isolated the cores allocated to VPP so that the kernel wouldn’t schedule anything else on them.

cpu {
    main-core 1
    # CPU placement:
    corelist-workers 3-14
    # Also added this to grub: isolcpus=3-31,34-63
}

dpdk {
    dev default {
        # RSS, number of queues
        num-rx-queues 12
        num-tx-queues 12
        num-rx-desc 2048
        num-tx-desc 2048
    }
    dev 0000:19:00.1
    dev 0000:19:00.3
}

plugins {
    plugin default { enable }
    plugin dpdk_plugin.so { enable }
}

nat {
    endpoint-dependent
    translation hash buckets 1048576
    translation hash memory 268435456
    user hash buckets 1024
    max translations per user 10000
}
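With this in place, you can check that DPDK actually claimed both NICs and see how the 12 RX queues are spread across the worker threads. On my build, something like the following does the trick (output format varies by release):

show hardware-interfaces
show interface rx-placement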

Conclusion — it’s crazy fast!

In this post, I looked at VPP from the FD.io project as a userland forwarding engine.

VPP is one example of a kernel bypass method for processing packets. It works closely with and further augments DPDK.

The VPP code is feature-rich, especially for a kernel bypass packet forwarder. Most of all, it’s crazy fast. I needed just three cores to have two NICs forward at full line rate (14Mpps) in both directions. Compared to the Linux kernel, which needed 26 cores, that is an almost 9x increase in performance.

I noticed the results were even better when using NAT. In Linux, I wasn’t able to get any higher than 3.2Mpps, for which I needed about 29 cores. With VPP, I can do 3.2Mpps with just two cores and get to full line-rate NAT with 12 cores.

I think FD.io is an interesting and exciting project, and I’m a bit surprised it’s not more widely used. One of the reasons is likely that there’s a bit of a learning curve. But if you need high-performance packet forwarding, it’s certainly something to explore! Perhaps this is the start of your VPP project? If so, let me know!

Adapted from the original post, which appeared on Medium.

Andree Toonk is Senior Manager Network Engineering at OpenDNS, now Cisco Cloud Security, and Founder of BGPMon.




