The team behind Google’s Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control algorithm is requesting feedback from networkers on a set of recently released Linux TCP BBR patches they are testing.
BBR is a congestion control algorithm developed by Google that paces TCP sending based on estimates of a path’s bottleneck bandwidth and round-trip time, rather than reacting to packet loss alone. The aim is to keep the bottleneck link busy without filling its queues, resulting in higher throughput, lower latency, and better quality of experience across Google services.
As stated in a post on the public bbr-dev list, the developers are interested in receiving feedback on the code, and are particularly keen to hear how the patches affect throughput over low-RTT paths with a WiFi hop. In their own testing, the patches improved throughput where the TCP sender is on Ethernet and the receiver is on a WiFi network.
The team is also interested in any comparative throughput results for BBR and CUBIC, the current default congestion control algorithm in Linux.
Testing for higher WiFi throughput and lower queuing delays
There are two main efforts reflected in the patches, which are designed to be applied on top of the Linux net-next branch after testing: higher throughput for WiFi and lower queuing delays (see the extract from the post below):
1: Higher throughput for WiFi and other paths with aggregation
Aggregation effects are extremely common with WiFi, cellular, and cable modem link technologies, ACK decimation in middleboxes, and LRO and GRO in receiving hosts. The aggregation can happen in either direction, data or ACKs, but in either case, the aggregation effect is visible to the sender in the ACK stream.
Previously, BBR’s sending was often limited by cwnd under severe ACK aggregation/decimation because BBR sized the cwnd at 2*BDP. If packets were ACKed in bursts after long delays, then BBR stopped sending once 2*BDP was in flight, leaving the bottleneck idle for potentially long periods. Note that loss-based congestion control does not have this issue because when facing aggregation it continues increasing cwnd after bursts of ACKs, growing cwnd until the buffer is full.
To achieve good throughput in the presence of aggregation effects, this new algorithm allows the BBR sender to put extra data in flight to keep the bottleneck utilized during silences in the ACK stream that it has evidence to suggest were caused by aggregation.
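The mechanism described above can be illustrated with a short Python sketch. This is not the kernel implementation; the class, method names, and the epoch-reset heuristic are illustrative assumptions. The idea it shows is the one in the extract: measure how far the ACK stream runs ahead of what the estimated bandwidth alone would deliver, keep the maximum of recent such "extra acked" samples, and let the sender hold that much additional data in flight on top of the usual 2*BDP cwnd.

```python
from collections import deque

class AggregationEstimator:
    """Illustrative sketch (not kernel code): track how far the ACK stream
    runs ahead of the rate-based expectation, and let the sender keep that
    much extra data in flight to ride out silences caused by aggregation."""

    def __init__(self, window=10):
        # Recent "extra acked" samples; the max over the window survives
        # as the aggregation allowance.
        self.samples = deque(maxlen=window)
        self.epoch_start = None
        self.acked_in_epoch = 0

    def on_ack(self, now, acked_bytes, est_bandwidth):
        """Record an ACK of acked_bytes at time now (seconds), given an
        estimated bottleneck bandwidth in bytes/second."""
        if self.epoch_start is None:
            self.epoch_start = now
            self.acked_in_epoch = 0
        self.acked_in_epoch += acked_bytes
        # Bytes we would expect to have been ACKed at the estimated rate.
        expected = est_bandwidth * (now - self.epoch_start)
        extra = max(0.0, self.acked_in_epoch - expected)
        if self.acked_in_epoch < expected:
            # ACKs have fallen behind the rate estimate: start a new epoch.
            self.epoch_start = now
            self.acked_in_epoch = 0
        self.samples.append(extra)
        return extra

    def extra_acked(self):
        return max(self.samples, default=0.0)

def cwnd_bytes(bdp_bytes, estimator):
    # BBR v1.0 sized cwnd at 2*BDP; the patched behaviour adds headroom
    # for the measured aggregation.
    return 2 * bdp_bytes + estimator.extra_acked()
```

In this toy model, a burst of ACKs arriving faster than the bandwidth estimate predicts produces a large `extra` sample, which then pads the cwnd for the next several rounds, so the sender no longer stalls waiting for delayed, batched ACKs.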
2: Lower queuing delays by frequently draining excess in-flight data
In BBR v1.0 the ‘drain’ phase of the pacing gain cycle holds the pacing gain to 0.75 for essentially 1*min_rtt (or less if inflight falls below the BDP).
This patch modifies the behaviour of this ‘drain’ phase to attempt to ‘drain to target’, adaptively holding this ‘drain’ phase until inflight reaches the target level that matches the estimated BDP (bandwidth-delay product).
This can significantly reduce the amount of data queued at the bottleneck, and hence reduce queuing delay and packet loss, in cases where there are multiple flows sharing a bottleneck.
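The difference between the two drain behaviours can be sketched with a toy fluid model in Python, under simplifying assumptions that are mine rather than the patch authors': pacing at a drain gain of 0.75 sends roughly 0.75*BDP per min_rtt while the bottleneck delivers about 1*BDP, so inflight shrinks by about (1 - 0.75)*BDP per round.

```python
def drain_rtts_to_target(inflight, bdp, drain_gain=0.75):
    """Toy fluid model (not kernel code): count how many min_rtt rounds of
    pacing at drain_gain it takes to bring inflight down to the BDP target.
    Each round, roughly drain_gain*BDP is sent while ~1*BDP is delivered,
    so inflight falls by (1 - drain_gain)*BDP."""
    rtts = 0
    while inflight > bdp:
        inflight = max(bdp, inflight - (1 - drain_gain) * bdp)
        rtts += 1
    return rtts
```

For example, starting from 2*BDP in flight with a BDP of 100 packets, the model needs four rounds to reach the target, whereas BBR v1.0 would have exited drain after roughly one round with about 0.75*BDP of excess still queued at the bottleneck. That residual queue is the extra delay and loss the 'drain to target' patch aims to remove.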
For more information on BBR developments and to contribute feedback, join the BBR Development Google Group, which also has details on how to compile and build a net-next kernel with TCP BBR for Linux. And be sure to watch a recent update on BBR work at Google, presented at IETF 101.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.