The OpenBSD packet filter (PF) was introduced a little more than 20 years ago as part of OpenBSD 3.0. In this post, we continue on from Part 1 with more of the PF features and tools that I have enjoyed using.
A configuration that learns from network traffic seen and adapts to conditions
With PF, you can create a network that learns. Fairly early in PF's history, it occurred to the developers that the network stack already collects and keeps track of information about the traffic it sees, and that the filtering engine could be made to actively monitor that data and act on specified changes. So the state-tracking options entered the pf.conf repertoire in their initial form with the OpenBSD 3.7 release.
A common use case: when you run an SSH service, or really any kind of listening service with the option to log in, you will see some number of failed authentication attempts that generate noise in the logs. Password guessing, or as some of us say, password groping, can turn out to be pretty annoying even if the miscreants do not actually manage to compromise any of your systems. So, to eliminate the noise in our logs, we turn to the data that is available in the state table anyway: we track the state of active connections, and we act on limits you define, such as the number of connections from a single host over a set number of seconds.
The action could be to add the source IP that tripped the limit to a table. Additional rules could then subject the members of that table to special treatment. Since that time, my Internet-facing rule sets have tended to include variations on:
table <bruteforce> persist
block quick from <bruteforce>
# $localnet and $tcp_services are macros defined earlier in pf.conf
pass inet proto tcp from any to $localnet port $tcp_services \
        flags S/SA keep state \
        (max-src-conn 100, max-src-conn-rate 15/5, \
         overload <bruteforce> flush global)
… which means that any host that tries more than 100 simultaneous connections or more than 15 new connections over five seconds is added to the table and blocked, with any existing connections terminated.
It is a good practice to let table entries in such setups expire eventually. How long entries stay is entirely up to you.
At first, I set expiry at 24 hours, but with password gropers like those caught by this rule being what they are, I switched a few years ago to four weeks at first, then upped it again a few months later to six weeks. Groperbots tend to stay broken for that long. And since they target any service you may be running, state-tracking options with overload tables can be useful in a lot of non-SSH contexts as well.
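The table maintenance itself is handled with pfctl(8). A minimal sketch, with the six-week expiry expressed in seconds and the table name taken from the rules above:

# list the current members of the bruteforce table
pfctl -t bruteforce -T show

# delete entries added more than six weeks
# (6 * 7 * 86400 = 3628800 seconds) ago
pfctl -t bruteforce -T expire 3628800

Running the expire command from a daily crontab(5) entry keeps the table from growing without bound.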
A point that observers often miss is that with this configuration, you have a firewall that learns from the traffic it sees and adapts to network conditions.
It is also worth noting that state tracking actions can be applied to all TCP traffic and that they can be useful for essentially all services.
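As a sketch of how the same trick might look for a web service (the table name, the limits, and the $webserver macro here are illustrative assumptions, not values from the original setup):

table <webabusers> persist
block quick from <webabusers>
# $webserver is a placeholder macro for your web server's address
pass inet proto tcp from any to $webserver port { www, https } \
        flags S/SA keep state \
        (max-src-conn 200, max-src-conn-rate 100/10, \
         overload <webabusers> flush global)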
The buzzwordability potential in these learning configurations is enormous, and I for one fail to see why the big names have not copied or imitated this feature and greytrapping (which we will look at later) and capitalized on products with those features.
The article Forcing the password gropers through a smaller hole with OpenBSD’s PF queues has a few suggestions on how to handle noise sources with various other services. More on queues in a few moments.
The adaptive firewall and the greytrapping game
At the risk of showing my age, I must admit that I have more or less always run a mail service. Once TCP/IP networking became available in some form for even small businesses and individuals during the early 1990s, running a mail service was simply one of those things you would do once you were connected. Setting up an SMTP service (initially wrestling with sendmail and its legendary sendmail.cf configuration file) with accompanying POP3 and/or IMAP service was the done thing.
Over time the choice of mail server software changed, and we introduced content filtering to beat the rise of trashy, scammy spam mail and, since most clients ran that operating system, mail-borne malware. But even with state-of-the-art content filtering, some unwanted messages would make it into users' inboxes often enough to be annoying.
So when OpenBSD 3.3 shipped with the initial version of spamd(8), it was quite a relief for people of my job category, even if that first version would only load lists of known bad senders' IP addresses and stutter at them one byte per second until the other side gave up.
Later versions introduced greylisting — answering SMTP connections from previously unknown senders with a temporary local error code and only accepting delivery if the same host tried again — which reduced the load on the content filtering machines significantly. The real fun started with the introduction of greytrapping in the version of spamd(8) that shipped with OpenBSD 3.7.
Greytrapping is yet another adaptive or learning feature. The system identifies bad actors by comparing the destination email address in incoming SMTP traffic from unknown or already greylisted hosts with a list of known invalid addresses in the domains the site serves. The spamdb(8) command was extended with features to add addresses to, and delete them from, the spamtrap list.
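spamdb(8) is also the tool for inspecting the current state of the database. A few illustrative invocations, with a made-up trap address:

# add an address to the spamtrap list
spamdb -T -a "misspelled-user@example.com"

# delete a spamtrap address you no longer want
spamdb -T -d "misspelled-user@example.com"

# list database entries; currently tarpitted hosts
# show up as TRAPPED records
spamdb | grep TRAPPED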
Greytrapping was an extremely welcome new feature, and I adopted it eagerly soon after it became available. Initially, I fished the spamtrap addresses out of my mail server logs, from entries produced by bounce messages that themselves turned out to be undeliverable at our end since the recipient did not exist. After a few weeks, I started publishing both the list of spamtraps and an hourly dump of currently trapped IP addresses.
The setup is amazingly easy. On a typical gateway in front of a mail server, you instrument your /etc/pf.conf with a few lines, usually at the top:
table <spamd-white> persist
table <nospamd> persist file "/etc/mail/nospamd"

# divert incoming SMTP connections to spamd(8) by default;
# the later (and therefore winning) rules exempt known good senders
pass in on egress proto tcp to any port smtp \
        divert-to 127.0.0.1 port spamd
pass in on egress proto tcp from <nospamd> to any port smtp
pass in log on egress proto tcp from <spamd-white> to any port smtp
pass out log on egress proto tcp to any port smtp
Here we even suck in a file that contains the IP addresses of hosts that should not be subjected to the spamd treatment.
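The nospamd file uses PF's ordinary table file format: one address or network per line, with # starting a comment. A hypothetical example:

# /etc/mail/nospamd: hosts and networks that bypass spamd
192.0.2.25
203.0.113.0/24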
In addition, you will need to set up the correct options for spamd(8) and spamdlogd(8) in your /etc/rc.conf.local:
spamd_flags="-v -G 2:8:864 -n "mailwalla 17.25" -c 1200 -C /etc/mail/fullchain.pem -K /etc/mail/privkey.pem -w 1 -y em1 -Y em1 -Y 158.36.191.225"
spamdlogd_flags="-i em1 -Y 158.36.191.225"
The IP address here designates a sync partner; check out the spamd(8) man page for the other options. If you’re interested, you can get the gory details of running a setup with several mail exchangers in the In The Name Of Sane Email: Setting Up OpenBSD’s spamd(8) With Secondary MXes In Play – A Full Recipe article.
You probably do not need to edit the configuration file /etc/mail/spamd.conf much, but do look up the man page and possibly references to the bsdly.net blocklist. Finally, reload your PF configuration, start the daemons spamd(8) and spamdlogd(8) using rcctl, and set up a crontab(5) line to run spamd-setup(8) at reasonable intervals to fetch updated blocklists.
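Pulled together, those last steps might look something like this (the hourly schedule is just a reasonable cadence, not a requirement):

# reload the PF configuration
pfctl -f /etc/pf.conf

# enable and start the daemons
rcctl enable spamd spamdlogd
rcctl start spamd spamdlogd

# crontab(5) entry: fetch updated blocklists once an hour
0 * * * * /usr/libexec/spamd-setup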
The number of trapped addresses in the hourly dump has ranged from a few hundred in the earliest days, through thousands, to at times several hundred thousand. For the last couple of years, the number has generally been in the mid to low four digits, with each host typically hanging around longer to try delivery to an ever-expanding number of invalid addresses in their database.
Just a few weeks ago, the list of ‘imaginary friends’ rolled past 300,000 entries. The article The Things Spammers Believe – A Tale of 300,000 Imaginary Friends tells the story with copious links to earlier articles and other resources while Maintaining A Publicly Available Blacklist – Mechanisms And Principles details the work involved in maintaining a blocklist that is offered to the public.
It’s been good fun, with a liberal helping of bizarre as the number of spamtraps grew, sometimes with truly weird content.
Traffic shaping you can actually understand
You’ve heard it before — traffic shaping is hard. Hard to do and hard to understand.
Traditionally, traffic shaping was available on all BSDs in the form of ALTQ, a codebase that its developers labelled experimental and that contained implementations of several different traffic shaping algorithms. One central problem was that the configuration syntax was inelegant at best, even after the system was merged into the PF configuration.
In OpenBSD, which runs development on a strict six-month release cycle, the code that would eventually replace ALTQ was introduced gradually over several releases.
The first feature to be introduced was always-on, settable priorities with the keyword prio.
A quick example: this rule prioritizes ssh traffic above most other traffic (priorities range from 0 to 7, and the default is 3):
pass proto tcp to port ssh set prio 6
This configuration, on the other hand, makes an attempt at speeding up TCP traffic by assigning a higher priority to low-delay packets, typically ACKs:
match out on $ext_if proto tcp from $ext_if set prio (3, 7)
match in on $ext_if proto tcp to $ext_if set prio (3, 7)
Next up, the newqueue code did away with the multiple-algorithms approach and settled on the hierarchical fair service curve (HFSC) as the most flexible option, one that would even make it possible to emulate or imitate the alternative shaping algorithms from the ALTQ experiment.
HFSC queues are defined on an interface with a hierarchy of child queues, where only the ‘leaf’ queues can be assigned traffic. We take a look at a static allocation first:
queue main on $ext_if bandwidth 20M
queue defq parent main bandwidth 3600K default
queue ftp parent main bandwidth 2000K
queue udp parent main bandwidth 6000K
queue web parent main bandwidth 4000K
queue ssh parent main bandwidth 4000K
queue ssh_interactive parent ssh bandwidth 800K
queue ssh_bulk parent ssh bandwidth 3200K
queue icmp parent main bandwidth 400K
You then tie in the queue assignment, here with match rules:
match log quick on $ext_if proto tcp to port ssh \
        queue (ssh_bulk, ssh_interactive)
match in quick on $ext_if proto tcp to port ftp queue ftp
match in quick on $ext_if proto tcp to port www queue web
match out on $ext_if proto udp queue udp
match out on $ext_if proto icmp queue icmp
This is definitely the way to add queueing to an existing configuration, and in my view also a good practice for configuration structure reasons. But you can also tack on queue this_or_that_queue at the end of pass rules.
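For instance, reusing the web queue from the definitions above, such a pass rule might look like this (a sketch, assuming the same macros and queue names):

pass in on $ext_if proto tcp to port www queue web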
There are two often-forgotten facts about HFSC traffic shaping I would like to mention.
First, traffic shaping is more often than not a matter of prioritizing which traffic you drop packets for. Second, no shaping at all takes place before the traffic volume approaches one or more of the limits set by the queue definitions.
One of the beautiful things about modern HFSC queueing is that you can build in flexibility, like this:
queue rootq on $ext_if bandwidth 20M
queue main parent rootq bandwidth 20479K min 1M max 20479K qlimit 100
queue qdef parent main bandwidth 9600K min 6000K max 18M default
queue qweb parent main bandwidth 9600K min 6000K max 18M
queue qpri parent main bandwidth 700K min 100K max 1200K
queue qdns parent main bandwidth 200K min 12K burst 600K for 3000ms
queue spamd parent rootq bandwidth 1K min 0K max 1K qlimit 300
The min and max values are core to that flexibility. Subordinate queues can ‘borrow’ bandwidth up to their own max values within the allocation of the parent queue. The combined max queue bandwidth can exceed the root queue’s bandwidth and still be valid. However, the allocation will always top out at the allocated or the actual physical limits of the interface the queue is configured on.
For bursty services such as DNS in our example, you can allow burst for a specified time where the allocation can exceed the queue’s max value, still within the limits set on the parent queue.
Finally, the qlimit sets the size of the queue's holding buffer. A larger buffer may lead to delays, since packets may be kept longer in the buffer before being sent out to the world.
And if you noticed the name of that final, tiny queue, you probably have guessed correctly what it was for. The traffic from hosts that were caught in the spamd net was really horrible, as this systat queues display shows:
1 users Load 2.56 2.27 2.28                        skapet.bsdly.net 20:55:50

QUEUE                BW SCH PRI     PKTS    BYTES  DROP_P   DROP_B QLEN BOR SUS  P/S   B/S
rootq on bge0       20M                0        0       0        0    0           0     0
 main               20M                0        0       0        0    0           0     0
  qdef               9M          6416363    2338M     136    15371    0         462 30733
  qweb               9M           431590  144565K       0        0    0         0.6   480
  qpri               2M          2854556  181684K       5      390    0          79  5243
  qdns              100K          802874   68379K       0        0    0         0.6    52
 spamd               1K           596022   36021K 1177533 72871514   29
It was good, clean fun. And that display did give me a feeling of ‘mission accomplished’.
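If you want to check queue behaviour on your own systems, the same numbers are available outside the live display as well:

# live, continuously updated view (the display shown above)
systat queues

# one-shot dump of queue definitions and statistics, including drops
pfctl -s queue -v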
There are several other tools in the PF toolset that I have enjoyed using, such as carp(4)-based redundancy for highly available services, relayd(8) for load balancing, application delivery and general network trickery, and the PF logs, where tcpdump(8) is your friend. I decided to skip them here, since this was supposed to be a user group talk and a somewhat dense article.
I would encourage you to explore those topics further via the literature listed under the Resources heading below.
Who else uses PF today?
PF originated in OpenBSD, but word of the new subsystem reached other projects quickly and there was considerable interest from the very start. Over the years, PF has been ported from the original OpenBSD to the other BSDs and a few other systems, including:
- FreeBSD
- NetBSD
- DragonFlyBSD
- Apple's macOS and iOS (via FreeBSD)
- BlackBerry (via NetBSD)
- Oracle Solaris, first in Solaris 11.3 as one of two options, then from Solaris 11.4 as the only packet filter, replacing IPF. Also see this blog post by yours truly.
Other than Oracle with their port to Solaris, most ports of the PF subsystem happened before the OpenBSD 4.7 NAT rewrite, and for that reason, they have kept the previous syntax intact.
There may very well be others. There is no duty to actually advertise the fact that you have incorporated BSD-licensed code in your product.
If you find other products using PF or other OpenBSD code in the wild, I am interested in hearing from you about it. Please comment or send an email to nix at nxdomain dot no.
Resources for further exploration
Finally, I’d like to leave you with a list of resources that further explore my favourite things about the OpenBSD packet filter tools:
- If you are more of a slides person, the summary for a SEMIBUG user group meeting is available. A version without trackers but with ‘classical’ formatting is also available.
- The PF User’s Guide
- The Book of PF by Peter N M Hansteen
- Absolute OpenBSD by Michael Lucas
- Network Management with the OpenBSD Packet Filter toolset, by Peter N M Hansteen, Massimiliano Stucchi and Tom Smyth (a PF tutorial, this is the EuroBSDCon 2022 edition). An earlier, even more extensive, set of slides can be found in the 2016-vintage PF tutorial.
- That Grumpy BSD Guy, blog posts by Peter N M Hansteen
- OpenBSD Journal, whose news items about OpenBSD are generally short, with references to material elsewhere.
Peter N M Hansteen is a puffyist, daemon charmer, and penguin wrangler.
Adapted from original post which appeared on BSDLY.