Friday, May 15, 2026

OpenSMTPD Is The Mail Server For The Future

OpenSMTPD profile image, see https://www.opensmtpd.org/
Image credit: the OpenSMTPD project

© 2026 Peter N. M. Hansteen

The SMTP mail server for the 21st century and onwards is OpenSMTPD, which is developed as an integral part of OpenBSD, but available in a portable variety too.

It was one of those things that I had fully intended to do years ago, but I only got around to actually doing once there was a definite deadline to get it done.

The time has come, as OpenBSD 7.9 will leave the exim package behind, and exim users will need to find a replacement before upgrading. This article describes my transition to OpenBSD's own OpenSMTPD mail server.


OpenBSD 7.9 will leave the exim package behind, and exim users will need to find a replacement.

OpenSMTPD (smtpd) is in the base system.


When OpenSMTPD was first introduced in the OpenBSD base system in OpenBSD 4.6 in October 2009, I had already been running a mail service for some years.

At the time I still found it convenient to keep using exim as the real mail server, protected by OpenBSD spamd in the incoming signal path and with a combination of spamassassin and clamav for content filtering.

It seemed quite tempting to me to play around with the new smtpd at the time, but the initial version of the new mail server was not yet considered quite ready for prime time.


Note: This piece is also available without trackers but classic formatting only here.

The pace of development was quite hectic in the early years, and by the time smtpd replaced the classic sendmail as the default mail server in OpenBSD with the November 2014 OpenBSD 5.6 release, I had just completed the third edition of The Book of PF and I was interested, but the writing had been quite a drain on my energy.

And of course, the mail server setups I had running for myself and friends I thought of as complex enough that moving to something else would require quite some preparation and testing. So I would leave looking into the new mail server software properly for another day, soon to come, I was sure.

An Old Setup, Maintained With Much Love and Care

There are some hints of what that setup did (and still does) in the 2012 piece In The Name Of Sane Email: Setting Up OpenBSD's spamd(8) With Secondary MXes In Play - A Full Recipe (also tracked, prettified), but the main features are:

  • Two (originally three) separate sites, each with their own domains, where each site provides secondary MX duty for the other(s), each with a spamd-instrumented OpenBSD machine as the Internet-facing part of the mail setup
  • The OpenBSD machines perform spamd greylisting and greytrapping, but also provide content filtering on behalf of another set of domains with their own, not necessarily Internet-exposed, mail servers that receive the filtered mail relayed to them by the Internet-facing mail services.

This setup, with OpenBSD spamd in a greylisting and greytrapping setup in front and content filtering as the second stage before finally relaying to the protected mail hosts, worked well enough that we simply kept the systems running with only routine system and package upgrades and minor adjustments to configurations as needed.

In short, domains to be served came and went, but the spamd, exim and clamav+spamassassin combination stayed, on the ever reliable OpenBSD platform.

Time To Move On, Wait, Then Finally ...

Over the years, there were several episodes with medium to severe security flaws discovered in the exim codebase, but the OpenBSD package was generally well maintained and fixes tended to appear within a reasonable time.

From time to time OpenBSD developers and port maintainers discussed dropping support and removing exim from the package system, but it was only in early 2026 that it finally happened.

OpenBSD 7.9 will ship without an official exim package.

So it was finally time for even this holdout to move to something else.

And Of Course, A False Start

Other OpenBSD users had kept telling me how good OpenSMTPD had become, so I decided now was the time, dug out some old notes and started experimenting.

Those old notes turned out to be utterly useless, and for a reason: The OpenSMTPD 6.4 release was the result of a major code overhaul that also changed important parts of the smtpd.conf syntax.

Unfortunately a majority of the third party guides out there that turn up early in search results still use the old syntax, and as a consequence, are useless, at least to users on OpenBSD or other platforms that have kept their code reasonably in sync. A useful rule of thumb is, if you find yourself reading an OpenSMTPD guide that is dated before 2020, do yourself a favor and move on to something newer.


If you find yourself reading an OpenSMTPD guide that is dated before 2020, do yourself a favor and move on to something newer.

The Task At Hand: The Analysis

But to the problem at hand. The setup I was setting out to convert was one that needed to accommodate

  • Inbound mail for users in the local domains, where we are the primary mail exchanger
  • Inbound mail for users in the domains where we are the secondary mail exchanger
  • Inbound mail for the users in the domains where we are the primary public-facing mail exchanger, but where we actually only relay after greylisting and filtering
  • Outbound mail from the local domains
  • Outbound mail from networks we have chosen to trust enough to relay for

The mail exchanger (MX) records for all domains involved were already in place, as was other relevant DNS information such as SPF, DKIM and DMARC records. TLS certificates and a regime for maintaining them was already in place, using LetsEncrypt tools.
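
For illustration only (these are not my actual records), the relevant entries for a hypothetical domain example.com could look something like this in a zone file:

---- zone file fragment, illustrative only
example.com.              IN MX  10 mail.example.com.
example.com.              IN TXT "v=spf1 mx -all"
_dmarc.example.com.       IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@example.com"
x._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64 encoded public key>"
----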

That analysis converted to smtpd.conf logic would be:

  • We keep the existing /etc/mail/aliases file, the formats are compatible
  • OpenSMTPD conveniently has tables, which can be either simple lists or key-value pairs. Our tables are:
    • domains_local lists the domains we receive mail for to handle locally
    • relay_for_domains lists the domains we only filter for, then relay to
    • domain_relays is the list of domains and their final destination mail exchangers as list of key-value pairs
    • relay_from_ips is the list of IP addresses and networks we allow relaying for
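
Tables can be written inline in smtpd.conf, as mine are, or kept in separate files. For example, the key-value table could alternatively live in a file of its own (path and contents illustrative), in the plain key-then-value format described in table(5):

---- /etc/mail/domain_relays
nuug.no        smtp://mx1.nuug.no
blug.linux.no  smtp://mail.lamasti.net
----

and be referenced from smtpd.conf with

table domain_relays file:/etc/mail/domain_relays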

Now The Actual Implementation

So I set to work from that specification, and at the end of that afternoon I had reached the conclusion that

  • setting up for TLS was the easiest, with simple pki statements
  • listen statements actually do the work in very limited space
  • routing to local delivery and forwarding is easy with a combination of action and match rules
  • for filtering, clamav had not actually been of much use to my users (no Windows users among them), and that the filtering options that made use of spamassassin for the back end were either not functional or I was too dense to make any sense of them.
    I ended up testing a modern alternative, rspamd, which is available via the OpenBSD package system.
  • dkimproxy looked like a good candidate for signing outgoing messages, so I tested that for a while, but on the advice of Martijn van Duren, I switched to dkimsign, which is available on OpenBSD as the package opensmtpd-filter-dkimsign.
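
The dkimsign filter needs an RSA key pair. If you do not already have one, something like the following should generate a usable pair with openssl (the file names and the 2048-bit key size are my illustrative choices here, not requirements of the filter):

```shell
# Generate a private RSA key for DKIM signing (illustrative file names)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.rsa.key
# Extract the public key; its base64 body goes into the selector's DNS TXT record
openssl pkey -in private.rsa.key -pubout -out public.rsa.key
```

The private key then goes wherever the filter invocation in smtpd.conf expects to find it, readable by the _dkimsign user.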

In addition to smtpd, which is already in base, this configuration requires the packages opensmtpd-filter-dkimsign and opensmtpd-filter-rspamd.

Installing both via pkg_add will have the packages pull in all required dependencies.
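
In practice that is a single command:

$ doas pkg_add opensmtpd-filter-dkimsign opensmtpd-filter-rspamd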

With the prerequisites in place,

disable and stop exim
doas rcctl disable exim && doas rcctl stop exim

disable and stop clamav
doas rcctl disable clamav && doas rcctl stop clamav

disable and stop spamassassin
doas rcctl disable spamassassin && doas rcctl stop spamassassin

At some point, you should remove the packages with

doas pkg_delete packagename

and follow the steps outlined in the package delete message.

Don't remove the exim configuration, though, until you have copied the useful parts across to your new /etc/mail/smtpd.conf.

I ended up with this configuration (lightly edited for brevity)

---- /etc/mail/smtpd.conf
table aliases file:/etc/mail/aliases

table domains_local {
"bsdly.com",
"bsdly.eu",
"bsdly.net",
"bsdly.no",
"bsdly.org",
"bsdly.se",
"nxdomain.no",
# plus a lot of other domains, elided here for brevity
}

table relay_for_domains {
"nuug.no",
"blug.linux.no"
# again more domains in the real smtpd.conf, left out here
}

table domain_relays {
"nuug.no" = "smtp://mx1.nuug.no",
"blug.linux.no" = "smtp://mail.lamasti.net"
# again more domains in the real smtpd.conf, left out here
}

table relay_from_ips {
127.0.0.1
::1
# The rest are fictional, RFC5737 and RFC3849
192.0.2.0/24
198.51.100.0/24
203.0.113.0/24
2001:DB8::/32
}

filter "rspamd" proc-exec "filter-rspamd"
filter dkimsign_rsa proc-exec "filter-dkimsign -d bsdly.net -s x -k /etc/mail/dkim/private.rsa.key" user _dkimsign group _dkimsign

pki skapet.bsdly.net cert "/etc/mail/certificate.pem"
pki skapet.bsdly.net key "/etc/mail/privkey.pem"

listen on socket

listen on all port 25 tls pki skapet.bsdly.net filter "rspamd"
listen on all port 465 smtps pki skapet.bsdly.net filter "rspamd"
listen on all port submission tls pki skapet.bsdly.net

action "local_mail" mbox alias <aliases>
action "relay_domain" relay domain <domain_relays> filter "rspamd"
action "outbound" relay filter dkimsign_rsa

match from local for local action local_mail
match from any for domain <domains_local> action local_mail
match from any for domain <relay_for_domains> action relay_domain
match from src <relay_from_ips> for any action outbound
match from local for any action outbound
----

The specific domain names and IP addresses will be different for the secondary site, as they will be for any configuration you set up.
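
Before starting the daemon, it is worth letting smtpd itself check your work. Running

$ doas smtpd -n

will parse /etc/mail/smtpd.conf and report configuration OK if the file is syntactically valid, or point at the offending line if not.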

After some logs and messages observation, I also ended up with minor modifications to the rspamd config,

---- /etc/rspamd/local.d/actions.conf
reject = 10; # final reject
discard = 15;
add_header = 6; # mark spam
greylist = null; # do not greylist, we have spamd for that

# Custom action (referenced by force_actions), no own threshold
phishing = {
flags = ["no_threshold"];
}
----

That is the entire configuration. With the somewhat longer list of domains and networks, the net length of my configuration now is

$ grep -vc \# /etc/mail/smtpd.conf
104

104 lines, while the previous exim config with comment lines stripped out ran to

$ grep -vc \# /etc/exim/configure
380

380 lines.

The smtpd.conf configuration is readable on a par with pf.conf, and shares many of the same readability features.

At any time after installing the packages and disabling the previous services, enable and start the new services.

To (re)enable smtpd as the default mail server after running with exim, run

$ doas /usr/local/sbin/exim-disable

to restore /etc/mailer.conf to its original state.

If all else fails, you can easily retrieve a pristine version from the OpenBSD CVS.

To enable the new services, run

$ doas rcctl enable smtpd && doas rcctl start smtpd
$ doas rcctl enable redis && doas rcctl start redis
$ doas rcctl enable rspamd && doas rcctl start rspamd

You should see activity fairly soon when monitoring /var/log/maillog, for example with

$ tail -n 500 -f /var/log/maillog

When you are satisfied that mail flows in and out and is relayed where you want it to, it is safe to remove the exim, clamav and spamassassin packages and follow the instructions in the pkg_delete messages to free up some space.

And yes, considerably more complicated configurations are possible, especially in the filtering department.

But I was pleasantly surprised at both how simple the transition has proven to be and the prospect of having a truly maintenance and enhancement friendly setup going forward.

The transition process has shown me that OpenSMTPD is a solid product, like the wider OpenBSD environment. Making OpenSMTPD the default mail server software was no doubt one of those extremely good decisions the OpenBSD project has made, and even latecomers like myself applaud the decision.

OpenSMTPD and OpenBSD both are characterized by their developers' ability to not only learn from earlier iterations of development of the operating system and the mail server component, but also to come up with new, and sometimes radically different, approaches to known problems that result in a more secure and more usable product. To my mind, this is the mail server and the operating system for the future.

If you are interested in setting up smtpd with more filters or other ones, quite a few are available, including such things as opensmtpd-filter-dnsbl, which pulls in DNS blocklists from the sources you specify.

OpenSMTPD is available on a wide variety of platforms, including various Linux distributions and BSDs such as FreeBSD via its -portable variety.

I have kept this configuration rather minimal, mostly because in my experience, the greylisting and greytrapping spamd is a very efficient and low maintenance outer shield for any mail service. If you are interested in greytrapping, the infrequently updated Eighteen Years of Greytrapping - Is the Weirdness Finally Paying Off? (also tracked, prettified) provides more reading material via its numerous links than you could reasonably take in during even a long evening.

If you would rather have a book that covers more networking topics with OpenBSD and FreeBSD as the platform, and includes a fairly extensive treatment of spamd, The Book of PF, now in its fourth edition, is for you.

Good night and good luck!

I want to thank Martijn van Duren for useful advice while working on this article.


OpenSMTPD is the Mail Server for the Future is © 2026 Peter N. M. Hansteen (published 2026-05-15)
You might also be interested in reading selected pieces via That Grumpy BSD Guy: A Short Reading List (also here).

Saturday, April 25, 2026

The implementation of the Carrier Pigeon Internet Protocol, RFC1149, 25 years later

© 2005, 2026 Peter N. M. Hansteen

The pigeon and the first packet to be transferred at the RFC1149 implementation, pre-flight. Edited from Karl Magnus Kolstø's original picture.

Historical note: We implemented the Carrier Pigeon Internet Protocol, RFC1149, with the full scale test performed on April 28, 2001. The following are my lightly edited notes for a talk I gave to the Adelaide Unix user group in October 2005 on the way to participating in the AUUG 2005 conference in Sydney with the first PF tutorial, which in turn is a precursor to The Book of PF (now in its fourth edition).


Note: This piece is also available without trackers but classic formatting only here.

Good evening. My name is Peter Hansteen. I was part of the project group at the Bergen Linux User Group which was the first, and to my knowledge the only group to implement and test Internet communications via avian carriers as specified in the internet draft standard called RFC 1149.

In fact, my laptop - not this one, but a Toshiba which I used every day for another couple of years after the RFC1149 implementation, is probably the only computer in existence which has been pinged via carrier pigeon.

Now the obvious question is, why would anyone want to do such a thing? Well, the purpose of this talk is among other things to answer that question, but first I think it will be useful to briefly explain what an RFC is, and how the Internet standards process works.

This is a fairly technically oriented audience, so I assume you have seen some RFCs and heard references to these documents. Anyway, the RFCs, actually "Requests for comments" are not formally standards, but quite a lot of them would pass for one. A large number of RFCs are either "Current best practices" or "Recommended standards". The standards, recommendations and best practices codified in the RFCs are created and maintained by the Internet Engineering Task Force, which among other things holds conferences three times a year in locations all over the world and acts as the coordinating force for technical matters related to the Internet. You can read all about it at the IETF web site https://www.ietf.org.

The last time I looked (on October 7th, 2005), there were 4234 RFCs in total, the last one dated October 2005, while RFC number 1, written by S. Crocker and entitled "Host Software", is dated April 7th, 1969. Dang, it looks like they missed the April Fools' opportunity that year.

Most RFCs are written simply because they are needed, usually to resolve some particular issue. Taken together, they are a large part of the reason why the Internet works and is usable today. Some supersede others, and yet others may pretty much cancel each other out, but taken together, they form the sum of what the Internet is at the specification level.

As I said in the publicity blurb for this talk, there has been quite a lot of experimental stuff on the net since the earliest days, and in some cases the code which implements a proposed standard is ready to the degree that code ever is before the RFC specification is done. In other cases, it takes years for a specification to be successfully implemented and tested. I don't have the complete statistics, but at some point in the future I'm pretty sure that an Internet historian will be able to generate a really nice graph of the data.

Now I can feel you are dying to ask, why did it take so long for RFC 1149 to join the ranks of implemented, if not recommended standards? The document was issued on April 1st, 1990, and the first full scale test of the first implementation took place on April 28th, 2001. Why did it take so long?

To answer that question, we need to take a look at the document itself, dated April 1st, 1990. The heart of the specification is what you find in the "Frame Format" section which states

"The IP datagram is printed, on a small scroll of paper, in hexadecimal, with each octet separated by whitestuff and blackstuff. The scroll of paper is wrapped around one leg of the avian carrier. A band of duct tape is used to secure the datagram's edges. "
and
"Upon receipt, the duct tape is removed and the paper copy of the datagram is optically scanned into a electronically transmittable form."

That sounds easy, doesn't it? Any hacker would have the necessary ingredients, which are

  • at least two computers (check)
  • at least two printers (check)
  • at least two scanners (check)

- and

  • enough carrier pigeons homed on where the computers, printers and scanners are located.

The last bit is the tough one. That, and of course, if it isn't already mind-numbingly obvious, it was all a joke. The first clue is the date, April 1st. There are several other RFCs dated April 1st in various years, few of them intended to be serious, and they include such gems as RFC 3514 from April 1st 2003, which specifies how and when to set the "evil bit" for network packets.

That one, if I remember correctly, was in fact implemented the same day in FreeBSD-current, which could then again boast to be the most RFC compliant operating system out there. Looking at RFC 1149, it is pretty obvious that it is nowhere near having any practical value whatsoever. Almost any other way to transfer network packets you can think of will be faster. But the specification is remarkably clear and well written. I remember reading RFC1149 for the first time as an appendix to some TCP/IP training material I was translating in 1993 or 1994. After a few grins, I remember thinking that it could probably be done.

So it was a fairly famous document when we decided to do an actual implementation. The Bergen Linux User Group had resumed activity in the spring of 2000 with regular meetings scheduled for the last Thursday of every month except summer and Christmas break months June, July and December.

The core group of organizers tended to meet at my office for planning activities most Thursdays. We would have a more or less formal meeting, planning the next few meetings, talking about possible activities and speakers, and so on. Just around the corner from where I work was an Irish bar, "The Harp", that is to say Irish themed and stocking Guinness stout as well as the regular Norwegian Ringnes pilsner beer. That's where we would usually go for a few beers after we were done with the planning bit.

The idea of actually implementing RFC 1149 probably first came over beers at The Harp. This was around the time when most of the serious planning activity in BLUG was about Linux kernel uberhacker Alan Cox' visit, which was scheduled for late April 2001. We thought that Alan would be a major attraction by himself, but we also thought we needed something spectacular in order to attract attention and get people to turn up at the meetings. So somebody mentioned April first RFCs, and since the idea seemed a good one the next morning too, we decided that RFC1149 was the one, if we could only find pigeons. We were surprised how easy that was. Typing "brevduer bergen" into a web search engine (probably google) turned up the web site of a pigeon racing club in Bergen, with enough contact information for us to make contact.

The first RFC1149 Birds of a Feather session took place at Svein Arne Rosendal's house in suburban Bergen on the evening of March 6th, 2001. The minutes of that session are available on the web. Basically we were three BLUG people: Karl Magnus Kolstø, Vegard Engen, who ended up writing the code, and myself. We were able to persuade the pigeon people that it would be a great idea to participate in the project, and they went on to list a few likely candidates from among the club members.

The idea was that it would be better if the two pigeon homes were rather close together, simply because flying time could become a factor. I think it took only a few days before they had located two pigeon racers who lived within reasonable range of each other. I think the "as the bird flies" distance was around three kilometers, across the small mountain Løvstakken. The peak of Løvstakken is 477 meters above sea level, but the most likely flying route for the pigeons would take them from an elevation of roughly 150 meters up to perhaps 300 then down to around 50 meters above sea level for the outgoing packets and of course the other way around for the return traffic.

We set the date and time, and went on to planning the other parts of Alan's visit - these included feeding the penguins at Bergen Aquarium, a fjord sightseeing day, the Thursday user group meeting with Alan giving a talk, and finally the pigeon test. Vegard wrote pigeonware in late March or early April.

Now for the technical details - Linux kernels version 2.4 and later feature a TUN/TAP device interface which makes it relatively easy to create applications such as pigeonware that handle network traffic via a userspace daemon.

The pigeonware README file lists the package requirements:

- Linux kernel v2.4.x with the Universal TUN/TAP Interface as a module.
- GOCR (http://jocr.sourceforge.net)
- Printer supported under Linux
- Scanner supported under Linux
- Pigeons
- Some luck
  

What we used were Vegard's laptop and mine, both Toshibas, and both running Debian GNU/Linux. The printers were HP LaserJet clones, I forget which exact makes, the scanners were Agfa USB scanners which worked flawlessly from the moment they were plugged in. On my machine, I needed to do a dist-upgrade from stable to testing and a more recent kernel than the stock Debian one. I remember some dependencies needed to be resolved by hand, but it did work out in the end.

Now we had had some serious discussion about what kind of traffic we wanted to test with. Would not sending an email message to be delivered by pigeon be an appropriate test, for example? Well, to set up a full TCP/IP connection for an SMTP session, with all handshakes and responses, we calculated it would take roughly 25 network packets - that is, 25 pigeons - before we start transferring the actual message. So after a bit of discussion we decided that a ping session, all ICMP, would be sufficient to prove that the technique worked. Ping packets are small, too, so we wouldn't need to fiddle with oversize MTUs or anything.

This is one of the actual packets we generated during a test run:

45 00 00 54 00 00 40 00 40 01 20 A7 0A 00 03 02 0A 00 03 01 
08 00 FC 36 84 6B 01 00 CF 15 E7 3A CF 09 06 00 08 09 0A 0B 
0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 
20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F 30 31 32 33 
34 35 36 37 
  

This is from the pre-pigeon dry run testing at my office. We tested the setup by plugging it all in except the pigeons, disconnecting the computers from all other networks, running the setup scripts and then initiating a ping session. It would take a few seconds for the first page to be printed. To simulate real world conditions, one of the other volunteer helpers would take the sheet of paper, circle the table a couple of times before handing it off to be scanned. The packet was scanned at the destination machine, and after a few moments the return packet would be printed.
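
As a quick sanity check (my sketch here, in Python, not part of the original pigeonware), the header fields of that dump decode as an ordinary IPv4 ICMP echo request:

```python
# Decode the IPv4/ICMP header fields from the hex dump above
# (the first two lines of the dump cover both headers).
hex_dump = (
    "45 00 00 54 00 00 40 00 40 01 20 A7 0A 00 03 02 0A 00 03 01 "
    "08 00 FC 36 84 6B 01 00 CF 15 E7 3A CF 09 06 00 08 09 0A 0B"
)
packet = bytes(int(octet, 16) for octet in hex_dump.split())

version = packet[0] >> 4                     # 4: IPv4
total_length = (packet[2] << 8) | packet[3]  # 84 bytes: a standard ping packet
ttl, protocol = packet[8], packet[9]         # protocol 1 is ICMP
src = ".".join(str(b) for b in packet[12:16])
dst = ".".join(str(b) for b in packet[16:20])
icmp_type = packet[20]                       # 8 is ICMP echo request

print(version, total_length, ttl, protocol, src, dst, icmp_type)
# prints: 4 84 64 1 10.0.3.2 10.0.3.1 8
```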

Some pictures from the preparatory testing (courtesy Vegard Engen)

The man from the pigeon racing club had indicated a filling station which would be a good place to meet at 10:30, giving us a reasonable amount of time to drive people and equipment to the two pigeon homes and set up in time for the test scheduled for 12 noon. While at the filling station, Vegard tried in vain to explain the project to one of the national TV networks. In fact, as the conversation went on, it sounded to us (who heard only one half of it, of course) that the poor soul at the other end grew more frustrated and confused the more information he was given.

When we arrived at the first pigeon home (Bråtet Terrasse), at first nobody was home. This led to a slight anxiety, almost to the point where various plan B options were considered, when the man we had been waiting for turned up. Setting up the main base with Vegard's laptop, a scanner and a printer started almost immediately, and a smaller group headed for Lyngbøveien to set up the satellite base -- the place to be pinged.

Setting up in Lyngbøveien was a matter of only a few minutes, and the satellite crew ran a couple of test packets saved from the night before to check that the setup was still working. Noon passed, and -- OK, it felt a bit like cheating -- we rang the main base via GSM phone to find out if the first packet had been sent. Not yet, we were told; the documenting took a little more time than expected.

Several more mobile phone calls followed, relaying information about the number of pigeons sent and the rough location of the largish flock they had joined. After a little more than an hour, a pigeon turned up, only to land at the top of the roof (Kjell's house is a three-story affair) and proceed to clean the odd wing feather.

Kjell made several attempts at getting the bird to find its way to the pigeon den, and finally succeeded. At last, the initial packet had arrived. The tape was rather sticky, and the strip of paper had been rolled, then flattened a bit, making the optical character recognition somewhat unreliable. We were to discover that hand smoothing (say, with the help of a ruler or even the edge of a table), then attaching the strip of paper to a full A4 sheet of paper before placing it on the scanner improved character recognition significantly.

The next four packets arrived simultaneously within minutes of the first one, leading to some congestion in the scanning queue. When the fourth packet had been sent, we were startled to hear intensive flapping of wings.

The two remaining carrier pigeons had managed to escape without being fitted out with a payload, and we unexpectedly found ourselves in a NO CARRIER condition.

Two more packets arrived, but we were unable to respond. Nothing left to do but unplug and start packing. Our ride arrived after a little while, and after thanking our host, we set off to unload the equipment at the office and return some of it to the owners. The participants headed in different directions for a couple of hours, with a plan to reconvene at Håvard's to help Alan clean out the tax-free quota. Suffice to say, the combined efforts of several people got us most of the way there.

Vegard's rfc1149 writeup was published Sunday afternoon, and a slashdotting made our previously ~200 hits a day website acquainted with the feeling of more than a million hits on the Monday. The number of hits tapered off after that, but stayed in the hundreds of thousands per day for the next couple of weeks.

Soon after the event, Eric Raymond's Jargon file's The Meaning of ‘Hack’ section was updated with a description of the event, naming the implementation "a wonderful hack story for the new millennium".

We also got some attention in the IT oriented press internationally and even mainstream media such as the BBC ran stories about the event. My own favorite is still the Salon item. As far as we are aware, no Norwegian language publication of any kind ever carried a story about this somewhat odd innovation.

Vegard went to the Internet Engineering Task Force conference later in the same year, and received a commemorative plaque on behalf of the group:

The IETF RFC1149 implementation plaques

There were even some art projects that referenced our implementation, and I think even some that reused the pictures that were available on the BLUG RFC1149 web. (An archived copy of the original site can be found here).


The notes in full prose form end there. I was likely in a hurry to catch a plane, but I mention (as I do in the slides) that we had at least thought of some followup work:

	
Future developments

* more implementations
       needed to get on the Standards track
* Interspecies handoff protocols - relay runners?
      

In 2011, I wrote a short commemorative article, RFC1149: Ten Years of In-Flight Internet (also here) to note the ten year anniversary, where I noted that

The CPIP WG activities have proceeded at a more leisurely pace in recent years. In 2005 I went to the AUUG 2005 conference to do an early version of the PF tutorial, and en route I made a presentation in Adelaide about the project (slides and accompanying notes are still available).

We're still looking for independent, interoperable implementations, though. Preferably on other free operating systems besides Linux. If we can entice our old pigeon partners to participate, we're more than willing to arrange for interoperability tests.

The world needs this to be on the IETF Standards Track.

Soon after the ten years mark, we did some re-enactments, with people (mainly children) acting as packet carriers, but I have been unable to find any resources about these events available on the web.

Again, independent implementations for other free operating systems would be most welcome, and the original implementers are still available and willing to participate in any interoperability tests.


The implementation of the Carrier Pigeon Internet Protocol, RFC1149, 25 years later is © 2026 Peter N. M. Hansteen (published 2026-04-25)
You might also be interested in reading selected pieces via That Grumpy BSD Guy: A Short Reading List (also here).

Friday, March 6, 2026

The Book of PF, 4th Edition: It's Here, It's Real

© 2026 Peter N. M. Hansteen

The Book of PF 4th edition, posed on the author's laptop

The long wait is over. Fresh copies of The Book of PF, 4th Edition arrived here today. Which means: I'll bring some to upcoming conferences!

When the email message with package tracking information turned up, I thought at first that this was yet another dirty, scammy, spammy phishing thing.


Note: This piece is also available without trackers but classic formatting only here.

It had been years since the last time I had had a UPS delivery, and since then I had only seen obvious scammy phishing things pretending to be delivery messages, all of which in fact looked very much like the crummy fakes they turned out to be.

But this one looked subtly different, in that it had both my address and my somewhat unusual name correctly spelled.

In addition, the URLs were on close inspection all for actual UPS.com resources.

And finally, the shipment had a total weight of 10.4 kg, which sounded about right for the stack of The Book of PF, 4th Edition author copies I was expecting to receive soon after the print run was ready to ship.

Now, a few days later, I have just taken delivery of the package, this time delivered to my door by a delivery man who was very excited to be delivering a box of books to their author. The box was slightly heavier than my regular backpack:

The box containing the Book of PF 4th edition author copies

and after taking the box inside, I was able to extract the 15 copies, as expected.

This is me, holding my first physical copy of the fourth edition to come out of the box:

Peter Hansteen and the first of the The Book of PF 4th edition author copies to come out of the box

This means that the process that started almost two and a half years ago is finally, tangibly complete. I have the shipment of author copies of The Book of PF, 4th Edition in hand.

Regular readers of this column as well as people on relevant mailing lists will be aware, as will my followers on various social media, that people who ordered the book early have already started receiving their copies of the refreshed The Book of PF. And some have posted their (so far only positive, as far as I am aware) reactions to their media of choice.

But for one reason or another, my expected stack of author copies was not among the very early deliveries.

Among the likely reasons is the fact that I am located in Bergen, Norway, which, at roughly 60 degrees North and in The Other West Coast fjords country, is somewhat off the beaten track for high speed deliveries.

And as any good business would do, No Starch Press will put a priority on delivering the product to customers who put in preorders or were on the spot once the thing had been declared ready.

Now I can tell you all yet again, the thing exists, and I am delighted to see the actual physical copies.

I know some readers were expecting there to be an auction for the first signed copy like there was for the third edition.

I have to disappoint you there. Unfortunately, side effects of decisions made by politicians on the other side of the Atlantic have made it impractical to set up an equivalent auction this time around.

Instead, I would encourage readers who can afford to do so to make a donation directly to the OpenBSD Foundation or to the OpenBSD Project itself, in order to support the OpenBSD project and the immensely valuable work they do for free software.

If you're more of a FreeBSD person, the FreeBSD Foundation will be happy to hear from you as well.

Once you have done the rounds of your favorite free operating system supporting foundation and have your (or your bosses') credit card back in its regular location, you can think forward to one or more of the upcoming regional BSD conferences.

The main ones this year are,

  • AsiaBSDCon, March timeframe, alternates between Tokyo (JP) and Taipei (TW). AsiaBSDCon 2026 will be March 19-22, 2026 in Taipei, Taiwan.

    This will include the first scheduled Network Management with the OpenBSD Packet Filter Toolset session of the year, on March 19th, 2026.

  • BSDCan, Mid May to mid June, Ottawa (CA). BSDCan 2026 will be June 17-20 in Ottawa, Canada.

    This, too, will have a Network Management with the OpenBSD Packet Filter Toolset session, on June 18th. Details to follow closer to the date of the conference.

  • EuroBSDcon, September timeframe, each year in a new European city. EuroBSDCon 2026 will be September 10-13 in Brussels, Belgium. The call for papers is open from March 1st, 2026 and runs until June 20th.

All three conferences will welcome submissions during their Call for Papers period for talks, tutorials and other types of sessions as well as general participation by people regardless of geographic or other origin.

And unless truly unexpected events take place, I will be at all three this year, bringing copies of The Book of PF, 4th edition.

I look forward to seeing you at these events.

If you are interested in coming to a regional BSD conference as a speaker, volunteer or other participant, see What is BSD? Come to a conference to find out! (also here) for some information.

Through all of those events, our resident philosopher took it all with her usual calm:

Our resident philosopher, taking all that book stuff with her usual calm

If you want to get the book, and using the link to buy from No Starch Press is not practical, I would encourage you to support your local bookstore and see if they can order it in.

The Book of PF, 4th Edition: It's Here, It's Real is © 2026 Peter N. M. Hansteen (published 2026-03-06)
You might also be interested in reading selected pieces via That Grumpy BSD Guy: A Short Reading List (also here).

Sunday, February 1, 2026

The Rest Is Trash

© 2026 Peter N. M. Hansteen

We are now halfway through the nineteenth year of greytrapping, still tracking and collecting from the wealth of imbecility out there.

My logs tell me that in early September of 2025, I added the following entries to the list of spamtraps (or raw version)

!!!!!!!!der!rest!ist!muell!@bsdly.net
!!!!!!!!el!resto!es!basura!@bsdly.net
!!!!!!!!the!rest!is!trash!@bsdly.net
    
on a whim and to emphasize to anyone who would actually care to read any part of the file that the content is, for most purposes, utterly worthless.

Note: This piece is also available without trackers but classic formatting only here.

Planting a clue that obvious may or may not have helped somebody out there, but it felt somehow appropriate. And while writing this article, I added a French equivalent for those who would feel that one was needed.

As I have mentioned earlier, there are a number of ways to find potential spamtrap material. In the early days, I simply extracted entries from the mail server logs with some very crude regular expression match using grep on the log files.
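
To illustrate how crude that early approach was, here is a sketch of the idea. The log lines and file names below are simplified stand-ins I made up for the example, not the actual format of my logs:

```shell
#!/bin/sh
# Simulated excerpt of a mail server log; real log formats differ,
# and the pattern you grep for depends on your MTA.
cat > /tmp/maillog.sample <<'EOF'
Jan 08 04:12:01 skapet smtpd[123]: reject: 203.0.113.7: 550 Invalid recipient <kinoshitario0310@bsdly.net>
Jan 08 04:13:44 skapet smtpd[123]: accept: 198.51.100.2: delivered to <peter@bsdly.net>
Jan 08 04:15:09 skapet smtpd[123]: reject: 203.0.113.9: 550 Invalid recipient <otaniasami@bsdly.net>
EOF

# Crude extraction: keep only rejected-recipient lines, then strip
# everything but the address between the angle brackets.
grep 'Invalid recipient' /tmp/maillog.sample |
    sed -e 's/.*<//' -e 's/>.*//' |
    sort -u > /tmp/spamtrap-candidates.txt

cat /tmp/spamtrap-candidates.txt
```

Each address that comes out of a filter like this is a candidate for manual review before it goes into the spamtrap list.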

Later I set things up to extract from the spamd greylist instead. That turned out to be a little shortsighted, since it would not then catch the attempted deliveries from a host that had already entered the blocklist. That thought only struck me again some years later, and in the 2022 piece Harvesting the Noise While it's Fresh, Revisited (also here) I ran through how I amended the collection method a little.

The retrospective piece Eighteen Years of Greytrapping - Is the Weirdness Finally Paying Off? (also here) offers descriptions of a few other methods of creating spamtrap entries.

Then in the early days of January 2026, another episode involving a mail operator you will have heard about led me to write A Major Mail Provider Demonstrates They Likely Do Not Understand Mail At All (also here).

That episode also led me to realize that with a significant part of the traffic from at least that operator coming in over IPv6 or from a set of pre-cleared hosts such as the mail exchangers of the too big to block operators, there would be nothing logged by spamd anyway, so I set about collecting candidate spamtraps from the real mail server logs yet again.

It turned out that by not looking at delivery attempts via IPv6, I had indeed been missing quite a significant number of potential imaginary friends.

As a result of the expanded scope of my noise harvesting, combined with some light massaging of the finds using tools such as rot13 and rev, the total number of added spamtraps for the month of January 2026 was

5665771

a total of five million, six hundred and sixty-five thousand, seven hundred and seventy-one added, so the total by end of January 2026 was more than twenty-two and a half million.

I will keep updating the retrospective piece Eighteen Years of Greytrapping - Is the Weirdness Finally Paying Off? (also here) with monthly numbers, and those who are interested can even follow the progress by monitoring the files in the log directory.

And since I know some of my returning readers feel that a greytrapping article is not complete without a graph, I have made one for the year so far,

Line graph of hosts in spamd-greytrap 2026 up to publication date

The data that went into producing the graph is available as 2026-traplistcounts.txt.

I take the numbers so far as an indication that the trend continues towards fewer routable IPv4 addresses being in play, with some traffic moving to IPv6 while an increasing number of spam senders are likely behind NAT, carrier grade or otherwise.

But there is another annoying phenomenon that turns up occasionally. From this perch it looks like one or more somebots or somebodies mistake a greylisting and greytrapping spamd for an open relay, and something like this happens.

A host somewhere (in this case apparently in China) tries to relay via my setup, using a significant subset of the spamtraps as their From addresses.

Some of these episodes have come with a rate of new connections and a sheer number of attempts that leads me to think there may be more than one host actually active, likely behind a NAT device.

One recent episode annoyed me enough that it had me toy with appending state tracking with overload to the pf.conf rule that feeds our spamd, so the rule would be


pass in on egress inet proto tcp from any to any port smtp \
divert-to 127.0.0.1 port spamd flags S/SA keep state \
(max-src-conn 1200, max-src-conn-rate 255/10, overload <webtrash> flush global, pflow)

which means that any spam source that insists on trying to fill our state table will earn themselves an entry in the <webtrash> table, to be blocked by us and consumers of the exported blocklist for the next six weeks.

This has the slight disadvantage of letting the spamd-greytrap entry expire 24 hours later, but on the other hand the rules in place here and possibly other places mean that the host will not be able to communicate at all with any system that uses our exported blocklists for at least six weeks.
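
Keeping the six-week horizon honest takes a periodic cleanup, since overload tables do not expire entries on their own. pfctl can delete table entries older than a given age; the sketch below only prints the command a nightly cron job would run, rather than executing it, and the table name matches the rule above:

```shell
#!/bin/sh
# Six weeks expressed in seconds, the unit pfctl -T expire expects.
WEEKS=6
SECS=$((WEEKS * 7 * 24 * 60 * 60))

# Dry run: show the command that would drop <webtrash> entries whose
# statistics were last cleared more than six weeks ago.
echo "pfctl -t webtrash -T expire $SECS"
```

Dropping the echo and putting the resulting one-liner in root's crontab would complete the setup.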

But of course, our log files will be incrementally quieter for that period of time.

If you have any comments you want to share, please let me know either in comments to this piece (where available) or followups to the post where you found this article referenced.

Update 2026-02-07: I added a few more translations of the phrase "the rest is trash", prepended with !!!!!!!! and interspersed with further !s where it seemed appropriate. If you feel more languages need to be covered, please let me know in comments or send me an email with the desired translation.


In other news, The Book of PF, 4th edition is inching nearer to delivery in physical form. See the earlier piece Yes, The Book of PF, 4th Edition Is Coming Soon (also here) for the story of why that edition needed to exist.

The next scheduled Network Management with the OpenBSD Packet Filter Toolset session will be March 19, 2026 at AsiaBSDCon 2026 in Taipei, Taiwan.

Selection of papers and other sessions for BSDCan 2026 should be complete very soon after this piece has been published, so if you submitted, look for incoming BSDCan mail.

The call for papers for EuroBSDcon 2026 will open March 1st, 2026 and run until June 20th.

If you are interested in coming to a regional BSD conference as a speaker, volunteer or other participant, see What is BSD? Come to a conference to find out! (also here) for some information.


The Rest Is Trash is © 2026 Peter N. M. Hansteen (published 2026-02-01)
You might also be interested in reading selected pieces via That Grumpy BSD Guy: A Short Reading List (also here).

Thursday, January 8, 2026

A Major Mail Provider Demonstrates They Likely Do Not Understand Mail At All

© 2026 Peter N. M. Hansteen

Screenshot of part of a mail bounce message, saying '  peter.hansteen@gmail.com
    host gmail-smtp-in.l.google.com [2a00:1450:4025:402::1b]
    SMTP error from remote mail server after pipelined end of data:
    550-5.7.1 [2a03:94e0:182c::1      19] Gmail has detected that this message is
    550-5.7.1 likely suspicious due to the very low reputation of the sending
    550-5.7.1 domain. To best protect our users from spam, the message has been
    550-5.7.1 blocked. For more information, go to
    550 5.7.1  https://support.google.com/mail/answer/188131 a640c23a62f3a-b842a23243csi674652166b.80 - gsmtp'

So it happened again. A major mail provider proved that they do not, in fact, understand how modern email works.

I've been running mail services for longer than I care to remember. It started out back when I was running a small business on the edge of tech, mainly dealing with software localization and documentation writing.


Note: This piece is also available without trackers but classic formatting only here.

The company I had started with a few colleagues was in close cooperation with another business that worked in similar, but not primarily overlapping, fields of interest.

Then during the early to mid 1990s, Internet and proper SMTP Internet email became available, and as one did at the time, we set up with a mail service, running at first on an early Red Hat Linux.

After a while we moved to a Debian setup, and over time, unlike most small businesses of the period, chose to not go for a Microsoft solution but rather moved to a setup based on the other free alternatives, with a combination of OpenBSD and FreeBSD for services.

When spam became annoying enough, we configured content filtering of various kinds, which lowered the noise level for a while. Later we discovered that our OpenBSD firewalls could also handle greylisting and tarpitting via spamd, and immediately saw that our mail feed became cleaner yet again.

Those of us who were in the server room when the greylisting was turned on, could not help noticing that the fan noise from the mail server just disappeared.

But one thing we did not get entirely rid of was bounce messages from other sites, directed to user names that had never existed in any of the domains we served. Clearly, one or more groups out there were sending messages with faked addresses in our domains.

So I was very happy when I found that in the OpenBSD 3.3 release, spamd offered a new feature called greytrapping, which meant we could actually put those fake addresses to good use.

From that point on, a high level view of mail delivery to our systems works like this:

  1. When a message arrives, spamd checks whether we have received mail from that host in recent times. Mail from known sending hosts is sent on to the mail server.

    If the message comes from a host we have not seen SMTP traffic from earlier, spamd answers one byte per second, finally presenting a "Temporary local error, please retry later" code and message. The host is greylisted. If the host returns with the same combination of sender IP address, from address, and to address, it will be let through, except in one circumstance which we will come back to very soon.

    If the message comes from a known bad sender IP address, spamd answers in a subset of SMTP at a rate of one byte per second, until the sending side gives up.

  2. When a message comes from a greylisted host, spamd checks whether the to address is in the list of spamtraps. If the message matches this criterion, the sending IP address is added to the list of hosts with a TRAPPED status and will receive the one byte per second treatment until the sending party gives up. Addresses that enter the spamd-greytrap set stay there until 24 hours after the time of last contact.

    If the to address is not in the list of spamtraps, spamd adds the sending IP address to the set of likely valid senders, and passes the message on to the real mail server.

  3. The real mail server performs various checks on incoming mail, including whether the sending domain has valid SPF, DKIM and DMARC information, and several kinds of checks of message contents against known indicators of spam or malware.

  4. If the message passes all those validity and content checks, the mail server checks whether the to address matches a valid recipient in the domains we handle.

  5. If the to address does not match a valid recipient, the message is rejected with a bounce message back to the sending party.

  6. If the to address does match a valid recipient, the message is delivered according to the configuration that is in place for that recipient.
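
The greytrap decision in step 2 boils down to a membership test against the spamtrap list. A minimal sketch, with a made-up file name and a toy classify function standing in for what spamd does internally:

```shell
#!/bin/sh
# Hypothetical spamtrap list; the real one is far larger and lives
# in spamd's own database rather than a flat file.
cat > /tmp/spamtraps.txt <<'EOF'
!!!!!!!!the!rest!is!trash!@bsdly.net
otaniasami@bsdly.net
EOF

# Report TRAPPED for a known spamtrap recipient, PASS otherwise.
# -x matches the whole line, -F treats the address as a fixed string.
classify() {
    if grep -qxF "$1" /tmp/spamtraps.txt; then
        echo TRAPPED   # sender gets the one-byte-per-second treatment
    else
        echo PASS      # sender is added to the likely-valid set
    fi
}

classify otaniasami@bsdly.net   # TRAPPED
classify peter@bsdly.net        # PASS
```

The real spamd of course does much more bookkeeping, but the pass/trap fork is exactly this kind of lookup.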

The main difference between this setup and any mail server you will have heard about is that we have a list of spamtraps. The source for spamtraps was originally the invalid addresses that turned up in our mail server logs.

Later, with greylisting in place, the obvious selection criterion was checking the greylist for addresses that did not match any local recipient or an existing spamtrap. Over time the number and kinds of sources expanded a bit. You can read all you want about that and more in the retrospective article Eighteen Years of Greytrapping - Is the Weirdness Finally Paying Off?

I have an hourly cron job that runs a script that exports the list of trapped IP addresses for use elsewhere and produces a report of various useful items, including a list of possible new spamtraps, extracted from the greylist.
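
The export part of that script is essentially a filter over spamdb(8) output, where trapped entries appear as TRAPPED|address|expiry fields. The sketch below feeds a simulated spamdb dump through the filter instead of the real database:

```shell
#!/bin/sh
# Simulated `spamdb` dump; on a real spamd host you would pipe the
# output of the spamdb command itself into the awk filter below.
spamdb_sim() {
    cat <<'EOF'
TRAPPED|203.0.113.7|1767225600
GREY|198.51.100.2|<sender@example.com>|<peter@bsdly.net>|1767225600
TRAPPED|203.0.113.9|1767225600
EOF
}

# Keep only trapped addresses, one per line, ready for export.
spamdb_sim | awk -F'|' '$1 == "TRAPPED" { print $2 }' |
    sort -u > /tmp/traplist.txt

cat /tmp/traplist.txt
```

Publishing /tmp/traplist.txt (or its real-world equivalent) over HTTP is all it takes for other sites to consume the blocklist.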

The overnight batch on January 8th, 2026 looked like this:

fuchigamikamogawa522@bsdly.net
itsukiishibashimitaka@bsdly.net
kinoshitario0310@bsdly.net
kuromotoasuka_1007@bsdly.net
kyokomatsudakita@bsdly.net
motohashi_katsuyama@bsdly.net
namikiakari-funabashi@bsdly.net
namikidaisukeyamagata@bsdly.net
nomurajunichi_1958@bsdly.net
numanoenergy@bsdly.net
otaniasami@bsdly.net
rikoizawa2004@bsdly.net
sakakihifumi@bsdly.net
seoreina-1991@bsdly.net
shimamura_asahikawa@bsdly.net
sugimototaro.0321@bsdly.net
teshigawaraishikawa@bsdly.net
tokitakota.1968@bsdly.net
tsuboneyuji_mori@bsdly.net
ueki-miyagi633@bsdly.net
yamabenoboru@bsdly.net

The local parts or user names are not what you would expect to find in a business based in Norway.

But these Japanese-sounding names are quite typical of the backscatter we have been seeing here during the last two or three years. Most likely somebody is running spam or phishing campaigns aimed at Japanese users.

The bounce messages do not ever reach an inbox, but they do turn up in the greylist dump in the hourly message. There, the <> in the fourth column reveals that the messages were indeed bounce messages.

The afternoon batch looked similar:

aoki-1990600@bsdly.net
fumiakihachiya0827@bsdly.net
hachiyaakira-stormchaser@bsdly.net
hanamurachihiro2000@bsdly.net
isaacclark.sarahclark@www.bsdly.net
izawanaoki0819@bsdly.net
junanzai0902@bsdly.net
kagawa.1965@bsdly.net
kashiwagi1967@bsdly.net
kawamurasoma1971@bsdly.net
kenmiyagawa1953@bsdly.net
kodama_1985@bsdly.net
kusunokiotsu@bsdly.net
liammartin.norakumar@lfja.org
machiasuka_1977@bsdly.net
masuda1955@bsdly.net
matsuoka-1976@bsdly.net
monmagenji654@bsdly.net
mutojunichi-noon@bsdly.net
nakaya-2017@bsdly.net
oscartanaka.zoeevans@lfja.org
rukanakagome1981@bsdly.net
ryuseiterasawa2021@bsdly.net
shinjiohno.futtsu@bsdly.net
tabatayusuke_0302@bsdly.net
tamuraryuji1995@bsdly.net
tsukuda2016@bsdly.net
usuda.ishikawa.star@bsdly.net
yoshidakatsuo@bsdly.net
yukitakahagi94770@bsdly.net

The addresses in both of these batches were added to our spamtraps, affectionately known as our imaginary friends, along with a number of synthetic entries as described in the longer article Eighteen Years of Greytrapping - Is the Weirdness Finally Paying Off?.

But that day, after processing new spamtraps and a bit of overnight mail, I sent a message to a business contact of mine who uses Google as their mail service provider. That produced a bounce message, some of which is quoted in this graphic:

Screenshot of part of a mail bounce message, saying '  peter.hansteen@gmail.com
    host gmail-smtp-in.l.google.com [2a00:1450:4025:402::1b]
    SMTP error from remote mail server after pipelined end of data:
    550-5.7.1 [2a03:94e0:182c::1      19] Gmail has detected that this message is
    550-5.7.1 likely suspicious due to the very low reputation of the sending
    550-5.7.1 domain. To best protect our users from spam, the message has been
    550-5.7.1 blocked. For more information, go to
    550 5.7.1  https://support.google.com/mail/answer/188131 a640c23a62f3a-b842a23243csi674652166b.80 - gsmtp'

I had put my own gmail.com address on Cc:, in part due to various earlier episodes with that provider. The diagnostic was the same for the other recipients.

This means that after several years of mostly managing to deliver messages sent from our systems to their intended recipients in Google managed domains (at random times deciding to put mail from here in their users' Spam folders), somebody decided it was time to disregard our domains' published SPF and DMARC information.

Their "very low reputation of the sending domain" is likely down to a very poor understanding of how modern mail delivery works.

More likely than not, the volume of messages sent with faked sender addresses claiming to be from our domains and a source IP address in the great elsewhere is vastly larger than the actually valid messages sent from valid users, all of which will come from the hosts listed in our published records.

The existence of spamtraps should not be a surprise either, after all we have been doing greytrapping for more than eighteen years.

I would posit that this is a mail services provider that has demonstrated that they do not, in fact, understand SMTP mail at all.

Fortunately, posting the data and a description of the incident to a mailing list for mail administrators indicates that persons who work for that operator read that list, since the problem lessened a bit a few hours later. My messages now only land in the recipients' Spam folders.

I would like to invite a debate about incidents of this type. The big operators can be quite nasty to smaller players, as we can see from this episode and the earlier one you can find by following links in this article.

What are the sensible standards of behavior (aka netizenship) to expect from mail service providers?

Should, for example, we consider making the large operators (or smaller ones, for that matter) liable for damages for mishandling their service offerings like this?

Followup in comments to this article (where possible) or to the social media post that led you to find this article.


Update 2026-01-09: One small batch of data might be of interest to my core readership. The output of a grep for "Unknown user" in the as-yet-unrotated mail server log is preserved in this file. My reading of this is that even the big names do not actually value their SPF/DMARC checks much at all.
Update 2026-01-10: A few hours after I had posted to the limited-distribution mailing lists for mail operators, and soon enough after I had published the original version of this article that they likely had not yet read it, I was contacted by people who were plausibly claiming to be close to the actual development and maintenance of the gmail codebase.

My impression, or at least the way I read what was in that communication, is that

  • Gmail is a large code base that has been evolving over a long time

  • The system is complicated enough and with enough factors (in the hundreds) involved that feeding the same message through the system several times is likely to produce different results each time

  • The messages the system produces for external parties to see are unspecific at best and may in fact point to factors other than those that actually determined the pass/no pass decision

  • The code has passed through many hands, and I at least get the impression that nobody currently there can honestly say they understand all aspects of the system

Despite all of this, they trust the system absolutely, claiming that it has a negligible false positive rate.

The last bit, at least, I think is a delusion sustained by the fact that they have made it pretty much impossible to file a problem report. It likely is easier for paying customers, but the only way in I have found is to post my gripes in public.

And yes, for every incident (there have been quite a few over the years) I have used side channels to contact my GOOG-using connections and ask them to file a problem report with as much detail as possible. That seems to help, sometimes.

I had thought up a really snappy and harsh one-liner to characterize all of this, but I think I'll save that for another occasion.


A Major Mail Provider Demonstrates They Likely Do Not Understand Mail At All is © 2026 Peter N. M. Hansteen (published 2026-01-08)
Max Stucchi and I will be giving a PF tutorial at AsiaBSDCon 2026, and I welcome your questions now that I'm revising the material for that session. See this blog post for some ideas.

For more information about the BSD conferences, see What is BSD? Come to a conference to find out! (also tracked).

For a broader overview and retrospective of mail and greytrapping, you may be interested in reading Eighteen Years of Greytrapping - Is the Weirdness Finally Paying Off?, which links to this piece and a number of other related resources.

You might also be interested in reading selected pieces via That Grumpy BSD Guy: A Short Reading List (also here).

Separately, pre-orders of The Book of PF, 4th edition are now open. For a little background, see the blog post Yes, The Book of PF, 4th Edition Is Coming Soon (also here). The latest information I have is that physical copies should be ready to ship by the end of January 2026.


Sunday, December 28, 2025

Why 451 is Good for You - Greylisting Perspectives From the Early Noughties

© 2025 Peter N. M. Hansteen

A software vendor was using SMTP spamware to send license keys customers had paid thousands for. A measured rant was in order, and turned out to be quite enlightening.

While looking through directories of old material, I stumbled upon what was most likely the main part of a complaint sent to a software vendor for failing to deliver a license key file the company I worked for then had paid some thousands of dollars for.

The file as I found it was dated August 2010, but was likely a preserved copy of a draft that was written some time before the first edition of The Book of PF (now in its fourth edition) was published, in response to the non-delivery incident. A quick investigation had me conclude from my spamd logs that their side did not play well with greylisting.


Note: This piece is also available without trackers but classic formatting only here.
I have revisited the theme of handling sites that do not play well with greylisting a number of times, such as the 2018 piece Goodness, Enumerated by Robots. Or, Handling Those Who Do Not Play Well With Greylisting (also here).

But I found these early notes interesting enough that I include them here, with only minor redactions to protect the (relatively) innocent:

SWCrafters' reaction to finding out that their messages do not get through, essentially blaming "inaccurate spam filtering" was not unexpected, but I will take the opportunity to explain a few things about how Internet email works and how this makes their position at odds with reality.

Even though Internet services are offered with no guarantees, usually described as 'best effort' services, a significant amount of effort has been put into making essential services such as SMTP email transmission fault tolerant, making the 'best effort' one with as close as does not matter to having a perfect record for delivering messages.

The EXECUTIVE SUMMARY of this message is that the matter which trips up the delivery of SWCrafters' license-carrying emails is the fact that their email sending software's best effort at delivery falls significantly short of what current Internet standards require.

The current standard for Internet email transmission is defined in RFC2821, which in section 4.5.4.1, "Sending Strategy", states

"In a typical system, the program that composes a message has some method for requesting immediate attention for a new piece of outgoing mail, while mail that cannot be transmitted immediately MUST be queued and periodically retried by the sender."
and
"The sender MUST delay retrying a particular destination after one attempt has failed. In general, the retry interval SHOULD be at least 30 minutes; however, more sophisticated and variable strategies will be beneficial when the SMTP client can determine the reason for non-delivery."
Contrast this with the application which sends the SWCrafters license information messages, which according to the data I have available opens two SMTP sessions within a second of each other (the time resolution I have in my logs at the moment), apparently discarding the message without delivery afterwards.

RFC2821 goes on to state that

"Retries continue until the message is transmitted or the sender gives up; the give-up time generally needs to be at least 4-5 days."
After all, delivering email is a collaborative, best effort thing, and the RFC states clearly that if the site you are trying to send mail to reports it can't receive anything at the moment, it is your DUTY (a MUST requirement) to try again later, after an interval which is long enough that your unfortunate communication partner has had a chance to clear up whatever was the problem.

A sending strategy which relies on every receiver to be receptive at all times, discarding undelivered messages after only one unsuccessful attempt, possibly makes sense if the data you are sending is unimportant or if your intended targets are unlikely to read or even want to receive the messages you send.

If on the other hand the data you are sending matter to either you or the intended recipient, it is in everybody's interest that you use the fault tolerance features which a compliant SMTP mail system offers.
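
To make the contrast concrete, a compliant sender keeps the message queued and retries on a temporary failure, along these lines. This is a toy sketch: the one-second retry interval stands in for the RFC's 30-minute minimum, and try_delivery is a stand-in for a real SMTP transaction that fails twice with a 451-style temporary error before succeeding:

```shell
#!/bin/sh
# Stand-in for an SMTP delivery attempt: records the attempt count in
# a file and succeeds on the third try, mimicking a greylisted host
# that answers 451 until the sender has proved it can retry.
try_delivery() {
    n=$(cat /tmp/attempts 2>/dev/null || echo 0)
    n=$((n + 1))
    echo "$n" > /tmp/attempts
    [ "$n" -ge 3 ]
}

rm -f /tmp/attempts
RETRY_INTERVAL=1   # seconds here; RFC 2821 says SHOULD be >= 30 minutes
MAX_TRIES=10       # the real give-up time should amount to 4-5 days

i=1
while [ "$i" -le "$MAX_TRIES" ]; do
    if try_delivery; then
        echo "delivered after $i attempts"
        break
    fi
    sleep "$RETRY_INTERVAL"
    i=$((i + 1))
done
```

Any queue-keeping MTA already behaves this way out of the box; it is the fire-once license mailer that does not.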

To put this in context, you need to remember that the SWCrafters license messages are the result of some SWCrafters customer ordering at least a thousand dollars' worth of software licenses, with no real upper limit on the dollar value of a single message. The system used to send these messages apparently does not understand SMTP status messages, discarding undelivered messages without a trace. Essentially, the system you are using treats the data your customers expect to receive in exchange for thousands of dollars paid as discardable.

The "greylisting" technique which is in use at justgottahave.faith and other sites means that our systems expect any SMTP sender to understand SMTP status codes and to respect "451 temporary error please try again later" messages. The hows and whys are detailed at https://www.greylisting.org/, with a tutorial which contains a 'close enough' description of how it's done at justgottahave to be found at https://home.nuug.no/~peter/pf/, with the particulars starting at https://home.nuug.no/~peter/pf/en/spamd.html.

We do content filtering as well, but this particular application never managed to get its data sent far enough to encounter content filtering until its IP address got whitelisted (listed as 'known good', or if you will, not having to conform to normal criteria).

Greylisting works extremely well, and since it is both standards compliant (essentially insisting on compliance) and simple to implement you should expect it to be deployed at the next site you are trying to send email to.

If I remember correctly, the other side found a way to send the missing license codes with something that did handle SMTP status codes correctly a short time after the mail that included some version of these notes was sent.

I had originally intended to make the URLs in the text here clickable, but changed my mind when I discovered that the current operators of greylisting.org have decided that a large language model (the current iteration of what passes for artificial intelligence) needed to be included in the processing. That will perhaps serve as a sign that the world does move on, if not necessarily in useful directions at all times.

If you want to explore the ins and outs of greylisting and the related phenomenon greytrapping, my recent piece Eighteen Years of Greytrapping - Is the Weirdness Finally Paying Off? (also here) is a way to start. For the greylisting part, the notes above capture the main points.


Why 451 is Good for You - Greylisting Perspectives From the Early Noughties is © 2025 Peter N. M. Hansteen (published 2025-12-28)
You might also be interested in reading selected pieces via That Grumpy BSD Guy: A Short Reading List (also here).

Separately, pre-orders of The Book of PF, 4th edition are now open. For a little background, see the blog post Yes, The Book of PF, 4th Edition Is Coming Soon (also here). The latest information I have is that physical copies should be ready to ship by the end of January 2026.


Friday, November 28, 2025

A Bus Ride and the (At Least) 3x UX FAILs

© 2025 Peter N. M. Hansteen

Norway is digital to a fault. That is why attempting to buy the ticket for a bus ride can reveal a cascade of user experience (UX) failures.

Most days, I either take a half hour stroll to get to my main customer's offices, or work from home. But occasionally, I need to visit my employer's offices. On those days, I take the bus for an easy 20-ish minute ride.


Note: This piece is also available without trackers but classic formatting only here.
This week, a few meetings and an internal session on UX Design were scheduled at my employer's site, so after my usual morning routine of making coffee, feeding the cat and going over overnight mail and news, I got ready to head out to the office.

On the way out the door, I opened the tickets app from our local bus company Skyss on my Android phone, selected the single ticket option, went on as usual to select Vipps as the payment method and cleared the authentication steps before locking and putting the phone away.

Some unrelated alert buzzed on the phone and had me unlock it again, only to see that the payment had failed with an Unknown error message.

I had been on the way out the door when the transaction was initiated, so I suspected that perhaps the network change from my home WiFi to Telia 5G had somehow disrupted connectivity. That would be a rare occurrence, but it has happened.

So I tried completing the transaction again, only to get the same result. After a couple more tries, the bus turned up and I got on my way.

So yes, I had technically taken a bus ride without paying. That means I in principle owe Skyss something like NOK 41.32 and would be at risk of getting fined something like NOK 950 if caught by the ticket inspectors without a proof of payment.

No inspectors turned up, however, so my day went on to some customer work performed remotely from the office, meetings, and finally the main item of the day: a short, compact, intensive but also quite interesting and inspiring session on UX design work.

The UX session concluded, and we went on to some socializing over pizza and refreshments.

Then, naturally, came the time for my bus ride back home. Once more I tried to purchase a single ride ticket via the app, only to be presented with the exact same error.

Unknown error.
And no way to get any details on what the actual error was.

So I got on the bus, again without completing a transaction, so my debt to Skyss would now have roughly doubled, and again I ran the risk of getting fined, should the inspectors turn up.

At this point my main suspect for the source of the failure was the Vipps app.

For context, the Vipps smartphone app is very close to being the default payment method in Norway, even more so for transactions involving online payments. Any failures or problems of any kind involving the Vipps service are almost guaranteed to make headlines with strongly worded articles and aggressively ugly comment threads.

So when I got back home, I opened the Vipps app on my phone, only to find that instead of its usual transaction UI I was presented with a question about whether I was a politically exposed person, with the options to answer basically, "Yes", "I was one previously", and "No".

But no way to bypass the prompt and perform a payment or other transaction.

The answer was obvious, but once I entered the answer, I was only taken to a screen with a single option, Update, presumably to update the app to a newer version.

Pushing the Update button took me and my Android phone to the Play store entry for the Vipps app, which offered the option to Open the app or to install it on my Android tablet in addition to my phone.

Choosing to Open the app only took me back to the same single-option Update screen, in a perfectly circular progression.

So after failing to find any other option, I ended up uninstalling, then reinstalling the Vipps app.

Which of course involves a completely new setup. Fortunately (or perhaps worryingly from a privacy perspective), the app managed to connect itself to my main bank account, inferred from my national ID number, which is a required bit of information in the sign-up process.

So UX fail #1 was in the Skyss app, where the developers had apparently trusted the Vipps app to either never fail or at least fail in some obvious way, so displaying any information from Vipps was deemed not necessary.

UX fail #2 would likely go to the developers of the Vipps app, who seem to have assumed that users will only ever interact with their system directly, never through a third party app that uses Vipps as the payment back end. Or perhaps the Skyss developers screwed up their app's API interaction with the Vipps app, possibly hooking in the app when they really should have been talking to the Vipps back end instead.

Finally, UX fail #3 goes clearly to the Vipps team, who appear to have failed to test the sequence of events that will be triggered by their Update button in the app. Whatever they did test apparently did not involve any recent-ish Android phone from those too-big-to-fail Koreans.

While an Internet greybeard like myself was able to figure out that the app needed to be dealt some minor violence, I can only imagine the utter puzzlement a less digitally native senior citizen of actually pretty much the same age as myself would have experienced when met with this exact scenario.

Bonus Track: Adobe Does This Too, With AI

For the developers I have just chided for not doing their jobs properly UX-wise, there might be some consolation in knowing that they are not alone in producing UX failures.

Returning readers will be aware that The Book of PF, 4th edition is coming soon (also here), and we have reached the time when the thing is in the last rounds of proofing.

For reasons probably best explained by the publisher's production team, the application we use for final proofing and related annotations is Adobe's Acrobat. A few years ago I decided that macOS is BSDish enough that I will use it quite a bit, so installing the no direct cost version of the app on a system within reach was a fairly painless exercise. As was the initial proofing round and an intermediate one.

Then, when the PDF for the final proofing cycle arrived and I loaded the two-hundred-and-fifty-some-page file, I discovered that Acrobat had acquired an AI Assistant component.

When the progress indicator showed that the file was ready to display for my final proofing round, the Acrobat AI Assistant oh-so-helpfully prompted me with

This looks like a long document. Would you like to see a summary instead?

Granted, my use case here is possibly not the typical one for a user of the gratis version of Acrobat.

But I will award Adobe the UX fail #4 bonus prize here, a UX FAIL because AI, for failing utterly to consider that some people do, in fact, create long-ish documents and prefer to see them in the full.


A Bus Ride and the (At Least) 3x UX FAILs is © 2025 Peter N. M. Hansteen (published 2025-11-28)
You might also be interested in reading selected pieces via That Grumpy BSD Guy: A Short Reading List (also here).