Saturday, October 29, 2011

You're Doing It Wrong, Returning Scoundrels

The numbers are in. The slow dunces still don't get it.

After five days of activity and no wins on my machines, the Hail Mary Cloud moved on. That means we have yet another complete set of data to summarize and analyze. The numbers are:

A total of 4773 attempts, none of them successful, involving 338 distinct source addresses. The most active host (according to whois located somewhere in the Netherlands) made 109 attempts, while at the other end of the scale 30 hosts made only a single attempt each. The wannabe attackers tried 944 different user names; the most frequently attempted user name by far was root, with several blocks of root-only accesses even during the otherwise purely alphabetical stage.

The current sample is too small to support any far-reaching conclusions, but it is tempting to speculate that with only 338 hosts participating, we are seeing an indication that their success rate is sinking (previous attempts counted a couple of thousand hosts), even though they may be at least partially succeeding in their secondary goal: avoiding detection. That success is partial at best: this blog post and the earlier ones, plus varied commentary at Slashdot, are indications that at least some of us are paying attention to our logs.

Another few observations worth making: 1) I have still not seen any of these sequences aimed at my Internet-facing OpenBSD systems, only Linux and FreeBSD ones. 2) It's likely that the miscreants are directing their attempts at several targets at the same time, so this sample is only a tiny fraction of the whole.

Reports of similar activity are surfacing from elsewhere, but very few people appear to be willing to share their data. It is of course even possible that the earlier episodes generated enough noise that better password policies (or preferably key logins only policies) are now in place, frustrating the random password guessers' attempts.

Whether or not you have been seeing these sequences in your authentication logs, please do yourself a favor and study your logs every now and then. It might even be worth the trouble to set up some kind of log collection and analysis infrastructure. Europeans may have to consider the legal implications of storing logs in light of the Data Retention Directive; denizens of the great elsewhere would do well to check whether any similar legislation applies.
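For a first pass, even a one-liner over the authentication log will surface this kind of pattern. A minimal sketch, where the log path and message format are assumptions (OpenBSD typically logs to /var/log/authlog, many Linux systems to /var/log/auth.log):

```shell
#!/bin/sh
# Summarize failed ssh logins: count attempts per source address,
# most active hosts first. The default log path is a guess; pass
# your system's actual log file as the first argument.
LOG=${1:-/var/log/authlog}

grep 'Failed password' "$LOG" |
    awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1) }' |
    sort | uniq -c | sort -rn | head
```

Swap 'Failed password' for 'Invalid user' to see the user name guessing instead; the distinctive slow-and-distributed signature is a large number of addresses each making only a handful of attempts.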

Good night and good luck.

Broken link fixed, sorry. Also, of course, this has been discussed earlier, most recently in this post and this one, as well as A low intensity, distributed bruteforce attempt (December 2, 2008), A Small Update About The Slow Brutes (December 6, 2008), Into a new year, slowly pounding the gates (December 21, 2008), The slow brutes, a final roundup (January 22, 2009) and The slow brute zombies are back (April 12, 2009). Read those for further info.

Update 2011-11-06: Another round of attempts has started, see the data aggregation page for the November 2011 entries. Of particular interest, perhaps is the List of participating hosts, sorted by number of attempts.

Update 2011-11-06 part 2: A note over at the ISC, "New, odd SSH brute force behavior" linked here, generating some additional traffic. Commenting over there requires a login and the confirmation email appears to be delayed by greylisting, so I'll comment here instead: I would not call this a particularly new approach. We've been seeing these attempts on and off since we started noticing them sometime in 2008, and it's entirely possible that there have been earlier attempts that did slip in under our radars. Analyses based on data from other sites beside mine would be very welcome indeed.

Update 2011-11-20: They keep coming back, now again after taking a 9 day breather (or possibly poking elsewhere in the meantime). Data accumulating again at the Hail Mary Cloud Data Page, with notes on the most recent activity at the very end. Please do play with the data, there's hope yet that some useful insights are to be found.

Note: A Better Data Source Is Available
Update 2013-06-09: For a faster and more convenient way to download the data referenced here, please see my BSDCan 2013 presentation The Hail Mary Cloud And The Lessons Learned which summarizes this series of articles and provides links to all the data. The links in the presentation point to a copy stored at NUUG's server, which connects to the world through a significantly fatter pipe than my own server has.

Sunday, October 23, 2011

You're Doing It Wrong, Or, The Return Of The Son Of The Hail Mary Cloud

Do Linux system administrators still in this day and age run with PermitRootLogin yes in their sshd configurations? Do they also allow password logins? Do they ever attempt to keep their systems up to date and reasonably secure?

Apparently the answers are yes, yes, and no, at least for some. The evidence is slowly accumulating in the authentication logs on one of my servers, published via The Hail Mary Cloud Data Page. There are several reasons why these attempts stand out, but it kind of helps that the number of users with sensible or indeed legitimate reasons for shell access to this particular server is quite limited.

I've ranted about this before, famously but not exclusively in a series of slashdotted and much-syndicated blog posts such as this one. For the TL;DR crowd, here's the summary:

If you're allowing root logins from the great elsewhere, you're doing it wrong.

If you've been allowing root logins from the great elsewhere, I wouldn't be surprised if it's one or more of your boxes doing the distributed password guessing.

If you can't remember the last time you checked that your system is up to date and properly configured, you're doing it wrong.
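The fix has been known forever; in sshd_config(5) terms it boils down to a couple of lines (a minimal sketch, and do make sure key-based logins actually work before you turn off passwords):

```
# /etc/ssh/sshd_config -- the relevant knobs
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
```

Remember to have sshd(8) reload its configuration afterwards.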

So nothing really new to see here, it's only yours truly seeing his hope of never seeing this silliness repeated dashed, again.

If you're interested in background information about the Hail Mary Cloud phenomenon, please do read the previous posts (A low intensity, distributed bruteforce attempt (December 2, 2008), A Small Update About The Slow Brutes (December 6, 2008), Into a new year, slowly pounding the gates (December 21, 2008), The slow brutes, a final roundup (January 22, 2009) and The slow brute zombies are back (April 12, 2009)) as well as the one referenced earlier.

Good night and good luck.

Update 2011-10-27: The alphabetic stage has started, see refreshed data for details.

Note: A Better Data Source Is Available
Update 2013-06-09: For a faster and more convenient way to download the data referenced here, please see my BSDCan 2013 presentation The Hail Mary Cloud And The Lessons Learned which summarizes this series of articles and provides links to all the data. The links in the presentation point to a copy stored at NUUG's server, which connects to the world through a significantly fatter pipe than my own server has.

Wednesday, August 3, 2011

Practical Packet Analysis Is Good Fun [Book Review]

If you've wanted to take a peek at what network traffic really looks like, but put it off because it sounded all too complicated, a new book may be just what you need to get started. Practical Packet Analysis is an excellent network analysis introduction.

When Chris Sanders' Practical Packet Analysis (with the subtitle Using Wireshark to Solve Real-World Network Problems) was originally published in mid-2007, to generally very favorable reviews and quite respectable sales for a network book, the memory of Wireshark's predecessor Ethereal's eviction from the OpenBSD package system for too many unfixed security bugs in a short time was fresh enough that most of us in the OpenBSD contingent generally shrugged and moved on. The criticism of Ethereal centered on the fact that there was next to no separation between the packet capture part of the system (which needs to run with elevated privilege) and the analysis-related parts (that do not in fact need to). At the time I was also rather busy working on a book of my own (full disclosure: also a No Starch title), so I mentally filed Wireshark under 'things to get back to', possibly to look into reviving the OpenBSD port.

Nevertheless I was quite pleasantly surprised when No Starch Press contacted me a little while back and asked if I would like to have a review copy of Practical Packet Analysis, second edition. The book is a pleasant read and good fun, and in fact the program compiles from source fine on OpenBSD, but more about that later.

After a brief introduction that contains summaries of the numbered chapters and practical information such as where to get the sample packet capture files, Practical Packet Analysis leads in with a chapter called Packet Analysis and Network Basics, which introduces the reader to network protocols in general and how packet sniffers fit into the overall picture.

The chapter then goes on to present the classical OSI seven-layer model and makes a credible attempt at mapping those layers to the slightly fewer layers of the TCP/IP stack before going through a discussion of common variants of networking hardware and touching briefly on classes of network traffic (multicast, broadcast and unicast).

The second chapter, Tapping into the Wire, focuses mainly on how and where to position your network tap, introduces a couple of different hardware devices for the purpose and even touches on ARP spoofing as a tap technique before giving a first glimpse of how to display and interpret captured traffic using the Windows utility Cain & Abel, and ends with a summary that includes a flowchart to help decide on the most useful tapping technique for the task at hand.

The third chapter, Introduction to Wireshark, introduces the book's main tool, with installation instructions for the more common operating systems and an initial walkthrough of the graphical user interface. While the install instructions are not correct in all details (when installing from source, you do not actually need elevated privileges until you get to the concluding make install step), they should be sufficient to get most readers started with minimal fuss.

Finishing up the chapter with a section that moves quickly to Your First Packet Capture is a nice touch that emphasizes the practical approach that's typical of this book.

Chapter 4, Working with Captured Packets, walks the user through some highlights of the filtering, analysis and presentation features that make working with the likes of Wireshark fun. Much of this functionality would be available or at least fairly familiar to a seasoned tcpdump user, but this walkthrough does illustrate that sometimes a graphical interface can be fun too.

The chapter also leads off with fairly weakly worded advice that packet capture and analysis are likely to be separate activities. Considering that Chapter 3 introduced Wireshark's ability to choose interfaces by point and click in a dialog box (indicating that the program runs with elevated privileges), I for one would have found a stronger admonition very helpful.

An inexperienced reader will likely want to view the packet capture of her own network traffic animated and in full color, so at this point or earlier it would have been useful to include a note that on Unixish operating systems, any of these three commands

$ sudo tshark -w - | wireshark -k -i -
$ sudo dumpcap -w - | wireshark -k -i -
$ sudo tcpdump -e -s 65535 -i <interfacename> -w - | wireshark -k -i -

will at least buy token security, in that only the packet capture runs with elevated privileges while the analysis tools run as your regular user (and yes, <interfacename> is where the actual interface name goes).

Fortunately, for the rest of the book, the activities are firmly centered around the packet capture files collected by the author himself. Chapters 5 through 11 present a variety of specific traffic scenarios that showcase the analysis and presentation features (including various graphing options) that make Wireshark a useful tool.

There are several memorable moments to be found here, including a packet capture that demonstrates what a successful compromise of a Microsoft Windows system (getting a privileged command shell) could look like at the network level. There is a wide variety of examples, and most of them are clearly designed to nudge the reader into exploring further. A good selection of protocols and protocol features are explained with a learning-by-doing approach, but there is enough that's only hinted at to send an interested reader to the Further Reading appendix to dive into the rest.

All in all Practical Packet Analysis, second edition stands out as a book that's a very useful learning resource, and one that makes the learning process a lot of fun. Seasoned network professionals will most likely not find much new material here, but the book is a good read for anyone with a networking interest and I'm pretty sure you'll enjoy the hours you spend leafing through it before you hand it over to your junior network admin or your students.

Title: Practical Packet Analysis, 2nd Edition - Using Wireshark to Solve Real-World Network Problems
Author: Chris Sanders
Publisher: No Starch Press, San Francisco
Published: July 2011
Pages: 280
ISBN: 978-1-59327-266-1
Price: USD 49.95 for print + ebook, USD 39.95 for ebook (PDF, Mobi, and ePub formats)

Tuesday, July 19, 2011

What to expect in OpenBSD 5.0 onwards

With OpenBSD-current tagged as 5.0-beta it's time to take a closer look at the upcoming release and the processes that make the OpenBSD project work.

Before you start getting all worked up about an upcoming dot-zero release, I'll tell you right away: Don't. Or maybe just a little bit. Release numbering in OpenBSD simply does not work the way most people expect. For this upcoming release, slated for general availability via FTP and other download methods on November 1, 2011 (and CD in pre-orderers' hands up to about two weeks earlier), 5.0 was simply the next available version number increment. OpenBSD releases every six months like clockwork, with the version number incremented by exactly 0.1 each time.

That does not mean that there is nothing to be excited about this time around, only that the OpenBSD approach is about guided and well planned evolution rather than revolutionary changes where large chunks of code are thrown away and replaced with new, untested code with bugs to be explored and exploited until a future dot-something-else release is finally considered stable.

The snapshots, as always available from your friendly local mirror, turned 5.0-beta during the early hours of today (July 19th 2011) CEST, and upgraders will notice two significant improvements before they're done running the installer (which in most other respects is largely identical to what I described in an earlier article).

The most visible set of changes are near the end of the process:

As the illustration shows, once the install sets are in place, the upgrade program will offer to run sysmerge(8), the program that's specifically designed to help you merge any required changes into your configuration files with minimum fuss and disruption. Once you have chosen to either run the merge or skip it, the upgrade program will offer to fetch updated versions of any non-free firmware detected on your system at first boot.

When that first boot is finished, you will find a number of improvements are in place. You can find an overview of the changes on the Daily changelog page over at the OpenBSD web site. We'll get into some of the more significant ones in this article.

But before we go into details of those changes, let's take a look at how the OpenBSD development process works. The Goals page on the project web site lays out the main project goals, while the Security page goes into somewhat more detail about the OpenBSD approach to security (including code audits and general all-tree bug squashing), with a very worthwhile Further reading section at the bottom of the page. If you want to go into even more detail (short of reading source code and commit logs), the Papers page has a large selection of presentations and papers by various OpenBSD developers.

There's a lot of good material linked from that page; among the more recent favorites are Damien Miller's Recent developments in OpenSSH, Henning Brauer and Sven Dehmlow's Puffy At Work - getting code right and secure, the OpenBSD way and Theo de Raadt's The OpenBSD release process: A success story.

The last one is Theo's own first-hand account of how the project has been able to consistently deliver high quality releases every six months since the project started more than a decade ago. It's well worth your time flipping through the entire presentation, but for an outsider the possibly most enlightening slides are the ones contrasting the traditional release process with the OpenBSD one.

In particular, pay attention to the two slides describing the OpenBSD release calendar, slide 16 and slide 17. If you do the math, you see every six-month cycle is divided into roughly four months of intense development followed by roughly two months of stabilization before the release is cut.

Anything that will not fit within four months of hacking without breaking the tree (leaving the system in an unusable state) will not make it in. Any large changes need to be carefully planned and introduced via a number of substeps, some purely preparatory, others introducing user-visible changes. One visible effect of this is that truly incompatible changes such as the NAT and redirection syntax change that occurred in OpenBSD 4.7 (prompting among other things an extensive rewrite of a certain book) are exceedingly rare, and if you're capable of reading the commit logs, you will see that it took several years of preparation before the switch happened.

It's all about clear thinking and proper planning, and changes will be introduced gradually and as they fit into the overall system. With all this as the background, it makes sense that 5.0 is just another number. Now let's take a peek at what changed since OpenBSD 4.9 and why it makes sense for both enthusiasts and others to start testing snapshots.

The changes listed here are my particular favorites, what you yourself will consider important may be different, depending on your specific use case:

BIGMEM on by default on amd64 - this is literally a big one. The amd64 architecture has true 64-bit capability, but the ghosts of hardware designs past keep haunting us, with legacy devices that continue to require a lot of dark magic to be handled correctly and safely. Expect Ariane van der Steldt (ariane@) to present at EuroBSDCon with the gory details. This commit happened fairly early in the OpenBSD 5.0 cycle.

Disk UID (DUID) support in all storage related parts such as mount, and by extension fstab (and DUID-style fstab enabled by default for new installs), so disk device renumbering will not be such a headache the next time you add or remove disks.
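For illustration, a DUID-style fstab(5) entry replaces the device node with the disklabel's unique identifier (the 16-digit hex label below is made up; disklabel(8) shows your disk's real DUID):

```
# classic style, breaks if sd0 becomes sd1 after adding a disk:
# /dev/sd0a  /  ffs  rw  1 1
# DUID style, stable across device renumbering:
52fdcd9f35abf397.a  /  ffs  rw  1 1
```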

The proxies (ftp-proxy(8), tftp-proxy(8)) now use divert(4) sockets for performance, meaning that your rdr-to rules for those proxies need to be rewritten to divert-to rules.
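In pf.conf terms the rewrite for ftp-proxy(8) looks roughly like this (port 8021 is the proxy's documented default listening port; the interface group is an example):

```
# pre-5.0 style, no longer effective for the proxies:
# pass in quick on egress inet proto tcp to port ftp rdr-to 127.0.0.1 port 8021
# OpenBSD 5.0 onwards:
pass in quick on egress inet proto tcp to port ftp divert-to 127.0.0.1 port 8021
```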

Rewrite of the old /etc/security (a shell script) to the new security(8) by Andrew Fresh and Ingo Schwarze, a much needed refresh of a crucial component.

Various IPv6-related improvements in PF and other parts of the networking code, meaning that traffic using the newer generation protocol is now a bit more manageable. Just like everywhere else, the IPv6-related work is still in the early stages of seeing full scale use, and more likely than not future releases will have more news in this area.

The beginning of the end for ALTQ, signalled by the introduction of always-on priority queues available as PF per-rule options, as noted in a previous article. This is an example of the longer term, incremental and well planned change like the ones I mentioned earlier. The internals of queueing and traffic shaping in OpenBSD are about to change in the long run, and it's even conceivable that the current ALTQ grammar will at one point use the newer code, but existing ALTQ setups will continue to work in OpenBSD 5.0.

Further package system improvements: Marc Espie (espie@) continues his refinement of the package handling with an overhauled pkg_delete(8) that has several important improvements, including the -a option (click the link to read the man page online).

The Daily changelog page contains a lot more changes than the ones I've highlighted here. Among the things I've mostly skipped are added support for new hardware and a host of platform-specific fixes and enhancements. And of course it's likely that the list of changes will grow visibly over the next few weeks via commits prompted by your testing and other user feedback.

The exact point in time when a release is cut and shipped off to production is never pre-announced. The best indicator is to look for the commit message that changes the -current version string back to N.m-current again after a brief period as N.m. A little later, the OpenBSD Orders page will allow preorders, and if you get your order in soon enough, you'll have your CDs and other swag before the official release date.

In about six months' time, you will see blog posts and other news items announcing the change to OpenBSD 5.1-beta, and we will be gearing up for yet another OpenBSD release. In any case, the best way to support the project (which produces, among other widely used software, OpenSSH, more likely than not your remote login system) in addition to contributing code, testing and direct donations is to go to the OpenBSD Orders page and order one or more items.

Update: The sysmerge run from upgrade feature was backed out in a last minute commit by deraadt@, but it's possible it will return in time for the 5.0 to 5.1 upgrade cycle.

Update 2011-09-09: Pre-orders have started, and the 5.0 release now has both a release page and a detailed list of changes since the previous release. Expect more details to emerge over the coming days and weeks, and please do go order some items to help fund further OpenBSD development!

Saturday, July 9, 2011

Anticipating the Post-ALTQ World

A peek into exciting new features in the upcoming OpenBSD 5.0 release and beyond, plus a question: how do we better teach our favorite operating system?

One of the more exciting pieces of news to come out of the Edmonton OpenBSD hackathon this week was that the face of OpenBSD traffic shaping is about to change.

This change is by no means the only news to come out of Edmonton (read source-changes@, either by subscribing or via the public archives, and, if you're at all BSD geekish, feel your excitement levels rise), but the new prio keyword that was added to the PF grammar and announced via a message to the tech@ mailing list titled new small, fast, always on priority queueing is both a major new feature and a prime example of how the OpenBSD project works, avoiding sudden "revolutionary" steps in favor of well planned evolutionary steps. The stepwise approach ensures, among other things, that changes are properly tested, and is visible in the fact that the OpenBSD source tree is as close as doesn't matter to always being in a buildable state, even during hackathons.

ALTQ, the ALTernate Queueing subsystem, was integrated into the PF codebase for OpenBSD version 3.3 (mainly by the efforts of the very same Henning Brauer who posted the message linked in the previous paragraph), and from that release onward, managing your filtering and your traffic shaping via your pf.conf file has been a highly appreciated feature of OpenBSD and other systems that have adopted the PF networking toolset.

Although much appreciated by its users everywhere, the integration was never quite an easy fit and the developers have been privately discussing how the queuing should be rewritten to avoid the separateness of ALTQ compared to the rest of the PF features.

There have been other concerns too. Not all of the features outlined in Cho's original USENIX paper were ever fully implemented, and the code had begun showing its age in several respects. By amazing coincidence another mailing list thread appeared in the same week as the new queueing system was announced, pointing out that ALTQ's total bandwidth parameter is a 32-bit value, meaning in practical terms that the maximum bandwidth ALTQ will be able to manage is in the 4Gbit range. Just changing the relevant variables to a wider type apparently breaks the code in other interesting ways (if you're adventurous, you could try playing with FreeBSD developer Ermal Luci's patch (available among other places here) and see what happens). The results could be hackishly interesting, but with the new queueing system set for gradual introduction, this may not be the optimal time to submit ALTQ patches.

Once again, the 32 bit value is a sign of the code's age. At the time ALTQ was originally written, a 32-bit value for bandwidth, and by extension an absolute cap at four gigabits per second, was a no less reasonable choice than the choice of 32 bits as the length of IP addresses some years earlier. And while we are still in the ALTQ world, the obvious workaround in settings where you have bandwidth that comes in increments of ten or forty gigabit is to do your filtering and traffic shaping at network locations where bandwidth is a bit more scarce. In the post-ALTQ world, things may become significantly different.
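The arithmetic behind that ceiling is easy enough to verify:

```shell
#!/bin/sh
# ALTQ keeps the total bandwidth in a 32-bit value, so the hard ceiling
# is 2^32 bits per second -- a shade over 4 Gbit/s. (This needs a shell
# with 64-bit arithmetic, which any modern sh/ksh/bash provides.)
echo "32-bit ceiling: $(( 1 << 32 )) bits/s"
echo "in Gbit/s:      $(( (1 << 32) / 1000000000 ))"
```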

The early outline of the new world comes in the form of in-ruleset traffic classification, on the face of it not that different from ALTQ when seen from a user perspective. From the upcoming OpenBSD 5.0 release onwards, you will be able to skip the familiar ALTQ queue definitions, but still do traffic classification like this:

pass in proto tcp to port ssh prio 6

meaning that incoming ssh traffic will pass with a relatively high priority (the range is 0 through 7), hopefully making it through earlier than the rest.

The new scheme even lets you duplicate the old speedup trick of setting aside a separate subqueue for ACKs and other lowdelay packets, like so:

pass in proto tcp to port ssh prio (5, 6)

This has yet to reach downloadable binary snapshots, but if you have the required bits in place, you can get an early start by building your own OpenBSD-current. See the relevant pieces of The FAQ, supplemented by man release.

It is important to note that in the upcoming release, priority rules will coexist with the existing ALTQ infrastructure. Over time, however, the plan is to replace ALTQ with a largely rewritten system where the internal workings of the HFSC and CBQ queues, for example, are not all that different. We are also likely to see the total bandwidth definition for an interface move out of your pf.conf and re-emerge as an ifconfig option.

Once a traffic shaping infrastructure that's functionally equivalent to ALTQ (or, it is hoped, superior in all ways including performance) is in place, ALTQ will eventually be removed. In the meantime, we have already seen commits that set default priority for certain traffic types, and reading source-changes@ along with testing snapshots will continue to be a fun and exciting experience in the weeks and months ahead.

Over time, it might even become necessary to do more book work; if so, you will hear about it here first.

Whatever happens on the book front, I'll more than likely get back to those new features in more detail in upcoming PF tutorial sessions (See here, here and here for announcements about the ones scheduled so far), and I'm trying to figure out a way to improve the quality of the tutorial experiences.

The lecture plus demonstrations format of the tutorial sessions has so far allowed for very limited interactivity with little or no hands-on experience for attendees during the sessions themselves. I would be very happy to hear ideas for and input on practical implementation of changes that would improve the experience. One possibility is setting up a set of virtual labs, hosted somewhere reachable from the conference locations. Suggestions are welcome via email or the comments field.

Update 2011-07-10: The prio code is in snapshots dated July 10, 2011 or newer. If you're upgrading from a previous version, the installer will now also offer to run sysmerge(8) and it will even offer to upgrade any non-free firmware it finds installed on your system.

Update 2011-09-14: Chris Cappucio wrote in, adding some more useful detail to ALTQ's history. Chris writes:

I did the initial altq port to OpenBSD and Kenjiro Cho (author of ALTQ) was invited to the next hackathon (I believe it was c2k2) along with myself to integrate it into the tree. I declined to participate for personal reasons but Kenjiro did turn my patch into an in-tree implementation, one or two years later the PF guys replaced the ALTQ packet classifier and configuration utility with PF itself. Now they are replacing the ALTQ scheduler framework completely with a simpler framework that still uses the PF classifier and configurator :)
Update 2015-04-02: The new queueing subsystem finally appeared in OpenBSD 5.5, and was the main motivation for revising The Book of PF into its 2014 third edition. If you follow the book links in this article, you will find the edition that describes both the new queueing system and the legacy ALTQ system.

Friday, July 8, 2011

SEK 1995 for six months' worth of trademark protection?

In a fit of rage, I went out and did something I wouldn't have even remotely considered doing just a few moments before. I'm now the proud owner of the domain.

Does somebody out there really want to register that domain? If they did, it's probably too late by now.

My friends all know that I'm not too fond of talking on the phone, and trying to sell me anything by cold-calling is just never going to work. So when somebody calling themselves "Nordic Domain Hosting", calling from +46 406660225 (a Swedish unlisted number as far as I can tell) did just that, it had to end badly.

The lady at the other end claimed that somebody was trying to register the domain, and seeing that I was the owner of doubleU-doubleU-doubleU-dot-bee-ess-dee-ell-why-dot-enn-ee-tee, would I be interested in blocking the attempt? It would just be a matter of 'trademark protection', and it would cost me the mere trifle of SEK 1995 (roughly USD 310 or EUR 218 at today's rate).

I told them right off that I wasn't terribly interested in owning a Swedish domain, but the lady then tried to talk me through a series of yes/no answers I assume were designed to set up a legal-sounding agreement, all the better to bill me afterwards. More likely than not, she was reading it all off the whois info for my existing domain.

They had the act pretty well rehearsed, except for one detail that irritated me immensely: they insisted on getting an oral confirmation that I was interested in their service. Only after an oral confirmation was in place would they be sending me anything in writing.

The conversation never progressed much beyond the initial "Is your name Peter [...]", "Are you duly authorized to act on $company_name's behalf", which is where I broke it off. For one thing the connection was terrible, and for another I was beginning to smell a rat. (Yes, I'm dense at times, I know.) At the end of too-many minutes, I thought I had finally got them to agree to send me their papers for me to review and hung up. Only to have the same lady call once more, continuing the script.

The second conversation did not progress much either, and for the third call a male claiming to be a supervisor had taken over. He didn't actually clinch a sale either.

So after three phone conversations, I used my regular registrar's web interface to register the domain myself, setting me back a whopping NOK 140 (roughly EUR 18 or USD 26). While I was writing this note, the "Registration complete" message from my registrar arrived here in my inbox.

So now I have another domain. I've expanded into Sweden.

Or as they say in the trade, get a geek really riled up, he'll go right off and buy a domain. In Sweden.

Wednesday, June 8, 2011

My First IPv6 Spam

On the day of The Great Experiment, an anecdote on how much the world stays the same even with IPv6, security-wise. No reason to stay all dotty; this is when the fun starts.

Happy IPv6 Day, everyone! As I write this column, the 24 hour worldwide Internet Protocol, version Six preparedness experiment is still in progress, with some hours to go before the summaries, no doubt penned by industry luminaries, will start appearing. In the meantime, I have a small IPv6 anecdote of my own to share.

Like most network oriented techies, I've had IPv6 somewhere in my field of vision for some time, with a footnoted TODO list to match. I started nagging local ISPs about their IPv6 plans some years ago, and as you've probably guessed, until very recently the answer at essentially all European ISPs was 'non-existent'. Among European nations, most of them pretty early in the queue when the original IPv4 address allocations were made, Norway was quite fortunate: with a total population of less than five million and enough NAT to go around, our IPv4-address-to-population ratio is good enough that there is no panic about running out of IPv4 addresses locally.

So this left us early adopters to seek out the next generation connectivity via tunnel providers such as Hurricane Electric or Sixxs, with the dancing turtle at the Kame Project website as the early reward for venturing into 128-bit address territory.

For the longest while, that would be pretty much it. Once you'd kept your tunnel stable for a while, you could generally have a /48 subnet for the asking. But finding any good content on IPv6 or large amounts of IPv6 traffic of any kind was not easy. At parties we'd even bitch about the famous lines in one IPv6 textbook that claimed that IPv6 routing tables were generally much smaller and more compact than their IPv4 counterparts -- of course they're smaller, there are essentially no sites, and they produce next to no traffic!

Then, famously, the time came when the last /8 range allocations of IPv4 address space were made, and the IPv4 internet was officially full (should we just start telling them to go away yet?), for some values of full, at least. Even if certain European organizations or nations are not in any danger of running out of IP addresses, sixteen years after the initial IPv6 RFCs entered the standards track (see RFC 1883 and friends), we found ourselves in a situation where any new large-scale implementations would likely be built exclusively with IPv6 or very nearly so, and the historical backdrop for today's experiment was complete.

I expect the summaries to say mostly that modern operating systems, at least the free ones such as OpenBSD, came very well prepared for the task. The two protocols are in fact not compatible, but running both will work, and for the vast majority of services, the sysadmin's worksheet looks like this:

1) Get hold of whatever number of IPv6 addresses your application requires

2) Configure your network interfaces with IPv6 addresses (some systems come preconfigured to do their own autoconfiguration)

3) Edit the required IPv6 addresses into your services' configurations

4) Add any AAAA records required to your domains' externally-visible zones.

And you're done, at least for now. With quad-A entries in your externally-visible DNS, you will start seeing IPv6 traffic hitting your services from elsewhere, not just your own test traffic.
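On an OpenBSD box, steps 2 and 4 of that worksheet might look something like the following sketch. The interface name, the addresses (drawn from the 2001:db8::/32 documentation prefix) and the zone entry are all made-up examples, not anything from my actual setup:

```shell
# Step 2: configure an IPv6 address on the (hypothetical) em0 interface
ifconfig em0 inet6 2001:db8:1::2/64
# and make the setting survive reboots:
echo 'inet6 2001:db8:1::2 64' >> /etc/hostname.em0

# Step 4: the matching AAAA record in the externally visible zone file:
# www.example.com.   IN   AAAA   2001:db8:1::2
```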

Your logs and other monitoring will show you how your rig behaves. Most likely it all just works, and your users won't notice anything different, except perhaps a dancing turtle (or, for some web sites, missing ads over IPv6, as noted by some early IPv6 day reports).
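Spotting that first IPv6 traffic in your logs can be as simple as looking for colons in the client address field. A quick sketch with a made-up two-line access log (the file name, log format and addresses are all assumptions for illustration):

```shell
# Create a tiny sample access log with one IPv4 and one IPv6 client
printf '%s\n' \
  '192.0.2.10 - - [08/Jun/2011] "GET / HTTP/1.1" 200' \
  '2001:db8::42 - - [08/Jun/2011] "GET / HTTP/1.1" 200' > access.log
# IPv6 client addresses contain colons in the leading field, IPv4 ones do not
grep '^[0-9a-f]*:' access.log
```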

I did those things a little while back on my home network, and when the first few days did not turn up anything, I almost missed what could be an important event: my first spam email delivered over IPv6. I only read message headers when a message stands out somehow, and in this case it was clearly spam, so I took a peek at the headers in case it would be worth blacklisting the sender manually:

Received: from ([2a01:238:20a:202:53f0::1])
by with esmtp (Exim 4.73)
(envelope-from )
id 1QFOdo-0001zq-E9
for; Thu, 28 Apr 2011 12:40:25 +0200
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1303987210; l=2236144;
X-RZG-AUTH: :KWgWcE6pb9/UNsdwkwZbgj6IM9/U3aYAugAbJE4rNBO+ejjApHAeOC4nD+Q=
Received: from PACO1 ( [])
by (jimi mo42) (RZmta 25.17)
with ESMTPA id J00252n3SABVBu ; Thu, 28 Apr 2011 12:29:29 +0200 (MEST)

The full message is preserved here with headers or as mainly message body as text if you're so inclined.

From the most recent Received: header we see that the delivery happened over IPv6, while the second tells us that the originator most likely did not have IPv6 connectivity, or at least did not see fit to use it if they did.

It's also worth noting that this message, like a surprising amount of spam in recent times, comes with what appears to be a valid DKIM signature. So much for cryptographic validation as an effective antispam measure.

Now of course my next step was to find out if this particular sender had tried to darken my mail spool before:

peter@skapet:~$ sudo grep /var/spool/exim/logs/main.log
2010-11-07 14:21:44 1PF5Bm-0006RI-KB <= [] P=esmtp S=251712
2010-12-14 20:42:49 1PSaln-000255-PQ <= [] P=esmtp S=759434
2011-01-30 21:13:30 1PjdeA-0001Nn-If <= [] P=esmtp S=1905781
2011-02-22 21:59:05 1PrzIX-0006jv-7x <= [] P=esmtp S=1227270
2011-04-28 12:40:26 1QFOdo-0001zq-E9 <= [2a01:238:20a:202:53f0::1] P=esmtp S=2210478 

Apparently they had, on more or less a monthly basis, so let's take a peek at the log entries for all of those messages:

peter@skapet:~$ for foo in 1PF5Bm-0006RI-KB 1PSaln-000255-PQ 1PjdeA-0001Nn-If 1PrzIX-0006jv-7x 1QFOdo-0001zq-E9; do sudo grep $foo /var/spool/exim/logs/main.log ; done
2010-11-07 14:21:43 1PF5Bm-0006RI-KB DKIM: s=domk c=relaxed/relaxed a=rsa-sha1 t=1289136101 l=251099 [verification failed - signature did not verify (headers probably modified in transit)]
2010-11-07 14:21:44 1PF5Bm-0006RI-KB <= [] P=esmtp S=251712
2010-11-07 14:21:44 1PF5Bm-0006RI-KB => peter <> R=localuser T=local_delivery
2010-11-07 14:21:44 1PF5Bm-0006RI-KB Completed
2010-12-14 20:42:46 1PSaln-000255-PQ DKIM: s=domk c=relaxed/relaxed a=rsa-sha1 t=1292355762 l=766689 [verification succeeded]
2010-12-14 20:42:49 1PSaln-000255-PQ <= [] P=esmtp S=759434
2010-12-14 20:42:49 1PSaln-000255-PQ => peter <> R=localuser T=local_delivery
2010-12-14 20:42:49 1PSaln-000255-PQ Completed
2011-01-30 21:13:26 1PjdeA-0001Nn-If DKIM: s=domk c=relaxed/relaxed a=rsa-sha1 t=1296418396 l=1927465 [verification succeeded]
2011-01-30 21:13:30 1PjdeA-0001Nn-If <= [] P=esmtp S=1905781
2011-01-30 21:13:30 1PjdeA-0001Nn-If => peter <> R=localuser T=local_delivery
2011-01-30 21:13:30 1PjdeA-0001Nn-If Completed
2011-02-22 21:59:03 1PrzIX-0006jv-7x DKIM: s=domk c=relaxed/relaxed a=rsa-sha1 t=1298408169 l=1240300 [verification succeeded]
2011-02-22 21:59:05 1PrzIX-0006jv-7x <= [] P=esmtp S=1227270
2011-02-22 21:59:06 1PrzIX-0006jv-7x => peter <> R=localuser T=local_delivery
2011-02-22 21:59:06 1PrzIX-0006jv-7x Completed
2011-04-28 12:40:21 1QFOdo-0001zq-E9 DKIM: s=domk c=relaxed/relaxed a=rsa-sha1 t=1303987210 l=2236144 [verification succeeded]
2011-04-28 12:40:26 1QFOdo-0001zq-E9 <= [2a01:238:20a:202:53f0::1] P=esmtp S=2210478
2011-04-28 12:40:26 1QFOdo-0001zq-E9 => peter <> R=localuser T=local_delivery
2011-04-28 12:40:26 1QFOdo-0001zq-E9 Completed

This tells us that they've consistently sent to an address that features only on my website, has never been used as a sender or return address, and has never been signed up for any mailing list of any kind. We also see that all the messages bar one pass the initial DKIM validity test.

There are several lessons to be learned here. One is that with a sensible choice of operating system and other software, the transition from classic IP version four only to a dual-stack configuration with both IP version four and IP version six running on your systems will likely be less dramatic than you have been led to believe.

Another is that in at least some respects the world stays very much the same with IPv6 as it was with IPv4, and content is likely to cross the protocol border. Or to put it slightly differently, the same caveats about ill-intentioned traffic still apply; any security measures you had in place before you went dual stack most likely need to be tuned to handle traffic from the brave new world of IPv6.

And of course, even if a number of implementation bugs have been found and fixed and a number of fundamental design flaws have been identified and worked around, only full scale testing like today's experiment (preferably sustained over longer periods) offers any hope of identifying and fixing the problems we haven't found yet. The looming IPv6 transition is likely to make and break careers, and some of us will have our share of both fun and nightmares while seeing to the confidentiality, integrity and general security of your data and your systems.

Good night and good luck, while we're slowly going from dots to colons.

Thanks to Sevan Janiyan for tweeting "Though the gmail website is reachable via IPv6, mail is still going via IPv4 :(" earlier today and reminding me of the incident that was the inspiration for this column.

Sunday, June 5, 2011

How Do We Appropriately Celebrate The Arrival Of The 100,000th PF Tutorial Visitor?

A nice surprise may be in line for a new visitor, and you (yes, you) can help me pick the surprise.

In late 2004, I started working on the text for a user group lecture for the BLUG meeting scheduled for the following January.

The original manuscript was in Norwegian, but after a rather successful and surprisingly well attended user group meeting, I wrote up an English version and posted both online. With some encouragement from Greg Lehey (I'd participated in the group of volunteer reviewers for his third edition of The Complete FreeBSD), I submitted a proposal to give a half day tutorial at the 2005 AUUG conference in Sydney.

The proposal was accepted, as were several of the followup submissions to other conferences, and via a sequence of conferences and some private sessions, the document kept developing. In early 2007 I started working on turning the manuscript into a usable book. As regular readers will be aware, the much revised second edition was completed during the second half of 2010, and even that version has recently been subjected to its first update thanks to the ongoing development of the OpenBSD operating system.

The original tutorial has kept attracting a relatively steady stream of new visitors from all over the world, even though I have not added any new material to the document since I started working on the book version. New material will, rather, find its way into slides for the next session (such as the most recent one at BSDCan 2011), or will be put in the queue for possible upcoming book material.

During periods when I have had little visible output to offer, it has been interesting to see that the documents attract visitors and the occasional comment or suggestion for improvement. Then a little while back, I realized that in a not too distant future, the number of unique host names or IP addresses that have visited the tutorial tree will roll past a hundred thousand (100,000).

That particular number is possibly significant only to me; I keep the count of unique hosts mainly to get an idea of how many people have looked at my work. The raw number of page hits for the same location (we don't have any numbers for the early days, when it was hosted at a now-defunct ISP) is fairly close to one and a half million, but I feel that number is a rather pointless statistic.
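For the curious, the unique-host count itself is nothing fancy; with an ordinary combined-format web log it boils down to a one-liner. A sketch with a three-line, made-up sample log (the file name, host names and log format are assumptions):

```shell
# Three sample hits from two distinct hosts
printf '%s\n' \
  'host-a.example.net - - [05/Jun/2011] "GET /pf/ HTTP/1.1" 200' \
  'host-b.example.net - - [05/Jun/2011] "GET /pf/ HTTP/1.1" 200' \
  'host-a.example.net - - [05/Jun/2011] "GET /pf/en/ HTTP/1.1" 200' > access.log
# First field is the client host; count distinct values
awk '{print $1}' access.log | sort -u | wc -l
```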

But when visitor number one hundred thousand arrives, how should we celebrate? I'm inclined to try to identify and contact the lucky visitor and offer a prize of sorts, but I have not quite made up my mind what and how. I'll welcome suggestions sent via email, with a recognizable subject.

It is worth mentioning that neither the tutorial nor this blog directly generates any revenue for me. I did for a short time have Google-supplied ads running on both sites, but for reasons that have never been quite clear to me, Google chose to terminate my AdSense account a few days before my second USD 100 transfer was due.

Friday, April 29, 2011

RFC1149: Ten Years of In-Flight Internet

It's been ten years since a small group of Bergen hackers implemented RFC1149, the Carrier Pigeon Internet Protocol, making in-flight Internet a reality well ahead of any airline.

On April 28th, 2001, my laptop received four ping packets. That in itself, you might say, was not a particularly noteworthy event.

However, on this occasion, the message was in the medium. The traffic my laptop received that day was transmitted via avian carriers, more commonly known as carrier pigeons.

The story of the hows and whys was rather quickly documented in main implementer Vegard Engen's "preliminary writeup", part of the CPIP Workgroup website (which was slashdotted and for a few days saw traffic in the million hits per day range).

Why did we do it? Because the RFC was implementable, and it was fun. Very few people outside tech circles have ever clued in on this, with a few notable exceptions such as Peter Meyers, who wrote the article The pigeon protocol that was published within a few days of the event.

If you're interested in this kind of thing (if you're still reading, I suppose you are), other writeups such as my own The kernel hacker speaks to polite people and assists in the flight of network packets - a conspirator's view of Alan Cox' April 2001 Bergen visit may be worth reading too.

After the event, Vegard made a presentation about the workgroup efforts at the next IETF conference, and was presented with a plaque (jpg, 76kB, jpg, 2.1MB) in return.

The CPIP WG activities have proceeded at a more leisurely pace in recent years. In 2005 I went to the AUUG 2005 conference to do an early version of the PF tutorial, and en route I made a presentation in Adelaide about the project (slides and accompanying notes are still available).

We're still looking for independent, interoperable implementations, though. Preferably on other free operating systems besides Linux. If we can entice our old pigeon partners to participate, we're more than willing to arrange for interoperability tests.

The world needs this to be on the IETF Standards Track.

Sunday, February 27, 2011

The Problem Isn't Email, It's Microsoft Exchange

The takeaway: don't pretend your appointment book can handle your email. And don't blame the Internet for all the compatibility issues. The main problem is Microsoft Exchange.

I care about email. In fact, a large part of how I have made a living over the years has depended on a reliable email service. I get a lot of email, and I send my fair share of it too - some of it is correspondence directly related to whatever I'm working on at the moment, some of it is personal, quite a bit comes from topic-oriented mailing lists such as openbsd-misc, and a large chunk of my mail archive consists of automatically generated mail sent by systems in my care. I've also been known to treat email much the same as other correspondence, rarely if ever deleting messages. When the mailboxes became too unwieldy I would transfer some of the contents to archive storage.

I've become convinced that a large part of the reason I don't mind dealing with large volumes of email is that I started doing it before Microsoft became an actor in the Internet email market. Way back in the late eighties and early nineties, email of the Internet (TCP/IP) kind would be handled by some sort of Unix box (a BSD or, by the mid-nineties, perhaps a Linux variant) that would frequently offer shell command line access, but more likely than not also email reading via POP or IMAP interfaces.

And it worked. Users who insisted on (or needed to be on) a Microsoft desktop could be persuaded to install a useful email client such as Eudora (now defunct, but fortunately Qualcomm donated the code base to Mozilla for integration in Thunderbird), and for mailboxes that became too unwieldy, the advice would be to just move content out of the mailboxes Eudora would load into memory by default, such as the ubiquitous Inbox. Over the years the volume and nature of email changed gradually, so along the way we learned to deal with spam and mail-borne Microsoft worms by installing content filtering and setting up other tools. Still, everywhere I worked, apart from the unavoidable but infrequent freak incidents, SMTP email was considered reliable, and your email archive was just that.

From other parts of the world we would hear every now and then stories about the death of email, and recently even a largish IT company announced that they were planning to get rid of all email in the near future. Email, the story goes, is just too time consuming and disruptive. I never quite understood what they were on about.

Then not too long ago I started working regularly in an environment where email is done the Microsoft way, via Exchange and Outlook. And it has struck me that they're right: If your email experience is via Exchange and Outlook, the net effect is both time consuming and disruptive.

Forced to work with an all-Microsoft desktop for the first time in years (where my most frequently used application by far is putty.exe, but that's beside the point here), I found Outlook's user interface clunky and with frankly insane default settings ("rich text" by default, newest messages on top and positively deranged quoting setups, more about that later) that were for the most part fortunately changeable, at least on a per mailbox basis.

The first revelation came when I heard a co-worker praise newer Microsoft Office releases "because 2007 and newer has discussions". I was forced to imagine what life must have been like without threading, as we've tended to call it on USENET and mailing lists since, well, the late 1980s. Outlook's predecessor Microsoft Mail of course did not support threading, and I suppose any plans to support threading via References: headers and suchlike received a major blow when the translators of MSMail decided not to leave the RFC-dictated "Re:" prefix alone, but rather translated it for localized versions, leading the way to the "Re: SV: Antw: VS:" cascades we see in the Subject: fields of correspondence between users of Microsoft mail clients and others.

No big surprise, then, that when Microsoft decided to "invent" threading for their messaging products, they again ignored the RFC-compliant References: header and chose to implement their very own version, based on a set of X-something headers that appear to make threading local to one Exchange server (and Outlook clients only). Messages that do not retain the X-something headers regularly show up as separate "discussions". To a Unix-head, all this is much like the "Recall" functionality that always draws smiles on mailing lists.
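For reference, the RFC-style threading mechanism lives entirely in a couple of message headers: the original message's Message-ID, and the In-Reply-To and References headers a compliant client adds to the reply. A small sketch with made-up message IDs:

```shell
# A minimal reply, carrying the headers RFC-style threading depends on
cat > reply.eml <<'EOF'
From: Second Emailer <second.emailer@otherplace.nx>
Subject: Re: A most enlightening message
Message-ID: <reply-1@otherplace.nx>
In-Reply-To: <original-1@onecompany.nx>
References: <original-1@onecompany.nx>
EOF
# A threading client walks References back to the thread root:
grep -i '^References:' reply.eml
```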

Being robbed of any easy way to track the relationships between messages in your mailboxes is bad enough, but there's more. Even with a limited sort of threading in place (even one that would break at the slightest interference from outside software), the damage had already been done by software that introduced counterproductive, confusing and time consuming response practices.

For reasons that have never become entirely clear to me, the developers of Microsoft email client software decided that direct and limited quoting of text from previous messages was not a priority. So rather than build on earlier work where we would have exchanges like

From: First Correspondent <first.correspondent@onecompany.nx>
To: Second Emailer <second.emailer@otherplace.nx>
Subject: A most enlightening message

Dear Second,

Here I offer an important insight that I would like to share.
Followed with random commentary that may or may not be important.

I hope you agree this was worth sharing.


where a typical response from Second would typically be something like this,

From: Second Emailer <second.emailer@otherplace.nx>
To: First Correspondent <first.correspondent@onecompany.nx>
Subject: Re: A most enlightening message

First Correspondent <first.correspondent@onecompany.nx> writes:

> Here I offer an important insight that I would like to share.

Thanks for sharing that! The next bit was really about something else
entirely, but is probably worth discussing over refreshments at an
appropriate time.

> I hope you agree this was worth sharing.

Oh, definitely! We'll get plenty of good out of this as time goes by.

Be seein' ya,
Second, jr

they chose a different approach entirely.

Keep in mind that other parts of the world were already used to email and related forms of communication such as Usenet news, where exchanges like these were commonplace and gave a reasonable certainty as to who said what, when.

What Microsoft did instead was to introduce a wholly new convention for email responses. The details vary over the various versions, but the main parts were to wrap any text information in pseudo-html formatting and place the entire previous message after the present correspondent's signature, with the cursor for the user to input text at the top.

Inline quoting like in the exchange I quoted earlier was tricky bordering on impossible, and adventurous users would resort to tricks like "my parts are the ones in magenta", only to discover that the carefully hand-painted text would fail to render correctly on any other software than their own version, down to the minutest patch level.

Thus was born the age of all-inclusive top-posting, where deciphering the true meaning of any of the paragraphs on top of the message could take more moments than you really have at your disposal, the time needed to decipher the cascade of earlier messages included. Not only would the ever-expanding, all-inclusive (but actually rather unreliable and far from tamper-proof) discussion-in-a-message convention confuse all readers involved, it also meant that the text and any file attachments would be stored multiple times, many times over for long discussions. It would take only a minimally uncharitable view of the average C?O's intellectual capacity to suggest that this was a prime mover behind the intense rush to "data deduplication" in storage marketing literature a few years ago.

Which takes us to the next item, taken in semi-random order: storage. Outlook by default uses its own binary format for local message storage, known as PST files or Personal Storage Table files, as the informative Wikipedia entry explains. In some configurations all mail is stored in a database of sorts on the Exchange server, and the user may or may not have the option to save messages to local PST files to work around space limitations on the server.

It is not uncommon for Exchange admins to turn off users' ability to save messages to PST files. One major reason is that more likely than not any saved PST file will end up on the end user's computer, with the consequence that potentially important messages may end up being backed up infrequently, if at all. Other reasons to avoid PSTs are size limits (originally 2GB but larger in newer releases), but the thing that tends to scare people the most are horror stories of data corruption to the point of absolute unrecoverability. As in gigabytes of your business or personal life gone, due to a scrambled PST file. There is anecdotal evidence that missing or scrambled PST files are a big headache for those who for various reasons want to look into the inner life of the Bush 43 administration.

So for records keeping involving your email, you're in a bind: Your mailbox size is likely to be limited -- every Exchange admin knows that large mailboxes will hurt performance, impacting all users of that server -- and the only way to save messages offline is a known-unsafe method. As far as I have been able to find out, there is no easy way (other than extracting messages to a separate system, say via IMAP) to export mail from the Microsoft product combo to any text or non-microsoft mailbox format.

Now weigh those practical considerations against legislation that dictates all business related correspondence be kept on file for a matter of several years. The exact number of years varies by location, but unless you've purchased one of the add-on solutions for archiving, you will be struggling to keep in line with requirements.

It all comes down to the shortsightedness or intellectual shallowness of Microsoft Exchange's designers, way back then. It does make sense that your appointment calendar application should be able to send and receive email, and it kind of makes sense that your appointments are within easy reach from your email client.

Those facts do not, however, dictate that the appointments calendar and your email archive should share a common storage backend. In fact, it's likely that the decision to merge the email storage and appointments storage into one is the direct cause of many of the inefficiencies of Microsoft Exchange.

In one recent incident involving a user mailbox of perhaps a couple of gigabytes, where the bulk of the data was made up of an estimated (since Outlook never managed to display totals before freezing) 1.5 million messages of about one kilobyte each, even deleting the messages using an Outlook filtering rule (the content was not of a nature that required long term storage) literally took weeks, typically proceeding at a rate of one message per second early in the process, speeding up to somewhere in the five to ten messages per second rate near the end. Fortunately the user in question was able to access email functionality via the Outlook web access interface while deletion proceeded, but anecdotal evidence suggests that the workload had measurable performance impact on other hosts attached to the same SAN.

Even if you tackle the storage hurdle, you more than likely will be tripped up by other inanities in the software design. There are bound to be other pitfalls, but here is my personal list of things that continue to irritate me (in addition to the default "rich text" formatting), coming as I do from the outside world:

Using Outlook, it appears to be impossible to see what your From: address will be before you send the message. The effect is sometimes quite bizarre; in my case, since the site has several domains, I of course ended up signing up to several mailing lists with the wrong address, banishing my posts there to moderator queues until I was able to study the real mail headers on a non-Microsoft system.

Also, Outlook is overly helpful in filling in address fields such as To: and Cc: from common address books and Active Directory, leading in at least one case I know of to a supposed-to-be-private message being sent to every mailbox in a largish corporation. That's when you learn that after the first reply, retracting the message won't actually work.

And no rant about Exchange would be complete without mention of the largely information-free bounce messages the system generates for non-delivery. A significant portion of the spamtrap addresses I use have been fished out of bounce messages, and the Exchange ones stand out as the ones practically guaranteed to exclude any information about where the triggering message came from, or when.

Summing up, if you're an executive who feels that your organization is saddled with inefficient email processing and dubious archiving, the likely culprit is not email as such, but rather the poorly constructed application some unscrupulous sales person inserted in your network for you.

Changing to a standards compliant, preferably open source, alternative is likely to save your organization costs at all levels, including hardware and software acquisition and maintenance costs as well as significant personnel time. At the same time a move to a standards compliant, open source solution will likely leave you in a better position with respect to security, information consistency and verification. A full treatment of email as a business tool would have had at least one column of similar length as this one on each of these topics, and I may return to those in future columns. In the meantime, if inefficient emailing bothers you, you may need to realize that a large part of your problem is Microsoft Exchange.

St. Patrick's Day PF tutorial in Tokyo: Returning readers may already be aware that I will be giving a PF tutorial at AsiaBSDCon 2011. My session will be on March 17th, known in some parts of the world as St Patrick's Day. You can register for my session and others here, hope to see you there!

Monday, January 31, 2011

Let Us Praise Fertility, For It Lets the Talkative Have Their Say [.NO #DLD]

Or: What the heart is full of, the mouth overflows with. In this case: traffic data and greed.

Today we have special reason to be grateful for human fertility. In particular, we must thank Minister of Justice Storberget's spouse for helping give the man the opportunity to take paternity leave right now. Without a minister of justice on paternity leave and the minister of defence standing in for him, we would not have gotten today's piece in Aftenposten, with the telling title "Faremo fears hackers, but is at the same time not too worried".

The interesting part comes only in the third quoted answer, where our acting minister of justice, after delivering the obligatory platitudes of marginal truth value ('this is not surveillance', 'we are strengthening privacy', and so on), concludes with

It would surprise me greatly if we do not also get a discussion about how traffic data can be used commercially.

We should remember that during her exile from politics (she left the minister of justice post in Thorbjørn Jagland's government in the mid-nineties, after the uproar that arose when it turned out that at least one member of the Lund commission had been under surveillance while the commission's work was in progress), Grete Faremo spent time in the IT industry, where she presumably picked up the idea of behavioral data as a commodity. It would not be surprising if Faremo's former employer Microsoft, of all companies, would be a very grateful recipient of detailed information about who talks and writes to whom, amounting to something close to complete behavioral data for the entire population. When commercial exploitation is only a slip of the tongue away, there is reason to prick up our ears.

But by all means: in that perspective -- that the motives behind the eagerness for nationwide monitoring of everyone's correspondence and contacts are not purely political, but also clearly commercial -- the supporters of the unpalatable directive oddly enough reveal more humanity than ever before. Perhaps it simply comes down to greed. Perhaps they are not primarily out to keep tabs on political opponents after all. They want to sell detailed information about your private life and mine to the highest bidder, preferably several times over to different buyers, each at their own price.

But before the data can be sold or otherwise handed over to commercial interests, it is essential that the data actually be collected. Today, collection and retention of detailed traffic data on a truly commercially interesting scale (read: To: and From: with exact timestamps for all email, the same for phone calls but supplemented with position data at usable resolution) is not permitted in Norway. The individual operators do the collecting, and what data is collected is determined by each operator's billing routines and operational needs.

The bill that the Labour Party currently stands alone in supporting presupposes that larger amounts of behavioral data than ever before will be collected and stored, and we will in all likelihood also end up with a single central, and extremely attractive, storage location. The bill does stipulate that release of data from the central repository shall be subject to judicial review, but it is somewhat striking how little our acting minister of justice worries about the correspondence and contact monitoring data going astray.

The police have always been able to retrieve and preserve these data when needed; central storage will simply make it easier for those with no scruples about using back routes to extract truly valuable data about you and me. At the same time, we know from the countries that have transposed the Data Retention Directive into national law that the directive clearly does not improve clearance rates or reduce crime, and that several EU countries have found the directive or its implementing laws unconstitutional. But apparently none of this worries our acting minister of justice. Perhaps the next data retention law will make it easier to sell data about your private life and mine across the table.

The parliamentary handling of the Data Retention Directive opens with hearings on February 7th and 9th. See (with an informative overview page) for details.

Sunday, January 30, 2011

I will not mindlessly paste from HOWTOs

Even with proper discouragement, mindless pasting is rampant, it seems

It had to happen sooner or later.

My incoming mail this morning had one item about what I thought was a fairly trivial misconfiguration, and I answered it like this:
From: (Peter N. M. Hansteen)
Subject: Re: interesting-traffic
To: Name Withheld <>
Date: Sun, 30 Jan 2011 12:44:35 +0100

Name Withheld <> writes:

> how should i handle the 'intersting-traffic macro not defined' error
> in pf.conf on obsd 4.8 reboot syntax error starting pf?

either define the macro (remove a # comment perhaps) or remove any
references to it.  Have you been pasting from a partial example
floating around the web perhaps?

- P

Then, a few sips of coffee later, it dawned on me: the macro interesting-traffic is more than likely one I made up for the bridge example in the short (and now rarely updated) version of my PF tutorial document. (I added the strongly worded note there as a reaction to this incident).

So it's at least partly my fault. I put an incomplete example out there, hoping that whoever stumbled upon the material would grasp the context and fill in any needed details. The important bits are all there, but when pasted into a config without checking, the result will be just as Name Withheld experienced.
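For anyone staring at the same error, the fix suggested in the reply above can be sketched as a pf.conf fragment. The interface name and port list here are made up for illustration, and note that pf.conf macro names may only contain letters, digits, and underscores; the point is simply that every $macro reference needs a matching definition somewhere above it:

```pf
# Define the macro before any rule references it.
# An incomplete paste typically leaves this line missing
# or commented out.
interesting_traffic = "{ 22, 80, 443 }"

# Assumed interface name, adjust to match your system.
ext_if = "em0"

# Without the definition above, loading this rule fails with
# a 'macro interesting_traffic not defined' syntax error.
pass in on $ext_if proto tcp to port $interesting_traffic
```

Running sudo pfctl -nf /etc/pf.conf parses the file without loading it, so a missing definition like this surfaces before a reboot rather than after.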

But then I can't really take the full blame: had he bothered to read the rest of the document, or even the book that grew out of it, he would have seen this admonition, which comes out even more clearly in the slides version. If for some reason the links are inoperative, here it is:

The Pledge of the Network Admin

This is my network. 

It is mine 
or technically my employer's, 
it is my responsibility 
and I care for it with all my heart

there are many other networks a lot like mine,

but none are just like it.

I solemnly swear 

that I will not mindlessly paste from HOWTOs.

I actually recite that at the very beginning of all my tutorial sessions, and while of course it's sometimes accompanied by giggles, the point remains: there is no substitute for actually understanding your configuration. Testing (if nothing else, a quick sudo pfctl -vnf /etc/pf.conf and reading the output before rebooting) would have helped enormously too.

For those hungry for fresh PF tutorials, I'll jump the gun and announce that there will be one by yours truly at AsiaBSDCon 2011, with the final schedule to appear at that URL shortly. A few other events are in the works too; more details here and at the PF tutorial page once details are settled.