Sunday, December 30, 2012

It Really Should Not Be Necessary To Write This. We're Professionals, Dammit!

Misogyny and sexual harassment in tech circles are a real problem, and one that needs to be addressed head-on.

There are a few things we would be better off without. And the kind of behavior that Cryptoparty organizer Asher Wolf describes in her recent blog post about the goings-on at a fairly high profile hacker event comes in pretty high on my list of stuff that needs to stop right now.

I would have liked to say with some confidence that 'we are better than this', but unfortunately, as this post about DEFCON 20 shows, harassment at tech conferences is a recurring phenomenon. I'm sure searches with appropriate keywords will turn up even more unpleasant tales.

Being white, male and someone who only started going to tech conferences with any frequency in my early forties, I have never been the target of anything like this kind of harassment myself, but I must have been fairly close to episodes of that kind, since several of my favorite conferences have adopted clear anti-harassment policies. You can read BSDCan's here.

The takeaway from that piece should be roughly this: We are professionals.

Conferences are, to professionals, extensions of the workplace, and every conference attendee has the right to be treated like a colleague who attends the conference for the same reason you do -- to meet colleagues and hear news about recent developments in the field.

That includes even the sales rep from some vendor or other manning a booth or somebody giving a talk on a topic you're not interested in. They have the right to be treated as colleagues, even if you think them 'insufficiently technical' (I've had that one aimed at me: never contributed code, but keeps lecturing and writing about it all the same) or otherwise undesirable. A professional knows when to just walk away.

To put this all in context, most countries and territories have some sort of legislation that would get you fired or even jailed for the kind of behavior Asher or Valerie describes in a workplace environment, and it should stand to reason that anything that will get you fired at work should get you thrown out of a conference, no refunds or excuses offered or available.

Conferences, even 'hacker' type events, are not intended to be a barely supervised summer camp where pranks and bad behavior are generally admired, but from the tales we hear here, it's clear that at least parts of the tech/hacker community are in dire need of adult supervision.

It is up to organizers and attendees to make sure that all conference attendees get 'a harassment-free conference experience for everyone regardless of gender, sexual orientation, disability, physical appearance, body size, race, or religion', to quote the BSDCan policy statement.

If you consider yourself a professional in your field, act like one.

If you can do that, I'll be looking forward to seeing you at one of the upcoming conferences. I'll be announcing somewhere near here where and when I'll be giving talks.

Good night and good luck.


Tuesday, December 25, 2012

DDOS Bots Are People! (Or Manned By Some, At Least)

Mitigating a DDOS attack against your infrastructure involves both people skills and tech skills. Whining won't cut it at all. The underlying problem remains the sad fact that the botnet herders are able to find fresh hosts for their malware. Should we start publishing more information about those pesky DDOS participants?

I have a confession to make. For a while and up until recently, one of my name servers was in fact an open resolver. The way I discovered and fixed the problem was by way of a rather crude DNS based DDOS.

Regular readers (Hi, Bert!) will be aware that I haven't actually published anything noteworthy for a while. So I was a bit surprised to find in early December 2012 that bsdly.net and associated domains were under a DNS based distributed denial of service (DDOS) attack. The attack itself appeared to be nothing special -- just a bunch of machines sending loads and loads of rubbish DNS requests directed at the IP addresses listed as authoritative masters for a few selected domains.

The targets were on relatively modest connections (think SOHO grade), so their pipes were flooded by the traffic and the people who were relying on that connectivity were not getting much network-related work done. The sites weren't totally offline, but just about anything would time out without completing and life would be miserable. I've made a graph of the traffic available here, in a weekly view of that approximate period that nicely illustrates normal vs abnormal levels for those networks, generated by nfsen from pflow(4) data.

The networks under attack were in this instance either part of my personal lab or equipment used and relied upon by old friends, so I set out to make things liveable again as soon as I was able. Read on for field notes on practical incident response.

Under Attack? Just Block Them Then!
My early impulse was of course to adjust the PF rules that take care of rapid-fire brute force attacks (see e.g. the tutorial or the book for info) to swallow up the rapid-fire DNS as well. That was unfortunately only partially successful. We only achieved small dips in the noise level.

Looking at the traffic via tcpdump(8) and OpenBSD's excellent systat states view revealed that the floods were incoming at a fairly quick pace and were consistently filling up the state table on each of the firewalls, so all timeouts were effectively zero for longish periods. A pfctl -k directed at known attackers would typically show a few thousand states killed, only to see the numbers rise quickly again to the max number of states limit. Even selectively blocking by hand or rate-limiting via pf tricks was only partially effective.
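
If you want to try the same kind of manual intervention, the commands are along these lines -- a sketch only, where the address is a documentation-range stand-in and the attackers table is a name I made up here, one your rule set would need to reference in a block rule for the second command to have any effect:

$ doas pfctl -k 192.0.2.113                   # kill all states from that source
$ doas pfctl -t attackers -T add 192.0.2.113  # add it to a table your rule set blocks

The overload mechanism described in the tutorial does much the same thing automatically, filling such a table for you based on connection rates.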

The traffic graphs showed some improvement, but the tcpdump output didn't slow at all. At this point it was getting fairly obvious that the requests were junk -- no sane application will request the same information several thousand times in the space of a few seconds.

It Takes People Skills. Plus whois. And A Back Channel.
So on to the boring part. In most cases what does help, eventually, is contacting the people responsible for the security of the networks where the noise traffic originates. On Unixish systems, you have at your fingertips the whois(1) command, which is designed for that specific purpose. Use it. Feeding a routeable IP address to whois will in most circumstances turn up useful contact data. In most cases, the address you're looking for is abuse@ or the security officer role for the network or domain.
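
To illustrate, with a documentation-range address standing in for an actual offender, the lookup is no more complicated than this (the exact field names in the output vary from registry to registry, so if the grep turns up nothing, read the full output):

$ whois 192.0.2.123 | grep -i abuse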

If you're doing this research while you're the target of a DDOS, you will be thanking yourself for installing a back channel to somewhere that will give you enough connectivity to run whois and send email to the abuse@ addresses. If your job description includes dealing with problems of this type and you don't have that in place already, drop what you're doing and start making arrangements to get a back channel, right now.

Next up, take some time to draft a readable message text you can reuse quickly to convey all relevant information to the persons handling abuse@ mail at the other end.

Be polite (I've found that starting with a "Dear Colleague" helps), to the point, offer relevant information up front and provide links to more (such as in my case tcpdump output) for followup. Stress the urgency of the matter, but do not make threats of any kind, and save the expletives for some other time.

The issue here is to provide enough information to make the other party start working on the problem at their end and preferably inspire them to make that task a high priority one. Do offer an easy point of contact, make sure you can actually read and respond to email at the address you're sending from, and if necessary include the phone number where you are most easily reachable.

When you have a useful template message, get ready to send however many carefully amended copies of that message to however many colleagues (aka abuse@) it takes. Take care to cut and paste correctly; if there's a mismatch between your subject and your message body on anything essential, or inconsistencies within your message, more likely than not your message will be discarded as a probable prank. Put any address you think relevant in your Cc: field, but do not hold out any high hopes of a useful response from law enforcement. Only directly affected parties will display any interest whatsoever.

Fill in any blanks or template fields with the output from your monitoring tools. But remember, your main assets at this point are your people skills. If the volume is too high or you find the people part difficult, now is the time to enlist somebody to handle communications while you deal with the technical and analysis parts.

You will of course find that there are abuse contact addresses that are in fact black holes (despite the RFC stating basic requirements), and unless you are a name they've heard about you should expect law enforcement to be totally useless. But some useful information may turn up.

Good Tools Help, Beware Of Snake Oil
I've already mentioned monitoring tools, for collecting and analyzing your traffic. There is no question you need to have useful tools in place. What I have ended up doing is to collect NetFlow traffic metadata via OpenBSD's pflow(4) and similar means and to monitor the data via NFSen. Other tools exist, and if you're interested in network traffic monitoring in general and NetFlow tools in particular, you could do worse than pick up a copy of Michael W. Lucas' recent book Network Flow Analysis.

Michael chose to work with the flow-tools family of utilities, but he does an outstanding job of explaining the subject in both theory and in the context of practical applications. What you read in Michael's book can easily be transferred to other toolsets once you get a grip on the matter.
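
In case you want to set up something similar, the OpenBSD exporter side takes only a few commands. The following is a minimal sketch where the addresses and the port number are examples only, with 192.0.2.1 as the gateway doing the exporting and 192.0.2.10 as the box running the collector (nfcapd in NFSen's case):

$ doas ifconfig pflow0 create
$ doas ifconfig pflow0 flowsrc 192.0.2.1 flowdst 192.0.2.10:9995
$ echo 'flowsrc 192.0.2.1 flowdst 192.0.2.10:9995' | doas tee /etc/hostname.pflow0

Add set state-defaults pflow to your pf.conf if you want flow data exported for all states, and point the collector at the same port.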

Unfortunately (as you will see from the replies you get to your messages), if you do take an interest in your network traffic and start measuring, you will be one of a very select minority. One rather perverse side effect of 'anti-terror' or 'anti-anythingyouhate' legislation such as the European Union Data Retention Directive and similar log data retention legislation in the works elsewhere is that logging anything not directly associated with the health of your own equipment is likely to become legally burdensome and potentially expensive, so operators will only start logging with a granularity that would be useful in our context once there are clear indications that an incident is underway.

Combine this with the general naive optimism people tend to exhibit (aka 'it won't happen here'), and the result is that very few system operators actually have a clue about their network traffic.

Those who do measure their traffic and respond to your queries may turn up useful information - one correspondent was adamant that the outgoing traffic graph for the IP address I had quoted to them was flat and claimed that what I was likely seeing was my servers being utilized in a DNS amplification attack (very well described by Cloudflare in this blog post). The main takeaway here is that since UDP is basically 'fire and forget', unless your application takes special care, it is possible to spoof the source address and target the return traffic at someone else.

My minor quarrel with the theory was that the vast majority of requests were not recursive queries (a rough count based on grep -c on tcpdump output preserved here says that "ANY" queries for domains we did legitimately answer for at the start of the incident outnumbered recursive queries by a ratio better than 10,000 to 1). So DNS amplification may have been a part of the problem, but a rather small one (but do read the Cloudflare article anyway, it contains quite a bit of useful information).
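
For reference, the rough counting involved amounts to little more than running grep over the saved tcpdump output, something like this (the file name here is made up; in tcpdump's DNS decode a + after the query ID means the sender asked for recursion):

$ grep -c ' ANY? ' dns-noise.log
$ grep -cE '[0-9]+\+ ' dns-noise.log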

And to make a long story slightly shorter, the most effective means of fighting the attack proved also to be almost alarmingly simple. First off, I moved the authority for the noise generating domains off elsewhere (the domains were essentially dormant anyway, reserved on behalf of a friend of mine some years ago for plans that somehow failed to move forward). That did not have the expected effect: the queries for those domains kept coming beyond the zone files' stated timeouts, aimed at the very same IP addresses as before. The only difference was that those queries were now met with a 'denied' response, as were (after I had found the config error on one host and fixed it) any recursive queries originating from the outside.

The fact that the noisemakers kept coming anyway led me to a rather obvious conclusion: Any IP address that generates a 'denied' response from our name server is up to no good, and can legitimately be blackhole routed at the Internet-facing interface. Implementing the solution was (no surprise) a matter of cooking up some scriptery: one script that tails the relevant logs closely and greps out the relevant information, and another that issues a simple route add -host $offendingip 127.0.0.1 -blackhole for each offending IP address.
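
The actual scripts are trivial and too site-specific to be worth publishing as-is, but a rough sketch of the tail-and-blackhole part could look like the following, assuming BIND-style log lines where the client address appears as client 192.0.2.123#53211 and the string denied marks rejected queries:

#!/bin/sh
# watch the name server log, blackhole any source that draws a 'denied'
tail -f /var/log/messages | while read -r line; do
        case "$line" in
        *denied*)
                ip=$(echo "$line" | sed -n 's/.*client \([0-9][0-9.]*\)#.*/\1/p')
                if [ -n "$ip" ]; then
                        route add -host "$ip" 127.0.0.1 -blackhole 2>/dev/null
                fi
                ;;
        esac
done

Run it as root (or wrap the route command in doas), and keep in mind that a production version would want a whitelist so you do not blackhole your own resolvers.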

My users reported vastly improved network conditions almost immediately, while the number of blackhole routed IP addresses at each site quickly shot up to a 24-hour average somewhere in the low thousands before dropping rather sharply to at first a few hundreds, through a few dozen to, at last count, a total of 5.

There are a number of similar war stories out there, and a good number of them end up with a recommendation to buy 'DDOS protection' from some vendor or other (more often than not some proprietary solution where you get no clue about the innards), or to 'provision your stuff to infrastructure that's too big to be DDOSed'. Of these options I would prefer the latter, but this story I think shows that correct use of the tools OpenBSD and other sane operating systems provide for free will go a long way. More kit helps if you're lucky, but smarts win the day.

Should we publish, or 'name and shame'?
I strongly suspect that most of the handful of boxes that are currently blackhole routed by my setup here belong to a specific class of 'security consultant' who for reasons of their own want a piece of the sniffing-for-recursive-resolvers action. But I really can't be certain: I have no way except whois and guesswork to determine who mans the scanning boxes and for what purpose (will they alert owners of any flaws found or save it all for their next attack -- there just is no way to tell). Scans like those (typically involving a query for './A/IN' or the textbook 'isc.org/ANY/IN') are of course annoying, but whoever operates those boxes is very welcome to contact me in any way they can with data on their legitimate purposes.

During the attack I briefly published a list of the IP addresses that had been active during the last 24 hours to the bsdly.net web site, and for a short while I even included them as a separate section in the bsdly.net blacklist for good measure (an ethically questionable move, since that list is generated for a different and quite specific purpose). I am toying with the idea of publishing the current list of blackholed hosts in some way, possibly dumping to somewhere web-accessible every hour or so, if feedback on this column indicates it would be a useful measure. Please let me know what you think in comments or via email.
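
If I do end up publishing the list, the mechanics would be nothing fancier than a cron job along these lines -- a sketch only, where the web directory and the reliance on the B (blackhole) flag in the routing table output are assumptions about my particular setup:

#!/bin/sh
# dump the currently blackholed hosts to a web-accessible text file
route -n show -inet | awk '$3 ~ /B/ { print $1 }' > /var/www/htdocs/blackholed.txt

run hourly from cron or thereabouts.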

For the rest of you out there, please make an effort to keep your systems in trim, well configured with no services running other than for the specific purposes you picked. Keeping your boxes under your own control does take an effort, but it's worth your trouble. Of course there are entire operating environments worth avoiding, and if you're curious about whether any system in your network was somehow involved in the incident, I will answer reasonable requests for specific digging around my data (netflow and other). As a side note, the story I had originally planned to use as an illustration of how useful netflow data is in monitoring and capacity planning involves a case of astoundingly inane use of a Microsoft product in a high dollar density environment, but I'll let that one lie for now.

Good night and good luck.

Update 2019-12-18: Man page links updated to modern-style man.openbsd.org links (no other content change).

Wednesday, August 29, 2012

Ubuntu Made Easy May Have Achieved Its Noble Goal


A new No Starch Press title sets out with the intention of making Ubuntu Linux accessible to newcomers of the more timid kind. It may very well have succeeded.

If you've been part of the open source community for a while, you will have seen quite a few books that set out to make some version or other of Linux accessible to beginners.  While some of these efforts have been quite competent and admirable, others have been too shallow, barely rewritten from a template originally created with other systems in mind.  Others again have failed by being simply too massive and not actually very beginner oriented past the first ten to fifteen pages or so.

It's fair to say that a new Linux beginners' book has the odds stacked against it in several important ways. But then for a greying unixer like myself, there is always a lingering hope that the next Linux or other Unix for beginners book will finally get it right, and manage to strike the right balance between a gentle learning curve and providing enough information to be genuinely useful.

So when No Starch Press asked me if I would like a review copy of Rickford Grant and Phil Bull's Ubuntu Made Easy (I have a pre-existing business relationship with No Starch Press, see note at the end), I hesitated for a few minutes and told them I'd take a peek.

When my review copy landed on my desk a few weeks ago I had already had access to the PDF version for a little while.  Seeing the total page count in the PDF I was at first a little worried that this would be another one for the too big to be useful category.

But I needn't have worried. The writing in the approximately 420 pages of core text flows well and the subject matter is presented in a way that in my limited testing on less experienced users (I do not have easy access to truly fresh newbies, unfortunately) seems to build a useful and gentle learning curve.  A gentle learning curve does not necessarily mean low information density, however -- even my not totally green test subjects all found pieces of new information or useful tips in all chapters.

The text (with fairly frequent and useful illustrations) flows through a total of 22 chapters and four appendixes that take the reader from a very gentle introduction to the system (even suggesting that you run from the live CD at first, only committing to a permanent install once you've gotten your feet wet) through a sequence of smaller projects in a full range of Linux desktop life topics that are clearly designed to both expand the users' skill set and build their confidence in their own abilities.

After the project or task oriented chapters that make up the bulk of the book, the later chapters include an introduction to interacting with the Ubuntu community as well as a fairly useful symptom-oriented troubleshooting guide.

More experienced users may opt to skip or browse rather quickly through the early chapters, but as I mentioned earlier, even the more experienced may be able to find new information or at least a fresh perspective on familiar topics in various sections of this book.

So my verdict is that all in all, I think the authors of Ubuntu Made Easy may be very close to succeeding in the much longed-for goal of making a Linux beginner book that is actually useful to beginners.

Title: Ubuntu Made Easy - A Project-Based Introduction to Linux
Authors: Rickford Grant with Phil Bull
Published: No Starch Press (San Francisco), July 2012. 480 pages.
ISBN: 978-1-59327-425-2


Note: I have a pre-existing business relationship with No Starch Press.  The good people there had the patience to work with me through the process of writing three editions of The Book of PF, and they have sent me review copies of more books than the ones I've actually gotten around to reviewing.

Sunday, July 1, 2012

Keeping Your OpenBSD System In Trim: A Works For Me Guide

Keeping your OpenBSD systems in trim is dead easy. Occasional reboots are inevitable, but then that's where our famous redundancy features really shine, right? Read on for notes on upgrading your system. (Most of the steps here are relevant for new installs too, but do visit the Install Guide if you're new to OpenBSD.)

Note: Unless you have very specific requirements such as omitting sets, the base system upgrade can be performed painlessly with the sysupgrade(8) command followed by the sysmerge(8) and pkg_add(8) commands as described in this article. See the man pages or the OpenBSD FAQ for more information. (Updated 2021-07-02).
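
On such a system the whole procedure below boils down to roughly this -- the sysupgrade default is to fetch the next release if you are running a release and the latest snapshot if you are running a snapshot, with -s forcing a snapshot upgrade:

$ doas sysupgrade      # fetches the sets and reboots into the upgrade and back
$ doas sysmerge        # merge any configuration file changes
$ doas pkg_add -u      # bring your packages up to date

The rest of this piece describes the manual route, which is still worth knowing when you need to deviate from the defaults.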

My upgrades always start with the same couple of commands:

$ cd ~/upgrade
$ ls -l
$ ncftp eu-openbsd
ncftp ...penBSD/snapshots/amd64 > dir

The first two commands of this sequence show me the date of the latest snapshot I downloaded, and the next two give me the date of what is on the OpenBSD European mirror site.

The ncftp bookmark eu-openbsd in this case expands to

ftp://ftp.eu.openbsd.org/pub/OpenBSD/snapshots/amd64/

which is appropriate for a North European like myself with an AMD64 based system on hand. If you're not in Europe, it's likely you're better off with some other choice of mirror. Check the Getting Releases page on the OpenBSD site for a mirror near you. And of course, do pick the correct architecture for your system.

If the files on the mirror server are newer than the ones I have locally, I'll download them with

ncftp ...penBSD/snapshots/amd64 > get *

If there are no updates, well, that means I'll just check back later.

And of course, if you do not have the ncftp package installed, this command will work on any OpenBSD box with a full base system installed:

$ ftp -ia ftp://ftp.eu.openbsd.org/pub/OpenBSD/snapshots/`uname -m`/{index.txt,*tgz,bsd*,INS*,BOOT*,BUILDINFO,SHA*}

(thanks, Pedro Caetano!)

One of the things that makes doing OpenBSD upgrades so amazingly easy is the sysmerge(8) program.  Much inspired by FreeBSD's classic mergemaster(8), this program is, as we shall see by executing the following commands,


$ which sysmerge
/usr/sbin/sysmerge
$ file `which sysmerge`
/usr/sbin/sysmerge: Korn shell script text executable

actually a shell script that makes extensive use of sdiff(1) to highlight the differences between your installed version of a configuration file and the one from the source tree or your install sets, and offers to merge (hence the name) your current customizations into the newer version files and install them on the spot if you like.

Do take a peek at what the script does:

$ less /usr/sbin/sysmerge

possibly complemented by looking up the sdiff(1) man page if you want. But more about that later.

If you run regular upgrades like I tend to, with snapshots only days or at most a few weeks apart, the differences sysmerge(8) detects are likely few and trivial (but a word of caution: it usually pays to check the Following -current page from time to time).

If you do longer jumps, such as between releases, you will almost certainly find more differences, and in rare cases the changes in the configuration file baselines will be large enough that you will need to take some time out to edit the files by hand. In those cases, studying the upgrade notes relevant to your versions (and all intermediate ones if you do a long jump -- go to http://www.openbsd.org/faq/ and choose the Upgrade Guide link at the top of the left column) will be well worth your time.

The next step is to copy the fresh bsd.rd to the root directory of my boot disk:

$ doas cp bsd.rd /

With the bsd.rd in place, the next step is to reboot your system. Either choose your window manager's reboot option or simply from a shell prompt type

$ doas reboot

When the boot> prompt appears, type b bsd.rd and press Enter.

The familiar bsd.rd boot messages appear (much like the ones illustrated in my earlier installation piece, The Goodness Of Men And Machinery), and the first question you need to answer is:


(I)nstall, (U)pgrade or (S)hell?

Since we're upgrading, we type u (upper or lower case doesn't matter) and press Enter.

The next question up is the choice of keyboard layout:


Choose your keyboard layout ('?' or 'L' for list)

Here I type no for my Norwegian keyboard layout; if you want a different one, you can get a list of available choices by typing L instead and pressing Enter.

Once you've chosen your keyboard layout, the upgrade script prompts you to specify where your root file system is located. The upgrader's guess is only very rarely wrong, so more likely than not you'll just press Enter here.

The upgrader performs a full file system check on the designated root file system, and proceeds to configure any network interfaces it finds configuration files for in the designated root file system's etc directory. You will see the output of DHCP negotiations or similar.

When the network configuration is done, the upgrader prompts you to decide whether you want to perform a full file system check on the other partitions it finds.  The default choice here is no, so if you press Enter here, the upgrade script simply runs a fsck -p on each of the file systems listed in the system's fstab. You'll see the output for each one of these relatively lightweight checks.

Next, the upgrade script asks where to find the install sets:

Location of sets? (disk http or 'done') [disk]

Here, since I already downloaded the install set files to a local directory, the natural choice is disk. The upgrade script has already mounted all partitions from the system's fstab under /mnt/, so the files I downloaded to /home/peter/upgrade are now available under /mnt/home/peter/upgrade, and that's what I type in response to the prompt for where the sets are to be found.

Unless you have prepared a site.tgz for your site there is normally no reason to add or subtract sets from the default list, so pressing Enter will do nicely. The sets are copied and extracted, and when the extraction is done, you confirm that the upgrade is [done]. When the prompt appears, choose reboot (the default) to let the system boot into the upgraded operating system.

Watch the system boot, and if you look closely, you will notice that on first boot the updated sysmerge(8) program runs and does the obvious things that do not require manual intervention, and if there are non-obvious things left, a message and an email to root alert you to the need for a manual sysmerge(8) run.

I tend to do a sysmerge run anyway after upgrade, if only to see it complete silently:

$ doas sysmerge

That's it! There is a chance that there have been updates to packages you have installed (such as ncftp in my case), so I tend to do a package upgrade pretty soon after rebooting into a freshly upgraded system. The basic update command is pkg_add -u, but I tend to want more verbose output and prompts and generally choose this variant:

$ doas pkg_add -vVuUmr

This command will most likely just work, fetching packages from the site and directory specified in /etc/installurl which is automagically created and maintained by the install program. If the installer somehow guessed wrong, you need to adjust the contents of that file and restart pkg_add (fiddling with the environment variable PKG_PATH as was the procedure in previous releases will not do any good). During the relatively short time when snapshots identify as the release but actual -release packages are not yet available (as in the recent 7.5 release cycle) you may have to adjust the pkg_add command to include -D snap, like this:

$ doas pkg_add -D snap -vuV

For upgrading from one snapshot to the next, there is really not much more to the process. You will occasionally want to run

$ doas pkg_delete -a

to remove packages that were installed as dependencies (mainly libraries) but are no longer needed. In addition, you may want to install and run sysclean to help you identify other files such as obsolete configuration files that are no longer needed in your current configuration.
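
sysclean lives in packages, so getting a first report is as simple as the following -- expect a fair amount of output the first time, and do read it carefully before deleting anything:

$ doas pkg_add sysclean
$ doas sysclean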

If you're upgrading from one release to the next, it makes sense to check the Errata page for any patches you need to apply to the release you're running.
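
On releases from OpenBSD 6.1 onwards, applying those errata fixes is for the most part a matter of running syspatch(8), which works on -release systems only, not on snapshots:

$ doas syspatch -c     # list patches available but not yet installed
$ doas syspatch        # fetch and install them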

And if you've read this far and found this interesting or useful, please head over to the OpenBSD Donations page to make a donation to the project.

Update 2017-04-17: Minor updates to adjust to the post-6.1 world, such as no more CDs. Thanks to Marc Espie for (as always) valuable input.

Update 2018-06-12: Modern man.openbsd.org links are much simpler now than when this was originally written, links updated. Also small tweaks to make the text reflect the user visible changes to the upgrade program.

Update 2019-04-26 + 2024-09-01: Now that you've read this piece to the end and have probably gone through more than a few manual steps, you will probably be pleased to hear that the process is now about to become significantly simpler in almost all cases.

Only hours after the news of the OpenBSD 6.5 release hit, the new sysupgrade(8) command was added to OpenBSD-current, and so will most likely be part of the OpenBSD 6.6 release, expected in about six months. In its present form, sysupgrade(8) only performs the base system upgrade, so the pkg_add(8) parts mentioned in this piece will still be needed as a separate step. Further details will be available as development proceeds, but now you know about at least one thing to look forward to in OpenBSD 7.6 (due some time in late October to early November of 2024, with the release page starting to emerge some weeks ahead of that date).

Monday, May 28, 2012

In The Name Of Sane Email: Setting Up OpenBSD's spamd(8) With Secondary MXes In Play - A Full Recipe


Recipes in our field are all too often offered with little or no commentary to help the user understand the underlying principles of how a specific configuration works. To counter the trend and offer some free advice on a common configuration, here is my recipe for a sane mail setup.

Mailing lists can be fun. Most of the time the discussions on lists like openbsd-misc are useful, entertaining or both.

But when your battle with spam fighting technology ends up blocking your source of information and entertainment (like in the case of the recent thread titled "spamd greylisting: false positives" - starting with this message), frustration levels can run high, and in the process it emerged that some readers out there place way too much trust in a certain site offering barely commented recipes (named after a rare chemical compound Cl-Hg-Hg-Cl).

I did pitch in at various points in that thread, but then it turned out that the real problem was a misconfigured secondary MX, and I thought I'd offer my own recipe, in the true spirit of sharing works for me(tm) content. So without further ado, here is

Setting Up OpenBSD's spamd(8) With Secondary MXes In Play in Four Easy Steps

Yes, it really is that simple. The four steps are:
  1. Make sure your MXes (both primary and secondary) are able to receive mail for your domains
  2. Set up content filtering for all MXes, since some spambots actually speak SMTP
  3. Set up spamd in front of all MXes
  4. Set up synchronization between your spamds
These are the basic steps. If you want to go even further, you can supplement your greylisting and publicly available blacklists with your own greytrapping, but greytrapping is by no means required.

For steps 1) and 2), please consult the documentation for your MTA of choice and the content filtering options you have available.

If you want an overview article to get you started, you could take a peek at my longish Effective spam and malware countermeasures article (originally a BSDCan paper - if you feel the subject could bear reworking into a longer form, please let me know). Once you have made sure that your mail exchangers will accept mail for your domains (checking that secondaries do receive and spool mail when you stop the SMTP service on the primary), it's time to start setting up the content filtering.

At this point you will more likely than not discover that any differences in filtering setups between the hosts that accept and deliver mail will let spam through via the weakest link. Tune accordingly, or at least until you are satisfied that you have a fairly functional configuration.

When you're done, leave top or something similar running on each of the machines doing the filtering and occasionally note the system load numbers.

Before you start on step 3), please take some time to read relevant man pages (pf.conf, spamd, spamd.conf and spamlogd come to mind), or you could take a peek at the relevant parts of the PF FAQ, or my own writings such as The Book of PF, the somewhat shorter Firewalling with PF online tutorial or the most up to date tutorial slides with slightly less text per HTML page.

The book and tutorial both contain material relevant to the FreeBSD version and other versions based on the older syntax too (really only minor tweaks needed). In the following I will refer to the running configuration at the pair of sites that serve as my main lab for these things (and provided quite a bit of the background for The Book of PF and subsequent columns here).

As you will have read by now in the various sources I cited earlier, you need to set up rules to redirect traffic to your spamd as appropriate. Now let's take a peek at what I have running at my primary site's gateway. Grepping for rules that reference smtp should do the trick:

peter@primary $ doas grep smtp /etc/pf.conf

which yields

pass in log quick on egress proto tcp from <nospamd> to port smtp
pass in log quick on egress proto tcp from <spamd-white> to port smtp
pass in log on egress proto tcp to port smtp rdr-to 127.0.0.1 port spamd queue spamd
pass out log on egress proto tcp to port smtp


Hah. But these rules differ both from the example in the spamd man page and from the other sources! Why?

Well, to tell you the truth, the only thing we achieve by doing the quick dance here is to make sure that SMTP traffic from any host that's already in the nospamd or spamd-white tables is never redirected to spamd, while traffic from anywhere else will match the first non-quick rule quoted here and will be redirected.

I do not remember the exact circumstances, but this particular construct was probably the result of a late night debugging session where the root cause of the odd behavior was something else entirely. But anyway, this recipe is offered in a true it works for me spirit, and I can attest that this configuration works.

The queue spamd part shows that this gateway also has a queue based traffic shaping regime in place. The final pass out rule is there to make sure spamlogd records outgoing mail traffic and maintains whitelist entries.


Update 2017-05-25: At some point after this was originally written, I revised that rule set. They now read, with no quick dance:

pass in on egress inet proto tcp from any to any port smtp \
    divert-to 127.0.0.1 port spamd set queue spamd set prio 0
pass in on egress inet6 proto tcp from any to any port smtp \
    divert-to ::1 port spamd set queue spamd set prio 0
pass in log(all) on egress proto tcp from <nospamd> to port smtp 
pass in log(all) on egress proto tcp from <spamd-white> to port smtp 
pass out on egress proto tcp from { self $mailservers } to any port smtp


And of course for those rules to load, you need to define the tables before you use them by putting these two lines

table <spamd-white> persist
table <nospamd> persist file "/etc/mail/nospamd"


somewhere early in your /etc/pf.conf file.

Now let's see what the rules on the site with the secondary MX look like. We type:

$ doas grep smtp /etc/pf.conf

and get

pass in log on egress proto tcp to port smtp rdr-to 127.0.0.1 port spamd
pass log proto tcp from <moby> to port smtp
pass log proto tcp from <spamd-white> to port smtp
pass log proto tcp from $lan to port smtp


which is almost to the letter (barring only an obscure literature reference for one of the table names) the same as the published sample configurations.

Pro tip: Stick as close as possible to the recommended configuration from the spamd(8) man page. The first version here produced some truly odd results on occasion.

Once again the final rule is there to make sure spamlogd records outgoing mail traffic and maintains whitelist entries. The tables, again earlier on in the /etc/pf.conf file, are:

table <spamd-white> persist counters
table <moby> file "/etc/mail/nospamd"


At this point, you have seen how to set up two spamds, each running in front of a mail exchanger. You can choose to run with the default spamd.conf, or you can edit in your own customizations.

The next works for me item is bsdly.net's very own spamd.conf file, which automatically makes you a user of my greytrapping based blacklist.

Once you have edited the /etc/rc.conf.local files on both machines so the spamd_flags= line no longer contains NO (change to spamd_flags="" for now), you can start spamd by running /usr/libexec/spamd and /usr/libexec/spamlogd, and then run /usr/libexec/spamd-setup manually.

Note (update 2021-03-19): On modern OpenBSD versions, the easiest way to enable and start spamd is (assuming you have configured doas to allow your user to run rcctl:)

$ doas rcctl enable spamd
$ doas rcctl start spamd

Or if you want, reboot the system and look for the spamlogd and spamd startup lines in the /etc/rc output.

The fourth and final required step for a spamd setup with backup mail exchangers is to set up synchronization between the spamds. The synchronization keeps your greylists in sync and transfers information on any greytrapped entries to the partner spamds. As the spamd man page explains, the synchronization options -y and -Y are command line options to spamd.

So let's see what the /etc/rc.conf.local on the primary has in its spamd_flags options line:

peter@primary-gw $ doas grep spamd /etc/rc.conf.local
spamd_flags="-v -G 2:8:864 -w 1 -y bge0 -Y secondary.com -Y secondary-gw.secondary.com "


Here we see that I've turned up verbose logging (-v), and for some reason I've fiddled with the greylisting parameters (-G). But more significantly, I've also set up this spamd to listen for synchronization messages on the bge0 interface (-y) and to send its own synchronization messages to the hosts designated by the -Y options.

On the secondary, the configuration is almost identical. The only difference is the interface name and that the synchronization partner is the primary gateway.

$ doas grep spamd /etc/rc.conf.local
spamd_flags="-v -G 2:8:864 -w 1 -y xl0 -Y primary-gw.primary.com -Y primary.com"

With these settings in place, you have more or less completed step four of our recipe. But if you want to make sure you get all spamd log messages in a separate log file, add these lines to your /etc/syslog.conf:

# spamd
!!spamd
daemon.err;daemon.warn;daemon.info;daemon.debug /var/log/spamd


After noting the system load on your content filtering machines, restart your spamds. Then watch the system load values on the content filterers and take a note of them from time to time, say every 30 minutes or so.

Step 4) is the last required step for building a multi-MX configuration. You may want to just leave the system running for a while and watch any messages that turn up in the spamd logs or the mail exchanger's logs.

The final embellishment is to set up local greytrapping. The principle is simple: If you have one or more addresses in your domain that you know will never be valid, you add them to your list of trapping addresses with a command such as

$ doas spamdb -T -a noreply@mydomain.nx

and any host that tries to deliver mail to noreply@mydomain.nx will be added to the local blacklist spamd-greytrap to be stuttered at for as long as it takes.

Greytrapping can be fun; you can search for posts here tagged with the obvious keywords. To get you started, I offer up my published list of trap addresses, built mainly from logs of unsuccessful delivery attempts here, at The BSDly.net traplist page, while the raw list of trap email addresses is available here. If you want to use that list in a similar manner for your site, please do, only remember to replace the domain names with one or more that you will be receiving mail for.
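
A quick sketch of what that adaptation could look like follows; the TRAPLIST_URL variable stands in for the raw list linked above, and mydomain.nx is, as before, a stand-in for a domain you actually receive mail for:

$ ftp -o - "$TRAPLIST_URL" | sed 's/@.*$/@mydomain.nx/' | sort -u > mytraps.txt
$ while read -r addr; do doas spamdb -T -a "$addr"; done < mytraps.txt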

This is the list that is used to trap the addresses I publish here with a faster mirror here. The list is already in the spamd.conf file I offered you earlier.

If you want more background on the BSDly.net list, please see the How I Run This List, Or The Ethics of Running a Greytrapping Based Blacklist page or search this blog on the obvious keywords.

By the way, what happened to the load on those content filtering machines?

Update 2012-05-30: Titles updated to clarify that the main feature here is the spamd(8) spam deferral daemon from the OpenBSD base system, not the identically-named program from the SpamAssassin content filtering suite.

Update 2013-04-16: Added the Pro tip: Stick as close as possible to the recommended configuration from the spamd(8) man page. The first version here produced some truly odd results on occasion.

Update 2015-01-23: Changed the Book of PF links to point to the most recent edition.

Update 2015-08-01: Several correspondents have asked me for a useful nospamd file. Here's the one I use at bsdly.net, (http://www.bsdly.net/~peter/nospamd) collected over the years from various incidents and some SPF records extracted via dig -ttxt domain.tld.

Update 2017-05-25: In an act of procrastination while preparing slides for the upcoming BSDCan PF and networking tutorial, I added man.openbsd.org links for man page references, and edited in some minor fixes such as up to date rules.

Update 2021-03-19: Added a note about using rcctl to enable and start spamd on recent OpenBSD versions. A tangentially related incident had me review this article, and I found that it would be more useful to readers to be pointed at the recommended way to run a system. Returning readers may also be interested in the care and feeding activities for the traplist and other data we offer as described in the more recent articles Goodness, Enumerated by Robots. Or, Handling Those Who Do Not Play Well With Greylisting and Badness, Enumerated by Robots (both 2018).

And of course, if you're still wondering why OpenBSD is good for you, the slides from my OpenBSD and you presentation might help.

Wednesday, April 11, 2012

Why Not Use Port Knocking?

The robots currently at work knocking around for your guessable password could easily be repurposed to guess your Unicode password currently known as your port knocking sequence, and quite likely have been already. Plus, we already have authpf(8) for network-level restrictions on access.

Whenever you write about security in general and SSH security in particular (and for good measure also get slashdotted for your efforts), the comments inevitably turn up a variety of useful and less useful suggestions.

The April 6th, 2012 article about the possible preparations for a new round of slow bruteforcers was no different. Some of these recurring ideas are worthy of some more attention, and a more or less complete list is as follows -

  • Set up iptables with the recent module - a clear indication that the commenter did not, in fact, read the article. That module implements a subset of the state tracking techniques from my PF rule set referenced in the article, but unlike the PF rules it is specific to the SSH protocol rather than a general technique. And inevitably, the actual configuration the poster presents would in fact not trigger on the kind of traffic we see from the Hail Mary Cloud with its signature slow brute force attempts. Misunderstandings of this kind keep cropping up, for no good reason.

  • Disable root logins over the network. Yes, this is obviously useful, and if I remember correctly, PermitRootLogin no is the default in recent OpenSSH versions. Do remember to check what the setting is on your kit, obviously.
     
  • Use packet filtering or other means to restrict where users can log in from. Once again, a very useful suggestion, but some sites may require shell access from semi-random locations, so practical issues (aka real life) can interfere with this one.

  • Disable password logins entirely, and allow only key-based logins. Another very useful suggestion, and one that pays off in several ways. You do get the downside of managing your keys securely, but key-only logins are generally recommended practice and should be encouraged (see the sshd_config sketch right after this list).

  • Set up your sshd to listen on a non-standard port. Several sites report less log file noise after moving their sshd to a non-standard port. Easy to implement and possibly useful, but if a prospective attacker takes the time to do a real port scan for active services, they will find your sshd running on port 22222 fairly easily. I'd say moving your sshd to listen on a non-standard port postpones rather than solves the problem.

  • Use denyhosts to automatically add hosts with too many failed logins to a blacklist. Yes, an excellent idea in principle, but it comes with one caveat: Creating a blacklist automatically leaves you in charge of maintaining its content. Blacklist maintenance -- as in keeping your lists up to date and useful, is a complex enough issue that I'm fairly confident it is worth a separate column, and I most likely will return to the issue later. (Update: I did return to the subject, in a 2013 column called Maintaining A Publicly Available Blacklist - Mechanisms And Principles)

  • Why not set up port knocking? Well, as you may have guessed I'm about to tell you about a few whys not. I've been circling port knocking as an article subject for a long time, and now it seems that the time has come to outline why I think that port knocking is emphatically not a good idea. Read below for my take on the subject.
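
Before we get to the port knocking part, here is what the sshd_config side of the items above boils down to -- a sketch only, with the port number being the arbitrary example from the list, not a recommendation:

PermitRootLogin no
PasswordAuthentication no
# also needed to keep keyboard-interactive password prompts out
ChallengeResponseAuthentication no
Port 22222

followed by a restart of sshd for the changes to take effect.
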
Port Knocking: Threat Or Menace?
I must admit that the port knocking concept fascinated me at first, but all implementations had the downside of adding yet another piece of clearly experimental software to my system along with somewhat convoluted procedures for setting the thing up, so I left it for later.

At each of the later dates when I decided to look into the topic again, I discovered new reasons not to like the method (read this thread on OpenBSD-misc for a particularly disenchanting example -- how's single-packet unconditional remote reboot or remote kill your sshd for a feature?). Finally, it was the appearance of the slow bruteforcers (also known as the Hail Mary Cloud) that made me decide that port knocking is neither particularly elegant nor actually useful.

One of the reasons the port knocking concept seems so deceptively attractive is probably that it is fairly easy to explain (see the Port Knocking article on Wikipedia for a fairly thorough treatment). The target system runs with no services accessible to the outside, and in order to gain access a remote system must send a specific, pre-determined sequence of packets aimed at specified ports. Unless you know (or are able to guess) the correct sequence of ports, you will not be able to gain any access to the target system.

And after this explanation, the naive listener goes: "Oh, how clever, we send packets to random ports! Nobody will ever be able to guess our combination!"

But first impressions are not always correct. First, please take one step back and explain to me what problem this is supposed to solve. If the answer is the old canard "all the insecurities of that horrible SSH protocol", I will ask you to please point out to me just what those are, and if you will, describe which parts of what I am about to describe actually add value (that's "security" to you) in a real life scenario.

So let's go straight to the core of the matter, and consider what actual information content an attacker would need to get right in order to gain access to the system. The hopeful would need to know or guess a sequence of TCP or UDP ports. In both cases, the range of possible ports is a 16 bit number, with a total of 65536 possible values.

Each value is a 16-bit number, two bytes in size, equal to two ASCII characters or one Unicode character. Port knocking examples generally do not run to more than three packets, which means that the minimum amount of information a prospective attacker would need to get right in order to gain access is six bytes, equal to six ASCII characters or three Unicode characters.
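
To put numbers on it, a three-knock sequence gives you 65536^3, or 2^48, possible combinations -- less than what an ordinary eight-character password drawn from the 95 printable ASCII characters offers:

$ echo '65536 ^ 3' | bc
281474976710656
$ echo '95 ^ 8' | bc
6634204312890625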

Seen from this angle, all port knocking gets you is a somewhat cumbersome method for encoding your unicode password. In most implementations, that password would even be common to all users, with no easy way to change it.

And of course, in almost all other contexts where passwords are used, most sites dictate that you choose a personal password that is longer than six bytes. So that's at least two failures rolled into one: a password that's common to several or all users, and one that is hard or impossible to change, possibly even short enough to fail even basic guidelines.

The amount of information an attacker would need to get right, measured in number of bits or bytes is a fairly useful measure. I admit it's a fairly crude measure, but at least real data are easy to obtain, and since we are talking here about designing and maintaining systems, not writing them, how much cryptographic fun goes into generating the necessary data is not really relevant to the context. Proper cryptography is essential to maintain confidentiality and ensure the integrity of the communications, but we'll leave the details of the crypto bits for a later time.

One other factor that speaks against most implementations of port knocking is that the system runs with no ports open, with a daemon that parses firewall logs for anticipated sequences of contacted port numbers as the sole authority to determine whether access will be granted. We all know that all non-trivial software will contain bugs, so what are the chances that even a fairly simple daemon will at some point in the future be confronted with a situation that makes it terminate, making your system inaccessible (or as we say around here, "bricking your system")?

And one other thing, how do you spot an attacker? If you're looking for access attempts to closed ports, as you should be if you run a port knocking setup, how do you spot an attacker in the primal soup of noise that will anyway be arriving at your public interfaces? Do you, for example, record all sequences that could possibly be seen as unsuccessful attempts and put the source addresses in a "definitely banned" list?

I have not seen any useful answer to that one either. Doing a sudo tcpdump -nettti pflog0 action drop (that is, using tcpdump(8) to extract information on blocked traffic from my packet filtering setup, directly from the log device) on my gateway here certainly shows me access attempts to ports I didn't know much about -- what legitimate purpose, for example, is UDP port 287 actually used for? Once again it all boils down to the fact that if you rely on port knocking, you're really only implementing a cumbersome variant encoding of relatively short passwords.

On a side note (thanks to Adrian Close for reminding me), anybody who sits in the signal path between your knock target and somebody executing the correct knock sequence will be able to record the connections (using, for example, a netflow based tool -- expect to see Netflow-themed columns here in the future), making your closely guarded secret roughly equivalent to a plaintext password much like we used to see in old-style ftp setups.

Next, let us instead look at the thing port knocking is trying to protect, namely the sshd service. Typical SSH connection setup involves the transfer of at least the public host keys (typically in the some hundred bytes to several kilobytes range), followed by the user authentication which can involve passwords, keys or a number of cryptography based toys^H^Hols, in any case producing further transfer and validation of anything up to several more kilobytes of data (and in some setups, use of out-of-band methods like short lived keys generated by special-purpose devices) before any meaningful access is possible. The exchange and validation of host keys in itself involves more information that the attacker would have to get right in order to gain access than is involved in any port knocking setup I've seen demonstrated. And remember, host key exchange is only one early step of several on the way to authentication.

For the actual math on how data sizes and entropy are significant (yes, in some contexts, size does matter), see the Wikipedia entry and a somewhat shorter summary by Phil Ratcliffe.

All port knocking implementations share the problems I've outlined, and their proponents have tended to ignore or gloss over the issues rather than address them. It is an unfortunate and perhaps embarrassing fact that port knocking in practice comes down to implementing and maintaining a separate set of passwords, and in all likelihood you will be using tools that are less appropriate for the task than the ones that already come with the base system on any reasonable Unix-like system.

In an environment where we know there are large, distributed efforts underway to crack easily guessable passwords, it is naive not to assume that the same password guessing tools could be adapted to try sequences of TCP or UDP ports instead of character sequences. If there is in fact any significant number of systems that use port knocking today, it would surprise me if this isn't already happening. The way I see it, the belief that port knocking in fact offers an effective protection against attackers is a dangerous misunderstanding and probably only serves to divert your attention away from the real problems involved in keeping your systems running in a secure fashion.

If you're still not convinced, I'll show you a better way

Now, if you really, really want to implement port knocking anyway, I'll let you in on a dirty little secret: you don't need to write a new daemon or install anything besides what's already in the OpenBSD base system (or for that matter, on other PF-equipped BSD variants).

You can implement port knocking in a relatively straightforward manner via minor contortions in your PF rule set, and I know for a fact that people have done just that. The examples I was thinking of before I started writing this piece appear to have been removed from the public eye, but with a little effort using the obvious keywords, some well intended but actually quite bad advice on how to put together a seemingly elegant implementation will be on its way to you.

Be aware that you will be doing things the tools were not designed for, and more likely than not you will soon find yourself in a mess of rinky-dink workarounds that keep fighting back.

If, however, what you really want to do is create a separate line of defence with its own round of authentication required for access, your OpenBSD base system already contains a suitable tool in authpf(8). It's fairly easy to come up with a setup that lets your users log in to the authenticating gateway, using any authentication method sshd(8) supports.

You can even run your initial sshd(8) on a separate, non-standard port if you like, and a successful login at that point will only result in a set of packet filtering rules (possibly tailored to that specific user) being loaded. When the ssh session terminates, the rules are unloaded, and the system reverts to the state it had before the user authenticated.
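
A minimal authpf(8) setup sketch follows; the user name, the single rule and the file contents here are placeholders for whatever your site actually needs:

$ doas touch /etc/authpf/authpf.conf     # must exist, may be empty
$ doas sh -c 'cat > /etc/authpf/authpf.rules' <<'EOF'
pass in quick on egress proto tcp from $user_ip to port { ssh www }
EOF
$ doas usermod -s /usr/sbin/authpf peter

plus an anchor "authpf/*" line in your main /etc/pf.conf so the per-user rules have somewhere to land. The authpf(8) man page documents the details, including per-user rules files and banners.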

The tools are as flexible and robust as you have come to expect from the OpenBSD system tools (even if you're not running OpenBSD at the moment, you are by a large margin more likely than not using OpenSSH as your ssh client and server, which is supplied to the world by the OpenBSD project), and you can tailor your access profiles and services to your heart's content so they fit your needs.

If you've read this far, you've probably also come to the conclusion that port knocking is really not the way to go. Other, more appropriate tools exist. But how to go about improving the situation?

My suggestions follow below, with some specific suggestions that may have a personal slant.

A Path To Self Improvement

As with any problem that is really rooted in a lack of information or knowledge, educating yourself is the key. Read up on relevant subjects, go back and determine what problem you are actually trying to solve, and then perhaps shop around for tools that will help you solve that problem.

My suggestion is to go first to www.openbsd.org and browse the resources available there, including the online manual pages and the extensive FAQ document, which actually serves quite well as a user manual.

The next logical step is to go to the Orders page and order at least one copy of the latest release (and if you're reading this not too long after publication, do pre-order the upcoming release as well). If you're too impatient to wait for the postal services' mules to load up and deliver, you could download the release (or a -current snapshot if you're feeling adventurous) from a suitable mirror site and start installing right away.

If you don't order any merchandise, you really should head over to the donations page and donate at least an amount equal to the cost of the CDs and other stuff you would otherwise have ordered. There are several fit and frightening ghosts primed and ready to come haunt you if you don't.

If you want some literature to back up the official OpenBSD documentation, you could do worse than order The Book of PF, 3rd edition and Michael W. Lucas' 2012 book SSH Mastery. Using those links ensures the OpenBSD project gets a larger cut of the money as net profit than it otherwise would. There are also electronic versions available for both titles (see The Book of PF home page or the SSH Mastery home page for details). If you're really short of funds, even the free, online PF tutorial that The Book of PF evolved from will get you started.

And finally, there are few places better to collect useful information about OpenBSD and its sister operating systems than the various local and regional user groups and conferences, such as the upcoming BSDCan in Ottawa, Canada, May 9th through 12th, 2012, or EuroBSDCon, with this year's edition to be held in Warsaw, Poland, October 18th through 21st.

These events attract regular users (some of them quite proficient) as well as a useful subset of the people who write and maintain the tools.

Whatever you end up doing, buying or downloading, please keep in mind that it's your responsibility to keep your systems in the best possible shape, and make sure you monitor them properly, via log files or other means. Also see the Pledge of the Network Admin for my (slightly humorous) take on that responsibility. Along with the better parts of the tutorial, it made its way into the book, too.

Thanks
to Michael Dexter, Kristaps Dzonsons, Michael W. Lucas, Thordur Bjornsson and Henning Brauer for insightful comments and valuable input.

Copyright © 2012 Peter N. M. Hansteen


If you found this article inspiring, irritating, useful or otherwise moving, please let me know via the comments field just below here, or if you like, drop me a line at peter at bsdly dot eu.

Note: A Better Data Source Is Available
Update 2013-06-09: For a faster and more convenient way to download the data referenced here, please see my BSDCan 2013 presentation (also in article form as The Hail Mary Cloud And The Lessons Learned) which summarizes this series of articles and provides links to all the data. The links in the presentation and the article point to a copy stored at NUUG's server, which connects to the world through a significantly fatter pipe than BSDly.net has.

Friday, April 6, 2012

If We Go One Attempt Every Ten Seconds, We're Under The Radar

The Slime Also Evolves: New bruteforce ssh attempts come in at 10-second intervals, and they keep going even if we block them. Is this the warmup to a new iteration of the Hail Mary Cloud?

Note: This post has been updated with a correction, see the end of the article.

Regular readers will remember the activities of the Hail Mary Cloud, which turned up in authentication logs with large numbers of unsuccessful ssh login attempts, apparently coordinated across a large number of source IP addresses and with any individual host in the attacker set making new attempts at intervals of anything from several seconds to several minutes.


At the time, commentators took these activities as either the product of a truly inspired idea from a brilliant mind (after all, avoiding detection is essential), a token of almost unimaginable ineptitude, or perhaps just an overdose of faith that if you keep going long enough, even extremely unlikely things will happen.

It's been a little while now since we last saw the slow, distributed bruteforce attacks at work here at the BSDly labs (we've kept collecting data here), but one curious incident during the last week indicates that somebody, somewhere is still working on ssh cracking scripts that operate along fairly similar lines.

Bruteforce attacks can be fairly easy to detect and head off. In most cases the attacker comes in with a larger than usual number of login attempts in rapid succession from a single IP address, and with modern tools such as OpenBSD's PF packet filter, you can set up rules that use state tracking options to intercept them. The phenomenon is common enough that the bruteforce avoidance section is one of the more popular parts of my online PF tutorial (and of course, a slightly expanded version is available in The Book of PF).

I wouldn't publish or recommend anything that I haven't at least tried myself, so just to illustrate,

[Fri Apr 06 14:48:21] peter@skapet:~$ doas grep bruteforce /etc/pf.conf
table <bruteforce> persist counters
block log (all) quick from <bruteforce>
pass log (all) proto { tcp, udp } to port ssh keep state (max-src-conn 15, max-src-conn-rate 7/4, overload <bruteforce>)

The PF rules on BSDly.net's gateway have something much like the published example. This means that a traditional bruteforce attempt will end up something like this:

[Fri Apr 06 15:30:38] peter@skapet:~$ grep 203.34.37.62 /var/log/authlog
Apr 5 17:42:36 skapet sshd[32722]: Failed password for root from 203.34.37.62 port 44936 ssh2
Apr 5 17:42:36 skapet sshd[32722]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:38 skapet sshd[26527]: Failed password for root from 203.34.37.62 port 45679 ssh2
Apr 5 17:42:38 skapet sshd[26527]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:41 skapet sshd[29912]: Invalid user db2inst1 from 203.34.37.62
Apr 5 17:42:41 skapet sshd[29912]: Failed password for invalid user db2inst1 from 203.34.37.62 port 46283 ssh2
Apr 5 17:42:41 skapet sshd[29912]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:43 skapet sshd[30349]: Failed password for root from 203.34.37.62 port 46898 ssh2
Apr 5 17:42:43 skapet sshd[30349]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:46 skapet sshd[25557]: Invalid user prueba from 203.34.37.62
Apr 5 17:42:46 skapet sshd[25557]: Failed password for invalid user prueba from 203.34.37.62 port 47495 ssh2
Apr 5 17:42:46 skapet sshd[25557]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:48 skapet sshd[5380]: Failed password for bin from 203.34.37.62 port 48087 ssh2
Apr 5 17:42:48 skapet sshd[5380]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:51 skapet sshd[23635]: Invalid user postgres from 203.34.37.62
Apr 5 17:42:51 skapet sshd[23635]: Failed password for invalid user postgres from 203.34.37.62 port 48658 ssh2
Apr 5 17:42:51 skapet sshd[23635]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:54 skapet sshd[2450]: Failed password for root from 203.34.37.62 port 49307 ssh2
Apr 5 17:42:54 skapet sshd[2450]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:56 skapet sshd[16673]: Failed password for root from 203.34.37.62 port 49910 ssh2
Apr 5 17:42:57 skapet sshd[16673]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:59 skapet sshd[17522]: Failed password for root from 203.34.37.62 port 50503 ssh2
Apr 5 17:42:59 skapet sshd[17522]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:43:02 skapet sshd[4633]: Invalid user mythtv from 203.34.37.62
Apr 5 17:43:02 skapet sshd[4633]: Failed password for invalid user mythtv from 203.34.37.62 port 51218 ssh2
Apr 5 17:43:02 skapet sshd[4633]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:43:05 skapet sshd[25728]: Failed password for root from 203.34.37.62 port 51849 ssh2
Apr 5 17:43:05 skapet sshd[25728]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:43:08 skapet sshd[10487]: Failed password for root from 203.34.37.62 port 52565 ssh2
Apr 5 17:43:08 skapet sshd[10487]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:43:10 skapet sshd[31156]: Failed password for root from 203.34.37.62 port 53264 ssh2
Apr 5 17:43:11 skapet sshd[31156]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:43:13 skapet sshd[31956]: Invalid user mmroot from 203.34.37.62
Apr 5 17:43:13 skapet sshd[31956]: Failed password for invalid user mmroot from 203.34.37.62 port 53958 ssh2
Apr 5 17:43:13 skapet sshd[31956]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]


And looking up the current contents of the table shows our new perpetrator has indeed been caught:

[Fri Apr 06 15:34:23] peter@skapet:~$ sudo pfctl -t bruteforce -vT show
91.197.131.24
Cleared: Thu Apr 5 20:22:29 2012
In/Block: [ Packets: 1 Bytes: 52 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]
200.11.174.131
Cleared: Thu Apr 5 19:09:30 2012
In/Block: [ Packets: 1 Bytes: 52 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]
203.34.37.62
Cleared: Thu Apr 5 17:43:13 2012
In/Block: [ Packets: 1 Bytes: 52 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]


The table data show us one more thing worth noting: all of these bruteforcers sent exactly one packet after they were blocked, then gave up right away.

On Sunday, April 1st 2012, I noticed an unusually high number of ssh login attempts coming from two Chinese addresses (58.214.5.51 and 61.160.76.123), amazingly persistent, and for some reason they had not been caught by my bruteforce avoidance rules. Thinking I'd simply adjust my rate settings, I added those addresses to the table by hand and started looking at the authentication log versus my rule set. Then a little while later, I noticed that instead of just bowing out after being blocked, these two kept going. (I also tweeted about this at the time, though not accurately in all details.)
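
For reference, adding addresses to a table by hand is a quick one-liner; the table name is the one from the rule set shown earlier, so substitute your own as needed:

$ sudo pfctl -t bruteforce -T add 58.214.5.51 61.160.76.123

A subsequent pfctl -t bruteforce -T show will confirm that the addresses are in place.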

A little later that same evening, the <bruteforce> table looked like this:

[Sun Apr 01 22:58:02] peter@skapet:~$ sudo pfctl -t bruteforce -vT show
58.51.95.75
Cleared: Sun Apr 1 22:05:29 2012
In/Block: [ Packets: 1 Bytes: 52 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]
58.214.5.51
Cleared: Sun Apr 1 14:06:21 2012
In/Block: [ Packets: 3324 Bytes: 199440 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]
61.91.125.115
Cleared: Sun Apr 1 03:10:05 2012
In/Block: [ Packets: 1 Bytes: 52 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]
61.160.76.123
Cleared: Sun Apr 1 14:07:08 2012
In/Block: [ Packets: 3262 Bytes: 195720 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]


The two hosts kept coming, at a rate of roughly one attempt every ten seconds, and apparently ignored the fact that they were blocked in the packet filter rules and would be getting connection refused errors for each attempt.

Looking at the log data (preserved here along with data from various other attempts from other sources in the relevant period), both hosts were busy trying to guess root's password from the time they started until they were blocked. When the block expired after 24 hours, they had both apparently proceeded down similar lists of user names and were busy with the user name rooter:

Apr 2 14:10:06 skapet sshd[13332]: Invalid user rooter from 61.160.76.123
Apr 2 14:10:06 skapet sshd[13332]: input_userauth_request: invalid user rooter [preauth]
Apr 2 14:10:06 skapet sshd[13332]: Failed password for invalid user rooter from 61.160.76.123 port 46578 ssh2
Apr 2 14:10:06 skapet sshd[13332]: Received disconnect from 61.160.76.123: 11: Bye Bye [preauth]
Apr 2 14:10:14 skapet sshd[30888]: Invalid user rooter from 58.214.5.51
Apr 2 14:10:14 skapet sshd[30888]: input_userauth_request: invalid user rooter [preauth]
Apr 2 14:10:14 skapet sshd[30888]: Failed password for invalid user rooter from 58.214.5.51 port 47587 ssh2
Apr 2 14:10:14 skapet sshd[30888]: Received disconnect from 58.214.5.51: 11: Bye Bye [preauth]

They both kept going afterwards, at roughly the same rates as before. The host at 61.160.76.123 kept varying its rate and at one point sped up enough that it triggered the automatic bruteforce blocking.
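
For completeness: the 24-hour expiry mentioned above is not something PF does on its own. The usual companion to an overload table is a periodic cleanup job; something along these lines in root's crontab (the interval and lifetime here are just values that match this setup, adjust to taste) does the trick:

# every half hour, drop <bruteforce> entries whose statistics were last
# cleared more than 86400 seconds (24 hours) ago
*/30 * * * * /sbin/pfctl -t bruteforce -T expire 86400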

After running a fairly familiar alphabetic progression through a list of supposed user names, the remaining host finally gave up during the first hour of April 3rd, CEST:

Apr 3 00:36:24 skapet sshd[30287]: Received disconnect from 58.214.5.51: 11: Bye Bye [preauth]
Apr 3 00:36:33 skapet sshd[27318]: Invalid user clodia from 58.214.5.51
Apr 3 00:36:33 skapet sshd[27318]: input_userauth_request: invalid user clodia [preauth]
Apr 3 00:36:33 skapet sshd[27318]: Failed password for invalid user clodia from 58.214.5.51 port 58185 ssh2
Apr 3 00:36:33 skapet sshd[27318]: Received disconnect from 58.214.5.51: 11: Bye Bye [preauth]

Before we go into further details, I have a question for you, dear reader: Did anything like this turn up in your authentication logs during the same rough time frame? If your logs show something similar, please drop me a line at (lightly obfuscated) peter at bsdly dot se.
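
If you want a quick look at your own data first, a rough one-liner along these lines pulls out source addresses and attempted user names; the field positions are taken from the OpenBSD authlog excerpts above and will need adjusting for other log formats:

$ grep "Invalid user" /var/log/authlog | awk '{ print $10, $8 }' | sort | uniq -c | sort -rn | head

That only catches the nonexistent user names, of course; attempts against real accounts such as root show up as 'Failed password' lines instead.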

It could be instructive to compare this last batch with the previous samples. The log format differs slightly, since the previous attempts were aimed at FreeBSD machines, while this last round was aimed at a single OpenBSD host.

The whois information for the two hosts (58.214.5.51 and 61.160.76.123) points to Chinese networks in both cases, as far as I can tell in the same province and possibly in the same city, Wuxi, which appears to be one of several Chinese tech cities.

The slow rate of the login attempts and the sequence of user names attempted are both similar enough to the earlier distributed attempts that it's possible this is a limited experiment by the developers of the previous bruteforcing malware. The rate of roughly one attempt per host per 10 seconds is a significant speedup compared to the previous attempts, and it fits in the interval where blocking due to the rate of connections would most likely produce an unacceptably high number of false positives.
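
To put numbers on that: one attempt every ten seconds is six new connections per minute, which passes well under the 7-connections-in-4-seconds trigger shown earlier. A setting tight enough to catch it would have to look something like the sketch below (numbers purely for illustration), and a limit that low will also sweep up legitimate users and scripts that open several sessions within a minute:

# illustration only: tight enough to catch roughly one connection every ten
# seconds, but likely to overload legitimate scripted ssh use as well
pass proto tcp to port ssh keep state \
        (max-src-conn-rate 5/60, overload <bruteforce> flush global)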



It will be interesting to see what rate of incoming connections the next full-scale attempts will use. It is possible that the source addresses are somewhere close to the actual whereabouts of the malware developers, but at this point it's pure speculation.

At this point we can only keep watching our logs and make sure that our sshd configurations are in the best possible shape. If you need up to date advice on how to configure and use SSH safely, you could do significantly worse than grabbing Michael W. Lucas' recent book SSH Mastery.
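
While you wait for the book to arrive, the short version of password-guessing resistance is to take away what the guessers depend on. A minimal sshd_config fragment along these lines (option names from sshd_config(5); make sure your keys are in place before you turn off passwords) covers the essentials:

# /etc/ssh/sshd_config fragment (sketch): no direct root logins, keys only
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no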

Update 2013-04-25: Revisiting the data in preparation for my BSDCan 2013 talk (also to be featured, or rather previewed, at tonight's BLUG meeting), I realized that a trivial scripting error had led me to draw false conclusions.

The total number of attempts is correct, but both the number of hosts involved and the number of user names attempted were seriously off. The two hosts I mentioned in the article were the most active, but in fact a total of 23 hosts participated, trying a total of 1081 user names. Full data available here.

It seems the Hail Mary Cloud had shrunk, but not completely vanished as I thought at the time.


Note: A Better Data Source Is Available
Update 2013-06-09: For a faster and more convenient way to download the data referenced here, please see my BSDCan 2013 presentation The Hail Mary Cloud And The Lessons Learned which summarizes this series of articles and provides links to all the data. The links in the presentation point to a copy stored at NUUG's server, which connects to the world through a significantly fatter pipe than BSDly.net has.

Sunday, February 26, 2012

The Linux Command Line Is A Very Appealing Story

William E. Shotts, Jr.'s The Linux Command Line is a delightful read and a book you can hand to a junior colleague or friend with more limited Unix shell exposure, confident that they will come back significantly shell-wiser and probably happier.

For almost as long as 'Linux' has been a somewhat familiar term in IT circles, there has been a constant effort to wrap the system in some sort of graphical interface to make it 'user friendly'. The result is that the run of the mill Linux user is seldom if ever exposed to the classical Unix shell command line unless they actively seek it out. Nor is there any shortage of literature describing how to be productive on Linux using the various graphical interfaces.

With this context in mind, it's quite refreshing to find a book written with the purpose of pointing out that the real power and productivity gains of switching from something else to Linux (or other free unixlike systems) lie in tapping the power of the shell command line and shell scripting.

The book quite sensibly starts out with the premise that modern Linux users have most likely learned to use their computers mainly or even exclusively through graphical interfaces, and gently introduces the reader to the shell via a series of bite-sized but insightful examples and exercises, starting with a few simple commands and navigating the file system hierarchy.

The pace picks up gradually from there, with explanations and tips on the workings of the Unixy environment and how to tailor it to your own preferences, useful administration commands, a smattering of regular expressions and various other basic building blocks that generally find a useful application within a few pages of their first mention. The reader is instructed early on that the book is intended to be read from beginning to end like a novel, and when you follow that instruction the narrative and buildup work very well. At the two-thirds point in the book, the reader is shown how to compile their first C program, a task that isn't quite as difficult as it may sound if you pick a sensible chunk of code to start with.

The fourth and last section of the book (about a third of the total, by my eyeballing) is a very well written tutorial on writing useful shell scripts, which touches on a number of shell features and may very well serve as a useful refresher even for users with a little more shell experience than the intended primary target audience.

Although the author warns that the book is entirely Linux-centric, the BASH shell that is really the centrepiece of the narrative is available on all modern Unixes and unixlike systems, so The Linux Command Line is in fact a quite useful introduction to the shell for users of the BSDs and even, I imagine, MacOS users who want to see what can be done outside the wholly graphical interface.

Unix greybeards will find little that's entirely new here, but the book is a delightful read and you can hand it to a junior colleague or friend with more limited Unix shell exposure and be confident that they come back significantly shell-wiser and probably happier.

Title: The Linux Command Line
Author: William E. Shotts, Jr.
Publisher: No Starch Press, Inc.
ISBN-10: 1-59327-389-4
ISBN-13: 978-1-59327-389-7
Published January 2012, 480 pages

Available at better bookstores everywhere and directly from the publisher at http://nostarch.com/tlcl.htm.

Sunday, January 22, 2012

SSH Mastery: A Very Welcome Addition to Any Unix User's Bookshelf

The first paragraph of this book's afterword reads:

"You now know more about SSH, OpenSSH and Putty than the vast majority of IT professionals! Congratulations".

That claim will be true for any reader of SSH Mastery who has read the book up to that point and has incorporated at least some of the elements of the configurations it describes into their own environments.

"But why a book dedicated to a single command?", you might ask. Almost all Unixes and Unix-likes have incorporated OpenSSH, the free SSH that is developed as part of the OpenBSD project, and OpenSSH comes with excellent documentation in the form of several extensive man pages.

Well, that question in itself justifies this title's existence (there are in fact several programs in the OpenSSH suite), and readers familiar with Michael W. Lucas' work will appreciate hearing that his latest is task-oriented and well written, covering everything from basic secure shell access through to the peculiarities of setting up a virtual private network (VPN) using OpenSSH. An enterprising reader could find all the information in this book, or close equivalents, in the OpenSSH man pages or other online sources, but this book provides a very concise guide to both the basics and some rather advanced concepts, and gives you the vocabulary and understanding you will need to navigate the man pages successfully.

This book has several highlights, such as the very sensible and useful discussion of key-based authentication and how to set things up for a passwordless existence, a number of suggestions on how to distribute and maintain both host keys and user keys, and very readable and useful introductions to the various kinds of tunneling, forwarding and proxying available with the OpenSSH tools.

In particular I enjoyed reading the description of SSH-based virtual private networks (VPNs) in Chapter 13. This is one of the most clearly written and useful treatments I've seen of that subject, and for many readers this chapter alone will be worth the price of the book or even considerably more.

The book very sensibly covers OpenSSH on OpenBSD, FreeBSD and Ubuntu Linux, and users who are compelled to use Microsoft Windows desktops will be pleased to hear that configuration and usage information for PuTTY, the most popular free SSH client for that environment, is included wherever it's relevant to the task at hand.

Before Michael W. Lucas' new title was released in January 2012, the most recent widely available book about the Secure Shell protocol (SSH) and applications that support it was an O'Reilly title dated 2005. So even with high quality documentation available via the manual pages, it was time for a new title on the subject.

This title conveniently grew out of one of Michael W. Lucas' other technical writing projects, the second edition of Absolute OpenBSD. The SSH chapter of that manuscript simply kept growing until it made sense to branch the text off into a separate book. This probably means that the treatment of SSH in the upcoming OpenBSD title will be slimmer again, but separating the OpenSSH material out into its own book, with information for several different environments added, makes sense: it puts high-quality information about important tools in front of a larger audience.

I am convinced SSH Mastery is a title that Unix users and system administrators like myself will want to keep within reach on their Kindles or other ebook readers for a quick and convenient refresh of important concepts. If you're a student or learning your Unix skills, you will certainly find this to be a very handy guide that helps you grasp both the basics of SSH and several advanced concepts that are hard to find well described elsewhere.

The ebook is available in several formats via Amazon and other ebook outlets; a printed version is planned but was not yet available at the time of writing (January 22, 2012).

Title: SSH Mastery: OpenSSH, PuTTY, Tunnels and Keys
Author: Michael W. Lucas
Publisher: Tilted Windmill Press (January 18, 2012)