Thursday, December 11, 2014
The Password? You Changed It, Right?
As my Twitter followers may be aware, I spent the first part of this week at the Passwords14 conference in Trondheim, Norway. More about that later; suffice it for now to say that the conference was an excellent one, and my refreshed Hail Mary Cloud plus more recent history talk was fairly well received.
But the world has a way of moving on even while you're not looking, and of course when I finally found a few moments to catch up on my various backlogs while waiting to board the plane for the first hop on the way back from the conference, a particular sequence stood out in the log extracts from one of the Internet-reachable machines in my care:
Dec 9 19:00:24 delilah sshd[21296]: Failed password for invalid user ftpuser from 81.169.131.221 port 37404 ssh2
Dec 9 19:00:25 delilah sshd[6320]: Failed password for invalid user admin from 81.169.131.221 port 38041 ssh2
Dec 9 19:00:26 delilah sshd[10100]: Failed password for invalid user D-Link from 81.169.131.221 port 38259 ssh2
Dec 9 19:03:53 delilah sshd[26709]: Failed password for invalid user ftpuser from 83.223.216.46 port 43261 ssh2
Dec 9 19:03:55 delilah sshd[23796]: Failed password for invalid user admin from 83.223.216.46 port 43575 ssh2
Dec 9 19:03:56 delilah sshd[12810]: Failed password for invalid user D-Link from 83.223.216.46 port 43833 ssh2
Dec 9 19:06:36 delilah sshd[14572]: Failed password for invalid user ftpuser from 87.106.66.165 port 52436 ssh2
Dec 9 19:06:37 delilah sshd[427]: Failed password for invalid user admin from 87.106.66.165 port 53127 ssh2
Dec 9 19:06:38 delilah sshd[28547]: Failed password for invalid user D-Link from 87.106.66.165 port 53393 ssh2
Dec 9 19:14:44 delilah sshd[31640]: Failed password for invalid user ftpuser from 83.223.216.46 port 35760 ssh2
Yes, you read that right. Several different hosts from widely dispersed networks, trying to guess passwords for the accounts they assume exist on your system. One of the user names is close enough to the name of a fairly well known supplier of consumer and SOHO grade network gear that it's entirely possible that it's a special account on equipment from that supplier.
Some catching up on sleep and attending to some high priority tasks later, I found that activity matching the same pattern turned up in a second system on the same network.
By this afternoon (2014-12-11), it seems that all told a little more than 700 machines have come looking for mostly what looks like various manufacturers' names and a few other usual suspects. The data can be found here, with roughly the same file names as in earlier episodes. Full list of attempts on both hosts here, with the rather tedious root only sequences removed here, hosts sorted by number of attempts here, users sorted by number of attempts here, a CSV file with hosts by number of attempts with first seen and last seen dates and times, and finally hosts by number of attempts with listing of each host's attempts. Expect updates to all of these at quasi-random intervals.
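The "hosts by number of attempts" tallies referenced above can be reproduced from a raw authlog with standard Unix tools. A minimal sketch, where the here-document sample stands in for the real log file and the file name is made up for the example:

```shell
# Sample lines standing in for /var/log/authlog; the field layout
# matches the sshd log format shown above.
cat <<'EOF' > /tmp/authlog.sample
Dec  9 19:00:24 delilah sshd[21296]: Failed password for invalid user ftpuser from 81.169.131.221 port 37404 ssh2
Dec  9 19:03:53 delilah sshd[26709]: Failed password for invalid user ftpuser from 83.223.216.46 port 43261 ssh2
Dec  9 19:14:44 delilah sshd[31640]: Failed password for invalid user ftpuser from 83.223.216.46 port 35760 ssh2
EOF

# hosts by number of attempts, busiest first: extract the token
# following "from", then count occurrences per host
grep 'Failed password' /tmp/authlog.sample |
    awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1) }' |
    sort | uniq -c | sort -rn
```

The same pipeline with `"user"` in place of `"from"` produces the users-by-frequency extract.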
The pattern we see here is quite a bit less stealthy than the classic Hail Mary Cloud pattern. In this sequence we see most of the hosts trying all the desired user names only a few seconds apart, and of course the number of user IDs is very small compared to the earlier attempts. But there seems to be some level of coordination - the attackers move on to the next target in their list, and at least some of them come back for a second try after a while.
Taken together, it's likely that what we're seeing is an attempt to target the default settings on gear from a few popular networking brands. The plan is likely to use the captured hosts to form botnets for purposes such as DDoSing. There is at least one publicly known incident that has several important attributes in common with what we're seeing: Norwegian ISP and cable TV supplier GET found themselves forced to implement some ad hoc countermeasures recently (article in Norwegian, but you will find robots) in a timeframe that fits with the earliest attempts we've seen here. I assume similar stories will emerge over the next days or weeks, possibly with more detail than what's available in the digi.no article.
If you're seeing something similar in your network and you are in a position to share data for analysis similar to what you see in the files referenced above, I would like to hear from you.
A conference dedicated to passwords and their potential replacements.
Yes, such a thing exists. All aspects of passwords and their potential replacements have been the topics of a series of conferences going back to 2011. This year I finally had a chance to attend the European one, Passwords14 in Trondheim, Norway December 8-10.
The conference has concluded, but the program is still up here, and the video from the live stream is archived here (likely to disappear for a few days soon, only to reappear edited into more manageable chunks of sessions or individual talks). You'll find me in the material from the first day, in a slightly breathless presentation (58 slides to 30 minutes talking time), and my slides with links to data and other material are available here.
Even if you're not in a position to go to Europe, there is hope: there will be a Passwords15 conference for the Europe-challenged in Las Vegas, NV, USA some time during the summer of 2015, and the organizers are currently looking for a suitable venue and time for the 2015 European one. I would strongly recommend attending the next Passwords conference; both the formal talks and the hallway track are bound to supply enlightening insights and interesting ideas for any reasonably security oriented geek.
Now go change some passwords!
I'll be at at least some of the BSD themed conferences in 2015, and I hope to see you there.
Saturday, October 5, 2013
The Hail Mary Cloud And The Lessons Learned
The Hail Mary Cloud was a widely distributed, low intensity password guessing botnet that targeted Secure Shell (ssh) servers on the public Internet.
The first activity may have been as early as 2007, but our first recorded data start in late 2008. Links to full data and extracts are included in this article.
We present the basic behavior and algorithms, and point to possible policies for staying safe(r) from similar present or future attacks.
But first, a few words about the devil we knew before the incidents that form the core of the narrative.
The Traditional SSH Bruteforce Attack
If you run an Internet-facing SSH service, you have seen something like this in your logs:
Sep 26 03:12:34 skapet sshd[25771]: Failed password for root from 200.72.41.31 port 40992 ssh2
Sep 26 03:12:34 skapet sshd[5279]: Failed password for root from 200.72.41.31 port 40992 ssh2
Sep 26 03:12:35 skapet sshd[5279]: Received disconnect from 200.72.41.31: 11: Bye Bye
Sep 26 03:12:44 skapet sshd[29635]: Invalid user admin from 200.72.41.31
Sep 26 03:12:44 skapet sshd[24703]: input_userauth_request: invalid user admin
Sep 26 03:12:44 skapet sshd[24703]: Failed password for invalid user admin from 200.72.41.31 port 41484 ssh2
Sep 26 03:12:44 skapet sshd[29635]: Failed password for invalid user admin from 200.72.41.31 port 41484 ssh2
Sep 26 03:12:45 skapet sshd[24703]: Connection closed by 200.72.41.31
Sep 26 03:13:10 skapet sshd[11459]: Failed password for root from 200.72.41.31 port 43344 ssh2
This is the classic bruteforce attack: rapid-fire login attempts from a single source. (And yes, skapet is the Internet-facing host on my home network.)
The Likely Business Plan
These attempts are often preceded by a port scan, but in other cases it appears the miscreants are simply blasting away at random. In my experience, with the gateway usually at the lowest-numbered address, the activity tends to turn up there first before moving on to higher-numbered hosts. I'm not really of a mind to offer help or advice to the people running those scripts, but they might consider scanning the Internet from 255.255.255.255 downwards next time. Anyway, looking at the log excerpts, the miscreants' likely plan is:
- Try for likely user names, hope for guessable password, keep guessing until successful.
- PROFIT!
Traditional Anti-Bruteforce Rules
Rapid-fire bruteforce attacks are easy to head off. I tend to use OpenBSD on Internet-facing hosts, so first we present the technique as it has been available in OpenBSD since version 3.5 (released in 2004), where state tracking options set limits we later act on:
In your /etc/pf.conf, you add a table to store addresses, block access for all traffic coming from members of that table, and finally amend your typical pass rule with some state tracking options. The result looks something like this:
table <bruteforce> persist
block quick from <bruteforce>
pass inet proto tcp to $int_if:network port $tcp_services \
        keep state (max-src-conn 100, max-src-conn-rate 15/5, \
        overload <bruteforce> flush global)
Here,

- max-src-conn is the maximum number of concurrent connections allowed from one host,
- max-src-conn-rate is the maximum allowed rate of new connections, here 15 connections per 5 seconds,
- overload <bruteforce> means that any host exceeding either of these limits has its address added to the bruteforce table, and
- just for good measure, flush global means that for any host added to the overload table, we kill all its existing connections too.
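One practical detail the rules above leave out: the bruteforce table only ever grows unless you prune it. A sketch of the usual housekeeping, typically run periodically from root's crontab, using pfctl's table commands:

```shell
# list the current members of the bruteforce table
pfctl -t bruteforce -T show

# delete entries whose last match is more than 24 hours
# (86400 seconds) old, so the table does not grow without bound
pfctl -t bruteforce -T expire 86400
```

The 24-hour figure is an assumption for the example; pick an interval that suits your tolerance for repeat offenders.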
Traditional Anti-Bruteforce Rules, Linux Style
For those not yet converted to the fine OpenBSD toolset (available in FreeBSD and other BSDs too, with only minor if any variations in details for this particular context), the Linux equivalent would be something like
sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW \
        -m recent --set --name SSH
sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW \
        -m recent --update --seconds 5 --hitcount 15 --rttl --name SSH -j DROP
But be warned: this version lacks the limit on the maximum number of concurrent connections, and comes with the usual iptables warts. You would also need a separate set of commands for ip6tables.
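For completeness, the IPv6 side would be the same pair of commands through ip6tables; a sketch, untested here, with the interface name carried over from the example above and a separate list name so the v4 and v6 counters stay independent:

```shell
# same rate limit for IPv6 connections to port 22
sudo ip6tables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW \
        -m recent --set --name SSH6
sudo ip6tables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW \
        -m recent --update --seconds 5 --hitcount 15 --rttl --name SSH6 -j DROP
```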
It's likely something similar is doable with other tools and products too, including possibly some proprietary ones. I've made something of an effort to limit my exposure to the non-free tools, so I can't offer you any more detail. To find out what your present product can do, please dive into the documentation for whichever product you are using. Or come back for some further OpenBSD goodness.
But as you can see, for all practical purposes the rapid-fire bruteforce (or flood) problem has been solved with trivial configuration tweaks.
But then something happened.
What's That? Something New!
On November 19th, 2008 (or shortly thereafter), I noticed this in my authentication logs:
Nov 19 15:04:22 rosalita sshd[40232]: error: PAM: authentication error for illegal user alias from s514.nxs.nl
Nov 19 15:07:32 rosalita sshd[40239]: error: PAM: authentication error for illegal user alias from c90678d3.static.spo.virtua.com.br
Nov 19 15:10:20 rosalita sshd[40247]: error: PAM: authentication error for illegal user alias from 207-47-162-126.prna.static.sasknet.sk.ca
Nov 19 15:13:46 rosalita sshd[40268]: error: PAM: authentication error for illegal user alias from 125-236-218-109.adsl.xtra.co.nz
Nov 19 15:16:29 rosalita sshd[40275]: error: PAM: authentication error for illegal user alias from 200.93.147.114
Nov 19 15:19:12 rosalita sshd[40279]: error: PAM: authentication error for illegal user alias from 62.225.15.82
Nov 19 15:22:29 rosalita sshd[40298]: error: PAM: authentication error for illegal user alias from 121.33.199.39
Nov 19 15:25:14 rosalita sshd[40305]: error: PAM: authentication error for illegal user alias from 130.red-80-37-213.staticip.rima-tde.net
Nov 19 15:28:23 rosalita sshd[40309]: error: PAM: authentication error for illegal user alias from 70-46-140-187.orl.fdn.com
Nov 19 15:31:17 rosalita sshd[40316]: error: PAM: authentication error for illegal user alias from gate-dialog-simet.jgora.dialog.net.pl
Nov 19 15:34:18 rosalita sshd[40334]: error: PAM: authentication error for illegal user alias from 80.51.31.84
Nov 19 15:37:23 rosalita sshd[40342]: error: PAM: authentication error for illegal user alias from 82.207.104.34
Nov 19 15:40:20 rosalita sshd[40350]: error: PAM: authentication error for illegal user alias from 70-46-140-187.orl.fdn.com
Nov 19 15:43:39 rosalita sshd[40354]: error: PAM: authentication error for illegal user alias from 200.20.187.222
Nov 19 15:46:41 rosalita sshd[40374]: error: PAM: authentication error for illegal user amanda from 58.196.4.2
Nov 19 15:49:31 rosalita sshd[40378]: error: PAM: authentication error for illegal user amanda from host116-164.dissent.birch.net
Nov 19 15:55:47 rosalita sshd[40408]: error: PAM: authentication error for illegal user amanda from robert71.lnk.telstra.net
Nov 19 15:59:08 rosalita sshd[40412]: error: PAM: authentication error for illegal user amanda from static-71-166-159-177.washdc.east.verizon.net
... and so on. The alphabetic progression of user names went on and on.
The pattern seemed to be that several hosts, in widely different networks, would try to access our system as the same user, up to minutes apart. By the time any one host came back, the sequence had more likely than not moved several user names ahead. The full sequence (it stopped December 30th) is available here.
Take a few minutes to browse the log data if you like. It's worth noting that rosalita was a server with a limited set of functions for a limited set of users, and basically no users other than myself ever logged in there via SSH, even if they for various reasons had the option open to them. So in contrast to busier sites, where sequences like this might have drowned in the noise, here it really stood out. And I suppose, after looking at the data, you can understand my initial reaction.
The Initial Reaction
My initial reaction was pure disbelief.
For the first few days I tried tweaking PF rules, playing with the attempts/second values and scratching my head, going, "How do I make this match?"
I spent way too much time on that, and the short version of the answer to that question is, you can't. With the simple and in fact quite elegant state tracking options, you will soon hit limits (especially time limits) that interfere with normal use, and you end up blocking legitimate traffic.
So I gave up on prevention (which really only would have rid me of a bit of noise in my authentication logs), and I started analyzing the data instead, trying to eyeball patterns that would explain what I was seeing. After a while it dawned on me that this could very well be a coordinated effort, using a widely distributed set of compromised hosts.
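The eyeballing can be mechanized. A minimal sketch of the test I ended up doing by hand: count, for each user name, how many distinct source hosts tried it; a user name probed from several unrelated hosts is the signature of a coordinated run. The sample lines and file name are made up for the example:

```shell
# Sample lines standing in for the authlog extract shown above
cat <<'EOF' > /tmp/slowbrutes.sample
Nov 19 15:04:22 rosalita sshd[40232]: error: PAM: authentication error for illegal user alias from s514.nxs.nl
Nov 19 15:07:32 rosalita sshd[40239]: error: PAM: authentication error for illegal user alias from c90678d3.static.spo.virtua.com.br
Nov 19 15:46:41 rosalita sshd[40374]: error: PAM: authentication error for illegal user amanda from 58.196.4.2
EOF

# print user names attempted from more than one distinct host
awk '/illegal user/ {
        for (i = 1; i <= NF; i++) {
            if ($i == "user") user = $(i+1)
            if ($i == "from") host = $(i+1)
        }
        # count each (user, host) pair only once
        if (!seen[user, host]++) hosts[user]++
    }
    END { for (u in hosts) if (hosts[u] > 1) print u, hosts[u] }' \
    /tmp/slowbrutes.sample
```

On the sample above this prints `alias 2`: the user name alias was tried from two distinct hosts, while amanda (one host so far) stays below the threshold.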
So there was a bit of reason in there after all. Maybe even a business plan or model. Next, I started analyzing my data, and came up with -
Bruteforcer Business Plan, Distributed Version
The Executive Summary would run something like this: Have more hosts take turns, round robin-ish, at long enough intervals to stay under the radar, guessing for weak passwords.
The plan is much like before, but now there are more hosts on the attacking side, so:

- Pick a host from the pool; assign a user name and password (picked from a list, dictionary or pool).
- For each host:
  1. Try logging in to the chosen target with the assigned user name and password.
  2. If successful, report back to base (we theorize); otherwise, wait for instructions (again, we speculate).
  3. Go to 1.
- For each success at step 2, PROFIT!
You're The Target
Let's recap, and take a step back. What have we learned?

To my mind at least, it all boils down to the basics:
- Your Unix computer (Linux, OpenBSD, FreeBSD or other) is a desirable, powerful thing.
- If your password is weak, you will be 0WN3D, sooner rather than later.
- There's a whole fleet out there, and they're coordinated.
Initial Public Reaction
On December 2, 2008, I published the first blog post in what would become a longish sequence, A low intensity, distributed bruteforce attempt, where I summarized my findings. It's slightly more wordy than this piece, but if I've piqued your interest so far, please go ahead and read. And as for a little public attention, I got my wish: the post ended up on slashdot.
That brought
- more disbelief, (see slashdot and other comments) but also
- confirmation, via comments and email, that others were seeing the same thing, and that the first occurrences may have been seen up to a year earlier (November-ish 2007).
The slow bruteforcers were not getting in, so I just went on collecting data. I estimated they'd be going on well past new year's if they were going to reach the end of the alphabet.
On December 30th, 2008, The Attempts Stopped
The attempts came to an end, conveniently while I was away on vacation. The last entries were:
Dec 30 11:03:08 rosalita sshd[51108]: error: PAM: authentication error for illegal user sophia from 201.161.28.9
Dec 30 11:05:08 filehut sshd[54932]: error: PAM: authentication error for illegal user sophia from 201.161.28.9
Dec 30 11:06:35 rosalita sshd[51116]: error: PAM: authentication error for illegal user sophia from static-98-119-110-139.lsanca.dsl-w.verizon.net
Dec 30 11:09:03 filehut sshd[54981]: error: PAM: authentication error for illegal user sophia from static-98-119-110-139.lsanca.dsl-w.verizon.net
That is, not even completing a full alphabetic cycle.
By then they had made 29916 attempts, all failed. You can find the full listing here.

They tried 6100 user IDs (list by frequency here). More than likely you can guess the top one without even looking.

The attempts came from a total of 1193 different hosts (list by frequency here).
As I said earlier, there were no successful penetrations. Zero.
Common characteristics
The slashdot story brought comments and feedback, with some observations from other sites. Not a lot of data, but enough that the patterns we had observed were confirmed. The attempts were all password authentication attempts; no other authentication methods were tried.
For the most part the extended incident consisted of attempts on an alphabetic sequence of 'likely' user names, but all sites also saw at least one long run of root only attempts. This pattern was to repeat itself, and also show up in data from other sources.
There would be anything from seconds to minutes between attempts, but attempts from any single host would come at much longer intervals.
First Round Observations, Early Conclusions
Summing up what we had so far, here are a few observations and attempts at early conclusions.
At the site where I had registered the distributed attempts, the Internet-reachable machines all ran either OpenBSD or FreeBSD. Only two FreeBSD boxes were contacted.
The attackers were hungry for root, so having PermitRootLogin no in our sshd_config headed off a large share of the attempts.
We hadn't forced our users to keys only, but a bit of luck and John the Ripper (/usr/ports/security/john) saved our behinds.
The number of attempts per user name had decreased over time (as illustrated by this graph), so we speculated in the second article Into a new year, slowly pounding the gates (on slashdot as The Slow Bruteforce Botnet(s) May Be Learning) that success or not was measured at a command and control site, with resources allocated accordingly.
With the sequence not completed, we thought they'd given up. After all, the odds against succeeding seemed monumental.
After all, a couple of slashdotted blog posts couldn't have hurt, could they?
But Of Course They Came Back
As luck would have it, whoever was out there had not totally admitted defeat just yet. In the early hours CET, April 7th, 2009, the slow brutes showed up again:
Apr 7 05:02:07 rosalita sshd[4739]: error: PAM: authentication error for root from ruth.globalcon.net
Apr 7 05:02:15 rosalita sshd[4742]: error: PAM: authentication error for root from ip-206-83-192-201.sterlingnetwork.net
Apr 7 05:02:54 rosalita sshd[4746]: error: PAM: authentication error for root from cyscorpions.com
Apr 7 05:02:59 rosalita sshd[4745]: error: PAM: authentication error for root from smtp.bancomorada.com.br
Apr 7 05:03:10 rosalita sshd[4751]: error: PAM: authentication error for root from 82.192.86.217
Apr 7 05:03:25 rosalita sshd[4754]: error: PAM: authentication error for root from 66.135.60.203
Apr 7 05:03:52 rosalita sshd[4757]: error: PAM: authentication error for root from rainha.florianonet.com.br
Apr 7 05:04:00 rosalita sshd[4760]: error: PAM: authentication error for root from 72.11.144.34
Apr 7 05:04:34 rosalita sshd[4763]: error: PAM: authentication error for root from s1.serverhex.com
Apr 7 05:04:38 rosalita sshd[4765]: error: PAM: authentication error for root from mail.pitnet.com.br
Starting with 2318 attempts at root before moving on to admin and proceeding with the alphabetic sequence. The incident played out pretty much like the previous one, only this time I was sure I had managed to capture all relevant data before my logs were rotated out of existence.
The data is available in the following forms: Full log here, one line per attempt here, users by frequency here, hosts by frequency here.
I couldn't resist kicking up some more publicity, and indeed we got another slashdot storm out of the article The slow brute zombies are back, on slashdot as The Low Intensity Brute-Force Zombies Are Back.
And shortly afterwards, we learned something new -
Introducing dt_ssh5, Linux /tmp Resident
Of course there was a piece of malware involved.
A Linux binary called dt_ssh5 did the grunt work.
The dt_ssh5 file was found installed in /tmp on affected systems. The perpetrators likely chose that directory because /tmp tends to be world readable and world writable.

Again, this points us back to the three basic lessons listed earlier.
dt_ssh5: Basic Algorithm
The discovery of dt_ssh5 made for a more complete picture. A rough algorithm suggested itself:
- Pick a new host from the pool; assign a user name and password.
- For each host:
  1. Try the user name and password.
  2. If successful:
     - drop the dt_ssh5 binary in /tmp and start it;
     - report back to base.
  3. Go to 1.
- For each success at step 2, PROFIT!
I never got myself a copy, so the actual mechanism for communicating back to base remains unclear.
The Waves We Saw, 2008 - 2012
We saw eight sequences (complete list of articles in the References section at the end),
From - To | Attempts | User IDs | Hosts | Successful Logins |
---|---|---|---|---|
2008-11-19 15:04:22 - 2008-12-30 11:09:03 | 29916 | 6100 | 1193 | 0 |
2009-04-07 03:56:25 - 2009-04-12 21:01:37 | 12641 | 249 | 1104 | 0 |
2009-09-30 21:15:36 - 2009-10-15 13:42:07 | 9998 | 1 | 1071 | 0 |
2009-10-28 23:58:35 - 2010-01-22 09:56:24 | 44513 | 8110 | 4158 | 0 |
2010-06-17 01:55:34 - 2010-08-11 13:23:01 | 23014 | 3887 | 5568 | 0 |
2011-10-23 04:13:00 - 2011-10-29 05:40:07 | 4773 | 944 | 338 | 0 |
2011-11-03 20:56:18 - 2011-11-26 17:42:19 | 4907 | 2474 | 252 | 0 |
2012-04-01 12:33:04 - 2012-04-06 14:52:11 | 4757 | 1081 | 23 | 0 |
The 2009-09-30 sequence was notable for trying only root, the 2012-04-01 sequence for being the first to attempt access to OpenBSD hosts.
We may have missed earlier sequences; early reports place the first similar attempts as far back as 2007.
For A While, The Botnet Grew
From our point of view, the swarm stayed away for a while and came back stronger, for a couple of iterations, possibly after tweaking their code in the meantime. Or rather, the gaps in our data represent times when it focused elsewhere.
Clearly, not everybody was listening to online rants about guessable passwords.
For a while, the distributed approach appeared to be working.
It was (of course) during a growth period that I coined the phrase "The Hail Mary Cloud".
Instantly, a myriad of "Hail Mary" experts joined the insta-punditry on slashdot and elsewhere.
It Went Away Or Dwindled
Between August 2010 and October 2011, things either started going badly for The Hail Mary Cloud, or possibly they focused elsewhere.
I went on collecting data.
There wasn't much to write about, except possibly that the botnet's command and control was redistributing effort based on past success, aiming at more crackable hosts elsewhere.
And Resurfaced In China?
Our last sighting so far was in April 2012. The data is preserved here.
This was the first time we saw Hail Mary Cloud style attempts at accessing OpenBSD systems.
The majority of attempts were spaced at least 10 seconds apart, and until I revisited the data recently, I thought only two hosts in China were involved.
In fact, 23 hosts made a total of 4757 attempts at 1081 user IDs, netting 0 successful logins.
I thought the new frequency data interesting enough to write about, so I wrote up If We Go One Attempt Every Ten Seconds, We're Under The Radar, and netted another slashdotting. I took another look at the data later and slightly amended the conclusions; the article has since been corrected with the proper data extracted.
Then What To Do?
The question anybody reading this far will be asking is, what should we do in order to avoid compromise by the password guessing swarms? To my mind, it all boils down to common sense systems administration:
Mind your logs. You can read them yourself, or train a robot to do it for you.
Keep your system up to date. If you're not on OpenBSD, check openssh.com for the latest version, check what your system ships, and badger the maintainer if it's outdated.
And of course, configure your applications sensibly, starting with sshd.

These two settings in your sshd_config make the most difference:

PermitRootLogin no
PasswordAuthentication no
Make your users generate keys, add the *.pub to their ~/.ssh/authorized_keys files.
For a bit of background, Michael W. Lucas: SSH Mastery (Tilted Windmill Press 2013) is a recent and very readable guide to configuring your SSH (server and clients) sensibly. It's compact and affordable too.
Keep Them Out, Keep Them Guessing
At this point, most geeks would wax lyrical about the relative strengths of different encryption schemes and algorithms.
Being a simpler mind, I prefer a different metric for how good your scheme is, or the effectiveness of its obfuscation (also see entropy):
How many bytes does a would-be intruder have to get exactly right?

I've summed up the answer to that question in this table:
Authentication method | Number of bytes |
---|---|
Password | Password length (varies, how long is yours?) |
Alternate Port | Port number (2 bytes, it's a 16 bit value, remember) |
Port Knocking | Number of ports in sequence * 2 (still a 16 bit value) |
Single Packet Authentication | 2 bytes (the port) plus up to 1440 bytes of payload (IPv4/Ethernet) or 1220 bytes (IPv6/Ethernet) |
Key Only | Number of bytes in key (depending on key strength, up to several kB) |
You can of course combine several methods (with endless potential for annoying your users), or use two factor authentication (OpenSSH supports several schemes).
Keys. You've Got To Have Keys!
By far the most effective measure is to go keys only for your ssh logins. In your sshd_config, add or uncomment
PasswordAuthentication no
Restart your sshd, and have all users generate keys, like this:
$ ssh-keygen -C "userid@domain.tld"
There are other options to play with, see ssh-keygen(1) for inspiration.
Then add the *.pub to their ~/.ssh/authorized_keys files.
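For the impatient, the whole key-generation step can be done non-interactively; a sketch, with ed25519 assumed as a reasonable modern key type and a scratch directory standing in for the users' real ~/.ssh:

```shell
# generate a key pair without prompting, into a scratch directory
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -C "userid@domain.tld" -f "$tmp/id_ed25519" -N ''

# the public half is what ends up in the server-side
# ~/.ssh/authorized_keys for that user
cat "$tmp/id_ed25519.pub" >> "$tmp/authorized_keys"
```

In real use you would of course insist on a passphrase instead of `-N ''`, and copy the public key to the server over an existing authenticated channel.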
And I'll let you in on a dirty little secret: you can even match on the local address (in effect, the interface) in your sshd config for things like this.
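To be precise, sshd matches on addresses rather than interface names, so the trick is a Match block keyed on the address your internal interface carries. A sketch, with 192.0.2.1 as a documentation-range placeholder for your own internal address:

```
# sshd_config: keys only as the global default
PasswordAuthentication no

# relax the policy only for connections arriving on the
# internal interface's address
Match LocalAddress 192.0.2.1
        PasswordAuthentication yes
```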
Why Not Use Port Knocking?
Whenever I mention the Hail Mary Cloud online, two suggestions always turn up: The iptables example I mentioned earlier (or link to the relevant slide), and "Why not use port knocking?". Well, consider this:
Port knocking usually means having all ports closed, but with a daemon reading your firewall's logs for a predetermined sequence of ports. Knock on the correct ports in sequence, and you're in.
Another dirty little secret: It's possible to implement port knocking with only the tools in an OpenBSD base system. No, I won't tell you how.
Executive Summary: Don't let this keep you from keeping your system up to date.
To my mind port knocking gives you:
- Added complexity or, one more thing that will go wrong. If the daemon dies, you've bricked your system.
- An additional password that's hard to change. There's nothing magical about TCP/UDP ports. It's a 16 bit number, and in our context, it's just another alphabet. The swarm will keep guessing. And it's likely the knock sequence (aka password) is the same for all users.
- You won't recognize an attack until it succeeds, if even then. Guessing attempts will be indistinguishable from random noise (try a raw tcpdump of any Internet-facing interface to see the white noise you mostly drop anyway), so you will have no early warning.
If you want a longer form of those arguments, my November 4, 2012 rant Why Not Use Port Knocking? was my take (with some inaccuracies, but you'll live).
There's No Safety In High Ports Anymore
Another favorite suggestion is to set your sshd to listen on some alternate port instead of the default port 22/TCP.
People who did so have had a few years of quiet logs, but recent reports show that whoever is out there has the resources to scan alternate ports too.
Once again, don't let running your sshd on an alternate port keep you from keeping your system up to date.
Of course I've ranted about this too, in February 2013, There's No Protection In High Ports Anymore. If Indeed There Ever Was. (which earned me another slashdotting).
Reports with logs trickle in from time to time of such activity at alternate ports, but of course any site with a default deny packet filtering policy will not see traces of such scans unless you go looking specifically at the mass of traffic that gets dropped at the perimeter.
Final thoughts, for now
Microsoftish instapundits were quick to assert that this proved SSH itself to be insecure.
They're wrong. OpenSSH (which is what essentially everyone uses) is maintained as an integral part of the OpenBSD project, and as such is a very thoroughly audited mass of code. And it keeps improving with every release.
I consider the Hail Mary Cloud an example of distributed, parallel problem solving, conceptually much like SETI@Home but with different logic and of course a more sinister intent.
Computing power is cheap now, getting cheaper, and even more so when you can leverage other people's spare cycles.
The huge swarm of attackers concept is as I understand it being re-used in the recent WordPress attacks. We should be prepared for swarm attacks on other applications as soon as they reach a critical mass of users.
There may not be a bullseye on your back yet (have you looked lately?), but you are an attractive target.
Fortunately, sane system administration practices will go a long way towards thwarting intrusion attempts, as in
- keep your system up to date,
- allow only what's necessary for the task at hand and
- keep watching your logs for weirdness.
UPDATE 2013-11-21: A recent ACM Conference on Computer and Communications Security paper, "Detecting stealthy, distributed SSH brute-forcing," penned by Mobin Javed and Vern Paxson, references a large subset of the data and offers some real analysis, including correlation with data from other sites (spoiler alert: in some waves, almost total overlap of participating machines). One interesting point from the paper is that attacks matching our profile were apparently seen at the Lawrence Berkeley National Laboratory as early as 2005.
And in other news, it appears that GitHub has been subject to an attack that matches the characteristics we have described. A number of accounts with weak passwords were cracked. The investigation appears to be still ongoing. Fortunately, GitHub appears to have started offering other authentication methods.
UPDATE 2014-09-28: Since early July 2014, we have been seeing similar activity aimed at our POP3 service, with usernames taken almost exclusively from our spamtrap list. The article Password Gropers Take the Spamtrap Bait has all the details and log data as well as references to the spamtrap list.
UPDATE 2014-12-10: My Passwords14 presentation, Distributed, Stealthy Brute Force Password Guessing Attempts - Slicing and Dicing Data from Recent Incidents has some further data as well as further slicing and dicing of the earlier data (with slightly different results).
UPDATE 2016-08-10: The POP3 gropers never went away entirely and soon faded into a kind of background noise. In June of 2016, however, they appeared to have hired themselves out to a systematic hunt for Chinese user names. The article Chinese Hunting Chinese over POP3 in Fjord Country has further details, and as always, links to log data and related files.
UPDATE 2020-04-28: Somewhat belatedly added a link to the more recent (August 2018) article Badness, Enumerated by Robots which has updates on data we currently collect with some further explanation. Also see links therein.
Update 2025-07-19: If you found this piece useful, informative or annoying, or would for some other reason like to contact me or comment, please do.
You might also be interested in reading selected pieces via That Grumpy BSD Guy: A Short Reading List (also here).
References
The slides for the talk this article is based on live at http://home.nuug.no/~peter/hailmary2013/, with a zipped version including all data at http://home.nuug.no/~peter/hailmary2013.zip (approx. 26MB) for your convenience.
Mobin Javed and Vern Paxson, "Detecting stealthy, distributed SSH brute-forcing," ACM Conference on Computer and Communications Security (CCS), November 2013.
The blog posts (field notes) of the various incidents, data links within:
Peter N. M. Hansteen, (2008-12-02) A low intensity, distributed bruteforce attempt (slashdotted)
Peter N. M. Hansteen, (2008-12-06) A Small Update About The Slow Brutes
Peter N. M. Hansteen, (2008-12-21) Into a new year, slowly pounding the gates (slashdotted)
Peter N. M. Hansteen, (2009-01-22) The slow brutes, a final roundup
Peter N. M. Hansteen, (2009-04-12) The slow brute zombies are back (slashdotted)
Peter N. M. Hansteen, (2009-10-04) A Third Time, Uncharmed (slashdotted)
Peter N. M. Hansteen, (2009-11-15) Rickrolled? Get Ready for the Hail Mary Cloud! (slashdotted)
Peter N. M. Hansteen, (2011-10-23) You're Doing It Wrong, Or, The Return Of The Son Of The Hail Mary Cloud
Peter N. M. Hansteen, (2011-10-29) You're Doing It Wrong, Returning Scoundrels
Peter N. M. Hansteen, (2012-04-06) If We Go One Attempt Every Ten Seconds, We're Under The Radar (slashdotted)
Peter N. M. Hansteen, (2012-04-11) Why Not Use Port Knocking?
Peter N. M. Hansteen, (2013-02-16) There's No Protection In High Ports Anymore. If Indeed There Ever Was. (slashdotted)
Peter N. M. Hansteen (2018-08-13) Badness, Enumerated by Robots (updates on data we currently collect, also see links therein).
Other Useful Texts
Marcus Ranum: The Six Dumbest Ideas in Computer Security, September 1, 2005
Michael W. Lucas: SSH Mastery, Tilted Windmill Press 2013
Michael W. Lucas: Absolute OpenBSD, 2nd edition No Starch Press 2013
Peter N. M. Hansteen, The Book of PF, 3rd edition, No Starch Press 2014; also the online PF tutorial it grew out of, available in several formats at http://home.nuug.no/~peter/pf/, with more extensive slides matching the most recent session at http://home.nuug.no/~peter/pf/newest/. More recently I've done a rewrite of the PF tutorial with Max Stucchi and Tom Smyth co-authoring and co-presenting; the latest slides from one of these sessions are available at https://nxdomain.no/~peter/pf_fullday.pdf (sorry, labs available for real attendees only).
OpenBSD's web site http://www.openbsd.org/ -- lots of useful information.
If you enjoyed this: Support OpenBSD!
If you have enjoyed reading this, please buy OpenBSD CDs and other items, and/or donate!
Useful links for this are:
OpenBSD Donations Page: http://www.openbsd.org/donations.html.
OpenBSD Hardware Wanted Page: http://www.openbsd.org/want.html.
Remember: Free software takes real work and real money to develop and maintain.
If you want to support me, buy the book!
Friday, April 6, 2012
If We Go One Attempt Every Ten Seconds, We're Under The Radar
Note: This post has been updated with a correction, see the end of the article.
Regular readers will remember the activities of the Hail Mary Cloud, which turned up in authentication logs with large numbers of unsuccessful ssh login attempts, apparently coordinated across a large number of source IP addresses, with any individual host in the attacker set making new attempts at intervals of anything from several seconds to several minutes.
At the time, commentators took these activities either as an indication of a truly inspired idea from a brilliant mind (after all, avoiding detection is essential), or as a token of almost unimaginable ineptitude, or perhaps just an overdose of faith that if you keep going long enough, even extremely unlikely things will happen.
It's been a little while now since we last saw the slow, distributed bruteforce attacks at work here at the BSDly labs (we've kept collecting data here), but one curious incident during the last week indicates that somebody, somewhere is still working on ssh cracking scripts that operate along fairly similar lines.
Bruteforce attacks can be fairly easy to detect and head off. In most cases the attacker comes in with a larger than usual number of login attempts in rapid succession from a single IP address, and with modern tools such as OpenBSD's PF packet filter, you can set up rules that use state tracking options to intercept them. The phenomenon is common enough that the bruteforce avoidance section is one of the more popular parts of my online PF tutorial (and of course, a slightly expanded version is available in The Book of PF).
I wouldn't publish or recommend anything that I haven't at least tried myself, so just to illustrate,
[Fri Apr 06 14:48:21] peter@skapet:~$ doas grep bruteforce /etc/pf.conf
table <bruteforce> persist counters
block log (all) quick from <bruteforce>
pass log (all) proto { tcp, udp } to port ssh keep state (max-src-conn 15, max-src-conn-rate 7/4, overload <bruteforce> flush global)
The PF rules on BSDly.net's gateway have something much like the published example. This means that a traditional bruteforce attempt will end up something like this:
[Fri Apr 06 15:30:38] peter@skapet:~$ grep 203.34.37.62 /var/log/authlog
Apr 5 17:42:36 skapet sshd[32722]: Failed password for root from 203.34.37.62 port 44936 ssh2
Apr 5 17:42:36 skapet sshd[32722]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:38 skapet sshd[26527]: Failed password for root from 203.34.37.62 port 45679 ssh2
Apr 5 17:42:38 skapet sshd[26527]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:41 skapet sshd[29912]: Invalid user db2inst1 from 203.34.37.62
Apr 5 17:42:41 skapet sshd[29912]: Failed password for invalid user db2inst1 from 203.34.37.62 port 46283 ssh2
Apr 5 17:42:41 skapet sshd[29912]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:43 skapet sshd[30349]: Failed password for root from 203.34.37.62 port 46898 ssh2
Apr 5 17:42:43 skapet sshd[30349]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:46 skapet sshd[25557]: Invalid user prueba from 203.34.37.62
Apr 5 17:42:46 skapet sshd[25557]: Failed password for invalid user prueba from 203.34.37.62 port 47495 ssh2
Apr 5 17:42:46 skapet sshd[25557]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:48 skapet sshd[5380]: Failed password for bin from 203.34.37.62 port 48087 ssh2
Apr 5 17:42:48 skapet sshd[5380]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:51 skapet sshd[23635]: Invalid user postgres from 203.34.37.62
Apr 5 17:42:51 skapet sshd[23635]: Failed password for invalid user postgres from 203.34.37.62 port 48658 ssh2
Apr 5 17:42:51 skapet sshd[23635]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:54 skapet sshd[2450]: Failed password for root from 203.34.37.62 port 49307 ssh2
Apr 5 17:42:54 skapet sshd[2450]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:56 skapet sshd[16673]: Failed password for root from 203.34.37.62 port 49910 ssh2
Apr 5 17:42:57 skapet sshd[16673]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:42:59 skapet sshd[17522]: Failed password for root from 203.34.37.62 port 50503 ssh2
Apr 5 17:42:59 skapet sshd[17522]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:43:02 skapet sshd[4633]: Invalid user mythtv from 203.34.37.62
Apr 5 17:43:02 skapet sshd[4633]: Failed password for invalid user mythtv from 203.34.37.62 port 51218 ssh2
Apr 5 17:43:02 skapet sshd[4633]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:43:05 skapet sshd[25728]: Failed password for root from 203.34.37.62 port 51849 ssh2
Apr 5 17:43:05 skapet sshd[25728]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:43:08 skapet sshd[10487]: Failed password for root from 203.34.37.62 port 52565 ssh2
Apr 5 17:43:08 skapet sshd[10487]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:43:10 skapet sshd[31156]: Failed password for root from 203.34.37.62 port 53264 ssh2
Apr 5 17:43:11 skapet sshd[31156]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
Apr 5 17:43:13 skapet sshd[31956]: Invalid user mmroot from 203.34.37.62
Apr 5 17:43:13 skapet sshd[31956]: Failed password for invalid user mmroot from 203.34.37.62 port 53958 ssh2
Apr 5 17:43:13 skapet sshd[31956]: Received disconnect from 203.34.37.62: 11: Bye Bye [preauth]
And looking up the current contents of the <bruteforce> table:
[Fri Apr 06 15:34:23] peter@skapet:~$ sudo pfctl -t bruteforce -vT show
91.197.131.24
Cleared: Thu Apr 5 20:22:29 2012
In/Block: [ Packets: 1 Bytes: 52 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]
200.11.174.131
Cleared: Thu Apr 5 19:09:30 2012
In/Block: [ Packets: 1 Bytes: 52 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]
203.34.37.62
Cleared: Thu Apr 5 17:43:13 2012
In/Block: [ Packets: 1 Bytes: 52 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]
The table data show us one more thing worth noting: all of these bruteforcers sent exactly one packet after they were blocked, then gave up right away.
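Entries added via the overload mechanism stay in the table until they are removed; the 24-hour block expiry mentioned below is typically implemented with pfctl's table expiry from cron. A sketch of mine, with the schedule and the 86400-second cutoff as assumptions:

```
# root's crontab: twice an hour, remove <bruteforce> entries that
# have gone untouched for 24 hours (86400 seconds)
0,30 * * * * /sbin/pfctl -t bruteforce -T expire 86400
```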
On Sunday, April 1st 2012, I noticed an unusually high number of ssh login attempts coming from two Chinese addresses (58.214.5.51 and 61.160.76.123), amazingly persistent, and for some reason not caught by my bruteforce avoidance rules. Rather than fiddle with my rate settings, I simply added those addresses to the <bruteforce> table by hand.
A little later that same evening, the <bruteforce> table looked like this:
[Sun Apr 01 22:58:02] peter@skapet:~$ sudo pfctl -t bruteforce -vT show
58.51.95.75
Cleared: Sun Apr 1 22:05:29 2012
In/Block: [ Packets: 1 Bytes: 52 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]
58.214.5.51
Cleared: Sun Apr 1 14:06:21 2012
In/Block: [ Packets: 3324 Bytes: 199440 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]
61.91.125.115
Cleared: Sun Apr 1 03:10:05 2012
In/Block: [ Packets: 1 Bytes: 52 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]
61.160.76.123
Cleared: Sun Apr 1 14:07:08 2012
In/Block: [ Packets: 3262 Bytes: 195720 ]
In/Pass: [ Packets: 0 Bytes: 0 ]
Out/Block: [ Packets: 0 Bytes: 0 ]
Out/Pass: [ Packets: 0 Bytes: 0 ]
The two hosts kept coming, at a rate of roughly one attempt every ten seconds, and apparently ignored the fact that they were blocked in the packet filter rules and would be getting connection refused errors for each attempt.
Looking at the log data (preserved here along with data from various other attempts from other sources in the relevant period), both hosts were busy trying to guess root's password from the time they started until they were blocked. When the block expired after 24 hours, they had both apparently proceeded down similar lists of user names and were busy with rooter:
Apr 2 14:10:06 skapet sshd[13332]: Invalid user rooter from 61.160.76.123
Apr 2 14:10:06 skapet sshd[13332]: input_userauth_request: invalid user rooter [preauth]
Apr 2 14:10:06 skapet sshd[13332]: Failed password for invalid user rooter from 61.160.76.123 port 46578 ssh2
Apr 2 14:10:06 skapet sshd[13332]: Received disconnect from 61.160.76.123: 11: Bye Bye [preauth]
Apr 2 14:10:14 skapet sshd[30888]: Invalid user rooter from 58.214.5.51
Apr 2 14:10:14 skapet sshd[30888]: input_userauth_request: invalid user rooter [preauth]
Apr 2 14:10:14 skapet sshd[30888]: Failed password for invalid user rooter from 58.214.5.51 port 47587 ssh2
Apr 2 14:10:14 skapet sshd[30888]: Received disconnect from 58.214.5.51: 11: Bye Bye [preauth]
They both kept going afterwards, at roughly the same rates as before. The host at 61.160.76.123 kept varying its rate and at one point sped up enough that it triggered the automatic bruteforce blocking.
After running a fairly familiar alphabetic progression through a list of supposed user names, the remaining host finally gave up during the first hour of April 3rd, CEST:
Apr 3 00:36:24 skapet sshd[30287]: Received disconnect from 58.214.5.51: 11: Bye Bye [preauth]
Apr 3 00:36:33 skapet sshd[27318]: Invalid user clodia from 58.214.5.51
Apr 3 00:36:33 skapet sshd[27318]: input_userauth_request: invalid user clodia [preauth]
Apr 3 00:36:33 skapet sshd[27318]: Failed password for invalid user clodia from 58.214.5.51 port 58185 ssh2
Apr 3 00:36:33 skapet sshd[27318]: Received disconnect from 58.214.5.51: 11: Bye Bye [preauth]
Before we go into further details, I have a question for you, dear reader: Did anything like this turn up in your authentication logs during the same rough time frame? If your logs show something similar, please drop me a line at (lightly obfuscated) peter at bsdly dot se.
It could be instructive to compare this last batch with the previous samples. The log format differs slightly, since the previous attempts were aimed at FreeBSD machines, while this last round was aimed at a single OpenBSD host.
The whois information for the two hosts (58.214.5.51 and 61.160.76.123) points to Chinese networks, as far as I can tell in the same province and possibly in the same city, Wuxi, which appears to be one of several Chinese tech cities.
The slow rate of the login attempts and the sequence of user names attempted are both similar enough to the earlier distributed attempts that it's possible this is a limited experiment by the developers of the previous bruteforcing malware. The rate of roughly one attempt per host per 10 seconds is a significant speedup compared to the previous attempts, and it fits in the interval where blocking due to the rate of connections would most likely produce an unacceptably high number of false positives.
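If you want to check your own logs against that roughly one-attempt-per-ten-seconds figure, the interval is easy to measure. Here is a sketch; the function name is mine, and it assumes all attempts from the address fall within a single day, as in the April 2 sample above:

```shell
#!/bin/sh
# avg_interval: average seconds between failed login attempts
# from a given source address in an authlog-style file.
# Usage: avg_interval 58.214.5.51 /var/log/authlog
avg_interval() {
    grep "Failed password" "$2" | grep "from $1 " |
      awk '{ split($3, t, ":"); s = t[1]*3600 + t[2]*60 + t[3]
             if (n++) sum += s - prev
             prev = s }
           END { if (n > 1) printf "%.1f\n", sum / (n - 1) }'
}
```

Anything much above the thresholds you block on (here, 7 connections per 4 seconds) will sail under the overload rule, which is exactly the point the attackers seem to have grasped.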
It will be interesting to see what rate of incoming connections the next full-scale attempts will use. It is possible that the source addresses are somewhere close to the actual whereabouts of the malware developers, but at this point that is pure speculation.
At this point we can only keep watching our logs and make sure that our sshd configurations are in the best possible shape. If you need up to date advice on how to configure and use SSH safely, you could do significantly worse than grabbing Michael W. Lucas' recent SSH book, SSH Mastery.
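The configuration advice boils down to a handful of standard OpenSSH sshd_config settings. This fragment is my illustration of the policy these articles argue for, not a copy of the author's configuration; whether to disable password authentication entirely depends on your users:

```
# /etc/ssh/sshd_config: no root logins over the network, and
# key-based authentication only, leaving password guessers
# nothing to guess
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
```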
Update 2013-04-25: Revisiting the data in preparation for my BSDCan 2013 talk (also to be featured, or rather previewed, at tonight's BLUG meeting), I realized that a trivial scripting error had led me to draw false conclusions.
The total number of attempts is correct, but both the number of hosts involved and the number of user names attempted were seriously off. The two hosts I mentioned in the article were the most active, but actually a total of 23 hosts participated, trying for a total of 1081 user names. Full data available here.
It seems the Hail Mary Cloud had shrunk, but not completely vanished as I thought at the time.
Note: A Better Data Source Is Available
Update 2013-06-09: For a faster and more convenient way to download the data referenced here, please see my BSDCan 2013 presentation The Hail Mary Cloud And The Lessons Learned which summarizes this series of articles and provides links to all the data. The links in the presentation point to a copy stored at NUUG's server, which connects to the world through a significantly fatter pipe than BSDly.net has.
Saturday, October 29, 2011
You're Doing It Wrong, Returning Scoundrels
After five days of activity and no wins on my machines, the Hail Mary Cloud moved on. That means we have yet another complete set of data to summarize and analyze. The numbers are:
A total of 4773 attempts, none of them successful, involving 338 distinct source addresses. The most active host (109.237.210.147, according to whois located somewhere in the Netherlands) made 109 attempts, while at the other end of the scale 30 hosts made only a single attempt each. The wannabe attackers tried 944 different user names; by far the most frequently attempted user name was root, with several blocks of root-only accesses even during the otherwise purely alphabetical stage.
The current sample is too small to support any far-reaching conclusions, but it is tempting to speculate that with only 338 hosts participating, we are seeing an indication that their success rate is sinking (previous attempts counted a couple of thousand hosts), even though they may be at least partially succeeding in their secondary goal: avoiding detection. That success is partial at best: this blog post and the earlier ones, plus varied commentary at Slashdot, are indications that at least some of us are paying attention to our logs.
Another few observations worth making: 1) I have still not seen any of these sequences aimed at my Internet-facing OpenBSD systems, only Linux and FreeBSD ones. 2) It's likely that the miscreants are directing their attempts at several targets at the same time, so this sample is only a tiny fraction of the whole.
Reports of similar activity are surfacing from elsewhere, but very few people appear to be willing to share their data. It is of course even possible that the earlier episodes generated enough noise that better password policies (or preferably key logins only policies) are now in place, frustrating the random password guessers' attempts.
Whether or not you have been seeing these sequences in your authentication logs, please do yourself a favor and study your logs every now and then. It might even be worth the trouble to set up some kind of log collection and analysis infrastructure. Europeans may have to consider the legal implications of storing logs in light of the Data Retention Directive; denizens of the great elsewhere would do well to check whether any similar legislation applies.
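On the log collection front, the simplest starting point is plain syslog forwarding. A minimal sketch, where "loghost" is a placeholder for your own collector:

```
# /etc/syslog.conf on each monitored machine: send auth messages
# to a central collector in addition to the local log
auth.info    @loghost
```

Centralizing the logs also makes cross-host correlation of distributed attempts like these far easier than grepping each machine separately.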
Good night and good luck.
Broken link fixed, sorry. Also, of course this has been discussed earlier, most recently in this post, also in this one as well as A low intensity, distributed bruteforce attempt (December 2, 2008), A Small Update About The Slow Brutes (December 6, 2008), Into a new year, slowly pounding the gates (December 21, 2008), The slow brutes, a final roundup (January 22, 2009) and The slow brute zombies are back (April 12, 2009). Read those for further info.
Update 2011-11-06: Another round of attempts has started, see the data aggregation page for the November 2011 entries. Of particular interest, perhaps is the List of participating hosts, sorted by number of attempts.
Update 2011-11-06 part 2: A note over at the ISC, "New, odd SSH brute force behavior" linked here, generating some additional traffic. Commenting over there requires a login and the confirmation email appears to be delayed by greylisting, so I'll comment here instead: I would not call this a particularly new approach. We've been seeing these attempts on and off since we started noticing them sometime in 2008, and it's entirely possible that there have been earlier attempts that did slip in under our radars. Analyses based on data from other sites beside mine would be very welcome indeed.
Update 2011-11-20: They keep coming back, now again after taking a 9 day breather (or possibly poking elsewhere in the meantime). Data accumulating again at the Hail Mary Cloud Data Page, with notes on the most recent activity at the very end. Please do play with the data, there's hope yet that some useful insights are to be found.
Sunday, October 23, 2011
You're Doing It Wrong, Or, The Return Of The Son Of The Hail Mary Cloud
Apparently the answers are yes, yes, and no, at least for some. The evidence is slowly accumulating in the authentication logs on one of my servers, published via the The Hail Mary Cloud Data Page. There are several reasons why these attempts stand out, but it kind of helps that the number of users with sensible or indeed legitimate reasons for shell access to this particular server is quite limited.
I've ranted about this before, famously but not exclusively in a series of slashdotted and much-syndicated blog posts such as this one. For the TL;DR crowd, here's the summary:
If you're allowing root logins from the great elsewhere, you're doing it wrong.
If you've been allowing root logins from the great elsewhere, I wouldn't be surprised if it's one or more of your boxes doing the distributed password guessing.
If you can't remember the last time you checked that your system is up to date and properly configured, you're doing it wrong.
So nothing really new to see here, it's only yours truly seeing his hope of never seeing this silliness repeated dashed, again.
If you're interested in background information about the Hail Mary Cloud phenomenon, please do read the previous posts: A low intensity, distributed bruteforce attempt (December 2, 2008), A Small Update About The Slow Brutes (December 6, 2008), Into a new year, slowly pounding the gates (December 21, 2008), The slow brutes, a final roundup (January 22, 2009) and The slow brute zombies are back (April 12, 2009), as well as the one referenced earlier.
Good night and good luck.
Update 2011-10-27: The alphabetic stage has started, see refreshed data for details.
Thursday, January 22, 2009
The slow brutes, a final roundup
Over the last few columns, we have followed the progress of what appears to be a botnet cloud's attempt at gaining access to a couple of FreeBSD machines I have in my care. One of my predictions about the distributed, slow ssh bruteforce attempts we started seeing in November of 2008 was that at the rate they were going at the time, it would be well into the new year before we would see the end of their alphabetic progression. As it turns out, they stopped just before year end, before even reaching the 'T's. The last attempt recorded was this:
Dec 30 11:09:03 filehut sshd[54981]: error: PAM: authentication error for illegal user sophia from static-98-119-110-139.lsanca.dsl-w.verizon.net
The full collection of raw data is available here, with a .csv summarising number of attempts, user names and hosts per day here.
With the incident apparently over, we can sit back and study the data and see what patterns emerge.
The anatomy of the attack
There are a number of ways to slice and dice the data. One useful way to view the collection is to do day-to-day statistics, such as the ones in this .csv file, with numbers extracted by some simple grepping and awkery. Based on the day-to-day data, I made this graph to illustrate the progression.

Then, for your data overload cravings, I turned the same data to a log scale and added the number of attempts, hopefully adding some insight into just what happened when and maybe supporting some guessing about what they were indeed trying to achieve.
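The grepping and awkery behind the per-day numbers could look something like the following. This is a reconstruction of mine, not the actual script used, and it assumes FreeBSD-style "illegal user" log lines like the one quoted above:

```shell
#!/bin/sh
# per_day: per-day attempt counts, unique user names and unique
# source hosts from sshd "illegal user" log lines, as CSV.
# Usage: per_day /var/log/auth.log
per_day() {
    grep "illegal user" "$1" |
      awk '{ for (i = 1; i <= NF; i++) if ($i == "from") break
             day = $1 " " $2
             tries[day]++
             users[day "," $(i-1)]++   # word before "from" is the user name
             hosts[day "," $(i+1)]++ } # word after "from" is the source host
           END {
             for (u in users) { split(u, a, ","); nuser[a[1]]++ }
             for (h in hosts) { split(h, a, ","); nhost[a[1]]++ }
             print "Day,Attempts,Usernames,Hosts"
             for (d in tries)
                 print d "," tries[d] "," nuser[d] "," nhost[d] }'
}
```

The output order of awk's `for (d in ...)` is arbitrary, so pipe the result through sort before graphing.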
It is possible that we missed the actual start of those coordinated attempts, but the data we do have show a few interesting points.
The earliest preserved data from November 19th shows the most attempts per user name (average 13.29), with 7 unique user names tried and a relatively low number of hosts (76).
On November 20th, the cloud turned its attention to one user, root, trying only that user name a total of 1697 times from 566 different hosts. It would be well into November 21st before the cloud moved on to admin (128 attempts, 107 different hosts) and an apparently coordinated alphabetic progression.
The absolute number of attempts and hosts involved per day fell quickly, with average number of attempts per user name stabilizing at a fairly low number after a few days. The exception is the peak on December 27th, which could perhaps be explained by owners of compromised computers returning from holiday celebrations and turning their computers back on. The sharp decline in all numbers over the next few days before the attempts stop seems consistent with what we assumed: That the botnet masters were allocating resources according to likelihood of success.
So why is this incident important, or even interesting? After all, attempts to gain access to services by brute force or dictionary based attacks are nothing new. I was rather intrigued to see clear evidence that miscreants were trying to find the way under the radar of intrusion detectors by distributing the intrusion task over a large number of hosts. If their success rate at my sites is anything to go by, this may be just a weird anomaly and an idea that did not lead anywhere. I haven't heard from anybody who was actually compromised by this particular set of clouds, but then again anybody who got bitten would likely be rather shy about telling the world at large or even fairly obscure researchers about the fact.
Always looking for patterns, I even went to the trouble of extracting some data from the logs about the individual hosts that participated in the attack. After some basic shell gymnastics I ended up with a .csv of hostnames, the number of attempts from each host, and the date of first and last contact (available here). Next I tried (and failed - gnuplot gurus, here's your chance) to graph the data usefully in OpenOffice, but ended up with a sorted version (sorted by attempts, start and end date) that at least shows us that a surprising number of hosts hung on for most of the time the coordinated attempts went on.
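The shell gymnastics for the per-host extract could look roughly like this; again a reconstruction of mine rather than the actual script, with the same "illegal user" log format assumed:

```shell
#!/bin/sh
# per_host: "Host,Attempts,StartDate,EndDate" from sshd
# "illegal user" log lines. Usage: per_host /var/log/auth.log
per_host() {
    grep "illegal user" "$1" |
      awk '{ for (i = 1; i <= NF; i++) if ($i == "from") break
             h = $(i+1); day = $1 " " $2
             n[h]++
             if (!(h in first)) first[h] = day  # first sighting
             last[h] = day }                    # keeps updating
           END { print "Host,Attempts,StartDate,EndDate"
                 for (h in n)
                     print h "," n[h] "," first[h] "," last[h] }'
}
```

Sorting by the attempts column, e.g. `per_host /var/log/auth.log | sort -t, -k2 -rn`, gives the "sorted by attempts" view mentioned above.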
The lessons learned: security the old-fashioned way
The general lesson of this incident is rather predictably that miscreants will occasionally try new and original ways to try to crack their way into your system. The slow method was a refreshing variation, and for all we know they may have succeeded in places where the people in charge remain blissfully unaware. Trying to catalogue and detect all kinds of variations on the theme of "attempts at unauthorized access" is the kind of activity that has kept "antivirus" people in beer money for quite a while, and if there is a lesson to be learned here, it is that trying to enumerate badness (Yes, do look it up using your favorite search) is a losing game. Make sure whatever system you run is sanely constructed, any bugs that do turn up are fixable within a reasonable time frame, and so on. I suppose I will come back later with a rant about how much damage the "black boxes" school of thinking about software has done, especially after it got elevated to practically religious dogma by certain major players. And yes, you can usefully look that up as well.
For those of you who are interested in the data, here are the now complete extracts for your perusal:
The full set of log data
The per day .csv file - and the same in an .ods sheet with some graphing attempts
Per host data in the "Host,Attempts,StartDate,EndDate" format and sorted by attempts, start and end
For those of you interested in learning about OpenBSD and related delights like PF, FreeCode is set to start offering courses featuring among others yours truly as well as the usual support and consulting offerings. Contact the good front end people at FreeCode for further details.
International readers are at liberty to ignore the following, but Norwegian online IT magazine digi.no are apparently in the process of setting up a sort of census of Norwegian bloggers. The following is there to make this blog show up in their listing. You can read their article about the initiative (Norwegian only, unfortunately).