Honesty in marketing. You may have heard of it.
It may come as a surprise to some, but I generally do not spend much time on spam related matters. Occasionally I need to do some manual labor to keep spamd and SpamAssassin in trim, but most of the time my little robot helpers just keep running, leaving my desktop essentially spam free.
That changed slightly late last month. Messages hawking the oddest wares started arriving, with a largish number of messages claiming that I had actually signed up to receive them:
You are receiving this message because on 2/26/2009 at 3:57 PM firstname.lastname@example.org 126.96.36.199 registered to receive messages from e-researchcouncil.com and its partners. To change your preferences with e-researchcouncil.com, go to the website and select "Contact Us" to review your options.
To give you an idea how likely that statement is to be true, consider this: the quoted address resolves back to somewhere in America Online's network, pretty much an ocean and then some away from where I'm usually located.
I assume entering my address into a few web forms is somebody's idea of a joke, and the net effect was that a number of spammy messages started appearing in my mailbox, starting on February 27th. Only about a third of the messages contained that particular claim, and a typical message would contain headers like these:
X-From-Line: eHarmonyDating@BranchSprint.com Fri Feb 27 16:30:36 2009
Delivery-date: Fri, 27 Feb 2009 19:15:13 +0100
Received: from [184.108.40.206] (helo=dns7-cronomagic-biz.BranchSprint.com)
by skapet.bsdly.net with esmtp (Exim 4.69)
for email@example.com; Fri, 27 Feb 2009 19:15:13 +0100
From: eHarmonyDating <eHarmonyDating@BranchSprint.com>
Subject: eHarmony could match you with the right singles
Date: Fri, 27 Feb 2009 16:30:36 GMT
X-Information: 73479158_21937306 ListZA251
My first impulse was to play nice at first, in case this was an honest mistake somewhere. That meant sending messages to the X-Complaints-To: addresses and waiting to see what would happen.
You should not be terribly surprised to hear that those addresses all turned out to be invalid, the messages undeliverable.
In the meantime, I went on collecting messages, and the amount of data I had accumulated was large enough that I could reach some preliminary conclusions.
Obviously, in order to reach me the messages had to clear greylisting and avoid triggering too many of my SpamAssassin rules, which in turn meant they were sent using real mail servers. So I started collecting the messages containing that signup claim for further study. Almost all of them were sent from a few distinct subnets, all apparently fairly well stocked with real mail servers.
Based on data from the spam messages, whois lookups and the larger groupings of messages, here are the professional spammers' addresses, for your convenience in case you want to visit them:
4001 Kennett Pike
Greenville, DE 19807
P.O. BOX 243, 2221 S Webster Ave
Green Bay, WI 54301
5580 La Jolla Blvd # 73
La Jolla, CA 92037
5482 Wilshire Blvd. #302
Los Angeles, CA 90036
5482 Wilshire Blvd. #302
Los Angeles, CA 90036
970 West Valley Parkway
Escondido, CA 92025
5580 La Jolla Blvd # 73
La Jolla, CA 92037
7127 Hollister Ave., Suite 25A, #145
Goleta, CA 93117
Top Notch Media, Inc.
1735 Market Street · Suite A · PMB 429
Philadelphia, PA 19103-7588
In addition, some of the domain names used in the spam messages were registered via an anonymizing service whose whois data comes out as:
Dynamic Dolphin Privacy Protect
5023 W 120th Ave #233
The spam volume from all of them swelled at roughly the same time, so it is likely that they cooperate on keeping their lists up to date.
So we see spammers evolving: They buy or rent real mail servers now and they have even started coordinating. Using greylisting has actually increased the cost of becoming a successful spammer.
At our end of the game, we stay ahead thanks to tools like spamd, and several of us dump and share our greytrap lists. It is even possible to collect IP addresses and feed a large number at a time to spamdb, but after a little while I grew tired of the increased manual work and decided it was time for a counterprank. Cleaning up after spammers is no fun, unless you can have little robot helpers do the heavy lifting.
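The bulk feeding just mentioned can be sketched roughly like this. A hypothetical example: the file name trapped-ips.txt and the sample addresses are mine, and the loop only echoes the spamdb invocations so the sketch can be inspected without an OpenBSD gateway at hand.

```shell
# Hypothetical sketch: bulk-load spam-sending IP addresses into
# spamd's database as TRAPPED entries via spamdb(8).
# "trapped-ips.txt" (one address per line) is an assumed file name.
printf '%s\n' 192.0.2.1 192.0.2.2 > trapped-ips.txt

while read -r ip; do
    # On a real gateway, drop the "echo" to run the command itself:
    #   spamdb -t -a "$ip"
    echo spamdb -t -a "$ip"
done < trapped-ips.txt
```

Entries added this way expire like any other TRAPPED entry, so a list fed in once does not stay around forever.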
The Counterprank: A Feedback Loop
Regular readers will remember that I keep a collection of known bad addresses in my domains for my greytrapping, all of them generated elsewhere, and it has come in handy at times. Run of the mill spam operators tend to just suck in anything that looks like an email address, and keeping the list available on the web has served us extremely well here.
The professional spammers are apparently not quite that stupid, so the problem became a little different. They were able to sneak past greylisting and conventional content filtering. They also appeared oblivious to email communication, and as far as I can tell their unsubscribe pages are not entirely believable.
So it was a relief to find that places such as http://e-researchcouncil.com/ are very happy to accept any email address you can come up with. Time to enlist a few of our imaginary friends, drawn from the obvious source.
I did ponder the ethics for a few moments. After all, the forms included sentences such as "I certify that I am a US citizen", which was about as true as the assertion that I had signed up via an AOL proxy. But I did not ponder that matter for long. Moments later, most of the spam operators found themselves with new neighbors with odd names and foreign email addresses.
The net result is that the hosts start appearing automagically in the hourly dump of my list of greytrapped addresses and in the daily spamd activity report. With a little luck, we have succeeded in increasing the cost of spamming one tiny increment.
If you found this article useful, enjoyable or irritating, please drop me a line. Material related to this article is available via links from my web space. Some additional material will be made available for reasonable research purposes. If you want more extensive or non-trivial assistance, please contact me (via email or other means) to make arrangements.
Note that the list of greytrapped addresses is updated ten past every full hour; fetching it every minute, as some Americans have started doing, is not a good use of your resources.
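For those fetching the list, a minimal sketch of a sane schedule, assuming a hypothetical list URL and pf table name (neither is from this article): an hourly crontab entry a few minutes after the update, feeding the fresh copy to a pf table via pfctl.

```shell
# Hypothetical crontab entry: fetch the greytrap list at a quarter past
# each hour (it is updated at ten past) and replace the contents of a
# pf table with the fresh copy. URL and table name are examples only.
15 * * * * ftp -o /var/db/traplist.txt http://lists.example.com/traplist && pfctl -t spamd-greytrap -T replace -f /var/db/traplist.txt
```

`pfctl -T replace` swaps in the new list atomically, so there is no window where the table sits empty between fetches.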
Saturday, March 21, 2009
Saturday, March 14, 2009
How compartmentalization turned into a security disaster. Greed, incompetence and dishonesty were involved.
IT security, or the lack of any such thing, has grabbed headlines lately here in Norway. A series of high profile public institutions have seen large scale worm infections on their Microsoft based networks.
Late last year the regional government agency responsible for essentially all health care in the western part of the country had a worm infection so bad they essentially shut down their network as a preventive measure.
During the last few weeks the national police force, of all organizations, has seen not one but two large-scale incidents. Use of Microsoft products and sloppy system maintenance are both pervasive enough that similar incidents are likely happening right now elsewhere, somewhere near you too.
The news reports about the Norwegian police force's IT problems contained one item that was particularly shocking to IT types like me:
Apparently large parts of the bureaucracy that is responsible for the confidential and correct processing of criminal matters and all sorts of sensitive personal information associated with the crimes runs essential services on Microsoft Windows NT 4.0.
That version of the Microsoft product is so old it is officially abandonware, and early reports of the police network problems included the oldish news that even the antivirus vendors have stopped supporting the system.
Later reports had police IT department officials claim that the worm infections were not that much of a security problem, since at this point all the worm actually did was spread. Break out the popcorn, boys and girls: In an upcoming episode, we will see how the worm-infected Windows machines the Royal Norwegian Police did not find or could not clean well enough are used in the perpetration of some cybercrime or other.
It's all pretty sickening, and at this point it would be rather tempting to spend the rest of the column ranting about the general stupidity of Windows users. But a smarter approach is to see if there is a lesson to be learned.
To do that, we need to backtrack quite a bit and look at the cult of the little black boxes.
The cult of the little black boxes, and Microsoft the 1980s corporation
We need to go back and take in what the world was like in the nineteen-eighties. This was back when the world was divided into real computers (from the likes of IBM, Digital, and regional quasi-monopolies like our own Norsk Data) and those annoying toys called 'personal' microcomputers, where the 'IBM PC compatibles' had emerged as the surprise leader of the pack.
Computer networks were usually private, corporate things and rarely interconnected much, with the rare exception of those institutions that were part of the US Department of Defense science experiment that was becoming known variously as ARPANET or 'the Internet'.
If you took your programming classes back in the nineteen-eighties, you likely know that we were taught that black boxes were good.
Compartmentalization was the order of the day, meaning that as a developer you were supposed to create code with a well defined interface, so anybody interacting with your code would be presented with a cleanly predictable result for any given input.
Your code should be a black box, and for any particular specification several interchangeable modules could be written to fit the bill.
So far so good, predictability is nice and with compartmentalization comes, we hope at least, clear chains of responsibility.
But several factors combined to turn the cult of the black boxes into the corporate culture of a company that was at the time growing from a handful of furry hackers into a corporation, namely Microsoft.
Microsoft's early successes all came from writing software for single-user systems that were so limited that working around the limitations of the hardware became much of a lifestyle.
At the start of the decade, networking on microcomputers in Microsoft's range was pretty much out of the question, and by the end of the eighties any sort of connectivity (even dial-up) was still an expensive optional extra that was still too hard to configure for most people.
On the storage side, we progressed from 128 kilobyte floppies to hard drives of just over a hundred megabytes for personal systems, with the 32 megabyte partition size still a very present limiting factor.
Amazing developments all, but both the applications and the organization grew faster than the hardware could keep up with.
The organization now had several levels of management, and each one demanded maximum productivity from their underlings.
Keeping in mind that each programmer or team would be writing little black boxes anyway, it made perfect sense to set up the software production line so each developer only had access to those parts of the system he or she was supposed to be working on.
That way developers would be concentrating on their main task and minimize time spent waiting for compiles to finish. At predetermined times the developers would then upload the source code for their little black boxes to a central build system.
The only people who had all the parts of the projects were in fact the custodians of the build system.
Source code version control systems were made part of the process, but there is anecdotal evidence that the transition from standalone hacking to a version control regime was a rough one for many early Microsoft developers.
Only a few days ago I offered pretty much the content of the last few paragraphs to a table of youngish free software developers over beer.
The reaction was quick and unanimous:
"That way, nobody really knows what's going on in the software".
That is a very valid point, and it proves how far we've come with free software.
At the same time there is every reason to believe that the extreme compartmentalization that Microsoft established for its product development in the 1980s was the way things were done there until very recently, if indeed it has changed at all.
By the mid-1990s Microsoft had been dragged kicking and screaming into modern-day network environments, and the ongoing saga of internet-enabled malware started in earnest (I've written a summary in a malware paper -- an updated version is available here), with the company moving from early denial of any bugs whatsoever through a near-constant barrage of emergency hotfixes to today's monthly megapatch regime.
With the source code still a closely guarded (if occasionally leaked) secret, there is really no way for us to know if they've learned any lessons at all.
One indication that they still have some way to go is this Infoworld article about the state of their protocol documentation (summary: it's not to be trusted at all). As for the state of the source code, all we can do is to study the flow of urgent patches.
Much better, then, to learn how it should be done -- say, from Theo de Raadt's AsiaBSDCon 2009 presentation about how OpenBSD's release process works. If you want more of the gory details, do check his classic exploit mitigation presentation. Also, you could most likely do worse than read Damien Miller's OpenSSH security presentation (full text here).
It's all those little things we do, at FreeCode and in free software in general. If you found this column useful, entertaining or irritating, please drop me a line.