How compartmentalization turned into a security disaster. Greed, incompetence and dishonesty were involved.
IT security, or the lack of any such thing, has grabbed headlines lately here in Norway. A series of high-profile public institutions have seen large-scale worm infections on their Microsoft-based networks. Late last year, the regional government agency responsible for essentially all health care in the western part of the country had a worm infection so bad that it essentially shut down its network as a preventive measure. During the last few weeks, the national police force, of all conceivable organizations, has seen not one but two large-scale incidents.
Use of Microsoft products and sloppy system maintenance are both pervasive enough that similar incidents are likely happening right now elsewhere, somewhere near you too. The news reports about the Norwegian police force's IT problems contained one item that was particularly shocking to IT types like me:
Apparently large parts of the bureaucracy that is responsible for the confidential and correct processing of criminal matters and all sorts of sensitive personal information associated with the crimes runs essential services on Microsoft Windows NT 4.0.
That version of the Microsoft product is so old it is officially abandonware, and early reports of the police network problems included the oldish news that even the antivirus vendors have stopped supporting the system. Later reports had police IT department officials claim that the worm infections were not much of a security problem, since at this point all the worm actually did was spread.
Break out the popcorn, boys and girls: in an upcoming episode, we will see how the worm-infected Windows machines the Royal Norwegian Police did not find or couldn't clean well enough are used in the perpetration of some cybercrime or other.
It's all pretty sickening, and at this point it would be rather tempting to spend the rest of the column ranting about the general stupidity of Windows users. But a smarter approach is to see if there is a lesson to be learned. To do that, we need to backtrack quite a bit and look at the cult of the little black boxes.
The cult of the little black boxes, and Microsoft the 1980s corporation
We need to go back and take in what the world was like in the nineteen-eighties. This was back when the world was divided into real computers (from the likes of IBM, Digital, and regional quasi-monopolies like our own Norsk Data) and those annoying toys called 'personal' microcomputers, where the 'IBM PC compatibles' had emerged as the surprise leader of the pack. Computer networks were usually private, corporate things and rarely interconnected much, with the rare exception of those institutions that were part of the US Department of Defense science experiment that was becoming known variously as ARPANET or 'the Internet'.
If you took your programming classes back in the nineteen-eighties, you likely know that we were taught that black boxes were good. Compartmentalization was the order of the day, meaning that as a developer you were supposed to create code with a well defined interface, so anybody interacting with your code would be presented with a cleanly predictable result for any given input. Your code should be a black box, and for any particular specification several interchangeable modules could be written to fit the bill.
So far so good: predictability is nice, and with compartmentalization comes, we hope at least, clear chains of responsibility. But several factors combined to take the cult of the black boxes and turn it into the corporate culture at Microsoft, which at the time was growing from a handful of furry hackers into a corporation. Microsoft's early successes all came from writing software for single-user systems so limited that working around the limitations of the hardware became much of a lifestyle. At the start of the decade, networking on microcomputers in Microsoft's range was pretty much out of the question, and by the end of the eighties any sort of connectivity (even dial-up) was still an expensive optional extra that remained too hard to configure for most people. On the storage side, we progressed from 128 kilobyte floppies to hard drives of just over a hundred megabytes for personal systems, with the 32 megabyte partition size limit still a very present limiting factor.
Amazing developments all, but both the applications and the organization grew faster than the hardware could keep up with. The organization now had several levels of management, and each one demanded maximum productivity from their underlings. Keeping in mind that each programmer or team would be writing little black boxes anyway, it made perfect sense to set up the software production line so that each developer only had access to those parts of the system he or she was supposed to be working on. That way, developers would concentrate on their main tasks and minimize time spent waiting for compiles to finish. At predetermined times the developers would then upload the source code for their little black boxes to a central build system. The only people who had all the parts of the projects were in fact the custodians of the build system. Source code version control systems were made part of the process, but there is anecdotal evidence that the transition from standalone hacking to a version control regime was a rough one for many early Microsoft developers.
Only a few days ago I offered pretty much the content of the last paragraph to a table of youngish free software developers over beer. The reaction was quick and unanimous: "That way, nobody really knows what's going on in the software". That is a very valid point, and it shows how far we've come with free software. At the same time, there is every reason to believe that the extreme compartmentalization Microsoft established for its product development in the 1980s was the way things were done there until very recently, if indeed it has changed at all.
By the mid-1990s, Microsoft had been dragged kicking and screaming into modern-day network environments, and the ongoing saga of internet-enabled malware started in earnest (I've written a summary in a malware paper), with the company moving from early denial of any bugs whatsoever, through a near-constant barrage of emergency hotfixes, to today's monthly megapatch regime. With the source code still a closely guarded (if occasionally leaked) secret, there is really no way for us to know whether they've learned any lessons at all. One indication that they still have some way to go is this Infoworld article about the state of their protocol documentation (summary: it's not to be trusted at all). As for the state of the source code, all we can do is study the flow of urgent patches.
Much better, then, to learn how it should be done: say, from Theo de Raadt's AsiaBSDCon 2009 presentation about how OpenBSD's release process works. If you want more of the gory details, do check his classic exploit mitigation presentation. You could also most likely do worse than read Damien Miller's OpenSSH security presentation (full text here). It's all those little things we do, at FreeCode and in free software in general.
If you found this column useful, entertaining or irritating, please drop me a line.