
Wednesday, September 14, 2022

Open Source in Enterprise Environments - Where Are We Now and What Is Our Way Forward?

© 2025 Peter N. M. Hansteen

We have been used to hearing that free and open source software and enterprise environments in Big Business are fundamentally opposed and do not mix well. Is that actually the case, or should we rather explore how business and free software can both benefit going forward?

Puffy, the OpenBSD mascot, shiny version

Free and Open Source vs Enterprise and Business: The Bad Old Days

Open source, free software and enterprise IT environments have both been around for quite a while. I'm old enough to remember when the general perception was that the free exchange of source code was merely a game for amateurs, or at best an academic exercise. In contrast, the proper business way of doing things was to perhaps learn general principles and ideas from the academics, but real products for business use would be built to be sold as binary only, with any source code kept locked away and secret.

Note: This piece is also available without trackers but more basic formatting here.

If you're a little younger you may remember a time when "Windows NT is the future" was essentially gospel and all the business pundits were saying we would be seeing the last of Unix and mainframes both within only a handful of years.

Thinking back to the late 1980s and early 1990s it is hard to imagine now how clear the consensus seemed to be on the issue at that point. The PC architecture and a few other proprietary technologies were the way of business and the way forward.

No discussion or dissent seemed possible.

Then, The Internet Happened

Then the Internet happened. Few people outside some inner circles were aware that what actually made the Net work was code that came directly out of the Berkeley Software Distribution. BSD Unix, or simply BSD for short, was a freely licensed operating system that was the result of a rather informal cooperation of researchers in academia and business alike, originally derived from Unix source code.

When the United States Department of Defense wanted work done on resilient, device independent, distributed and autoconfiguring networks, the task of supplying the reference implementation for the TCP/IP stack, based on a stream of specifications dubbed Requests for Comments, or RFCs, fell to the international group of developers coordinated by the Computer Science Research Group at the University of California's Berkeley campus. In short, the Internet came from BSD, which, thanks to a decision made by the Regents of the University of California, was freely licensed.

The BSD sourced TCP/IP stack was part of all Internet capable systems until around the turn of the century, when Linux developers and later Microsoft started working on their own independent implementations. By that time it had been forcefully demonstrated, to the developer community at least, that open source code was indeed capable of scaling to industrial use and beyond.

Due to a handful of accidents of history, mainly involving imperfect communications between groups of developers combined with a somewhat misguided lawsuit involving the BSD code, it was Linux that became the household term for free software in general and for the re-emergence of Unix-like systems in the Internet connected server market space. Linux distributions came with a largely GNU userland as well as generous helpings of BSD code.

At roughly the same time Linux emerged, the BSD code became generally available via the FreeBSD and NetBSD projects, and soon after the OpenBSD project, which forked from the NetBSD code base in the mid 1990s. For a more detailed history of these developments, see the three part series on the APNIC blog starting with this piece. If that piqued your interest, you may enjoy this piece about some incremental improvements over time in OpenBSD.

The War on Linux and the Proliferation of Open Source Tools

During the 1990s and early 2000s the Internet, and services of all kinds that ran on top of it, expanded in all directions. That expansion brought the free unixlike systems such as Linux and the BSDs, which would run quite comfortably on commonly available hardware, along with an ever expanding number of development tools and software of all kinds, to new categories of users.

The success of the open source software led to what would be dubbed The War on Linux, a rather vicious defamation campaign executed in both PR campaigns and lawsuits, and driven mainly by the then-dominant desktop software vendor's ambition to dominate server space as well. One of the more bizarre sequences of Linux-targeting lawsuits was run by proxy, and is extensively documented at groklaw.net (note: HTTP-only site). It is worth noting that the process eventually led to bankruptcy for the litigant.

Over the years it became clear to essentially everyone in the industry that open source tools were essential to development, and several practical aspects of developer life led to ever increasing open source use. During the time of The War on Linux, the likes of Apple, Cisco, Netscaler (later acquired by Citrix) and Sun Microsystems (later acquired by Oracle) either incorporated open source code in their products and workflows, open sourced large parts of their own code, or forked freely available code to base proprietary systems on. It may be worth discussing each of these approaches in detail later.

On to the Present: We All Use...

Fast forward to the present day, and I recently had colleagues sum up that in the enterprise environments we move in,

Software is developed on Macs,
deployed on a cloud somewhere,
which more likely than not runs on Linux.

And the software itself is likely built with open source tools and pulls in dependencies from open source projects, possibly hosted on GitHub or other public sites.

Your software in all probability uses some open source. And even if you are not a developer, you most likely use open source tools that are integrated in your operating system or common application software or web services.

On the client side of things, an ever increasing part of the volume comes from smartphones, tablets and the like, where the market share for open source based systems (Android and iOS) exceeds 90 percent. In a document we will come back to later, the Norwegian National Security Authority (NSM) estimates that approximately 90 to 98 per cent of all software in use to some extent has dependencies on open source software. Other relevant statistics can be found here, here and here. Or, if you're in a bit of a hurry: it is estimated that some 3.1 billion Linux-based Android phones are currently in use. In addition, there is Apple, which we know ships a significant amount of BSD code in their software.

It is of course worth noting that by now even the old open source arch-enemy Microsoft ships their offerings with what amounts to an almost complete Linux distribution as a subsystem. The same company regularly lobs cash over the wall to the likes of The OpenBSD Foundation and regularly contributes to other open source projects. Not to mention that much of what runs in their Azure cloud is one way or the other Linux based.

Security: QA Your Supply Chain, Exercise the Right to Repair

Back in the days of The War on Linux, and to some extent still, we have often been faced with claims that open source software could either never be as secure as proprietary software or that open source software was inherently more secure than the closed source kind, because "given enough eyes, all bugs are shallow".

Both assertions fail: even without access to source code it is possible to probe running software for vulnerabilities, while on the other hand the shallowness of bugs depends critically on the eyes in question belonging to people with sufficient competence in the field.

A couple of security incidents in recent years generated a flurry of largely uninformed punditry, and the public reaction to them is worth revisiting for the lessons that can be learned.

The Solarwinds supply chain incident aka SUNBURST (2020) - One of the most widely publicized yet mostly quite poorly understood security incidents in recent years emerged when it was revealed that adversaries unknown had been able to compromise the build computers where the binaries for the company's widely used network management software were built for distribution.

The SANS institute has produced a fairly thorough writeup of the incident, which breaks down as follows: the first stage of a multi-stage compromise kit was included in binary distribution packages, complete with authentic signatures from the build system, that were largely put directly into production environments by network admins everywhere. The malware then went on to explore the networks it landed in, and through a process that made heavy use of crafted DNS queries and other non-obvious techniques, the miscreants were able to compromise several high security government and enterprise networks.

Several open source component supply chain incidents (2020 onwards) - Soon after the SUNBURST incident, several incidents occurred where popular open source components that other systems pulled in as dependencies started malfunctioning or were suddenly unavailable, causing complete malfunctions or loss of functionality, such as a web service suddenly refusing to interact with specific networks.

The sudden breakage in open source components caused quite a bit of uproar, and predictably the chattering subset of the consulting class set about churning out dire warnings about the risk of using open source of any kind.

Watching from the sidelines, it struck many open source oriented professionals, myself included, that the combination of these incidents carries an important lesson: in a modern environment where upgrades are pulled in automatically and frequently, no untested code should ever be deployed directly to production.

Blind trust versus the right to read (and educate yourself) and the right to repair - In the case of proprietary, binary-only software, you have no choice but to trust your supplier to address any defects in a timely manner. The upshot is that with proprietary, binary-only software you do not have access to two important features of open source software: the right to read and study the code, and the right to repair any defects you find, potentially saving yourself service shutdowns or workarounds while the secret parts of your system get fixed elsewhere.

The lesson to be learned is that you need to run quality assurance on your supply chain. You may choose to trust, but you still need to verify. That goes for open source and proprietary software both.
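What does that verification look like in practice? A minimal sketch follows, assuming the supplier publishes a checksum list alongside each release; the URLs and file names here are hypothetical, and GNU coreutils' sha256sum stands in for whichever verification tool your platform provides:

#!/bin/sh
# Fetch the artifact and the supplier's published checksum list.
curl -fsSLO https://vendor.example.com/netmgmt-2.1.tgz
curl -fsSLO https://vendor.example.com/SHA256SUMS

# Refuse to proceed unless the download matches the published checksum.
sha256sum --check --ignore-missing SHA256SUMS || exit 1

# Even a verified artifact goes to a staging environment for testing,
# never straight into production.

Keep in mind that a matching checksum only proves you received the package the supplier built. As SUNBURST demonstrated, a compromised build system will happily produce authentic signatures for its own malware, which is why the staging and testing step matters at least as much as the verification.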

This Norwegian felt slightly elated when reading that the Norwegian National Security Authority (NSM) provides essentially the same assessments in their published recommendations.

Contributing - Cooperating on Maintenance

As with any product it is entirely possible to be a relatively passive consumer, just install and use, and build whatever you need on top, interacting with the community only via downloading as needed from the mirror sites. Communicating via online forums, mailing lists or other channels is entirely optional.

If you are a developer or integrator with an ambition to make one or more open source products central to your business, either by using and contributing to an existing project or by starting a new one, several approaches are possible.

Let's take a look at the strategies some big names adopted on open source in their products:

Grab and fork, sell hardware: The Netscaler load balancer and application delivery products were based on a fork of FreeBSD.

They appear to have rewritten large parts of the network stack and devised a multifunctional network product on top, which among other things features a slick web GUI for most if not all admin tasks.

If you look closely, Netscaler (since acquired and rebranded by Citrix) appear to cultivate a menagerie of open source projects to interface with their products.

However, they appear not to be in particularly close contact with their main upstream. (It is worth noting that the BSD license does not require publishing changes to the code base.) When dropping to a shell on a Netscaler unit, last time I looked the output of uname -a seemed to indicate that their kernel was still based on FreeBSD 8.4, which the FreeBSD web site lists as End of Life since August 1, 2015.

Grab and fork, sell hardware, keep sync with your upstream: Starting with the initial release of macOS, Apple have maintained the software that drives their various devices, from phones to desktop computers, and related services, with generous helpings of open source code, along with what appears to be a general willingness to publish code and interact with upstream projects such as the FreeBSD project. Apple maintains the Open Source at Apple site for easy access to the open source components of their offerings.

This mode of open source interaction seems to be rather common, especially among network oriented suppliers of various specialty gear.

Open source everything, sell support: Despite early scepticism from business circles, several companies have built successful businesses on the model of participating in or even driving the development of open source systems or components, making support contracts (which may include early or privileged access to updates) as well as consulting services the main or sole source of company revenue.

Decide what code is both good enough to publish and useful elsewhere: Finally, for those of us in the services or consulting business who will occasionally write code that is not necessarily business specific, the reasonable middle ground is just that. Identify code that meets the following criteria:

  1. Was developed by yourself and cleared for release by your organization and other stakeholders such as your customer
  2. Is high enough quality that you dare show it to others
  3. Does not reveal core aspects of your clients' business
  4. Is likely to be useful elsewhere too
  5. Would be nice to have exposed to other sets of eyes in order to identify bugs and fix them

If you have code under your care in your organization that meets those criteria, you should in my opinion be seriously considering making that code open source.

Your next adventure will then be to pick an appropriate license.

Now for Policies and Processes - Do You Have Them?

If you have followed along this far, you probably caught on to the notion that it is wise to set up clear policies and procedures for handling code, open source or otherwise.

Keep in mind that

A license is an assertion of authority. A license is a creator's message to the world that states the conditions others must abide by when using the code, or, if the license allows it, when changing and further developing it.

Without a license the default regime is that only the person or persons who originated the code have the right to make changes or for that matter make further copies for redistribution.

For that reason it is important to ensure that every element of your project has a known copyright and license.

There have been quite a few instances of free software projects rewriting functionally equivalent, or hopefully better, versions of whole subsystems because of unacceptable or unclear licenses (see the OpenBSD articles in the Resources section for some examples).

Procedures and policies, you need them. A self employed developer working on their own project is usually free to choose whatever license they please. In a corporate environment, any code developed is likely tied to a contract of some sort, which may or may not set the parameters of who holds the copyright or what licenses may be acceptable. The exact parameters of what can be decided by contract and what follows from copyright law may vary according to what jurisdiction you are in. When considering whether to publish your own code under an open source license, make sure all stakeholders (and certainly any parties to any relevant contract) agree on the policies and procedures.

Keep it simple, for your own sake. There are supposedly several hundred licenses in existence that the Open Source Initiative considers to be open source. In the interest of making life easier for anyone who would be interested in working on your code, please consider adopting one of those well-known licenses.

They range from the simplest BSD or MIT style ones, which run a handful of sentences and can be condensed to "you can do whatever you like with this material except claim that you made it all yourself", to elaborate documents (the GNU GPL v3 comes to mind) which set out detailed terms and conditions, may require republication of any changes under the same terms, and could set up a specific regime with respect to patent disputes.

It is also important to consider that components you use in your project may have specific license requirements and that different licenses may contain terms that make the licenses incompatible in practice.

My general advice here is, make it as simple as possible, but no simpler.

Or to rephrase slightly: The general advice for dealing with licenses echoes that of dealing with crypto code: Do not set out writing your own unless you know exactly what you are doing. Avoid that path if at all possible.

When in need, call in Legal (but make sure they understand the issues). Lawyers endure a lengthy education in order to pass the bar and turn to practicing law, but there is no guarantee that a person well versed in other business legalese has any competence at all when it comes to matters of copyright law. When you do turn to Legal for help, be very exacting and stern in insisting that they demonstrate a command of copyright basics and if at all possible have a reasonable real world understanding of how software is built.

As in, you really do not want to spend an entire afternoon or more explaining the difference between static and dynamic linking and why this matters in the face of a certain license, or that specific terms of different licenses deemed open source by the Open Source Initiative may in fact be incompatible in practice.

It is important to keep in mind that doing open source is about making our lives more productive and enjoyable by exchanging ideas between quality professionals, perhaps sharing the load of maintenance and leaving us all more resources to develop our competence and products further.

The Way Forward - The Work Goes On

So this is where we are today. Modern software development and indeed a goodly chunk of business and society in general depends critically on open source software.

If you enjoyed this piece (or became annoyed by any part of it) I would like to hear from you. I especially welcome comments from colleagues who have experience with open source use and/or development in enterprise settings. Of course if you are just curious about open source software in these settings, you are welcome to drop me a line too. I am most easily reachable via email nix at nxdomain dot no.


I want to extend thanks to Malin Bruland and Knut Yrvin for excellent comments and proofreading.

Resources

All things open source (including an almost encyclopedic collection of licenses) at The Open Source Initiative

Wikipedia: Berkeley Software Distribution about where the Internet came from

The GNU Operating System, supported by The Free Software Foundation

The FreeBSD operating system project

Open Source at Apple

Peter Hansteen: What every IT person needs to know about OpenBSD Part 1: How it all started,
What every IT person needs to know about OpenBSD Part 2: Why use OpenBSD?,
What every IT person needs to know about OpenBSD Part 3: That packet filter
(or the whole shebang in the raw at bsdly.blogspot.com)


Bradford Morgan White: The Berkeley Software Distribution

Nasjonal Sikkerhetsmyndighet (NSM): Åpen kildekode i den digitale leverandørkjeden (Norwegian only)

Business of Apps: Android Statistics (2023)

Bank My Cell: How Many Android Users Are There? Global and US Statistics (2023) (Source: https://www.bankmycell.com/blog/how-many-android-users-are-there)

Statista: Market share held by Apple iOS operating system of smartphone shipments from 1st quarter 2011 to 4th quarter 2022

Appendix: License Complexity Measured by Word Count

While presenting on free and open source software in enterprise environments, the topic of license complexity and how to handle licensing matters usually generates questions of the type,

"Does doing open source mean we need to staff an Open Source Program Office?

Does this not add a considerable measure of complexity to the development organization?

Do the open source licenses mean we have to hire even more lawyers?"

So I set out to do a little research. I figured that the number of words in a text is a useful, if not perfect, indicator of complexity, so we could use that measure as an easy to obtain proxy for how complex the licenses we are likely to encounter are in practice.

I headed over to the Open Source Initiative website and their excellent collection of open source licenses. I then picked out the more common open source licenses, and for each license I pasted the text into the word counter at wordcounter.net, which in addition to the word count provides an indication of likely target audience "reading level" and estimated reading time as well as a few other measures of the text characteristics.
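If you would rather reproduce the raw word counts locally than paste text into a web service, the venerable wc(1) handles the core measurement. A minimal sketch, assuming you have saved each license as a plain text file (the file names are examples only):

#!/bin/sh
# Print a word count for each saved license text.
for f in bsd-3-clause.txt gpl-2.0.txt gpl-3.0.txt apache-2.0.txt; do
    printf '%s: %s words\n' "$f" "$(wc -w < "$f")"
done

The reading level and reading time estimates, on the other hand, come from the web service and have no equally simple command line equivalent.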

The results are in the following table:


License complexity by word count

License                                                Word count  Reading level     Reading time
1-clause BSD License                                          160  College Graduate  35s
2-clause BSD License                                          191  College Graduate  42s
3-clause BSD License                                          220  College Graduate  48s
GNU GPL v2.0                                                 2964  College Graduate  10m47s
GNU GPL v3.0                                                 5608  College Graduate  20m30s
Apache License v2.0                                          1677  College Graduate  5m44s
Microsoft 365 Developer program license                      4803  College Graduate  17m28s
Microsoft Windows 11 OS license terms                        5766  College Graduate  20m58s
Oracle End User License Agreement                            2554  College Graduate  9m17s
Adobe End-User License Agreement                              450  College Graduate  1m38s
Apple Licensed Application End User License Agreement        1524  College Graduate  5m32s

Once again, strict word count is not a perfect indicator of complexity — other measures such as sentence length and logical structure and interdependencies are likely to matter in real life scenarios.

Thursday, July 1, 2021

The Impending Doom of Your Operating System Going to or Past 11, Versus the Lush Oasis of Open Source Systems

Will the uncertainty over forced obsolescence of fairly recent hardware force Microsoft and Apple users to switch to open source alternatives?

During the last few weeks several items of computing hardware in our household had reached the point in their lifetime when it made sense to trade in for upgrades.

Digi.no published a Norwegian version of this article: "En skummel fremtid med operativsystem som går til 11 eller forbi, eller en rolig oase med fri programvare?"

I've written articles about my last two major laptop upgrades (in 2010 and 2017, respectively), each time detailing how to deal with hardware that was new enough that I had no way to be certain it would work optimally with my chosen operating system, OpenBSD.

I have tended to jump from snapshot to snapshot, generally following whatever was -current on OpenBSD/amd64. There were other upgrades during that time, but those were straightforward enough that I did not see a need to write about them.

Note: This piece is also available without trackers but classic formatting only here.

This time around, even though the process involved interactions with OpenBSD developers via the bugs@ mailing list and even trying two separate models from the same manufacturer before settling on what I wanted, I considered letting this upgrade round pass relatively undocumented. There was simply not enough drama involved in the process to make for interesting reading or an inspired writing process.

But then came the announcements from Apple and Microsoft of their operating systems going past 11 or to 11 respectively, spaced not too many weeks apart. In both cases, the announcements indicated that the new operating system versions would not work with older hardware.

At their WWDC event in early June 2021, Apple announced new versions of their system with somewhat vague but only thinly veiled formulations that specific new features of the upcoming system would only be available on the newer ARM architecture "Apple silicon" hardware.

Then a few weeks later into June 2021, Microsoft announced their Windows 11, and the announcement included some fairly confusing statements that seemed to indicate at first that Windows 11 would only work well or at all on hardware based on Intel's 8th generation Core processors or equivalent.

Apple is almost a year into their announced two year transition from Intel-supplied processors, with a base architecture generally known as AMD64, to their own Apple-designed ARM64-based systems on a chip. Apple has generally kept some level of support for Macs for seven years after release, and with a transition to a new architecture underway, it becomes even less surprising that support for older devices will gradually erode and that some new system features will only be available on newer model hardware.

This contrasts sharply with Microsoft's situation, with the company not really dependent on hardware sales and with no announced, or unannounced but apparent, move to a different architecture. Whatever the reason for the cutback in support, the initial response from the public seemed to indicate that there now was a real fear that on installing the new software, upgrading Windows users would be faced with something like


[screenshot of a console full of kernel panic output] (which is in fact an OpenBSD panic) unless they upgrade to newer hardware before trying the new software release.

The fear of abandonment seemed real and echoed the feelings I have had myself over the years when getting new hardware to run a free operating system on.

The previous articles chronicle some of the experimenting that was needed in the past to make OpenBSD work when the hardware was newer than what had yet had time to reach the developers. But in the end we could always be quite certain that we could make what we were interested in work, given time and perhaps some interaction with developers, or, if you were up to it, by becoming a developer yourself.

Anyway, over time the chance that things would just work increased, and the sweet spot was buying hardware that had been released within the last couple of years before the operating system release you were installing.

Hardware drivers would generally be kept in and maintained as long as they appeared to be useful. In general a driver would only be retired from the tree if it was useful only to an architecture that was going out of support, such as OpenBSD/vax, which went to the attic after the OpenBSD 5.9 release in 2016.

The major lesson here is that free systems like OpenBSD, Linux and others will keep hardware support around as long as it appears to be useful to somebody, somewhere.

If major players like Microsoft choose to simply abandon users who do not have the latest hardware to stagnation with only security updates, moving to a free software alternative may very well be a viable option for users who are not willing to discard hardware that is only a few years old, as long as their typical use case allows.

In my own experience, with hardware that has been on the market for about a year or possibly more you will encounter few to no problems making things work. My most recent Linux experience on laptops is with 9th and 11th generation Intel Core hardware, both of which will serve you well, including multimedia setups, excluding only those that explicitly tell you that you are on your own (Netflix being a case in point).

Now for an incrementally geekier part. If you are not that interested in OpenBSD, please feel free to skip.

But if you were waiting for the promised OpenBSD on newer hardware runthrough, you will get the fuller picture by reading the following and by looking up the details in the mailing list archives via the links here and the links in those messages.

The thread AMD Ryzen based Asus ZENBOOK 14 UM433DA-PURE4 14" panic at first boot post install - how to debug chronicles the interactions from "machine installs but does not survive first boot" through finding that the machine's BIOS announced but did not actually implement some features, and the subsequent changes that went in to the mainstream OpenBSD kernel, if I remember correctly just in time to be included in OpenBSD 6.9.

However, as can be seen in ASUS ZenBook X freezes, there were problems in the DRM/xorg area that would prove too hard to debug. Do read the whole thread, it contains useful debug info for when you get into a similar situation yourself.

Returning that system to the shop for a refund while I was still fiddling with the finer points of the next system was an interesting experience in itself.

I tried to restore the system to its pre-OpenBSD state before returning it, but as it turns out the Windows 10 install image Microsoft supplies will not be able to complete an install by itself.

Rather, it will prompt you for a hardware driver you are supposed to have at hand for this system.

As a result of this, the machine still had OpenBSD installed -- with my user and home directory removed and only root as an active user -- when I handed the machine in for the refund, and it was immediately clear that the support techs had never seen anything more Unixy than macOS before. Fortunately this only led to a short delay in issuing the refund (but I now have a 1 year PC and Mac Support contract which I do not know that I actually need).

Anyway, I had already discovered an offer for a slightly more expensive model with better features, so ordered and took delivery of the machine described in ASUS ZenBook S: SSD unrecognized, possible new iwx variant, which chronicles the relatively light debugging needed to get the system in shape.

In short, after receiving the package with the new machine late in the afternoon, I spent a few hours working around a few items that led to rather puzzling failures at first, but fortunately they were all relatively easy to fix with a little help from OpenBSD developers who read the bugs@ list.

The first hurdle was that the system apparently did not recognize the built in SSD. This turned out to be a matter of finding the BIOS option for turning off RAID controller functionality, which in any case does not make a whole lot of sense in a system where it is physically impossible to fit more than one storage device on a permanent basis.

The option turned out to live in the BIOS' Advanced menu, labeled VMD setup menu, where you set the Enable VMD controller option to Disabled. Once that is done, the SSD shows up as a regular NVMe device:

nvme0 at pci3 dev 0 function 0 "Intel NVMe" rev 0x03: msix, NVMe 1.3
nvme0: INTEL SSDPEKNW010T8, firmware 004C, serial BTNH03460GYE1P0B
scsibus1 at nvme0: 2 targets, initiator 0
sd0 at scsibus1 targ 1 lun 0: <NVMe, INTEL SSDPEKNW01, 004C>
sd0: 976762MB, 512 bytes/sector, 2000409264 sectors

This made it possible to install on the internal SSD proper, and the next issue was that this 11th generation Intel Core system needed a newer revision (version 5.10) of the Linux-derived DRM code. At the time (and still at the time of writing) Jonathan Gray maintained an as-yet-uncommitted branch of the OpenBSD kernel with the code I needed. The reason this DRM code version was not committed to the main tree was that the newer code caused some regressions on older hardware.

On my system, it looked like the stock kernel would panic when loading the iwx(4) driver, but booting the test kernel Jonathan supplied cured that problem, and I have been running once-a-week checkouts of the drm510 kernel on top of sysupgraded snapshots since.

However, even with the iwx(4) driver now loading, the wireless network device did not work. 

Running doas fw_update -v revealed that several of the relevant firmware files had been corrupted, and after doas fw_update -d iwx and re-installing (doas fw_update iwx), doas /etc/netstart iwx0 worked as expected and with excellent performance.
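For reference, that repair sequence collected in one place:

# Verbose listing revealed several corrupted firmware files.
doas fw_update -v
# Delete the corrupted iwx firmware, then fetch and install it afresh.
doas fw_update -d iwx
doas fw_update iwx
# Bring up the wireless interface with the fresh firmware.
doas /etc/netstart iwx0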

In the meantime it had turned out that not only were the audio parts of the system in fact supported (it only took a one line patch to enable them), but only minor manipulation of configuration files would make the audio output signal switch correctly between the internal speaker and my headphones, and that for video conferencing a low cost full duplex USB headset was the better choice.

So now I have a gorgeous, lightweight 13.9 inch laptop running OpenBSD, with Xorg at a 3300x2200 pixel resolution and everything I care about working. With a little attention to proper testing, we have reason to believe that all of this will be properly supported, without regression for older hardware versions, in the upcoming OpenBSD 7.0 release.

As I had hinted earlier, you may very well find yourself better served and supported by the open source operating system of your choice and its developers and users than you can reasonably expect from the commercial, proprietary options.

If you have questions about anything in this article, OpenBSD or other free systems, please let me know in comments here, seek out a local-to-you user group (the ones I am most involved in are NUUG, the national Norwegian Unix User Group, and BLUG, the Bergen (BSD and) Linux User Group), or drop me an email. If you choose the last option, please read my read me first document before sending a second message.




Update 2021-07-07: As reported in the following tweet, the DRM 5.10 update is now in, and I can go back to quiet sysupgrade(8) from snapshot to snapshot:

Which also means OpenBSD 7.0 will seriously rock on this and similar machines.
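For those who have not tried it, following snapshots really is a one command affair; sysupgrade(8) with the -s flag fetches the newest snapshot sets and reboots to install them:

# Fetch the latest snapshot and reboot into the upgrade.
doas sysupgrade -s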


If you found this piece to be useful, informative, annoying or would for some other reason like to contact me or comment, please do.

You might also be interested in reading selected pieces via That Grumpy BSD Guy: A Short Reading List (also here).


Wednesday, August 29, 2012

Ubuntu Made Easy May Have Achieved Its Noble Goal


A new No Starch Press title sets out with the intention of making Ubuntu Linux accessible to newcomers of the more timid kind. It may very well have succeeded.

If you've been part of the open source community for a while, you will have seen quite a few books that set out to make some version or other of Linux accessible to beginners.  While some of these efforts have been quite competent and admirable, others have been too shallow, barely rewritten from a template originally created with other systems in mind.  Others again have failed by being simply too massive and not actually very beginner oriented past the first ten to fifteen pages or so.

It's fair to say that a new Linux beginners' book has the odds stacked against it in several important ways. But then for a greying unixer like myself, there is always a lingering hope that the next Linux or other Unix for beginners book will finally get it right, and manage to strike the right balance between a gentle learning curve and providing enough information to be genuinely useful.

So when No Starch Press asked me if I would like a review copy of Rickford Grant and Phil Bull's Ubuntu Made Easy (I have a pre-existing business relationship with No Starch Press, see note at the end), I hesitated for a few minutes and told them I'd take a peek.

When my review copy landed on my desk a few weeks ago I had already had access to the PDF version for a little while.  Seeing the total page count in the PDF I was at first a little worried that this would be another one for the too big to be useful category.

But I needn't have worried. The writing in the approximately 420 pages of core text flows well and the subject matter is presented in a way that in my limited testing on less experienced users (I do not have easy access to truly fresh newbies, unfortunately) seems to build a useful and gentle learning curve.  A gentle learning curve does not necessarily mean low information density, however -- even my not totally green test subjects all found pieces of new information or useful tips in all chapters.

The text (with fairly frequent and useful illustrations) flows through a total of 22 chapters and four appendixes that take the reader from a very gentle introduction to the system (even suggesting that you run from the live CD at first, only committing to a permanent install once you've gotten your feet wet) through a sequence of smaller projects in a full range of Linux desktop life topics that are clearly designed both to expand the users' skill set and to build their confidence in their own abilities.

After the project or task oriented chapters that make up the bulk of the book, the later chapters include an introduction to interacting with the Ubuntu community as well as a fairly useful symptom-oriented troubleshooting guide.

More experienced users may opt to skip or browse rather quickly through the early chapters, but as I mentioned earlier, even the more experienced may be able to find new information or at least a fresh perspective on familiar topics in various sections of this book.

So my verdict is that all in all, I think the authors of Ubuntu Made Easy may be very close to succeeding in the much longed-for goal of making a Linux beginner book that is actually useful to beginners.

Title: Ubuntu Made Easy - A Project-Based Introduction to Linux
Authors: Rickford Grant with Phil Bull
Published: No Starch Press (San Francisco), July 2012. 480 pages.
ISBN: 978-1-59327-425-2


Note: I have a pre-existing business relationship with No Starch Press.  The good people there had the patience to work with me through the process of writing three editions of The Book of PF, and they have sent me review copies of more books than the ones I've actually gotten around to reviewing.

Sunday, February 26, 2012

The Linux Command Line Is A Very Appealing Story

William E. Shotts, Jr.'s The Linux Command Line is a delightful read and a book you can hand to a junior colleague or friend with more limited Unix shell exposure and be confident that they come back significantly shell-wiser and probably happier.

For almost as long as 'Linux' has been a somewhat familiar term in IT circles, there has been a constant effort to wrap the system in some sort of graphical interface to make the system 'user friendly'. The result is that the run of the mill Linux user is seldom if ever exposed to the classical Unix shell command line interface unless they actively seek it out. There is no shortage of literature describing how to be productive on Linux using the various graphical interfaces either.

With this context in mind, it's quite refreshing to find a book that is written with the purpose of pointing out that the real power and productivity gains of switching from something else to Linux (or other free unixlike systems) lie in tapping the power of the shell command line and shell scripting.

The book quite sensibly starts out with the premise that modern Linux users most likely have learned to use their computers mainly or even exclusively through graphical interfaces, and gently introduces the user to the shell via a series of bite-sized but insightful examples and exercises, starting from a few simple commands and navigating the file system hierarchy.

The pace picks up gradually from there, with explanations and tips on the workings of the Unixy environment and how to tailor it to your own preferences, useful administration commands, a smattering of regular expressions and various other basic building blocks that generally find a useful application within the first few pages of their first mention. The reader is instructed early on that the book is intended to be read from beginning to end like a novel, and when you follow that instruction the narrative and buildup work very well. At the two-thirds point in the book, the user is instructed in how to compile their first C program, a task that isn't quite as difficult as it may sound if you pick a sensible chunk of code to start with. See the minimal illustration below.
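To illustrate (this is my own minimal sketch, not necessarily the book's example), a first C program and its compile-and-run session need not amount to more than this:

# Write a tiny C source file, compile it with the system compiler, run it.
cat > hello.c <<'EOF'
#include <stdio.h>

int main(void)
{
    printf("Hello, world\n");
    return 0;
}
EOF

cc -o hello hello.c && ./hello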

The fourth and last section of the book (about a third of the total by my eyeballing) is a very well written tutorial on writing useful shell scripts, which touches on a number of shell features and may very well even be a useful refresher for users with a little more shell experience than the intended primary target audience.

Although the author warns that the book is entirely Linux-centric, the BASH shell that is really the centrepiece of the narrative is in fact available on all modern Unixes and unixlike systems, so The Linux Command Line is in fact a quite useful introduction to the shell for users of the BSDs and even, I imagine, macOS users who want to see what can be done outside the wholly graphical interface.

Unix greybeards will find little that's entirely new here, but the book is a delightful read and you can hand it to a junior colleague or friend with more limited Unix shell exposure and be confident that they come back significantly shell-wiser and probably happier.

Title: The Linux Command Line
Author: William E. Shotts, Jr.
Publisher: No Starch Press, Inc.
ISBN-10: 1-59327-389-4
ISBN-13: 978-1-59327-389-7
Published January 2012, 480 pages

Available at better bookstores everywhere and directly from the publisher at http://nostarch.com/tlcl.htm.

Friday, April 29, 2011

RFC1149: Ten Years of In-Flight Internet

It's been ten years since a small group of Bergen hackers implemented RFC1149, the Carrier Pigeon Internet Protocol, making in-flight Internet a reality well ahead of any airline.

On April 28th, 2001, my laptop received four ping packets. That in itself, you might say, was not a particularly noteworthy event.

However, on this occasion, the message was in the medium. The traffic my laptop received that day was transmitted via avian carriers, more commonly known as carrier pigeons.

Note: This piece is also available without trackers but classic formatting only here.

The story of the hows and whys was rather quickly documented in main implementer Vegard Engen's "preliminary writeup", part of the CPIP Workgroup website, which was slashdotted and for a few days saw traffic in the million hits per day range.

Why did we do it? Because the RFC was implementable, and it was fun. Very few people outside tech circles have ever clued in on this, with a few notable exceptions such as Peter Meyers, who wrote the salon.com article The pigeon protocol that was published within a few days of the event.

If you're interested in this kind of thing (if you're still reading, I suppose you are), other writeups such as my own The kernel hacker speaks to polite people and assists in the flight of network packets - a conspirator's view of Alan Cox' April 2001 Bergen visit may be worth reading too.

After the event, Vegard made a presentation about the workgroup efforts at the next IETF conference, and was presented with a plaque (jpg, 76kB, jpg, 2.1MB) in return.

The CPIP WG activities have proceeded at a more leisurely pace in recent years. In 2005 I went to the AUUG 2005 conference to do an early version of the PF tutorial, and en route I made a presentation in Adelaide about the project (slides and accompanying notes are still available).

We're still looking for independent, interoperable implementations, though. Preferably on other free operating systems besides Linux. If we can entice our old pigeon partners to participate, we're more than willing to arrange for interoperability tests.

The world needs this to be on the IETF Standards Track.

Friday, September 21, 2007

The Great SCO Swindle Winding Down, But Will They All Get Away With It?

Poor Dan Lyons. He thought like a bookmaker and wrote what he thought was right.
You see, a few years back, when Caldera was still Caldera, that company had sued a large corporation and won. Then Caldera changed its name to SCO and sued another huge corporation. Dan the bookie thought it was a sure bet, and started cheering them on. Four years on, the sure bet went south on a technicality. They did not actually own the code they had accused others of stealing. At least that's the way I read his Snowed by SCO article over at Forbes.

My take on this is, Dan, you only had to look at the facts. Knowing a bit of IT history is also a plus. When the SCOX matter came up, I, like most people, thought that you can never rule out the possibility that some code might have been copied. After all, Unix source code was never particularly hard to get your hands on and was widely used as classroom examples all over the world.

Then if that code was just identified, it would be ripped out and replaced. It's happened before. In the free software world, whole subsystems get replaced when there's a good reason to, and if the reason is copyright violation it gets somewhat urgent. The problem is, in the SCO matter, no code was ever identified.

Some journalists went through an elaborate procedure involving non-disclosure agreements and were, we are told, shown code from Linux and somewhere else which showed remarkable similarity. When Darl McBride used the SCOForum 2003 conference to show something he passed off as ripped off code, it took only hours to identify the exact chunks through the obfuscation (yep, formatting comments in the Symbol font), and the code proved to be irrelevant.

None of these events helped convince techies of their claims, but for me the tipping point was when they claimed to have a reason to sue the BSDs as well. Anyone who had been paying any attention at all to Unix history knew that the AT&T vs BSD lawsuit was finally settled in 1994, with most of the terms sealed, but one of the few things that was made public was that the parties had forfeited any right to sue each other over the Unix code base. To me and quite a few others, this was proof positive that they were 'misguided or dishonest', as a commentator put it at the time.

One of my favorite summaries of the facts of the case was written by Greg Lehey (of The Complete FreeBSD fame), who looked at the various announcements from the technical side. He stopped maintaining it after a while, but it's still there at his website, with, as far as I tested, all links intact.

Most people seem to be relieved that the matter seems to be over. I beg to differ.

For one thing, the main characteristic of this matter has been the amazing ability of the SCO crowd to drag out the proceedings over irrelevant, mainly procedural matters. They will have more tricks up their sleeves, for certain.

The other thing is, with Dan's friends out on the technicality that they did in fact not have the legal standing to sue, we will never get that detailed walkthrough of the code where Darl and his covert experts are supposed to point out the infringing code. I, for one, would have looked forward to that. Then we would have had a chance of getting to know their real motivation too, and possibly some solid leads on the planning and funding. Now that will just not happen.

Then of course there's the stockholder lawsuits and possibly the FTC. If you were one of those chumps who bought SCOX stock at roughly twenty dollars a share based on Dan Lyons' recommendations, wouldn't you feel a little sore now that your investment is about a cent to your original dollar? That is, if you can unload it before SCOX are finally kicked out of NASDAQ for good?

So poor Dan Lyons for not seeing this coming. And damn the technicalities for cancelling the main event.

For those of you eager for news of the book, we're working hard to get it out there.

Update 2007-09-25: Another non-apology, this one from Rob Enderle.

According to linuxtoday, Rob Enderle claims he was tricked by (wait for it) both SCOX and those ever-bullying Linux people.

Actually, there's not much to see there. You can read it as just another non-apologizing apology, with some tall tales about death threats and DOS attacks thrown in (yes, really).

As I've said a few times earlier, enough facts were on the table right from the start of this timewasting story to show that more than likely the SCOX crowd did in fact not have a case.

Now I wonder what, if anything, we will be hearing from John Parkinson, who wrote in CIO:

"a lot of the intellectual property in Linux is actually owned by companies that never officially agreed to make it available under an open-source license."

Interestingly enough, that came without any qualification at all.

That irritated me enough at the time that I wrote to them (pasted into some inane feedback form):

Alleged intellectual property theft

In the article called "The End of Idealism" (http://www.cio.com/archive/070103/et_pundit.html), John Parkinson writes, "a lot of the intellectual property in Linux is actually owned by companies that never officially agreed to make it available under an open-source license."

Please take a moment to consider the seriousness of this allegation. What Parkinson actually says here is, "large parts of Linux consist of stolen property".

Reading such allegations in an article written by a senior executive of Cap Gemini Ernst & Young is quite shocking in itself.

It is only reasonable that Mr Parkinson or Cap Gemini Ernst & Young specify which parts of the Linux kernel they consider to consist of stolen property.

All versions of the Linux kernel, along with detailed change logs and archives of the developer mailing lists are available to the public. Using these resources, all parts of the code base can be traced to the individual who submitted them for inclusion.

In other words, it is quite easy to pinpoint who did what, and Mr Parkinson and Cap Gemini Ernst & Young would be doing the public a great disservice by refusing to help point out code which was illegally included in the open source operating system.

Quite a few articles, well informed and otherwise, have been written about the SCO vs IBM lawsuit and SCO's allegations. I suggest interested readers browse FreeBSD Core member Greg Lehey's overview at http://www.lemis.com/grog/SCO/index.html while we wait for more details from Mr Parkinson or Cap Gemini Ernst & Young.




