A few months ago, I wrote a forum post explaining the context in which The Unix-Haters Handbook existed since, seen through the lens of today’s Linux, it does come across as somewhat ridiculous.
While Eric S. Raymond did write a retrospective review back in 2008, and it has the benefit of him having been involved with its authors when it was written (and I certainly recommend reading it, since he touches on things I didn’t for lack of experience), I still think it feels a little bit too eager to argue “that’s all fixed now”… probably because ESR is coming at UNIX from an “old hand who is used to the technicals” side of things.
Since I wrote my own explanation before I discovered ESR’s post, and I came at it more from the direction of what a user from 2020 would take for granted, I thought it was still worth not letting my own explanation fade into obscurity, so here’s a polished-up version.
Because it began as a series of notes taken as I read the book, it’s more or less in the same order, even if I didn’t bother to dig up an exact reference to each page I’m responding to.
The Unix-Haters Handbook is a book from 1994 which is best described as “a frustrated rant about how bad UNIX is, mostly by people who experienced or wrote more comfortable systems that got displaced or never caught on”.
I agree that the style definitely gets people’s backs up, to the detriment of its message (something ESR also pointed out), but, to be fair to it, a lot of it was true about certified UNIX implementations back in 1994 when it came out and, to people who aren’t used to the warts, some of it still is.
ESR’s The Art of UNIX Programming (free to read) and Keith Packard’s A Political History of X (YouTube video) both agree on what a mess was made of things by vendor competition on some fronts combined with vendor apathy on others. (Imagine an ecosystem where every single vendor was Canonical and, by the time they gave up on their own Upstarts and Mirs, Linux was coming out and irrelevance was inevitable.)
If The Art of UNIX Programming describes the beauty of the UNIX design as a goal, The Unix-Haters Handbook describes how horrendously vendors were botching the implementation of it in the 80s and early 90s. This snip from a quoted e-mail on the topic of platform compatibility really was fitting:
The evaluation process consisted largely of trying to get their test program, which was an early prototype of the product, to compile and run on the various *nixes. Piece of cake, sez I. But oops, one vendor changed all the argument order around on this class of system functions. And gee, look at that: A bug in the Xenix compiler prevents you from using byte-sized frobs here; you have to fake it out with structs and unions and things. Well, what do you know, Venix’s pseudo real-time facilities don’t work at all; you have to roll your own. Ad nauseam.
I don’t remember the details of which variants had which problems, but the result was that no two of the five that I tried were compatible for anything more than trivial programs! I was shocked. I was appalled. I was impressed that a family of operating systems that claimed to be compatible would exhibit this class of lossage. But the thing that really got me was that none of this was surprising to the other *nix hackers there! Their attitude was something to the effect of “Well, life’s like that, a few #ifdefs here, a few fake library interface functions there, what’s the big deal?”
The Unix-Haters Handbook, Page 12
Yes, there are some bits that I disagree with, such as the claim that the brevity of commonly-used commands like rm and mv is too cryptic to be acceptable. However, for the most part, they were right. (Though late to the party, as ESR points out.)
Dumb Deletion
At the time, shells didn’t ask “Are you sure?” if you accidentally lifted Shift a moment too late and typed rm *>o (delete all files in the current directory and redirect stdout to a file named o) rather than rm *.o (delete all files in the current directory with a .o extension).
I agree that’s a problem, but they blame it on programs being unable to access a raw form of the command line to implement their own “Are you sure?”. Given that they later bemoan allowing applications to bring their own termcap and curses, I’m going to have to disagree with them.
Windows has shown that expecting every application to carry along argument-parsing code has enough problems to make that idea “using a sledgehammer to swat a fly” so, as far as pre-checks go, I think making rm a shell built-in that asks “Are you sure?” is good enough… especially when it’s intended to double as a scripting primitive. Right assessment of the problem, wrong proposed solution.
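For the curious, here’s a minimal sketch of what I mean, done as a shell function rather than a true built-in (the prompt wording is my own invention; in practice most distributions just alias rm to rm -i for interactive shells):

    # Shadow rm with a confirmation prompt; `command rm` calls the real thing.
    rm() {
        printf 'Really delete: %s? [y/N] ' "$*"
        read -r answer
        case "$answer" in
            [Yy]*) command rm "$@" ;;
            *)     echo 'Aborted.' >&2 ;;
        esac
    }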
Now, one thing they bring up which is, if anything, a more reasonable complaint today than in 1994, is the lack of an option to hook in Trash/Recycle Bin support at a low enough level that commands like rm reliably send files to it.
I don’t think it’s fair to fault UNIX specifically when no major operating system in 2020 does that, but, with today’s democratized computing and hundreds of gigabytes of storage, the lack of a fully reliable way to undo accidental file deletion is glaring.
Tools like rdiff-backup and Apple’s Time Machine certainly help to reduce the scale of the problem, but they are still incremental backup tools that take periodic snapshots, rather than hooks into file deletion which catch everything, so the core complaint remains valid.
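The closest stopgap today is to route your own deletions through the desktop Trash rather than unlink(2). A hedged sketch, assuming GLib’s gio tool is installed (trash-cli’s trash-put works similarly); note that it only catches what goes through the alias, which is exactly the missing-low-level-hook problem they were complaining about:

    # Send files to the desktop Trash instead of deleting them outright.
    # Assumes the `gio` utility from GLib is available.
    alias del='gio trash'
    del old-report.pdf    # recoverable later from the Trash/Recycle Bin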
Cocked-up Command Lines
First, let me be clear that this was when csh-family shells were common. That alone should explain a lot of the frustration.
For all their weird legacy quirks, modern bash and zsh are significantly more user-friendly than what people were using in 1994, and that’s not even counting options that break scripting compatibility, like fish and Nushell.
Second, at the time, unices were a horrendous mess of commands using inconsistent argument styles. You can still see that in the single-dash long options used in X.org programs and, yes, they are very annoying.
POSIX, as “too little, too late” as it was, and the standardization of Linux and BSD on single dashes being used for single-character short options helped a lot here, as did conventions like those summed up in The Art of UNIX Programming, Chapter 10, Section 5.
Third, this was before the innovation of -- as a standard way to say “no more option flags. What follows are unarguably positional arguments”, which made it needlessly ugly and annoying to properly handle the potential for file or directory names beginning with -.
Making that even worse, at the time, - did double duty as both “no more options” and “stdin/stdout as a pseudo-filename”, with no consistency between commands beyond “read the manpage”.
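To make the two conventions concrete (the file name here is invented for illustration):

    # "--" ends option parsing, so a file whose name starts with a dash is safe:
    touch ./-rf        # creates a file literally named "-rf"
    rm -- -rf          # deletes it, instead of being parsed as options

    # "-" as a pseudo-filename for stdin/stdout is still decided per command:
    echo hello | cat -                          # cat reads standard input
    tar -cf - somedir | gzip > somedir.tar.gz   # tar writes the archive to stdout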
It is perfectly reasonable to complain about that state of things and to expect standardized, OS-provided argument parsing to be the proper solution.
That said, we’re still innovating on argument parsing over 20 years later, so I’d argue that the best solution would have been an optional reference implementation with an alluring API. That, as an implementation of part of a command-line analogue to Apple’s Macintosh HIG or IBM’s CUA, would have served as a strong incentive to follow convention without stifling innovation.
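As a rough illustration of what that convention looks like when a script opts into it, here’s a sketch using the shell’s own getopts built-in (the script name and its options are invented):

    #!/bin/sh
    # frob.sh -- illustrative only: conventional -v / -o option handling.
    verbose=0
    output=/dev/stdout
    while getopts 'vo:' flag; do
        case "$flag" in
            v) verbose=1 ;;
            o) output=$OPTARG ;;
            *) echo 'usage: frob.sh [-v] [-o file] args...' >&2; exit 2 ;;
        esac
    done
    shift $((OPTIND - 1))    # whatever remains is a positional argument
    [ "$verbose" -eq 1 ] && echo "writing to $output" >&2
    printf '%s\n' "$@" > "$output"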
UNIX’s greatest strength has always been its ability to evolve to meet new challenges.
Unhelpful Help
While I haven’t used any of the unices touched on in The Unix-Haters Handbook, I fully believe the authors’ accounts of how UNIX commands behaved when presented with incorrect input. That kind of lack of attention to anything but the ideal case is sadly common in software development.
Thankfully, in my experience, the GNU userland tools are generally much better about that… probably because enough coders tripped over problems and decided to make sure nobody else would. That’s the beauty of open source.
However, their complaint that not all shell builtins have manpages points at a worse problem. Sure, it’s bad if a program is too sensitive to a very human failure to fully read all provided documentation, but how is a novice supposed to read the documentation if there’s no unified way to do so and no obvious way to identify which reference to consult?
Things are better on that front now… but not perfect. Google aside, try typing man cd on Ubuntu Linux 16.04 LTS and you’ll still get No manual entry for cd, because not all shell builtins have corresponding command-line utilities with manpages.
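The workaround, if you already know to go looking for it, is to ask the shell itself:

    $ man cd
    No manual entry for cd
    $ type cd              # the shell will at least tell you it's a builtin
    cd is a shell builtin
    $ help cd              # bash documents its builtins through `help`, not man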
It also really didn’t help that, from what I’ve seen, early 90s commands did have godawful equivalents to what GNU exposed as --help and that, prior to GNU info and Google, man was all there was for commands which had outgrown a single short page.
While I’ve been spoiled by HTML and search engines and Zeal, GNU info basically was the solution to all their complaints on that front.
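For what it’s worth, on a GNU system today both routes are a single command away (the second assumes the Texinfo documentation is actually installed):

    rm --help | head -n 5              # terse built-in usage summary
    info coreutils 'rm invocation'     # the fuller GNU coreutils manual entry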
Bad command or filename? No… just bad commands
Let’s start out gently. tar used to allow you to accidentally back up nothing. Now, GNU tar says tar: Cowardly refusing to create an empty archive. Bad old days, meet modern sanity.
It’s still true that, if you type cat - - -, it will take three presses of Ctrl+D to get your terminal back with no feedback on what’s going on, but I’m going to rule on cat’s side here. It’s more a shell scripting primitive than a user command, and you asked it to read three files from stdin. (Ctrl+D signals “end of file”.) It would be worse to make the command more inconsistent.
That said, doing that wouldn’t even occur to most people anymore. Ctrl+C is the standard way to break out of a command that people are taught, so I can only assume that either Ctrl+C was less robust back then or whatever training led them to Ctrl+D was deficient.
Likewise, back when people used RCS for revision control, you could accidentally type indent rather than ident and mangle a non-C file. indent will still do that (I tested it on a Python file) but, thankfully, it’s not installed by default, and nobody of note uses RCS anymore, so there’s no reason to type ident anyway.
…but it’s not all sunshine and roses. If you mistakenly type cc -o hello.c hello, even modern-day GCC is stupid enough to blank out your source file before doing any sanity checking.
(I don’t have time to take responsibility for things like “Is this still a problem on a newer version?”, but somebody should probably report that.)
Also, I’ll have to disagree with them on how cd and symlinks interact. It would make no sense for .. to take you down a route you didn’t come up, just because you crossed a symlink somewhere along the way.
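Modern shells actually give you both behaviours: the cd built-in tracks the logical path by default, and -P asks for the physical one, which is roughly the behaviour the authors were stuck with. A quick illustration (paths invented):

    cd /tmp && mkdir -p real/deep && ln -s real/deep link
    cd link     # logical working directory: /tmp/link (physical: /tmp/real/deep)
    cd ..       # default (-L) behaviour: back to /tmp, the way you came
    cd link && cd -P ..    # -P resolves the symlink first: you land in /tmp/real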
Netadmin-Pattern Baldness
I won’t go into too much detail on the two chapters dedicated to Sendmail, Usenet, and UUCP.
Yes, old Sendmail versions were braindead and had an infamously cryptic configuration syntax. Thankfully, aside from the classic stories it left behind, like The case of the 500-mile email, even modern Sendmail has given way to Postfix and Exim.
Yes, Usenet had its problems, but it’s basically dead outside its use as a paid-for channel for illicit file-sharing and Gmane’s proxy for putting an NNTP frontend on mailing lists. (Though the latter use does a pretty good job of cutting away the flawed parts and using the good parts.)
Yes, UUCP was a monstrous pain. Nobody uses it anymore.
Terminal Stupidity
As ESR pointed out, the central thrust of The Unix-Haters Handbook with regard to terminal handling isn’t really something that can be laid at UNIX’s feet.
These are people pining for Apple-like ecosystems where it was reasonable to demand that the terminal or display hardware play a role in auto-configuration, while UNIX was developed to deal with a dog’s breakfast of terminal hardware that could be extremely dumb, and there was limited ability to dictate terms to terminal makers.
Yes, configuring terminals was a mess and, yes, box-drawing before Unicode-aware ncurses was sub-par, but most of that was beyond UNIX’s ability to fix while retaining compatibility with all the hardware people wanted to use it with. These days, terminal emulators all claim to be some extended form of a DEC VT100 with Unicode support, and everything more or less Just Works™.
That said, it’s not all perfect. I can personally attest to what a mess things can become if you want to combine something like GNU Screen or tmux with support for more than 16 colors… urxvt 88-color mode? xterm-compatible 256-color mode? True/hex color mode? You basically have to just manually test your terminal emulator and then force-override the definition Screen exposes to the application.
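By “force-override” I mean things like the following (the terminal names are the common ones; adjust to whatever your emulator and terminfo database actually support):

    # ~/.tmux.conf -- advertise 256-colour support to programs running inside
    set -g default-terminal "screen-256color"

    # ~/.screenrc -- the GNU Screen equivalent
    term screen-256color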
Beyond that, however, I think it’s a perfectly reasonable complaint that UNIX chose to require that something like termcap be linked into every application, rather than building something akin to GNU Screen into the terminal driver layer to abstract away the differences… especially if you’ve experienced such a design elsewhere on other systems. More difficult to innovate with, perhaps, but also more robust for the cases it does support. (It reminds me of software mixing support in ALSA vs. OSSv4… but Linux has always been an infamous mess when it comes to audio.)
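For a feel of what “bring your own termcap” means in practice: even a humble shell script has to go ask the terminfo database itself, via tput, rather than having the terminal layer take care of it. A small sketch:

    # Query the terminal's capabilities by hand -- there's no layer doing it for you.
    cols=$(tput cols)     # how wide is this terminal?
    bold=$(tput smso)     # escape sequence to enter standout mode
    off=$(tput rmso)      # ...and to leave it again
    printf '%sThis terminal is %s columns wide%s\n' "$bold" "$cols" "$off"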
Weighty Windowing
As ESR points out, it’s likely this chapter was mostly written by someone who poured his blood, sweat, and tears into a superior alternative to X11, only to see it lose out for being proprietary. That said…
X11R4
The biggest complaint about X11 itself is the same complaint people had about Emacs. In 1994, it wasn’t reasonable to try to compete with the aesthetics of a system like Windows 3.1 using a network-transparent system written in C. This quote says it all:
X has had its share of $5,000 toilet seats—like Sun’s Open Look clocktool, which gobbles up 1.4 megabytes of real memory! If you sacrificed all the RAM from 22 Commodore 64s to clock tool, it still wouldn’t have enough to tell you the time. Even the vanilla X11R4 “xclock” utility consumes 656K to run. And X’s memory usage is increasing.
The UNIX-Haters Handbook, Page 123
That said, performance wasn’t the only complaint about the core X server. They were talking about X11R4, which definitely had its problems.
Heck, I started using Linux with XFree86 implementing X11R6, and, coming from Windows, it was shameful what an annoyance it was to get it configured and working properly… and I know there are warts I’ve forgotten.
It wasn’t until surprisingly recently that they sat down and implemented the autoconfiguration that we enjoy from X.org today.
What’s worse, X11R4 existed during an era with multiple competing proprietary X distributions, rather than everyone sharing X.org. Imagine every complaint you have about the nVidia binary driver turned up to eleven.
Motif
They also complain about Motif, which is valid. This is back before GTK+ and Qt. Motif was trying to be a clone of Windows widgetry, except dog-slow.
The funny thing is, as much as I’m not a fan of how we got there, we have the design they proposed for dividing the workload between client and server… we download JavaScript or WebAssembly into a web browser to implement single-page web applications, so the division of responsibility can be decided on an application-by-application basis.
Of course, for native applications, X11 spent so long twiddling its thumbs on ensuring coherent rendering during window resizing that modern GTK and Qt got fed up and now do everything client-side and push a pixmap to the X server for the window contents.
Configuration
They also complain about configuring X… and it’s all valid.
If you’ve never tried using xauth, consider yourself lucky… especially now that the man page has an examples section and we have Google in case that’s not enough. Just use ssh -X or ssh -Y for X remoting and save your sanity.
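A minimal example, with a made-up hostname:

    # ssh sets DISPLAY and handles the xauth cookie for you on the far end:
    ssh -X build-box xclock    # use -Y instead if the client needs "trusted" access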
As for their complaints about xrdb, yeah. Good aspirations, shoddy execution… and I get to say that: I use things like Tk, xterm, and urxvt on my 2020 desktop, so I got to see the contrast first-hand.
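For anyone who hasn’t had the pleasure, this is the sort of thing I mean (the resource values are just examples):

    ! ~/.Xresources -- a couple of illustrative resources
    XTerm*faceName:  DejaVu Sans Mono
    XTerm*faceSize:  11
    URxvt.scrollBar: false

…which you then load (or reload) into the running X server with xrdb -merge ~/.Xresources.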
Sub-Par Scripting
They spend a lot of time ragging on shell scripting… and it’s warranted.
First, let me repeat: This was the era of csh-family shells.
Second, this was before things like Python were ubiquitous, so you had to press shell scripting into service in roles it really isn’t very well suited to. Perl existed, but it was young and I’m guessing they hadn’t learned of it yet.
(And let me say that I have experience trying to write shell scripts that are properly portable to GNU-based Linux, busybox-based Linux, FreeBSD, OpenBSD, and macOS. Even if shellcheck had been around, it doesn’t warn you when you use non-portable options to the commands you’re calling, and POSIX is uncomfortably limited. It was not fun.)
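Two of the classic traps, just for the flavour of it:

    # `echo -n` is not portable: some sh implementations print the "-n" literally.
    echo -n 'prompt> '       # works in bash; other shells may print "-n prompt> "
    printf 'prompt> '        # behaves the same everywhere

    # GNU sed accepts `sed -i`; BSD/macOS sed demands a (possibly empty) suffix:
    # sed -i    's/foo/bar/' file     # GNU
    # sed -i '' 's/foo/bar/' file     # BSD/macOS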
They also make the good point that shell scripting forces a command’s user-visible output to become an API that you can’t ever change without breaking things… sort of like how Python’s standard library is where code goes to die, or how Firefox had to get rid of its legacy “override arbitrary browser internals” addon API because it was an albatross around their neck.
Finally, modern find has -L. Try to empathize with them for having an archaic version of find that simply will not follow symlinks no matter how many goats you sacrifice. Plus, find is a mess, interface-wise. That’s why fd got written.
I could have put find under “bad commands”, but shell scripting is so anemic that find ends up standing in for far too many things that other languages would provide via built-in primitives or a standard library.
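A couple of examples of both points, with invented paths:

    # Modern find will follow symlinks if you ask:
    find -L ~/projects -name '*.conf'

    # ...and it ends up standing in for half a standard library in scripts:
    find . -type f -name '*.log' -mtime +30 -exec gzip {} +   # compress month-old logs

    # fd's take on the first query, for comparison:
    fd -L -e conf . ~/projects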
“My Code Compiles, So It Should Run”
Going into the chapters which complain about C and C++, it’s important to have a little context:
- The UNIX-Haters Handbook was published in 1994
- The first ANSI C standard was ratified in 1989, only five years earlier.
- C++ was originally known as “C with classes” and Cfront, the original C++ compiler, was a transpiler that converted C++ to C.
- The second edition of The C Programming Language by Brian W. Kernighan and Dennis M. Ritchie has this to say about the difference between pre- and post-standardized C:
For most programmers, the most important change is a new syntax for declaring and defining functions. A function declaration can now include a description of the arguments of the function; the definition syntax changes to match. This extra information makes it much easier for compilers to detect errors caused by mismatched arguments; in our experience, it is a very useful addition to the language.
The C Programming Language, Page 2
As a programmer in 2020, the idea that C once didn’t let you declare your arguments, and that one of its creators said “in our experience, it is a very useful addition” helps to drive home how different C programming was in the late 80s and early 90s, even before you realize that there were no free C and C++ IDEs for UNIX in the modern sense, no IntelliSense, etc.
Combine this with how even GCC in 2020 can lag behind standards sometimes, and how much vendor fragmentation and site upgrade schedules can hold things back, and you start to get a feel for how painful C could be.
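To make that concrete, here’s a small, hedged demonstration of pre-prototype C (the file and function names are invented): with only an old-style declaration in scope, nothing checks that a call matches the definition.

    # oldstyle.c contains, in full:
    #     int add();                                    /* no parameter information at all */
    #     int main(void) { return add("oops"); }        /* wrong type, wrong count: accepted */
    #     int add(a, b) int a; int b; { return a + b; } /* K&R-style definition */
    cc -std=c89 -Wall -c oldstyle.c   # compiles without complaint; an ANSI prototype
                                      # `int add(int, int);` would have rejected that call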
As for C++, these are people who came from languages like Smalltalk which invented object-oriented programming as we know it. What did you expect them to think of C++’s claims to being comparable, when it began as “C with classes”? It’s nothing but leaking abstractions piled on leaking abstractions and that’s being kind.
Now, to be fair to C and C++, I do not agree with the degree to which they seem to love “distribute the program as a memory image. Allow stopping a running program, editing the code in situ and resuming execution, etc.” programming. As an experimental/exploratory programming tool, that’s nice, but I think it’s completely unsuited to developing applications to be distributed, let alone infrastructure. The 2020 programming ecosystem seems to agree since, outside the limited form that has been reinvented by browser developer toolboxes, it’s more or less nonexistent.
Sisyphus, Sysadmin
Have you ever had to work on a system with such lazy checks in its tooling that the partitioning tools would let you create overlapping partitions? That was UNIX.
Have you ever seen the developers of your system send out an e-mail asking for resubmission of all bug reports since such and such a time because they lost them? That happened to 4BSD because dump was too disruptive and tar was more or less useless.
It’s important to remember that UNIX wasn’t always the shining titan of uptime and reliability that it’s become. In fact, even over half a decade later, with GNU tools, I still vaguely remember how much manual tuning, configuring, and babysitting a Linux system needed… and that was an improvement. There’s good reason that some Linux YouTubers comment on how Linux has become so much better, even just since they got into it, as to be almost unrecognizable.
How many people remember having to fsck ext2? I vaguely do, and I believe them when they say that UFS was much worse. We take filesystems which journal at least their metadata for granted these days.
Also, you certainly can’t blame them for lamenting the lack of a logical volume management system like LVM.
Got Root?
Do their complaints about UNIX’s security model hold up? Yes! A simple split between regular users and root, plus SUID, is a pretty braindead security model, and this is probably one of the biggest places where I have to disagree with ESR.
ESR argues that it’s bad, but nobody did better… Really? I find it hard to believe that something as simple as splitting root up into a set of capability flags and splitting SUID up into a matching set of extended file attributes was hard to think of in 2008 or even in 1994… but that’s what we finally have with POSIX Capabilities, including the ability to opt a process and all its descendants out of UID 0’s magical privileges should they manage to attain it.
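A quick sketch with libcap’s tooling (the binary path is made up):

    # Grant one specific slice of root instead of making the binary setuid 0:
    setcap 'cap_net_bind_service=+ep' /usr/local/bin/mywebd   # let it bind port 80
    getcap /usr/local/bin/mywebd                               # show what it was granted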
Sure, I disagree with the authors on whether it’s a good idea for processes to inherit permissions from their parents, but UNIX’s permissions were definitely underpowered.
(Though, on the other hand, Windows NT actually goes too far in the opposite direction, with a system of ACLs that is too fine-grained and, as a result, complicates things to the point where people are too likely to just grant all permissions in a frustrated last effort to make things work.)
NFS… ’nuff said.
Let me give some context here. I’ve never used NFS on my Linux systems. Do you know why? Because it was braindead even in the mid 2000s.
I hear that NFS 4 finally fixed a lot of the braindeadness, but I’m firmly habituated to using Samba (an implementation of Microsoft’s SMB/CIFS family of file-sharing protocols) between my Linux machines.
Why? Mainly because, until NFS 4 was production-ready, what I could use had no authentication! Whatever UID and GID you claimed to be, those were the permissions you got.
(Or, if that wasn’t the case, they sure thought it was some deep secret. I remember researching for ages because I couldn’t believe a security model so braindead would have survived to the end of the 1980s, let alone the early 2000s.)
It also used a firewall-unfriendly mess of ports and had a fragile stateless design… both things NFS 4 fixed by taking inspiration from alternatives including SMB/CIFS. I’ll leave The UNIX-Haters Handbook to explain the other problems.
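For contrast, this is roughly what the difference looks like in a modern /etc/exports (the path and client subnet are invented):

    # The old way: AUTH_SYS, i.e. believe whatever UID/GID the client claims.
    #/srv/projects   192.168.1.0/24(rw,sec=sys)

    # NFSv4 with Kerberos: clients have to actually authenticate (krb5p also encrypts).
    /srv/projects    192.168.1.0/24(rw,sec=krb5p)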
The list goes on. In the end, I suppose the lesson is to never underestimate the power of vendors to botch things, to never judge the past by the present, and to recognize how much we take for granted in the world of computing today.
On The Unix-Haters Handbook by Stephan Sokolow is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.