Embedding the DPMI Extender for your Open Watcom C/C++ EXE (And Related Knowledge)

Sooner or later, everyone who plays around with 32-bit DOS programming using Open Watcom C/C++ is probably going to see one of these errors when they try to run their programs:

Stub exec failed:
\dos4gw.exe
No such file or directory
Can't run DOS/4G(W)

I don’t know if there are other causes for the second version, but the first one is pretty clear about what went wrong. You forgot to copy dos4gw.exe into the same folder as your EXE file… but why is this necessary?

DOS Extenders

The important thing to understand is that DOS/4GW is a special bundled edition of DOS/4G. It was used so widely back in the day because it was hundreds, if not a thousand, dollars cheaper than the alternatives, but it was deliberately limited to upsell programmers to the full product… and one of those artificial restrictions is that dos4gw.exe has to remain a separate file.

Luckily, Open Watcom C/C++ supports other extenders which can be bundled into the EXE file. One of them (DOS/32A) is even drop-in compatible with DOS/4GW to the point that you can swap it out without recompiling. (Good for lifting restrictions and resolving bugs in closed-source software.)

The extenders which are bundled with Open Watcom C/C++ (and the meanings of their names) are:

  • DOS/4GW by Tenberry Software, Inc. (DOS on 4Gig, Watcom Bundle Edition. The original site is gone now that the author has passed away, and the Watcom manual was always a paid add-on, but it’s API-compatible with DOS/32A and there are bundled and third-party docs.)
  • DOS/32A by Narech K. (DOS on 32-bit, Advanced Version)
  • PMODE/W by Thomas Pytel and Charles Sheffold (Protected Mode, Watcom Edition)
  • CauseWay by Devore Software & Consulting (I couldn’t find an explanation of the name’s origin. The original site is gone, but there’s a manual bundled with Open Watcom C/C++.)

(DOS/4G was named after the memory limit for a 386, in contrast to Tenberry’s other product, the 286-compatible DOS/16M.)

Other extenders people have successfully used with Open Watcom C/C++ include:

  • D3X by Daniel Borca (DOS 32-bit Extender. The original site is gone, but it’s in the Wayback Machine, including the download.)
  • DPMIONE (GitHub) by Sudley Place Software (DPMI 1.0 implementation. In fact, it claims to be the only DPMI 1.0 implementation complete enough that it can substitute for Windows 3.1x’s internal DPMI extender… though functionality will still be reduced.)
  • HX DOS Extender (mirror, Dr-DOS Wiki) by Andreas “Japheth” Grech (Best option for writing Win32 console applications or making limited use of the Win32 GUI APIs and having your program run under plain old DOS.)
  • PMODE/WI by Thomas Pytel and Charles Sheffold (A cut-down version of PMODE/W for demoscene applications where compactness is more important than features and it’s acceptable for problems during development to be more cryptic.)
  • WDOSX by Michael Tippach (Wuschel’s DOS eXtender)

I haven’t tried it, but, as described, it sounds like an explicit design goal of HX DOS Extender is to allow you to write a Win32 console binary (which would run on 64-bit Windows 10), and then stub it with HX and have it simultaneously be compatible with platforms like FreeDOS, MS-DOS 6.22, and DOSBox.

Note that, for WDOSX, if you want to have the stub bundled in automatically as part of your linker script rather than using its stubit.exe, you’ll need to run stubit -extract to extract the requisite wdosxle.exe file. (See DOC/README.TXT in the WDOSX distribution archive for more details.)

One other bit of trivia: If you look in the Open Watcom manuals and see mention of Phar Lap… that’s the company that co-designed VCPI with Quarterdeck and produced the expensive proprietary DPMI extender that Microsoft licensed for Microsoft C/C++… it’s named after a famous race horse.

Before I demonstrate how to use them, I’d first like to address a few potential tripping points I encountered while preparing this:

Potential Errors

undefined system name: ...

System names are defined in the wlink.lnk file in the bin* folder containing the wlink or wcl386 command you’re running, and the version of wlink.lnk bundled with Open Watcom v2 defines a few more of them than the one in Open Watcom v1.9, such as the entry for PMODE/WI.

Check wlink.lnk to see what’s available and either choose one of the ones listed in there, edit it to add one, or upgrade Open Watcom. (Note that I’ve had Open Watcom v2 builds produce broken .com files, so don’t upgrade blindly.)
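If you just want a quick list of what’s defined, something like this will pull the names out. (The demo file below is a stand-in so the snippet is self-contained; in practice you’d point the grep at the real wlink.lnk next to the wlink binary you’re running, e.g. $WATCOM/binl/wlink.lnk.)

```shell
# Miniature stand-in for a real wlink.lnk; each "system begin <name>" line
# declares a system name usable with wcl386's -l=<name> option.
cat > demo_wlink.lnk <<'EOF'
system begin dos4g
end
system begin dos32a
end
system begin pmodew
end
EOF

# Extract just the system names
grep '^system begin' demo_wlink.lnk | awk '{print $3}'
# → dos4g
#   dos32a
#   pmodew
```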

This is a ... executable when trying to run the EXE

If you saw one of these messages when trying to run your EXE…

This is a CauseWay executable
This is a DOS/32 Advanced DOS Extender (LE-style) executable
This is a DOS/32 Advanced DOS Extender (LX-style) executable
This is a DOS/32A DOS Extender w/ Standard stub (LE-style) executable
This is a DOS/32A DOS Extender w/ Standard stub (LX-style) executable
This is a DOS/32A DOS Extender w/ Configurable stub (LE-style) executable
This is a DOS/32A DOS Extender w/ Configurable stub (LX-style) executable
This is a PMODE/W executable
This is a PMODE/WI executable

… it probably means that your compiler emitted one of the following warnings:

Warning! W1093: cannot open cwstub.exe
Warning! W1093: cannot open dos32a.exe
Warning! W1093: cannot open stub32a.exe
Warning! W1093: cannot open stub32c.exe
Warning! W1093: cannot open pmodew.exe
Warning! W1093: cannot open pmodewi.exe

…and fell back to linking the default stub for non-MZ .exe files.

To solve the problem for all the listed stubs except pmodewi.exe, you need to either add your binw folder to the PATH (but not before binl) or copy or symlink them from the binw folder to the binl folder. (I chose the latter, since I didn’t want to throw an arbitrary pile of files into my PATH and wlink’s manual doesn’t mention any other variable it checks for a search path.)

This bit of shell script should do the trick when run from inside binl:

for stub in cwstub.exe dos32a.exe stub32a.exe stub32c.exe pmodew.exe; do
	ln -s ../binw/$stub .
done

As for PMODE/WI (the cut-down demoscene version of PMODE/W), having a predefined system for it doesn’t mean that it’s bundled with Open Watcom C/C++, so you’ll need to download it yourself and unpack it into binw or binl as appropriate. (Renamed to all-lowercase if you’re on Linux.)

Stub Tricks

An interesting side note is that the This is a ... executable messages in Open Watcom’s default EXE stub are actually dynamically generated from the osname values in the wlink.lnk file in your binw or binl folder. (“This is a Windows executable” made general across all outputs that must have a DOS MZ-format EXE at the beginning.)

That means that, if you edit the appropriate wlink.lnk file to make a copy of the entry for an extender that supports taking the binary as an argument (e.g. dos32a):

system begin joke
    option osname='high-security process. You do not have permission to run this'
    libpath %WATCOM%/lib386
    libpath %WATCOM%/lib386/dos
    libpath %WATCOM%/lib386/l32
    format os2 le
end

Then wcl386 -l=joke whatever.c will produce a whatever.exe file that says “This is a high-security process. You do not have permission to run this executable” if you try to run it in DOSBox, but runs as expected if you run it as dos32a whatever.exe instead.

…and the op stub=dos32a.exe line uses the same mechanism as /STUB for setting the EXE stub in PE executables, which means the same “does one thing in DOS and another in Windows” behaviour of PE EXEs is also available to DPMI EXEs, with a slight twist.

(Not that you need to edit wlink.lnk to change the stub. The proper way is to create a file with a name like whatever.lnk, put your op stub=decoy.exe or option stub=decoy.exe in there, and then call wcl386 or wlink with @whatever.lnk as one of the arguments.)
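For example, a minimal whatever.lnk for the decoy trick could be as small as this (file names here are hypothetical; adjust to your project):

```
# whatever.lnk — passed to wcl386 or wlink as @whatever.lnk
system dos32a
option stub=decoy.exe
```

With that in place, wcl386 hello.c @whatever.lnk should produce a hello.exe whose MZ stub is your decoy instead of the default “This is a …” stub.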

If you pick a DPMI extender that supports taking the EXE as an argument, like DOS/32A, then you can compile decoy.c as a real-mode DOS EXE with wcl -l=dos decoy.c, specify op stub=decoy.exe in your linker script, and have a DOS EXE which behaves one way when executed directly and another when executed as dos32a whatever.exe. (Though, since the DPMI extender uses LE for the 32-bit part, I think it’ll ignore the decoy and run the real thing every time if you run it under OS/2.)

QEMM 97 actually uses this mechanism to put the DOS installer and the launcher for the Windows installer in the same EXE file, so the installation instructions can be simpler.

The Point (Finally)

DOS/4GW is actually the only DPMI extender listed in wlink.lnk that doesn’t support being bundled and, in fact, the only other one that even supports relying on an external helper is DOS/32A… the extender explicitly designed to be usable by end-users as a drop-in replacement for closed-source DOS/4GW applications.

Under typical circumstances, where you’re using the wcl386 build driver, all you need to do is specify the correct target system via the -l=whatever option to wcl386 and you’re good to go.

The following -l= values produce an EXE file which does not require an external DPMI extender EXE:

  • causeway
  • dos32a
  • pmodew

…while the following ones do require an external helper:

  • dos4g
  • stub32a
  • stub32x
  • stub32ac
  • stub32xc

Don’t believe me? Create a temporary DOSBox profile with a conf file that mounts C: and launches hello.exe, create a hello.c containing a “Hello, World!” program, source owsetenv.sh, and then run this bit of shell script:

for EXTENDER in dos4g causeway dos32a dos32x stub32a stub32x stub32ac stub32xc pmodew pmodewi; do
	echo " === $EXTENDER === "
	rm hello.exe 2>/dev/null
	wcl386 -q -l=$EXTENDER hello.c
	dosbox -conf dosbox-0.74.conf &>/dev/null
done

For more advanced situations, you’ll want to read the Open Watcom manuals, but the following example should get you started:

wcc decoy.c
wlink file decoy.o system dos  # produces decoy.exe
wcc386 hello.c
wlink file hello.o system dos32a  # produces hello.exe
wlink file hello.o system dos32a option stub=decoy.exe name decoyed.exe

wlink takes a list of unquoted directives like you’d put in the aforementioned whatever.lnk, and/or an @whatever.lnk file reference of the kind that can be passed to wcl386. To invoke wlink’s help, the command to use is wlink -?.

As for other DPMI extenders, from looking at the system definitions, it seems pretty simple to add definitions for them. Just make a copy of an existing definition and set the appropriate format and stub values.
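For instance, here’s an untested sketch of what a WDOSX entry might look like, modelled on the existing dos32a definition and assuming the wdosxle.exe stub extracted earlier works with LE output the way its name suggests:

```
system begin wdosx
    option stub=wdosxle.exe
    libpath %WATCOM%/lib386
    libpath %WATCOM%/lib386/dos
    format os2 le
end
```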

(It’s a similar story for the CWSDPMI host used by DJGPP, Free Pascal, and FreeBASIC. You can use exe2coff from DJGPP (including cross-compiler builds) to strip the old stub off, and then apply the version of the stub that’s fully self-contained… but don’t try to exchange extenders between Open Watcom and the DJGPP-compatible ecosystem unless they explicitly provide support, as with PMODE/DJ. They use different formats and provide different services.)

Posted in Geek Stuff

Helper for Finding Python 2.x Scripts Still To Be Ported

I’ve written a lot of little scripts over the years, and it’d be nice to be able to migrate to Kubuntu 20.04 LTS when it comes out, so I decided to write a little helper to find creations of mine that still depend on Python 2.x as quickly as possible.

The script I came up with produces output like this:

ERROR:   NO PY3: /home/ssokolow/bin/k3b-rm.py
ERROR:   NO PY3: /home/ssokolow/bin/mergemove.py
WARNING:  NO #!: /home/ssokolow/bin/mpv.py
ERROR: NOT UTF8: /home/ssokolow/src/Human Sort.py

Perfect for running with a command like ~/src/audit_python2.py --ignore read-only ~/bin ~/src 2>&1 | tee ~/python2_potentials.txt

It works by applying a few simple rules:

  • If a folder is named .tox or contains a bin/activate script, skip it to avoid flooding the results with virtualenv files.
  • If the file has a .py or .pyw extension but no shebang, print a WARNING: NO #!.
  • If the file has a shebang line containing python but not python3, print an ERROR: NO PY3.
  • If reading the first line of the file fails with a UnicodeDecodeError and the file has one of the aforementioned extensions, print an ERROR: NOT UTF8.
  • If reading the file fails for any other reason than UnicodeDecodeError, print a general ERROR: READ ERR.
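The shebang-related rules boil down to logic like this. (A simplified shell sketch of what the Python script does, with a hypothetical audit_file function; the real thing also handles directory traversal, --ignore, logging levels, and the encoding-error cases.)

```shell
# Sketch of the per-file shebang rules; names and messages mirror the post.
audit_file() {
    local f="$1" shebang
    shebang=$(head -n 1 "$f" 2>/dev/null)
    case "$shebang" in
        '#!'*python3*) ;;                              # already Python 3: OK
        '#!'*python*)  echo "ERROR:   NO PY3: $f" ;;   # Python, but not 3.x
        '#!'*) ;;                                      # some other interpreter
        *) case "$f" in                                # no shebang at all
               *.py|*.pyw) echo "WARNING:  NO #!: $f" ;;
           esac ;;
    esac
}
```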

The logging level is configurable with repeated -v and -q arguments and there are liberal INFO and DEBUG messages to detail how it’s traversing the filesystem. It also traverses in sorted order to ensure consistency between runs and folders or files can be ignored using the -i/--ignore option to skip stuff like read-only Git checkouts.

If this sounds useful to you, it’s up on GitHub Gist. For lack of a better name, it calls itself “Python 2.x Auditor”.

Posted in Geek Stuff

New QuickTile Release… Finally!

For those who follow my programming projects, you may be familiar with the name QuickTile. It’s my most popular project and, in short, it adds hotkeys to your existing X11 window manager to make it easy to tile windows in ways fancier than they usually offer.

Long story short, it fell into neglect for a while for a perfect storm of reasons:

  • Needing to migrate from Python 2.x to Python 3.x at the same time as needing to migrate from PyGTK to PyGObject and from ePyDoc to Sphinx
  • Something about QuickTile causing PyGObject’s PyGTK porting helper to segfault
  • QuickTile relying on GDK stuff that got replaced with PyCairo stuff that seemed to switch from one set of bugs to another as I upgraded it
  • No automated tests
  • Some other problems in my life that made it hard to work on my hobby projects at all

In even fewer words, it was a mess… but not anymore.

I’d like to announce that QuickTile 0.4.0 is now out. (Those who like emojis may insert an appropriate “yay” here)

It has the following advantages:

  • Completely ported to GTK 3, and with a lot of internal reworking to make the most error-prone bits of the code easier to maintain and improve.
  • A spiffy new website with the demonstration animation I always wanted to get around to making and a detailed manual. (Including explanatory illustrations for all the tiling commands.)
  • A unit test suite that, while nowhere near complete, has full coverage for the rectangle-juggling code that’s most likely to hide confusing bugs. (Great for motivation.)
  • Finally, proper support for dealing with panels that don’t span an entire edge of the desktop! (I had to write a ton of custom rectangle-juggling code, but I did it.)
  • New Ctrl+Alt+Shift+… default hotkeys for the versions of the tiling commands which move windows without resizing them. (If you’ve got an existing quicktile.cfg, you’ll need to add these manually.)
  • Initial support for building a .deb package, though not yet something I publicize in the manual since it’s in such an early state.
  • A few other changes to how the commands behave to make things more consistent. (See the ChangeLog file for details.)

So, if you’re a Linux user (or just curious), check out the new website and enjoy.

P.S. I have some other projects which need TLC just as much as QuickTile did, so I may let it go quiescent for a month or two now that it’s no longer the most urgent thing to address, but I do intend to come back and start checking off the other things on the issue tracker.

Posted in Geek Stuff

Novel – Decision at Doona

While working on another blog post (still to come), I needed a point of comparison, so I decided to re-read one of my old favourites.

Decision at Doona by Anne McCaffrey

At its core, this 1969 novel is a soft sci-fi first-contact story. You have humans and an alien species known as the hrrubans, and they both wind up missing each other’s surveys and colonizing the same planet at the same time… for the same reason.

Both species are horrendously overpopulated and, to cope with the overpopulation, have pacified their cultures to the point where youth suicides are up, apathy is causing a labour shortage, and the colony on Doona/Rrala is an experiment into returning to an earlier stage of their cultural development.

According to Ms. McCaffrey’s Wikipedia page, the idea for it actually came to her when, at a school play, she watched a teacher tell her four-year-old son to quiet down.

The core driving conflict is that, in both cases, their governments have a “no contacts” policy which would pull everyone back to the overcrowded tedium they thought they’d escaped. For the humans, the big conflict comes from trying to do the wrong things for the right reasons. Specifically, this is humanity’s second first-contact, and an innocent mistake in their first went so disastrously wrong that the other species committed mass suicide. The result was a deep and lasting sense of guilt and a law that says no contact is to be made with indigenous populations under any circumstance, and, as far as they’re aware, the hrrubans are not space-faring.

(Of course, the grin-inducing part is that the hrrubans are actually more technologically advanced, to the point where their village wasn’t found on surveys because they have matter-transmitter technology and brought the whole thing home for the winter.)

I’ve always enjoyed Anne McCaffrey’s work and I’ve always enjoyed first-contact stories, so, on that front, the only thing I would have wished for is more time spent looking at humans from the perspective of hrruban characters. It’s also got two sequels, the latter of which involves humans and hrrubans engaging in first contact with a third species together.

…So, what are the noteworthy characteristics?

First, I like how it introduces things. The first chapter gives the colonization plans from the perspective of the hrruban government officials… but it’s so familiar that, if you skipped past the list of characters at the beginning of the book, it’s easy to mistake titles like “First Speaker” for fancy sci-fi human titles until chapter 8, when they refer to the aliens as “bareskins”. It’s a nice touch for a story intentionally built around how much the two species have in common.

I also like the world-building in details like the hrrubans having developed a treatment for the weak, porous local wood which involves boiling the sap of a local tree until it turns into a penetrating varnish which reinforces and seals it.

(And, while it’s in one of the sequels rather than the original, I just have to mention the pun that easily stuck in my memory for the two decades where I was too distracted by other things to read any Anne McCaffrey: A pilot named Mr. Horstmann who called his ship The Apocalypse just so he could make a Dad Joke everywhere he goes.)

Now, about the downsides…

The first downside is one I don’t consider a downside, but others might. I like Anne McCaffrey, but my brother doesn’t because he feels that all her stories feel a bit too much like rural contemporary stories transplanted into other settings. I don’t get that impression strongly enough for it to be a problem, but you might feel differently.

The second downside is one that will only get worse over time, and one that Decision at Doona is probably harder hit by than her other stories, being both sci-fi and family-centric: While she did a better job than various male authors I’ve read from the same period, and the book clearly wouldn’t pass John W. Campbell‘s dictate as editor-in-chief of Astounding Science Fiction (later Analog) that humanity always be depicted as superior to aliens, Ms. McCaffrey couldn’t completely rid her futuristic family-man main character and his wife of 1969-isms.

That shouldn’t be as big an issue if you’re reading a story about a setting like Pern which has regressed to pre-industrial pseudo-fantasy, or one of her other series with an independent woman as the lead. However, reading Doona in 2020, it’s noticeable how important it is to the main character’s psyche to remain “in charge” as a father, and all I can remember about what his wife actually does is that she takes care of the kids and cooks.

To be fair, the book is only 245 pages long and there is a fairly sharp drop-off in how much focus is given to secondary characters, so it could partly be that having the main character and his son as primary characters, with his wife relegated to the secondary cast, exacerbates the sense of implausibility of a woman of the future having no apparent noteworthy traits outside the core traditionally feminine pursuits.

To compound that impression, there was a scene which I was certain was alluding to him having spanked his 6-year-old son (whose incorrigibility is very significant to the plot) but, later, that gets called into question when one of the girls who was watching him winds up suffering from alien poison ivy on steroids and the line “She’s never hurt anywhere in her life. How do you explain pain to her?” comes up.

It also occurs to me that, given how many of her books have female leads, McCaffrey may have been accidentally overcompensating in writing Doona, and folded a few too many 1969-isms back in while trying to prevent her male lead from coming across as feminine. It would make sense, given that Doona is the earliest McCaffrey story with a male lead that readily comes to mind.

But, whatever the reason, “the squeaky wheel gets the grease.” I don’t want to give an overinflated impression of how much these flaws grabbed my attention while I was reading. It’s still an enjoyable read and I didn’t notice them at all as a teenager. Likewise, there will no doubt be many people who grew up with fathers like the main character. …I just think it’s something that’s going to only get more discordant for future generations as more families don’t yet have interstellar travel, but have already achieved men not seeing dominance as an essential part of their self identity.

All in all, I’d give it a 4.3 out of 5 rating and, without the anachronistic elements, it’d be a 4.5 for how much I’ve enjoyed my various reads of it over the years. Definitely something to try if you like Anne McCaffrey’s style and the length and style are well-suited to a teenager looking to grow beyond Young Adult fiction.

Posted in Reviews

Novel – Mother of Demons

Here’s a book I read on the Baen Free Library ages ago and finally got around to buying and reading in print at the beginning of December.

Mother of Demons by Eric Flint (sadly, no longer in the Free Library)

Superficially, it’s another “failed colony” story, similar to the premise used by Anne McCaffrey’s Pern series and Marion Zimmer Bradley’s Darkover series for creating a fantasy story in a fundamentally sci-fi setting.

Darkover is still on my TODO list, but I can already tell that I tend to prefer such stories because I normally read fantasy despite the simplified morality, rather than because of it, and, if a cosmology includes gods, I’m prone to seeing it as a hint of dystopianism… a sign that the children will forever be ruled by their parents. It may also be that my outlook on the world harmonizes better with authors who are drawn to writing such stories.

I love that the book has that mindset. It’s rich with intellectual knowledge, and even the title embodies it, in that, as the story points out, outside of the Abrahamic religions, “demon” essentially means “powerful outsider” without automatically implying “evil”.

On that note, this story isn’t a fantasy story. Rather, it’s more a blend of historical fiction, fantasy, and sci-fi influences. A sci-fi setup, where the technology was lost, seen through the eyes of bronze-age land squids and human PhDs… and that’s what makes it good: It’s engaging to see the humans from the perspective of the aliens, and it’s engaging to read about the insights of the highly educated human characters from an author like Eric Flint.

(If you’ve ever read Eric Flint’s Ring of Fire series (beginning with 1632), you’ll know what I mean when I say that this mixes in sociology, history, tactics, and religion… Eric Flint clearly loves to study history, then share that love in his writing.)

It does have the odd moment of humour, but it’s generally serious. A notable example would be this exchange between the main two surviving human adults:

Julius immediately named their hut “Sodom and Gomorrah.” And he demonstratively refused to come near it, fearing, or so he claimed, the wrath of God.
“You don’t even believe in God!” Indira had once protested.
Julius chewed his lip. “No, I don’t. But you never know. And if He does exist, He has two outstanding characteristics. Judging, at least, from the Old Testament.”
“Which are?”
“He’s the most hot-tempered, narrow-minded, mean-spirited, intolerant, anal-compulsive, bigoted redneck who ever lived. And, what’s more to the point, He’s a lousy shot.”
“It’s true!” he insisted, in the face of Indira’s laughter. “Read the Book yourself. Somebody pisses Him off, does He nail ’em right between the eyes like Buffalo Bill? Hell, no! He drowns everything. Or He blasts whole cities, or drops seven lean years on entire nations.”

— Mother of Demons by Eric Flint, p. 111

The core premise hinted at by the title, which gets revealed as the story progresses, revolves around Indira (the historian and one of only a handful of surviving adults) seeing events unfold through a historian’s eyes, but being tormented by her inability to acknowledge that, no matter what she does, she will be instrumental in shaping history and, no matter what she does, a great deal of suffering is on the horizon.

The simplest metaphor I can give for her dilemma is that a cannibalistic analogue to the Roman army is on the march, and the leader of the next generation of humans has the potential to found an analogue to the Mongol Empire… but, at the same time, that’s not necessarily as bad as it may sound, because the Mongols got a lot of bad press in the west. (The Mongol Empire was a beacon of religious tolerance and appreciation for academic knowledge… they just also pissed off medieval Europe with their pragmatism in the art of war and how effective it made them against chivalrous knights.)

Philosophy is a significant focus of the story… more so in the later chapters than the earlier ones. It starts out sparse, but then builds and builds, even as the history, which was never sparse, also builds and carries the readers along for the ride.

I’ve always been too fascinated by deep technical knowledge to dedicate proper effort to being a student of the softer sciences but, just in how it’s presented, this comes across as an amazing work on the history alone.

The story also makes excellent use of “the colonists are dependent on a local species, which have spiritual beliefs which preclude dissection” to justify limiting how much time is spent on describing the aliens’ biology, which is a clever solution.

That said, for all that I praise the story, I wish the second part had been structured differently, because it feels like it’s told almost exclusively in flashback (regardless of actual temporal progression) and that left me having trouble caring about the humans, because I wanted to go back to the alien characters.

On the other hand, when the time spent on the humans is good, it’s good. I love that it touches on the distinction between K and r breeding strategies and it was very satisfying to read an argument for why it should be inevitable for humans and aliens to have similar emotions (p.199) as a result of similar survival pressures needing to be processed by the brain and surfaced to the conscious mind.

To be honest, I suspect that my issue with the time spent on humans might be in part because I’m an aspie, so others may not have as big a problem with the structure as I did. I have mentioned before that, when I read Poul Anderson’s Tau Zero (nominated for a Hugo award), I skimmed through most of it, bored out of my mind as I waited for the soap opera to give way to “good writing”.

As it approaches the end, there’s a lot of focus on military tactics, which helps to make things interesting in such a fractious setting. It’s a rare story where I enjoy a battle scene in print because I’m sharing it with a character who’s more or less critiquing it. (p. 296)

I suppose, if there was one thing I’d have to pick as the best part of the story, it’d probably be how masterfully Eric Flint piles on the fantasy-style “invent tons of words and cultural elements” world building without it feeling onerous to me. That’s a skill far too often underestimated (it’s one of the few areas that come to mind most strongly as something I don’t understand the rules of) and, if you’re talking subjectively (i.e. feeling onerous to me), it’s a test even professional authors don’t always pass. (For example, I’ve been meaning to read The Left Hand of Darkness by Ursula K. Le Guin and The Bone Doll’s Twin by Lynn Flewelling but I just can’t find a time when I’m in the right mood to get into them.)

By fanfiction standards, it’d definitely be a 5 out of 5 but I’ve been slacking on print fiction for so long that I worry my intuition for how good print works are is out of touch with my current level of insight. As such, I’ll rate it a 4.8 to leave a little room above it for what I anticipate encountering when I read/re-read some of the other books in my collection. (The question is whether stories which avoid the issues I had with this are common enough to merit reserving an entire rating increment or rare enough that they should be distinguished by my equivalent to “nominated for/won an award” category.)

Either way, give it a read.

Posted in Reviews

A virtualenv indicator that works everywhere

NOTE: While none of the significant elements of this approach require zsh, the formatting syntax and mechanism for updating a prompt dynamically differ between zsh and bash. See the end for a version with zsh-specific bits stripped out.

For a while, I’ve been using a variation on this zsh prompt tweak to get a pretty indication that I’m in a virtualenv. However, I was never quite satisfied with it for two reasons:

  1. It only works for virtualenvs activated through virtualenvwrapper
  2. It goes away if I launch a child shell… which is when I’m most likely to be confused and needing an indicator.

The solution was obvious: Instead of using a virtualenvwrapper hook, put something in my .zshrc which will detect virtualenvs opened through any means.

For those who just want something to copy-paste, here’s what I came up with:

zsh_virtualenv_prompt() {
    # If not in a virtualenv, print nothing
    [[ "$VIRTUAL_ENV" == "" ]] && return

    # Support both ~/.virtualenvs/<name> and <name>/venv
    local venv_name="${VIRTUAL_ENV##*/}"
    if [[ "$venv_name" == "venv" ]]; then
        venv_name=${VIRTUAL_ENV%/*}
        venv_name=${venv_name##*/}
    fi

    # Distinguish between the shell where the virtualenv was activated and its
    # children
    if typeset -f deactivate >/dev/null; then
        echo "[%F{green}${venv_name}%f] "
    else
        echo "<%F{green}${venv_name}%f> "
    fi
}

setopt PROMPT_SUBST PROMPT_PERCENT

# Display a "we are in a virtualenv" indicator that works in child shells too
VIRTUAL_ENV_DISABLE_PROMPT=1
RPS1='$(zsh_virtualenv_prompt)'

First, notice the use of VIRTUAL_ENV_DISABLE_PROMPT. This is because activate will prepend a less attractive indicator to PS1 that also goes away in child shells.

(Just make sure you remove any PS1="$_OLD_VIRTUAL_PS1" you might have added to postactivate or you’ll have no prompt after typing workon projname and be very confused.)

Second, note the use of PROMPT_SUBST. This is actually shared with my code for adding git branch information to PS1, PS2, and PS3 because profiling showed it to be faster than using a precmd function.

Third, note the single quotes for RPS1. That’s necessary to defer the invocation of $(zsh_virtualenv_prompt) so PROMPT_SUBST can see it.

I also added a couple of convenience features:

  • I have had a history of virtualenvwrapper not getting along with Python 3.x, so some of my projects have their virtualenvs at ~/src/<name>/venv rather than ~/.virtualenvs/<name>. This script will display <name> in the prompt either way.
  • If I’m in a child shell where the deactivate function isn’t available, the prompt will show <foo> rather than [foo] to make me aware of that.
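
The name-derivation half of that is just two rounds of parameter expansion, with no subprocesses. A standalone sketch (the path is a made-up example):

```shell
# Hypothetical <name>/venv layout; derive "myproj" from the path using
# only parameter expansion.
VIRTUAL_ENV="$HOME/src/myproj/venv"

venv_name="${VIRTUAL_ENV##*/}"      # basename -> "venv"
if [ "$venv_name" = "venv" ]; then
    venv_name="${VIRTUAL_ENV%/*}"   # drop the trailing "/venv"
    venv_name="${venv_name##*/}"    # basename again -> "myproj"
fi

echo "$venv_name"   # -> myproj
```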

Aside from that, it’s just the ordinary diligence of avoiding disk I/O and $(...) in anything that runs every time the prompt is displayed, plus structuring the function so the most common code path executes the fewest statements.

While this StackOverflow answer cautions against using VIRTUAL_ENV to detect virtualenvs, its reasoning doesn’t apply here, because it’s talking about detecting whether your Python script is running under the influence of a virtualenv, regardless of whether activate was used to achieve that. The purpose of this indicator, on the other hand, is specifically to detect the effects of activate so I don’t run something like manage.py runserver or pip install in the wrong context.

Bash Version

bash_virtualenv_prompt() {
    # If not in a virtualenv, print nothing
    [[ "$VIRTUAL_ENV" == "" ]] && return

    # Support both ~/.virtualenvs/<name> and <name>/venv
    local venv_name="${VIRTUAL_ENV##*/}"
    if [[ "$venv_name" == "venv" ]]; then
        venv_name=${VIRTUAL_ENV%/*}
        venv_name=${venv_name##*/}
    fi

    # Distinguish between the shell where the virtualenv was activated and its
    # children
    if typeset -f deactivate >/dev/null; then
        echo "[${venv_name}] "
    else
        echo "<${venv_name}> "
    fi
}

# Display a "we are in a virtualenv" indicator that works in child shells too
VIRTUAL_ENV_DISABLE_PROMPT=1
PS1='$(bash_virtualenv_prompt)'"$PS1"

It’s almost identical to the zsh version, but the following features which zsh provides for free are left as an exercise for the reader in the bash version:

  • Implementing a right-aligned chunk of the prompt which stays properly positioned if you resize your terminal.
  • Using tput to retrieve the colour-setting escape sequences for your terminal and then caching them in a variable so you’re neither hard-coding for a specific terminal type nor performing multiple subprocess calls each time you display your prompt.
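
The tput-caching half of that could be sketched roughly like this (the variable names are my own, and the fallback behaviour is one reasonable choice, not the only one):

```shell
# Query the terminal's escape sequences ONCE at shell startup and cache
# them, so the prompt function never has to shell out to tput. Fall back
# to no colour if tput is missing or the terminal doesn't support it.
if command -v tput >/dev/null 2>&1 && tput setaf 2 >/dev/null 2>&1; then
    PROMPT_GREEN="$(tput setaf 2)"
    PROMPT_RESET="$(tput sgr0)"
else
    PROMPT_GREEN=''
    PROMPT_RESET=''
fi

# The prompt function can then use the cached values for free, e.g.:
#     echo "[${PROMPT_GREEN}${venv_name}${PROMPT_RESET}] "
```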

Posted in Geek Stuff

How to skip the fortune command when your shell is slow to start

A.K.A. How to get and compare timestamps without external commands in shell script (and without even invoking subshells in Zsh)

I love the fortune command. It’s a charming little addition to each new tab I open… until something (like a nightly backup) has blown away my disk cache or a runaway memory leak is causing thrashing. Then, it’s just a big delay in getting to what I want to do.

The obvious solution to any non-shell programmer is to time everything and invoke fortune only if it’s not already taking too long, but shell script complicates that by having so few builtins. We don’t want to invoke an external process, because that would defeat the point of making fortune conditional, and we don’t want to invoke a subshell because, if we’re thrashing because of memory contention, that’ll also make things worse.

It turns out that bash 4.2 and above can get us half-way there by using a subshell to invoke the printf builtin with the %(%s)T token, but Zsh has a clever little solution that even reuses code that we’re going to need anyway: prompt substitutions!

Here’s the gist of how to pull it off:

# Top of .zshrc
local start_time="${(%):-"%D{%s}"}"

# -- Do all my .zshrc stuff here

local end_time="${(%):-"%D{%s}"}"
if (( end_time - start_time < 2 )); then
    if (( $+commands[fortune] )); then
        fortune
    fi
else
    echo "Skipping fortune (slow startup)"
fi

This is a standard “subtract start time from end time to get how long it took, then compare it to a threshold” check, so the only part that should need to be changed in bash is using start_time="$(printf '%(%s)T' -1)" (the explicit -1 argument means “now” and is required on bash 4.2). Instead, let’s pick apart how the Zsh version works:

  1. We start with a bog-standard ${VAR:-DEFAULT} parameter expansion; however, unlike bash, Zsh considers ${:-always default} to be valid syntax.
  2. The (%) on the left-hand side is a special magic flag, similar to the (?i) syntax used for inline flag-setting in some regular expression engines. It enables prompt expansion of both the (nonexistent) variable’s contents and the fallback value.
  3. %D{...} is Zsh’s prompt expansion placeholder for putting strftime (man strftime(3)) timestamps into your prompt.
  4. %s is the strftime token for “seconds since the epoch”.
  5. You have to quote the %D{...} or the ${...} consumes the closing curly brace too eagerly.

That’s the big magic trick: a way to write an equivalent to the C library’s time() function in pure Zsh script with no use of $(...) or zmodload and, since we’re using prompt expansion to do it, the only thing we might not already have needed to load into memory is the code for the %D{...} expansion token.

(Unfortunately, there’s no way to get sub-second precision with this approach, so the only two useful threshold values for a well-optimized zshrc are probably “1 second” and “2 seconds”.)
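
For the bash side of the comparison, here’s a sketch that avoids even the $(...) subshell by using printf -v (bash 4.2+; structure and variable names are my own):

```shell
# Hypothetical bash equivalent: printf -v assigns the %(%s)T timestamp
# directly to a variable, so no subshell is forked at all.
printf -v start_time '%(%s)T' -1    # -1 means "now" (required on bash 4.2)

# ... the rest of .bashrc would run here ...

printf -v end_time '%(%s)T' -1
if (( end_time - start_time < 2 )); then
    if command -v fortune >/dev/null 2>&1; then
        fortune
    fi
else
    echo "Skipping fortune (slow startup)"
fi
```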

Now for that odd (( $+commands[fortune] )) way of checking for the presence of the fortune command. What’s up with that?

Well, it’s actually a micro-optimization that I use in my zsh-specific scripts. According to this guy’s tests, it runs in half the time of the alternatives and, in my own runs of his test scripts, it could take as little as one tenth of their time depending on the circumstances, while the alternatives varied wildly relative to each other. (On runs where $+commands is 7 to 10 times as fast as type and which, hash is sometimes twice as fast as type or which and sometimes half as fast.)

Normally, this would be a moot point because any of the portable ways of checking for the existence of a command via a subshell and a builtin would be far too quick for it to matter (ie. I do it just for the heck of it) but, in this case, it felt appropriate.
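
For reference, the portable way to do the same check (POSIX sh, bash, and zsh alike) is command -v, which exits zero if the name resolves to a binary, builtin, alias, or function. A small sketch, with a hypothetical helper name:

```shell
# check_command: portable "does this command exist?" test.
check_command() {
    command -v "$1" >/dev/null 2>&1
}

if check_command fortune; then
    echo "fortune is available"
else
    echo "fortune is not installed"
fi
```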

(Another unnecessary micro-optimization that I didn’t use here was preferring [[ ]] over [ ] in my zshrc scripts. My tests found a million runs of [[ "$PWD" == "$HOME" ]] to take about 1.4 seconds, while a million runs of [ "$PWD" = "$HOME" ] took about 4.2 seconds.)

Posted in Geek Stuff