On Dehumanization In Fiction

I have to admit it… I have a lot of drafts kicking around in my notes which most people would consider perfectly good blog posts. To me, though, they were just flashes of inspiration that I wrote down to avoid losing them, and I never felt they were “finished”.

While I was looking through the snips I’m accumulating for a book on writing, I rediscovered a couple which, looking at them now, are good enough to share, even if I still feel that there’s more insight to be teased out and more room for the style to be polished.


Dehumanization is at the heart of some of the most effective dark writing.

What hits harder than cruelty? Casual cruelty.
What hits harder than casual cruelty? Institutionalized cruelty.

Dropping a man in the wilderness, hundreds of miles from the nearest human, will cause hardship, but, if you want a man to despair, drop him into the heart of a big city, penniless, alone, and ignored by all who pass… and that’s just from cruelty by neglect.

Humans are social animals to our very core, and slavery is abhorrent precisely because it’s institutionalized cruelty at its most powerful… forcing the reader not only to observe active dehumanization on a mass scale, but to confront how flawed their optimistic preconceptions of human nature are, in a way that rings too true for them to deny.

(Humanity’s social nature is also why solitary confinement is considered torture in many places, but this is much more difficult to communicate to someone who hasn’t experienced it personally.)

Looking at it from another angle, it’s also so powerful because of the specific kinds of emotions it evokes in the reader/viewer via their sense of empathy. It’s not just that the character is experiencing misery or defeat or isolation, it’s that their circumstances evoke a sense of despair AND powerlessness, futility AND hopelessness.

Most telling, I think, is how Chip Conley pseudo-mathematically expressed despair: suffering without meaning… and isn’t that also the perfect starting point for a definition of my own term, “Hardship Porn”? (Fiction where, through intent or incompetence, the author seems to revel in making their hero’s life miserable, not because it makes the writing more powerful but just to gratify some emotional need.)


I also made a related observation: slavery is powerful because it tends to pair naturally with two kinds of atrocities which fall under the other major class of violations we readily recognize, violations of the sanctity of self.

To wilfully and permanently disfigure someone’s body against their desires, or to attack their very psyche, is the most personal form of dehumanization possible… it denies them control over the only things that are unarguably, undeniably, unquestionably their own, and it attacks their thoughts, the one hiding place nobody should ever intrude upon… let alone tamper with. It is no accident that, as a species that thinks in metaphor, we often refer to the body as a temple and the mind as a sanctum.

Posted in Writing | Leave a comment

How to Keep Humans From Seeing Your reCAPTCHA

I don’t know how many people know this, but reCAPTCHA is a major pain if you’ve configured your browser to prevent Google from doing things like setting tracking cookies or fingerprinting your <canvas>. Sometimes, it’ll take me a minute or more before the bleeping thing lets me through.

So, for my own sites, I’m very reluctant to make people fill out CAPTCHAs. (Plus, there’s also an aspect of “Is this what we’ve been reduced to? Taking for granted that we must constantly pester legitimate users to prove that they’re human because we’re letting the bad actors set the terms of engagement?”)

Note that I will not be covering the pile of techniques that require JavaScript to implement because, as a dedicated uMatrix user, I find those to also be annoying, though nowhere near as much as reCAPTCHA.

So, let’s think about this problem for a second. What can we do to improve things by reducing the need to display reCAPTCHA?

Well, first let’s think about the types of spam we’re going to receive. I’ve noticed two types, and I’ll start by addressing the kind CAPTCHAs don’t prevent:

Human-Sent Spam

Believe it or not, several times a year I receive spam that has clearly been sent by a human, trying to promote some shady service they think I’ll want (typically SEO or paid traffic).

I tried putting up a message which clearly states that the contact form on this blog is not for this sort of message, but I still occasionally get someone who ignores it… so what more can be done?

Well, I can’t do it with my current WordPress plugin but, for my other sites, how about trying to make sure they actually read it, and making it scarier to ignore?

The simplest way to do this is to add a checkbox that says something like “I hereby swear under penalty of perjury that this message is not intended to solicit customers for any form of commercial service” like I did for the GBIndex contact form.

Since you’re guarding against an actual human this time, using a normal browser, you don’t even need any server-side code. Just set required="required" in the checkbox’s markup and their browser will refuse to submit the form until they check the box, drawing their attention to it, which is exactly what we want.

Of course, you want it to be clear that it’s not toothless stock text, so there are two other things you should do:

  1. Don’t just copy-paste my phrasing. Identical text is only good in such a declaration if the readers associate consistency with “this has the force of law and has been tested in actual court cases” rather than “this is a stock snip of HTML from www.TopHTMLSnips.blort”
  2. Include a highly visible message somewhere on the page which makes it clear that, if they just blindly check the box, you’ll report whatever they’re promoting to their service providers (domain registrars, web hosts, etc.) for Terms of Service violations.

    (And do follow through. For example, use the global WHOIS database to identify the domain registrar, then use the registrar’s “Report Abuse” link in their site footer or support section. Then use the registrar’s WHOIS lookup service to identify the nameserver provider and use their “Report Abuse” link. If you think the hosting may be with a shared hosting provider different from the nameserver provider, you can use techniques like doing a DNS lookup on the domain, then reverse DNS lookups on the resulting IP addresses.)

You could also put a Bayesian filter to work on your inbox, but I’m always wary of false positives and don’t want to have to sift through a spam box periodically, so I try to avoid that… and this works well enough.

OK, so, with that out of the way, let’s get to what CAPTCHAs are meant to stop…

Bot-Sent Spam

There are two kinds of bot-sent spam. Stuff meant to be read by humans, and stuff meant to be read by machines. Since some of the techniques used for preventing machine-targeted spam also help to stem the tide of stuff aimed at humans, we’ll address those first.

In both cases, you can certainly apply a Bayesian filter but, as with human-sent spam, I aim for something more deterministic.

Machine-Readable Bot Spam

Machine-readable spam is spam intended to evoke a reaction from another machine. The most typical example of this is manipulating search results by scattering links to their garbage all over the web.

The key to combating machine-readable spam is recognizing that, if the target machine can understand the important characteristics of the message, so can your spam-prevention measures.

1. Block Link Markup

The first layer of protection I like to apply is to detect disallowed markup and present a human-readable message explaining what changes must be made for the message to be accepted.

For example, in my contact forms, which are going to be rendered as plaintext e-mails, the spam that gets submitted comes from bots that mistake them for blog comment fields, and 99% of that can be killed simply by disallowing </a>, [/url], and [/link] in messages, and instructing users to switch to bare URLs.
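
To give a concrete idea, here’s a minimal sketch of that check in Python. The exact tokens and error text are just what works for my forms; treat them as placeholders:

    DISALLOWED_MARKUP = ('</a>', '[/url]', '[/link]')

    def check_link_markup(message):
        """Return a human-readable rejection message, or None if clean."""
        lowered = message.lower()
        for token in DISALLOWED_MARKUP:
            if token in lowered:
                return ('Markup like %s is not supported here. '
                        'Please use bare URLs instead.' % token)
        return None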

This is mainly about making the reCAPTCHA less necessary, meaning that you don’t have to trigger it as aggressively, but it also has the added benefit of ensuring that legitimate messages look nicer when I read them.

Spambots can submit bare URLs to get around this, but they generally don’t because it would make their SEO-spamming less effective on sites which don’t block URL markup and my site is nowhere near important enough to get a purpose-built spambot. (And, even if it did, I’d want to keep the check to correct legitimate users’ misconceptions about what markup will actually get interpreted when I see their message.)

2. Detect URLs

A tiny fraction of the spambots I see do submit bare URLs, and we don’t want a solution which will become ineffective if applied broadly enough for spammers to adapt, so the next step is to handle the grey areas… the stuff that has legitimate uses, but also spammy ones.

The simplest way to handle this is to match on a string of text that’s essential for any sort of auto-hyperlinking to function, and then trigger stronger scrutiny (eg. reCAPTCHA) as a result.

For this, I use a regular expression. I use something like (http|ftp)s?:// because my regex is shared with other functionality, but a simple string match on :// would probably do the trick while also catching “let the human change it back” obfuscation attempts like hxxp:// in spam meant only to be read by humans.

I haven’t encountered any spam which uses URLs without the scheme portion but, if you want to guard against auto-hyperlinkable URLs of that form, also check for www.
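
In Python, the whole “detect something URL-shaped and escalate” step can be as simple as this sketch (patterns and structure are illustrative):

    import re

    # (http|ftp)s?:// is what I use because the regex is shared with other
    # functionality; a plain '://' substring check is usually enough and
    # also catches obfuscations like hxxp:// aimed at human readers.
    URL_RE = re.compile(r'(http|ftp)s?://', re.IGNORECASE)

    def needs_scrutiny(message):
        """True if the message contains something URL-shaped."""
        lowered = message.lower()
        if URL_RE.search(message) or '://' in lowered:
            return True
        # Guard against scheme-less but auto-hyperlinkable URLs too.
        return 'www.' in lowered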

3. Do some simple sanity checks on the text

Spambots tend to be written very shoddily, so they sometimes submit stuff so broken it’s funny. (One bot tried to submit the un-rendered contents of the template it was supposed to use to generate spam messages.)

A few times a year, I would get one such submission which was clearly a variation on common SEO-spam I was already blocking… but it had no URLs in it… just the placeholder text meant to pad out the message.

I decided to block that by adding the following check, which takes maybe three or four lines of code (a Python sketch follows the list):

  1. Split the message up by whitespace (explode in PHP, split in Python or JavaScript, etc.)
  2. If the splitting function doesn’t support collapsing heterogeneous runs of whitespace characters (*cough*JavaScript*cough*), ignore any empty/whitespace-only “words”.
  3. Count up the words which do and don’t contain URLs (:// or whatever)
  4. If there are fewer than some minimum number of non-URL words or the percentage of non-URL words relative to URLs is too low, reject the message with something like “I don’t like walls of URLs. Please add some text explaining what they are and why you’re sending them to me.”
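
Here’s what that might look like in Python. (str.split() with no argument already collapses runs of whitespace, so step 2 comes for free; the thresholds are illustrative defaults, not tested recommendations.)

    def check_url_density(message, min_plain_words=3, max_url_fraction=0.5):
        """Return a rejection message for URL-heavy walls of text, or None."""
        words = message.split()  # Splits on, and collapses, any whitespace.
        url_words = [w for w in words if '://' in w]
        plain_words = len(words) - len(url_words)
        too_few = plain_words < min_plain_words
        too_urlish = words and len(url_words) / float(len(words)) > max_url_fraction
        if too_few or too_urlish:
            return ("I don't like walls of URLs. Please add some text "
                    "explaining what they are and why you're sending them to me.")
        return None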

Admittedly, some bots use blocks of text stolen from random blogs as padding, which will pass this test, but the point is to whittle away the lazier ones. Also, it can’t hurt, because you’re guarding against stuff you wouldn’t want from a human either:

  1. There’s a minimum length below which a message probably isn’t worth the effort to read. (For ongoing conversations, this will be low, because you want to block things like “+1” and “first” but allow things like “Looks good to me” but, for forms that only handle the initial message, like e-mail forms or the “new topic” form on a forum, the minimum can be higher. I advise “at least three words” as the limit for the ongoing case because “subject verb object”.)
  2. A human can easily pad out a too-short message and re-submit, but a bot won’t know what to do.
  3. It’s rude to send text that’s so URL-heavy that you’re not even giving each URL a friendly title, regardless of whether it’s a bot or a human submitting them.

WebAIM also suggested checking whether fields which shouldn’t be the same contain identical data. I don’t know if spambots which do that to unrecognized fields are still around, but I don’t see how it could hurt… just be careful to avoid the particular firstname/lastname example they gave, where sheer probability suggests that you’ll encounter someone with a name like “James James” or “Anthony Anthony” eventually. If nothing else, maybe it’ll catch lazy humans trying to fill in fake account details.

(Note that all of these sanity checks are structural. We don’t want to resort to a blacklist.)

4. Add a Honeypot

Bots like to fill out form fields. It minimizes the chance that the submission will get blocked because one of the fields is required. This is something else we can exploit.

The trick is simple. Make a field that is as attractive to the bot as possible, then tell the humans not to fill it out in natural language which the bot can’t parse. The things to keep in mind are as follows (a sketch combining them follows the list):

  1. Don’t hide your honeypot field from humans using display: none in your CSS. Bots are getting good at parsing CSS.

    Instead, push it off the left edge of the viewport using position: absolute; so the bot has to assume that, by filling it out, it’s taking a shortcut around clicking through some kind of single-page wizard.

    (Under that rationale, you could also try hiding it using JavaScript. The important thing is to recognize that good spambots are as smart as screen readers for the blind… they just can’t understand natural language like the human behind the screen reader can.)
  2. Name your honeypot field something attractive, like url or phone or password. (url is a good one for e-mail contact forms, because you’re unlikely to need an actual URL field and that’s what WordPress’s blog comment form uses.)
  3. Set autocomplete="off" on the field so the browser won’t accidentally cause legitimate users to fail the test.
  4. Set tabindex="-1" or, if spambots start to get wise to that, explicitly put it after everything else in the tabbing order including the submit button. That way, if it becomes visible (eg. you’re hiding it using JavaScript and JavaScript is disabled) or the user’s screen reader allows them to get into it despite it being hidden, it won’t interfere with filling out the form.
  5. Use a <label for="name_of_the_field"> to provide the message about not filling it in so that assistive technologies can reliably present the message to the human.
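
Pulling those guidelines together, here’s a sketch of both halves of the honeypot. The field name, wording, and offset are just examples:

    # The markup half, as it might appear in the form template.
    HONEYPOT_MARKUP = '''
    <div style="position: absolute; left: -10000px;">
      <label for="url">Leave this field empty. It exists to catch spambots.</label>
      <input type="text" id="url" name="url" autocomplete="off" tabindex="-1">
    </div>
    '''

    def honeypot_tripped(form_fields):
        """The server-side half. form_fields is a dict of submitted values."""
        return bool(form_fields.get('url', '').strip())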

Also, consider going light on the HTML5 validation in your other fields. I’ve heard people say that it helps to stop spambots, but I’m not sure how long ago that was and it’s never good to expose the rules defining valid input for a bot to learn from when you could be keeping them server-side and only explaining them to legitimate users in natural language.

I’ve seen multiple suggestions to scramble up the field names for your form fields, so name="url" actually expects a valid e-mail and so on, but this harms maintainability for your code and could break form auto-fill in browsers like Chrome, so I’d only do it if necessary.

5. Do some simple sanity checks on the user agent

I haven’t needed to do this on the sites I wrote myself (the previous techniques were enough) but, if you need more (or if you’re using something PHP-based like WordPress, where you can just hook up Bad Behavior and call it a day), here are some other things that bottom-of-the-barrel spambot code might get wrong (the first two are sketched in code after the list):

  1. Still using the default User-Agent string for whatever HTTP library they use. (eg. cURL, Python’s urllib, etc.)
  2. No User-Agent string.
  3. Typos in the User-Agent string (eg. whitespace present/missing in the wrong places or a typo’d browser/OS name)
  4. Claiming to be some ancient browser/OS that your site isn’t even compatible with
  5. Sending HTTP request headers that are invalid for the HTTP protocol version requested (added in a later version, only allowed in earlier versions, actually a response header, etc.)
  6. Sending the User-Agent string for a major browser but sending request headers which clearly disagree. (eg. not Accept-ing content types that the browser has had built-in support for since the stone age.)
  7. Not setting the Referer header correctly (but be careful: extensions like uMatrix may forge this to always point to your site root to prevent tracking, so accept either the expected value or the values that privacy extensions are known to forge.)
  8. Sending request header values that aren’t allowed by the spec
  9. Sending custom headers that are only set by unwanted user agents
  10. Obvious signs of headless browsers.
  11. Adding/removing unexpected GET parameters on a POST request. (When you submit via POST, it’s still possible to pass things in via query parameters, so sanity-check that… just be careful, if you’re also verifying the GET request which loads the form, to account for things other sites might add on the off chance that you use something like Google Analytics.)
  12. Adding/removing unexpected POST parameters. (If a bot is trying to take shortcuts, you might see it missing or filling things a real user wouldn’t.)

…and, of course, sanitize and validate your inputs. (eg. WebAIM points out that spambots might try e-mail header injection, which would be a sure-fire sign of a malicious actor that you can block.)

I’m reluctant to suggest rate-limiting or IP blacklisting as a general solution. Rate-limiting requests is more for protecting against scraping, and it’s easy for spammers to botnet their way around IP blacklists while leaving a minefield of blacklisted IPs for legitimate users to inherit the next time they disconnect and DHCP hands them a new address. (Plus, I can’t be the only person who middle-clicks one link, waits for it to load, middle-clicks 10 in rapid succession, and then reads the first while the other ten load.)

However, rate-limiting HTTP POST requests probably is a good idea. I may do a lot of things in parallel, but I’m not sure I’ve ever submitted multiple POST forms on the same site within a five-second window. Heck, even “Oops. I typo’d my search. Let’s try again.” may take longer than five seconds. (And that’s usually a GET request.)
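
As a sketch of how little code that takes (the in-memory dict is illustrative only; a real deployment would want shared, expiring storage like memcached or Redis):

    import time

    _last_post = {}  # Maps client IP to the time of its most recent POST.

    def post_too_fast(client_ip, window=5.0):
        """True if this IP already POSTed within the last `window` seconds."""
        now = time.time()
        previous = _last_post.get(client_ip)
        _last_post[client_ip] = now
        return previous is not None and (now - previous) < window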

Speaking of crawling, bots have to find your form somehow. While I doubt rate-limiting will be useful enough to be worthwhile there, what I would suggest is disallowing your forms to robots in robots.txt and then, using an identically-structured rule, also disallowing a trap link which immediately blacklists any IP that requests it. This will stop bots which are not only ignoring robots.txt, but using it to find forms.

I’d also suggest adding a link to a “Click here to blacklist your IP address”-style page so spambots which don’t read robots.txt at all can still get caught but curious users who find the link don’t blacklist themselves by accident. (Just remember that the same guidelines apply as for the honeypot field. Don’t display: none or visibility: hidden to hide it because spambots may be wise to that. Thanks to fleiner.com for this idea.)

Measuring the time between loading the page and posting can also be helpful, but you have to be very careful about your assumptions. Measure how long it’ll take a user to load/reload the page (on a really fast connection with JavaScript and external resources disabled) and then paste some text they wrote previously. (eg. I tend to compose my posts in a separate text editor because I haven’t found a form recovery extension I like.)

If you decide to do that, you’ll want to make sure that the bot can’t just change the page-load timestamp. There are two ways I can see to accomplish that (the second is sketched after the list):

  1. If your framework supports it, regenerate the CSRF token every time the page containing the form is loaded and, when the form gets submitted, check that the token you receive was generated at least X amount of time ago. (3 seconds is a good starting value)
  2. If you can’t do that for some reason, use something like HMAC to generate a hash for the timestamp and then send both the timestamp and hash to the client in a hidden form field. Without the secret key you’re holding, the bot can’t change the timestamp without invalidating the hash.
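
Here’s a sketch of that second option using nothing but the Python standard library (the key and the field handling are illustrative):

    import hashlib
    import hmac
    import time

    SECRET_KEY = b'replace me'  # Illustrative. Keep the real one secret.

    def make_timestamp_token():
        """Render both values into hidden form fields alongside the form."""
        timestamp = str(int(time.time()))
        digest = hmac.new(SECRET_KEY, timestamp.encode(), hashlib.sha256).hexdigest()
        return timestamp, digest

    def timestamp_token_valid(timestamp, digest, min_age=3.0):
        """On submission, verify the hash, then enforce the minimum age."""
        expected = hmac.new(SECRET_KEY, timestamp.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, digest):
            return False  # The timestamp was tampered with.
        try:
            age = time.time() - float(timestamp)
        except ValueError:
            return False
        return age >= min_age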

Another trick similar to a CSRF token is to serve up an image (like a tracking pixel, but served locally so it doesn’t get blocked) from a dynamic route. When the route handler gets called, have it make a note of the current CSRF token for the session. Then, when the form is submitted, and after checking that the CSRF token is present and valid, verify that the image was loaded and the CSRF token at that time matches the current CSRF token.

That’ll block any bot that tries to save time and bandwidth by not attempting to load images. It’s similar in concept to some of the JavaScript checks, but the odds that a legitimate user who disables JavaScript will also disable the loading of images are minuscule. (Thanks to Alain Tiemblo for the idea)

6. Prefer Structured Input

If you’re accepting submissions for a custom site, rather than just slapping up a basic comment form, structured input isn’t just a way to let submitters do some of the legwork for you.

Every additional field is another opportunity to trip the bot up by expecting it to auto-fill something that can’t be satisfied by randomly generated garbage or plagiarized snippets of someone else’s blog and has requirements only explained in human-readable text.

Structured input also makes your form look less like a blog comment or forum reply form, which may help to deter some smarter spambots.

7. Use Multi-Stage Submission

This one was suggested by WebAIM. The idea being that, if your form enters the submission into the database in some kind of draft form which will time out if not confirmed, and then returns a “Here’s a preview of how your submission will look. Please check it for errors” page that doesn’t contain the submitted fields but, rather, a submission ID and a “Confirm” button, the spambot may not be smart enough to complete the process.

I like this idea because it doesn’t feel like a CAPTCHA or an anti-spam measure to the end user… just a reasonable thing to ask the user to do to make life a little more convenient for whoever’s going to see what was received. (Plus, I find that having a preview separate from the editor helps me to notice my mistakes more readily.)

Human-Oriented Bot Spam

If you’ve ever actively followed a large site that uses Disqus for its comments, you’ve probably noticed that, before the moderators get to them, spam comments which slip through are trying to outwit spam filters by using look-alike characters. Unfortunately, due to limitations in how WordPress handles Unicode, I can’t show you an example of such a thing. (See here)

Now, if the spammer is still keeping the URLs in a form that can be clicked or copied and pasted, you may not need this… but if you can’t afford to require users to fill out a CAPTCHA every time they post, the Unicode people have developed what’s known as the TR39 Skeleton Algorithm for Unicode Confusables.

The basic idea is that, with the help of a big table, people can implement the algorithm for your language of choice (and have done so… usually under some variant of the name “confusables”. The PHP standard library includes one named Spoofchecker) and you can then go skeleton(string_1) == skeleton(string_2) to compare them without the obfuscation.

That said, it’s not quite that simple. The skeleton algorithm intentionally does not duplicate the process of normalizing uppercase vs. lowercase or ignoring combining characters, so you’ll need to do those first as preprocessing steps.

While I haven’t exhaustively tested it, my intuition is that this is the best way to skeletonize your text for spam detection (a Python sketch follows the list):

  1. Normalize to NFKD and strip combining characters. (Eevee’s The Dark Corners of Unicode has a Python example and explains why you normally don’t want to do this, but the same issues apply to the TR39 skeleton algorithm itself, so it should be fine here.)
  2. Lowercase/uppercase the strings to be skeletonized (Do this after normalizing in case there exist precomposed glyphs with no alternative-case forms in the locale you’re operating under)
  3. Strip out all whitespace characters (To prevent things like “m a k e  m o n e y  a t  h o m e” and remove hidden spaces such as zero-width joiners)
  4. Run the TR39 skeleton algorithm on both strings.
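
Here’s a sketch of the preprocessing steps in Python; the skeleton step itself is left to whichever TR39 implementation your language offers (PHP’s Spoofchecker, or a third-party “confusables” package elsewhere):

    import unicodedata

    def preprocess_for_skeleton(text):
        """Steps 1-3: normalize, case-fold, and strip invisible padding."""
        # 1. Normalize to NFKD, then strip combining characters.
        decomposed = unicodedata.normalize('NFKD', text)
        stripped = ''.join(ch for ch in decomposed if not unicodedata.combining(ch))
        # 2. Fold case after normalizing.
        folded = stripped.casefold()
        # 3. Drop whitespace and invisible format characters (category Cf),
        #    which covers hidden padding like zero-width joiners.
        return ''.join(ch for ch in folded
                       if not ch.isspace() and unicodedata.category(ch) != 'Cf')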

Your strings should now be ready for use as input to whatever system you want to use to assess the probability of spam. (Check out this StackOverflow question if you want to train your own classifier and don’t have a spam corpus handy.)

Posted in Geek Stuff | Leave a comment

Getting Over It With Bennet Foddy: A Somewhat Belated Commentary

Yes, I’m sure that anyone who cares has probably already seen a million rage reaction compilations on YouTube, but this isn’t about that.

Rather, it’s about how different my experience has been so far, since I decided to try the copy I got in a Humble Bundle, and the observations that stem from that.

For people who aren’t familiar with it, Getting Over It With Bennet Foddy is a game where you have to climb a mountain while being hamstrung by odd controls, and there are plenty of opportunities to lose a lot of progress to a single mistake along the way.

While you’re doing it, you hear periodic narrated commentary from the game designer. Now, what everyone probably remembers best from reaction videos is the bits of commentary which are pretty obviously designed to troll players who are prone to raging: calm encouragements purpose-built to provoke responses such as “I’d like to see you try!” and useless advice such as a reminder that you’ve already done this once, so just do the same thing again.

However, sprinkled among those, as rewards for reaching new progress milestones, are bits of philosophical commentary on the nature of Internet culture which I find surprisingly engaging… but I’m getting ahead of myself…

My first encounter with this game was when it showed up on my subscription to James & Mike Mondays… It looked like the kind of game I’d hate, but I was curious enough that I decided to watch a few other videos before I put it out of mind as just another “not in GOG.com’s guaranteed DRM-free catalogue. Not something I care about.”

That was in March of 2018. Around the beginning of June 2019, I remembered that I’d obtained a copy in a Humble Bundle and decided to try it out of curiosity.

Now, the first thing that’s very important to understand is the mindset I went into this with. I’m not a competitive person, I’m not the “spend a ton of effort training to get that perfect play-through” type, and I strive to not let myself get riled up. When I started playing this, it was purely a matter of curiosity. I just planned to see how my abilities compared to the YouTubers I’d watched, try it until the novelty wore off, and then set it aside to play something more worthwhile.

The first thing I noticed was that, the first few times Bennet tried to make me rage, I actually laughed out loud. The second was that I was doing better than the YouTubers I’d watched. The game seems to reward patience and methodical, carefully measured use of the mouse… something which probably doesn’t make for good LPing when made into a habit.

At the same time, as you progress farther, you’re introduced to more and more obstacles which require fast reactions. It reminds me of VVVVVV in that VVVVVV is exhilarating if you’re well-rested but frustrating if you’re not and it all comes down to whether you can walk the tightrope of having to move quickly, but without haste. (In VVVVVV, there are areas which seem tuned so that they must be traversed at a very specific speed that sits just in between the two speeds your tired mind prefers to gravitate to.)

In fact, when I really get into it, there’s a meditative quality to it. The more I play it, and the more I think about the blend of philosophical insight and trollish comments, the more I get the impression that the game is specifically designed to test the player in a more philosophical sense than usual… to “separate the boys from the zen”, per se… that it’s not a game meant to make people rage, but, rather, that getting up the mountain is secondary, and the primary challenge is one of mental discipline.

That would also fit with the dual meaning of the title. To win the game, you must achieve a state of emotional distance from it… you must “get over it”.

(Though, from the commentary, I also get the impression that it’s intended to be an homage to the design principles that went into arcade games and the early console games they inspired.)

In that sense, I don’t see it as a game that you’re supposed to try to beat but, rather, an exercise which you do a little of every day and then, when you finally find yourself on top of the mountain, it takes you by surprise. (While I won’t look up a spoiler, it does leave me curious about what note the game ends on. Does it acknowledge that potential to find yourself feeling lost and adrift after “arriving at the horizon, to find that nothing is beyond it”?)

For that reason, I think the “I’ll understand if you have to take a break” early on is actually a subtle hint that, like classic point-and-click adventure games, a wise player is supposed to play it in short stints. (In the case of a point-and-click, to sleep on the answers to puzzles. In the case of a game like this, because you need to maintain the patience and tranquility necessary to play well… and everyone starts to get sloppy and impatient sooner or later.)

That said, the game’s not perfect. Whether it’s a bug, a bad interaction with my system, or Bennet deciding to go a little too far, I’ve noticed that the game’s mouse sensitivity seems to be variable… or at least purposefully counter-intuitive.

Sometimes, I have to move the mouse a lot to get a small amount of motion when the cursor is close to the centre of the character model but, on other occasions, I find it difficult not to flail around when I’m using almost no mouse movement at all. Given that it seems to stay consistent for long periods of time, for all I know, it’s just some kind of input translation bug related to my running the Linux version fullscreened to 1920×1080 on a three-monitor desktop that’s 4480px wide. (It wouldn’t be the first time a game hadn’t been properly tested on multi-head Linux systems.)

I seriously hope that it’s not intentional, as a way to turn your ability to form muscle memory against you, because intentionally programming it with such variable mouse sensitivity (so that I sometimes see the hammer whip around faster than anticipated at just the right time to knock me out of position while, other times, I see it lag just in time to make me miss)… that would be a step too far. I don’t mind the difficulty and subtle trolling, but a game’s mechanics should be fair.

In the end, I don’t know whether the game will hold my interest long enough to reach the end, given the stable of other games I can turn to when I just want a moment of “focused calm” with no hurry to “beat the whole game” (Like Hexcells, Sudoku, Tetris, Dr. Mario, and Shisen-sho), but I certainly feel richer for having played it.

Posted in Geek Stuff | Leave a comment

Game – Lumo

I just finished playing Lumo, so I suppose I might as well review it.

When I was a kid, these were always the kinds of games I was curious about but never had (aside from Mario RPG), so I can only critique from a modern perspective… overall, it’s a charming little isometric puzzle-platformer and it worked flawlessly for me on Linux.

The game lets you choose between a modern mode with maps and infinite lives and an old school mode, but I found myself never using the maps because the game leaves it up to you to figure out where you are on each one, and it was easier to just remember which rooms I’d already passed through based on their appearance and what they connected to. I was, however, very thankful for the infinite lives at some points. I also appreciated the very generous choices for where I respawn in some of the longer rooms.

Playing with an Xbox 360 pad, the controls are about as good as can be expected, and I like how it lets you configure how the 45° axes of the isometric perspective get mapped to the 90° inputs of a keyboard, D-Pad, or analog stick. The movement speed is OK but, given the amount of backtracking, I do wish that there was a Run button, or that it was running in an emulator so I could hold down “unlimit emulation speed” (A.K.A. fast-forward) to simulate one.

As with a pixel-based isometric game, the perspective is locked, which makes gauging certain jumps difficult. It’s retro-authentic in a game that’s got various 80s references sprinkled throughout it, so I won’t hold that against it. If you’re not used to controlling isometric games, my advice is to use un-mapped “up is north” directions until you get to tricky jumps, then switch to “up is north-west” temporarily for those.

That said, there’s one block-pushing-and-hopping puzzle in the ice area (about 2/3rds of the way through the game) where the slipperiness when you’re trying to hop on the blocks, the ease of accidentally pushing them in a direction that will shatter them, and the delay before you can respawn a new ice block combine to produce some Angry Video Game Nerd-level bad design… and I’m not one to judge a game’s controls quickly. (I actually have a post on the way about the zen of Getting Over It with Bennet Foddy.)

Also, in the final area of the game, it starts to rely too heavily on spike-block mazes, which drive home how frustrating it can be when you can’t rotate the camera, the spike block is preventing you from seeing your feet/shadow, and simply brushing against a spike will kill you… as well as the occasional optional puzzle which drives home why you don’t mix locked cameras with 3D environments which don’t follow isometric grids.

Beyond that, I’m not a huge fan of how, if I miss a collectable in a secret area, I’ll have to start a new game to get it because I can’t backtrack past certain points. I do know THAT is retro-authentic, however, so I’ll excuse it. (Even if it didn’t give “??” as the total count for certain types of collectables, I wouldn’t plan to start a new game to achieve 100%. I’ve got far too many games on my backlog to humour a cheap excuse for replayability from a more entertainment-starved era.)

It does a nice job of keeping the puzzles varied as things go on, but occasionally one will vary far enough to feel ill-fitted to the genre (though nowhere near as badly as in Fez), such as suddenly having to play an easier variation on Lights Out to progress.

Overall, the main glaring flaw is the storytelling, which has a very “I get the impression there’s a story, but it’s making me guess at what it is and I’m just here for the puzzles” feel to it. First, the intro has you pick a gender and color for a generic-looking kid, then spend maybe a minute walking to the Tron scanner before the actual game starts. It’s pointless, feels very tacked on, and makes a very poor first impression. Second, once you’re in the game proper, you occasionally encounter some mysterious Black Mage-y characters who feel like they should have significance, but instead just serve as props to set up puzzles.

More subjectively, I’d also have preferred if it were pixel-art rather than 3D. Whatever it is that my childhood has left me wanting from these games is intimately tied to the distinctive isometric look that I would sometimes glimpse. (And, given that some of the collectables are clearly referencing 8-bit micros, but it’s not aiming for a retro-authentic color palette, why not do 320×200 at 256 colors?)

Finally, it’s not a very long game. I finished it in maybe 12 hours.

All in all, I enjoyed it, but it’s nothing special so definitely wait for a discount.

Posted in Geek Stuff | Leave a comment

GUI Error Handler for PyQt 5.x

When I was developing programs with PyGTK, one of my favourite little things to include to make life better for users was a drop-in helper named gtkexcepthook.py which adds a GUI traceback handler for uncaught exceptions.

Well, I finally got around to porting it to PyQt 5.x for one of my more recent projects, and I’ve named it qtexcepthook.py. (what else?)
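
The heart of any such helper is just replacing sys.excepthook. A minimal sketch of the idea (this is not qtexcepthook.py itself, which does far more):

    import sys
    import traceback

    from PyQt5.QtWidgets import QApplication, QMessageBox

    def _show_traceback(exc_type, exc_value, exc_tb):
        """Show uncaught exceptions in a dialog instead of on stderr."""
        text = ''.join(traceback.format_exception(exc_type, exc_value, exc_tb))
        QMessageBox.critical(None, "Unhandled Exception", text)

    if __name__ == '__main__':
        app = QApplication(sys.argv)
        sys.excepthook = _show_traceback
        raise Exception("Demonstrate the hook")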

The original was under “The license is whatever you want.” terms so, out of respect for the original author’s intentions, I’m releasing the port into the public domain rather than putting it under a permissive license (eg. MIT) like I usually would for something like this (simple, and I want everyone to use it).

I’ve also done a lot of refactoring to make it more maintainable.

Admittedly, there’s still a little more I’d like to do, and it doesn’t have any automated tests yet, but manual testing seems to give it a clean bill of health. I also added a fallback so that, if the most complicated code does contain a bug and that bug triggers an exception, it’ll fall back to a more primitive exception-formatting mechanism (plus a traceback for the more advanced code) rather than failing entirely.

Finally (and, from a user’s perspective, most importantly), I took the liberty of splitting the old email-based option for one-click reporting of bugs out into a callback so you can swap in something more modern (eg. an HTTP POST) if you so choose.

The code contains a working if __name__ == '__main__' example which can be switched between no callback and localhost-based e-mail reporting just by swapping some comments, so it should be pretty self-explanatory. Enjoy. 🙂

Posted in Geek Stuff | Leave a comment

GIMP Plugin to Automate Setting up to Colorize Manga Pages

I’ve been making a push lately to try to get things cleaned up around here, and I came across an old GIMP plugin I slapped together when I decided to try my hand at colorizing manga pages.

The approach you take will depend on whether you’re dealing with line art (blacks stay black and white regions become color) or photo-like grayscale (whites stay white and black becomes color).

I won’t go into too much detail on the approach for photo-style images, but a common technique suggested in tutorials is to set a layer to the “Screen” blending mode and place it above the original image. You can then paint into it to change the hue and saturation of the pixels while leaving their intensity alone.

For line art, where you want the blacks to stay black, but fill in the white and half-toned regions with good-looking color, the technique I learned involves applying “Color to Alpha” to the source image, then painting under it. The blacks stay black, anti-aliased edges blend cleanly, and halftones Just Work™ as you paint in the colors.

…but setting up to do it “correctly” (ie. non-destructively, so you can easily go back and correct oversights) gets tedious when you have to do it for more than a page or two.

This GIMP plugin will set up all the layers necessary so that you can just start selecting regions and filling in colors as you please. Just install it, restart GIMP, and choose “Start Colorizing…” from the Image menu. (A trimmed sketch of the plugin’s overall shape follows the list below.)

I also attempted to eliminate as many sources of annoyance as I could while using it:

  • The plugin will automatically switch the image to RGB mode if it started as Grayscale or Indexed color.
  • The plugin automatically runs “Color to Alpha” on the line art, then sets up a Colors layer beneath it.
  • The original image is kept, unmodified, hidden under an all-white background layer as an alternative target for selection-defining operations which don’t like transparency.
  • All layers except Colors start out locked to minimize the chance that I’ll wind up having to undo some edits because I wound up modifying the wrong layer without noticing.
  • A half-opacity “Fluids” layer is added so that fluid colorizing good enough for all the cases I ran into is as simple as painting some pure white on top of your existing color.
  • A separate “Blush Lines” layer is provided and a decent default red is provided in the “Blush Lines Color” layer using the Screen-based approach. Just cut-paste the blush lines into it and they’ll turn red.
  • For adding a soft glow to the blush, just select the blush lines, grow and feather the selection as appropriate (I think I feathered to 20 pixels back in the day), and then bucket-fill the “Blush Lines Color” into the “Blush Glow” layer.
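
For the curious, the overall shape of such a GIMP 2.8 Python-Fu plugin looks something like this heavily trimmed, illustrative sketch; the Gist linked below has the real code, which sets up every layer described above:

    from gimpfu import *

    def start_colorizing(image, drawable):
        pdb.gimp_image_undo_group_start(image)
        if image.base_type != RGB:
            pdb.gimp_image_convert_rgb(image)
        # "Color to Alpha" so the white paper becomes transparency...
        pdb.plug_in_colortoalpha(image, drawable, (255, 255, 255))
        # ...and a "Colors" layer beneath the line art to paint into.
        colors = gimp.Layer(image, "Colors", image.width, image.height,
                            RGBA_IMAGE, 100, NORMAL_MODE)
        pdb.gimp_image_insert_layer(image, colors, None, 1)
        pdb.gimp_image_undo_group_end(image)

    register("python_fu_start_colorizing",
             "Set up layers for colorizing line art", "",
             "", "", "",
             "<Image>/Image/Start Colorizing...",
             "*", [], [], start_colorizing)

    main()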

Windows users will have to look up where to put it, but, on Linux, I installed it at
~/.gimp-2.8/plug-ins/coloring_helper.py. It shouldn’t do anything platform-specific though.

The script is up on GitHub Gist if the embed doesn’t work.

Posted in Geek Stuff | Leave a comment

Fanfiction – Harry Potter and The Iron Lady

For today’s fic, a little something I found because someone else thought I already knew it. Thanks a bunch, Aura Of The Dawn.

Harry Potter and The Iron Lady by mugglesftw

This is a story that had such potential for me to love it, and it has a ton of excellent writing… but then it messes things up because the author didn’t see what their own story was trying to become.

The plot starts simply: suppose Ron Weasley’s squib uncle decided to join the military instead of becoming an accountant, and a chance encounter gave Margaret Thatcher’s frustration at Voldemort’s first reign of terror an outlet. She founds the “Committee of Magical Affairs” (or “Maggie Works”, as the members recruited from the military take to calling it) and things progress from there. They quickly discover Harry Potter’s situation and one of their members offers to adopt him, since he and his wife had been wanting a second child but complications in another childbirth might kill her.

Up until Quirrelmort is forced to act early during Harry’s first year at Hogwarts, this works beautifully and it has two major aspects which make it especially unusual and satisfying for me.

First, it’s a story that goes above and beyond to make characters interesting. For example, Professor McGonagall’s obsession with Quidditch is fleshed out with appropriate bits of flavour text. We also get to see this little treat during a snowball fight:

“We should just surrender,” Hermione said, shivering slightly in the cold.
“The Irish never surrender!” Seamus bellowed, managing to hit Fred with a snowball before he was beaten back under a deluge of icy missiles.
“But we’re not Irish!” Hermione said, dragging Seamus back to safety.
“Speak for yourself!” Neville yelled as he began digging in the snow to form a new barricade and switching to an Irish lilt. “Me mum’s maiden name was Murphy!”
“I knew there was a reason I loved you Neville!” Seamus sputtered, wiping the snow off his face.

Snape is probably the most interesting case of this, because he’s believably cast as a character you can like, without him having met Harry before he comes to Hogwarts, and without Harry having a terrible home life.

He’s still acerbic and self-interested, but didn’t let his hatred of James Potter blind him from seeing the true nature of this Harry who insists on going by his adoptive parents’ family name rather than Potter.

Likewise, when he manages to suss out the little conspiracy that Harry’s a part of, he sees a potential “third option” which won’t leave him trapped between an insane madman and a barmy old fool. It’s all done in a believable way and his loyalty to Lily’s memory over either of his supposed masters is done in a way which feels much more satisfyingly realistic than in canon, where the focus on Harry’s perception of him really crippled Rowling’s ability to develop him as a character.

The second thing this first act does well is that it’s a story explicitly focused on merging the two worlds.

Harry’s adoptive father, being an SAS member and part of the Maggie Works, has raised him as most fanfic authors would expect a properly on-the-side-of-good Dumbledore to. He had a good childhood, but he was given martial arts lessons from a young age, as well as carefully supervised shooting training, and he was given ready access to superhero comics likely to give him the right outlook on his abilities.

As a result, by the time Harry is told at age 8 that he’s adopted, and that he’s a wizard, they’ve developed mechanisms for hardening electronics against magical interference, and Harry winds up seeing his abilities with an appropriately childish but responsible view: like superheroes such as Spider-Man, he’s an ordinary person with special abilities and it’s his responsibility to use them to help others.

However, that isn’t at the front of your mind when you’re reading the story. (Which is a good thing, because I’m rather tired of “super Harry” stories.) More time is spent on Harry’s interactions with his friends, and on him either changing people’s minds or provoking people like Draco Malfoy by threatening their flawed worldviews.

Examples of that include introducing Ron and his siblings to non-magical entertainment like Tetris on Game Boy when they visit and Harry asking his parents to owl over his book on the Apollo missions, which causes a stir with Draco and rekindles Fred and George’s childhood obsession with space.

He also integrates Hermione into the group early, when Ron sees her use a knockback jinx against Draco and his goons on the Hogwarts Express, and befriends Neville Longbottom. I’m not sure what the magic ingredient is, but I find the resulting “golden four, not golden trio” dynamic that develops to be both quite satisfying and oddly unique.

The story puts a lot of effort into being familiar to canon, yet original… something both rare and the mark of an author who has skill in spades. From the troll being defeated by confusing it with repeated uses of Scourgify until Snape arrives, to McGonagall and Snape successfully pressuring Dumbledore to deal with Quirrelmort early and Harry’s friends being present for the drama while Harry is elsewhere and unconscious, the events leading up to Chapter 17 really do feel like a plausible alternative series of dominoes that could have fallen, given the small change at the beginning.

I’d also like to mention a few other details I have yet to see anywhere else:

First, the antagonistic relationship between Harry and Sirius. I think this is the first time I’ve ever read a decent fic where they’re at odds because Sirius refuses to accept that Harry sees two muggles as his real parents to the point where he changed his name.

Second, showing a werewolf’s first full moon from their perspective. (I won’t say who, because it’d be a spoiler.)

Third, this brilliantly creative little quote:

Ginny just shrugged. “Who knows? Luna knows all kinds of things that she probably shouldn’t, because her father doesn’t really monitor what she says or does. Do you know, he put her in muggle school for a few weeks, then forgot about it and my dad had to send in the obliviators everyone because they thought a child was missing?”

(Why is it that the typos always seem to prefer to show up in the most quotable bits?)

Broadly speaking, when it comes to the good parts, I’ve read various stories which incorporate the elements it uses, but none as satisfying. All the others either don’t go far enough, or implement them in too crude a manner.

The problems start to mount around chapter 15 when Quirrelmort is forced to make a break for it… and kills three named characters.

Don’t get me wrong. I’m not one of those people who is categorically against killing off named characters… but it just feels shallow and sloppy. From this point on, the story gets worse, then better, then much worse, because “upping the drama” crowds out everything that made the chapters before so special.

It also doesn’t help that the author seems to fundamentally misunderstand the nature of the Harry Potter setting. Rowling worked very hard to leave things like religion in the HP setting up to the reader’s imagination, with the few elements that one would associate with religion being a self-neutralizing mish-mash of secular contemporary elements and things you’d expect to keep pagan beliefs alive. (eg. Ron Weasley saying “Happy Christmas, Harry” and being focused on the presents rather than talking about Yule, while things like ghosts and the Deathly Hallows suggest that, if anyone’s right, it was the pagans.)

Most good fanfiction authors (and even the mediocre ones) either preserve that feel or build a suitably canon-compatible conception of “wizarding paganism”. This story, on the other hand, is clearly written by a Christian who didn’t stop to think about whether the elements of their faith would be compatible with the elements shown in canon.

For example, chapter 18 feels like a phoned-in Christmas special of sub-standard writing quality and, aside from the Weasleys using “Merry Christmas” multiple times and acting as if they were at some kind of non-magical vernacular cram session while off-camera, they also talk about “praying for” people… something that lends a distinctively “American Christian” feel to these British members of a minority who were canonically persecuted by Christians. (See “Witch Burnings”)

Yes, the latter half of the chapter is spent with muggles who are probably Anglicans, but they’re still saying “Merry Christmas” rather than “Happy Christmas” like muggle Brits do and, given that things like the bible are never mentioned in HP canon, which does depict Christmas, it still feels gratuitous and unnecessary and contributes to making the chapter feel out of place. My advice is to just skip it. You won’t be missing anything important as far as I can tell.

(To be honest, the overall writing quality of chapter 18 compared to earlier chapters with similar elements sort of reminds me of the difference between Rick Cook’s The Wiz Biz and its sequels. He wrote something great, but then didn’t properly understand what made it so special, so the sequels feel like cheap cargo cult copies, matching the superficial details of their successful predecessor, but without the deeper underpinnings which made it work.)

Now, the good thing is that the Christian stuff is shoehorned in and the story wouldn’t feel like it’s missing anything if you were to copy the text into your favourite word processor and delete all mentions of God, the church, and prayer. The bad thing is that, if you don’t, there is a really problematic bit where the author arbitrarily adds holy water to the list of things that can destroy a horcrux, but then writes it out for being known to strip away a wizard’s magic and, thus, kill them.

I could write an entire blog post about how fundamentally broken an idea that is in the Harry Potter setting, but, in the name of brevity, I’ll just say that it reminds me of a particularly sick and twisted piece of rhetoric from American Christians that basically says “If you use God’s power to murder someone, you’ll go to heaven. If you use magic not of God to heal someone, you’ve earned eternal damnation.” (That is, that even the most sick and depraved things are moral if God says so, but power not from God will damn you no matter how virtuous a use you put it to.)

I’m also worried about how the author presents a conclusion Dumbledore and company have jumped to that Harry gave up The Potter Family Magic™ and made Neville the Boy Who Lived when he took on his adopted family’s name. It’s bad enough that unbreakable vows exist in Harry Potter canon (where there’s ceremony and some degree of knowledge that you’re making a consequential decision) without doubling down on what makes them a problem. I’ll admit that it’s possible that Dumbledore and friends are mistaken and grasping at straws, but the way reminders of it are paced makes it feel more like foreshadowing. Leave this “names have power that you can use to hang yourself without realizing it” stuff to settings like The Dresden Files which are supposed to be inherently dark. (Especially when it’s such a stupid idea to imply that all canon Harry had to do to achieve being “just Harry” like he wanted was to literally cast off the Potter name at the horrendous cost of “becoming ‘just Harry'”.)

However, fundamentally, the problem with the story is that it has three phases: Great, Good, and Mediocre… in that order.

After Quirrelmort makes a break for it, the story just isn’t the same. The parts which showed the most promise (like the battle of ideas between Harry and Draco, the Weasley Twins’ interest in rocketry, and introducing Harry’s friends to the muggle things they’ve been missing out on) get crowded out, some parts (eg. chapter 22) feel rushed, and the plot swings in a direction that I’ve already read a million times before and grown tired of.

Then, near the end, it kicks out into open war against the Voldemort-infiltrated ministry and muggle Britain’s emergency broadcast provisions are used to shatter the Statute of Secrecy.

I have to admit, despite not being the “Military, F*** Yeah” type, I did find it satisfying to see the incredibly rare twist of having open conflict break out between a secret branch of the British muggle military and aurors sent by the Voldemort-infiltrated Ministry of Magic. However, while it is satisfying in the moment to see the muggle side use their emergency broadcast provisions to shatter the Statute of Secrecy, I’ve yet to see a fic which survives such a drastic change. Given that the final author’s note makes it clear that the sequel will be even more different, I’m not sure if it would interest me but, if I do decide to read it, I don’t have high hopes. Even before you include the implications of all-out world war, chapter 52 already feels far too similar to various other fics I read and found wanting. (Mostly male power fantasies with horribly simplistic views of human nature and the causes of our social problems, which enable the hero to become a benevolent dictator.)

When you break from canon that drastically, you “cut the umbilical cord”. Just counting the ones that are either pure Harry Potter or have it as the primary story in a crossover, I’ve read over a thousand of these fics and only one of them readily comes to mind as being written with the requisite skill to pull that off and survive. (The Pureblood Pretense series)

That said, I can at least try to analyze the problem, and it seems to stem from three issues: First, it’s just plain difficult (possibly impossible) to properly prepare readers for such a drastic shift in the kind of story that’s being told. Second, when you have a story that’s had this kind of character focus, it’s like threading a needle to acknowledge something of such a massive scope as the outbreak of a world war while still maintaining a healthy character focus, and this didn’t feel like it pulled that off. Finally, Harry Potter fanfiction is almost universally written around a worldview that works in pure fantasy and some kinds of science fiction, but is unacceptably simplistic when applied to a contemporary setting where history and personal experience make it abundantly clear how complex human society is.

The story would have flowed much more naturally if chapter 52 had been omitted and the relevant details were fed to the reader slowly, as they were made known to Harry and company. Heck, the best solution would have been to follow the slow-burn progression in the Queen Who Fell To Earth series. It’d have allowed what worked so beautifully about the pre-Quirrelmort chapters to continue to shine.

I think the biggest thing the story does get right is the amount of time and effort spent on characters and ideas who/which were present in canon, but could have been developed more. I didn’t notice many technical errors, but there are a few. The most consistent one seems to be using “Mrs.” to address or refer to characters like Nymphadora Tonks when “Ms.” seems to have been intended.

Overall, I think I have to give it a 3.7 out of 5 (0.7 on a more intuitive scale from -2 to +2). The early stuff is at least a 4.5 out of 5, chapter 18 is a 2.5 out of 5, what follows is more of a 4.0 out of 5, and the breakout of hostilities at the end is a 3.5 at best.

(While the early stuff has a spark and uniqueness, the drama which takes the forefront following Quirrelmort’s break lacks that spark and isn’t particularly novel, regardless of how well-executed it is.)

If it weren’t so easy to edit out the out-of-place religious stuff without leaving any traces that something had been removed, I’d have rated it even lower.

Posted in Geek Stuff | Leave a comment