Displaying An Image or Animated GIF in Qt With Aspect Ratio-Preserving Scaling

When it comes to me and organizing images, GQView (now Geeqie) has always been a “best of a bunch of bad options” sort of thing and, with my move off Kubuntu 14.04 LTS, it’s become downright unusable in some cases. (eg. freezing up at 100% CPU for several minutes to load certain collections)

As a result, I’ve been pushed to prioritize my efforts to replace at least the bare minimum subset of that functionality and, since I don’t want to rely on gtk3-mushrooms to make my own creations tolerable to me and Rust doesn’t have mature Qt bindings, that means PyQt5.

It’s not perfect (Qt doesn’t have incremental loading like GdkPixbufLoader, so I have to rely more heavily on my prototype code for asynchronously loading upcoming images in the background while I dawdle looking at the current one) but it’ll have to do… and I’ve filed a bug about that.

Now Qt has always been weird about how to get a displayed image to preserve its aspect ratio properly. It’s probably the one really glaring oversight in an otherwise very nicely designed and documented set of APIs. Given how much I had to fiddle around with things, I decided that I definitely wanted to share what I came up with.

What made it more difficult is that I’ve always wanted a GQView-alike which also displays animated GIFs with their animation, and Qt doesn’t have a unified solution for that. (QImage handles static images and QMovie handles GIF and MNG, but not actual movies, which you need to use the multimedia backend for.)

It turns out that getting smooth upscaling with QMovie is tricky in itself because it’s very easy to accidentally build a widget tree that does the upscaling at a point in the pipeline where fast/ugly upscaling gets used, so a big thanks to Spencer on Stack Overflow, who figured it out.

Anyway, enough talk. Here’s the code:

(Yeah. I was too eager to post it, so this prototype hasn’t actually been split into the design which allows me to put the cache after the images get decoded. Still, it should be useful for most people.)
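For anyone reassembling the widget themselves, the heart of it is the aspect-fit calculation: the same one Qt performs for Qt.KeepAspectRatio, and the size you would hand to QMovie.setScaledSize() from resizeEvent so the scaling happens at a point where smooth interpolation gets used. A minimal sketch in plain Python (fit_within is an illustrative helper name, not a Qt API):

```python
def fit_within(src_w, src_h, box_w, box_h):
    """Return the largest (w, h) with the source's aspect ratio that
    fits inside the box.

    This mirrors what Qt does for Qt.KeepAspectRatio (and what
    QSize.scaled() computes), and is the size to feed to
    QMovie.setScaledSize() on each resize so the smooth scaler is used.
    """
    scale = min(box_w / src_w, box_h / src_h)
    # Never return a zero dimension, even for a degenerate box.
    return max(1, round(src_w * scale)), max(1, round(src_h * scale))
```

In a QLabel subclass, you would then call something like self.movie().setScaledSize(QSize(*fit_within(…))) whenever the widget resizes.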

Posted in Geek Stuff | Leave a comment

On Dehumanization In Fiction

I have to admit it… I have a lot of drafts kicking around in my notes which most people would consider to be perfectly good blog posts but which, for me, were just flashes of inspiration that I wrote down to avoid losing them, but I never felt were “finished”.

While I was looking through the snips I’m accumulating for a book on writing, I rediscovered a couple which, looking at them now, are good enough to share, even if I still feel that there’s more insight to be teased out and more room for the style to be polished.

Dehumanization is at the heart of some of the most effective dark writing.

What hits harder than cruelty? Casual cruelty.
What hits harder than casual cruelty? Institutionalized cruelty.

Dropping a man in the wilderness, hundreds of miles from the nearest human, will cause hardship, but, if you want a man to despair, drop him into the heart of a big city, penniless, alone, and ignored by all who pass… and that’s just from cruelty by neglect.

Humans are social animals to our very core, and slavery is abhorrent precisely because it’s institutionalized cruelty at its most powerful… forcing the reader to not only observe active dehumanization on a mass scale, but to confront how flawed their optimistic preconceptions of human nature are in a way that rings too true for them to deny.

(Humanity’s social nature is also why solitary confinement is considered torture in many places, but this is much more difficult to communicate to someone who hasn’t experienced it personally.)

Looking at it from another angle, it’s also so powerful because of the specific kinds of emotions it evokes in the reader/viewer via their sense of empathy. It’s not just that the character is experiencing misery or defeat or isolation, it’s that their circumstances evoke a sense of despair AND powerlessness, futility AND hopelessness.

Most telling, I think, is how Chip Conley pseudo-mathematically expressed despair: suffering without meaning… and isn’t that also the perfect starting point for a definition of my own term, “Hardship Porn”? (Fiction where, through intent or incompetence, the author seems to revel in making their hero’s life miserable, not because it makes the writing more powerful but just to gratify some emotional need.)

I also made a related observation that slavery is powerful because it tends to lend itself to two kinds of atrocities which fall under the other major class of violations we readily recognize: the sanctity of self.

To wilfully and permanently disfigure someone’s body against their desires, or to attack their very psyche, is the most personal form of dehumanization possible… denying you control over the only things that are unarguably, undeniably, unquestionably your own and attacking your thoughts, the one hiding place nobody should ever intrude upon… let alone tamper with. It is no accident that, as a species that thinks in metaphor, we often refer to the body as a temple and the mind as a sanctum.

Posted in Writing | Leave a comment

How to Keep Humans From Seeing Your reCAPTCHA

I don’t know how many people know this, but reCAPTCHA is a major pain if you’ve configured your browser to prevent Google from doing things like setting tracking cookies or fingerprinting your <canvas>. Sometimes, it’ll take me a minute or more before the bleeping thing lets me through.

So, for my own sites, I’m very reluctant to make people fill out CAPTCHAs. (Plus, there’s also an aspect of “Is this what we’ve been reduced to? Taking for granted that we must constantly pester legitimate users to prove that they’re human because we’re letting the bad actors set the terms of engagement?”)

Note that I will not be covering the pile of techniques that require JavaScript to implement because, as a dedicated uMatrix user, I find those to also be annoying, though nowhere near as much as reCAPTCHA.

So, let’s think about this problem for a second. What can we do to improve things by reducing the need to display reCAPTCHA?

Well, first let’s think about the types of spam we’re going to receive. I’ve noticed two types, and I’ll start by addressing the kind CAPTCHAs don’t prevent:

Human-Sent Spam

Believe it or not, several times a year, I would receive spam that’s clearly been sent by a human, trying to promote some shady service they think I’ll want (typically SEO or paid traffic).

I tried putting up a message which clearly states that the contact form on this blog is not for this sort of message, but I still occasionally get someone who ignores it… so what more can be done?

Well, I can’t do it with my current WordPress plugin but, for my other sites, how about trying to make sure they actually read it, and making it sound scarier for them to ignore it?

The simplest way to do this is to add a checkbox that says something like “I hereby swear under penalty of perjury that this message is not intended to solicit customers for any form of commercial service” like I did for the GBIndex contact form.

Since you’re guarding against an actual human this time, using a normal browser, you don’t even need any server-side code. Just set required="required" in the checkbox’s markup and their browser will refuse to submit the form until they check the box, drawing their attention to it, which is exactly what we want.

Of course, you want it to be clear that it’s not toothless stock text, so there are two other things you should do:

  1. Don’t just copy-paste my phrasing. Identical text is only good in such a declaration if the readers associate consistency with “this has the force of law and has been tested in actual court cases” rather than “this is a stock snip of HTML from www.TopHTMLSnips.blort”
  2. Include a highly visible message somewhere on the page which makes it clear that, if they just blindly check the box, you’ll report whatever they’re promoting to their service providers (domain registrars, web hosts, etc.) for Terms of Service violations.

    (and do follow through. For example, use the global WHOIS database to identify the domain registrar, then use the registrar’s “Report Abuse” link in their site footer or support section. Then use the registrar’s WHOIS lookup service to identify the nameserver provider and use their “Report Abuse” link. If you think the hosting may be with a shared hosting provider different from the nameserver provider, you can use techniques like doing a DNS lookup on the domain, then reverse DNS lookups on the resulting IP addresses.)

You could also put a Bayesian filter to work on your inbox, but I’m always wary of false positives and don’t want to have to sift through a spam box periodically, so I try to avoid that… and this works well enough.

OK, so, with that out of the way, let’s get to what CAPTCHAs are meant to stop…

Bot-Sent Spam

There are two kinds of bot-sent spam. Stuff meant to be read by humans, and stuff meant to be read by machines. Since some of the techniques used for preventing machine-targeted spam also help to stem the tide of stuff aimed at humans, we’ll address those first.

In both cases, you can certainly apply a Bayesian filter but, as with human-sent spam, I aim for something more deterministic.

Machine-Readable Bot Spam

Machine-readable spam is spam intended to evoke a reaction from another machine. The most typical example of this is manipulating search results by scattering links to their garbage all over the web.

The key to combating machine-readable spam is recognizing that, if the target machine can understand the important characteristics of the message, so can your spam-prevention measures.

1. Block Link Markup

The first layer of protection I like to apply is to detect disallowed markup and present a human-readable message explaining what changes must be made for the message to be accepted.

For example, in my contact forms, which are going to be rendered as plaintext e-mails, the spam that gets submitted comes from bots that mistake them for blog comment fields, and 99% of that can be killed simply by disallowing </a>, [/url], and [/link] in messages, and instructing users to switch to bare URLs.

This is mainly about making the reCAPTCHA less necessary, meaning that you don’t have to trigger it as aggressively, but it also has the added benefit of ensuring that legitimate messages look nicer when I read them.

Spambots can submit bare URLs to get around this, but they generally don’t because it would make their SEO-spamming less effective on sites which don’t block URL markup and my site is nowhere near important enough to get a purpose-built spambot. (And, even if it did, I’d want to keep the check to correct legitimate users’ misconceptions about what markup will actually get interpreted when I see their message.)
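A minimal sketch of that first layer in Python, with the token list and rejection wording as illustrative placeholders:

```python
def check_markup(message):
    """Reject messages containing hyperlink markup, returning a
    human-readable explanation, or None if the message is acceptable.

    Matching only the closing tags is enough: a bot that opens a link
    also closes it, and legitimate users rarely type them at all.
    """
    banned = ('</a>', '[/url]', '[/link]')
    lowered = message.lower()
    for token in banned:
        if token in lowered:
            return ("HTML and BBCode links are not accepted here. "
                    "Please resubmit using bare URLs instead.")
    return None
```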

2. Detect URLs

A tiny fraction of the spambots I see do submit bare URLs, and we don’t want a solution which will become ineffective if applied broadly enough for spammers to adapt, so the next step is to handle the grey areas… the stuff that has legitimate uses, but also spammy ones.

The simplest way to handle this is to match on a string of text that’s essential for any sort of auto-hyperlinking to function, and then trigger stronger scrutiny (eg. reCAPTCHA) as a result.

For this, I use a regular expression, something like (http|ftp)s?:// because my regex is shared with other functionality, but a simple string match on :// would probably do the trick while also catching “let the human change it back” obfuscation attempts like hxxp:// in spam meant only to be read by humans.

I haven’t encountered any spam which uses URLs without the scheme portion but, if you want to guard against auto-hyperlinkable URLs of that form, also check for www.
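A sketch of both checks in Python (the www. pattern is a guess at what most auto-linkers will pick up, so treat it as a starting point):

```python
import re

def needs_scrutiny(message):
    """Flag a message for stronger checks (e.g. a CAPTCHA) if it
    contains anything auto-hyperlinkable."""
    # Catches http://, ftp://, and obfuscations like hxxp:// alike.
    if '://' in message:
        return True
    # Schemeless URLs that auto-linkers typically still recognize.
    return bool(re.search(r'(?:^|\s)www\.', message, re.IGNORECASE))
```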

3. Do some simple sanity checks on the text

Spambots tend to be written very shoddily, so they submit some stuff so broken it’s funny at times. (One bot tried to submit the un-rendered contents of the template it was supposed to use to generate spam messages.)

A few times a year, I would get one such submission which was clearly a variation on common SEO-spam I was already blocking… but it had no URLs in it… just the placeholder text meant to pad out the message.

I decided to block that by adding the following check, which takes maybe three or four lines of code:

  1. Split the message up by whitespace (explode in PHP, split in Python or JavaScript, etc.)
  2. If the splitting function doesn’t support collapsing heterogeneous runs of whitespace characters (*cough*JavaScript*cough*), ignore any empty/whitespace-only “words”.
  3. Count up the words which do and don’t contain URLs (:// or whatever)
  4. If there are fewer than some minimum number of non-URL words or the percentage of non-URL words relative to URLs is too low, reject the message with something like “I don’t like walls of URLs. Please add some text explaining what they are and why you’re sending them to me.”
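The steps above can be sketched in a few lines of Python (min_words and min_ratio are illustrative starting values, not tested-in-anger thresholds):

```python
def check_text_ratio(message, min_words=3, min_ratio=3.0):
    """Reject too-short or URL-heavy messages; return an error string
    on failure, or None if the message passes.

    min_ratio is the number of non-URL words required per URL.
    """
    words = message.split()  # str.split() collapses whitespace runs
    urls = sum(1 for word in words if '://' in word)
    non_urls = len(words) - urls
    if non_urls < min_words:
        return "Message too short. Please write at least a full sentence."
    if urls and non_urls / urls < min_ratio:
        return ("I don't like walls of URLs. Please add some text "
                "explaining what they are and why you're sending them.")
    return None
```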

Admittedly, some bots use blocks of text stolen from random blogs as padding, which will pass this test, but the point is to whittle away the lazier ones. Also, it can’t hurt, because you’re guarding against stuff you wouldn’t want from a human either:

  1. There’s a minimum length below which a message probably isn’t worth the effort to read. (For ongoing conversations, this will be low, because you want to block things like “+1” and “first” but allow things like “Looks good to me” but, for forms that only handle the initial message, like e-mail forms or the “new topic” form on a forum, the minimum can be higher. I advise “at least three words” as the limit for the ongoing case because “subject verb object”.)
  2. A human can easily pad out a too-short message and re-submit, but a bot won’t know what to do.
  3. It’s rude to send text that’s so URL-heavy that you’re not even giving each URL a friendly title, regardless of whether it’s a bot or a human submitting them.

WebAIM also suggested checking whether fields which shouldn’t be the same contain identical data. I don’t know if spambots which do that to unrecognized fields are still around, but I don’t see how it could hurt… just be careful to avoid the particular firstname/lastname example they gave, where sheer probability suggests that you’ll encounter someone with a name like “James James” or “Anthony Anthony” eventually. If nothing else, maybe it’ll catch lazy humans trying to fill in fake account details.

(Note that all of these sanity checks are structural. We don’t want to resort to a blacklist.)

4. Add a Honeypot

Bots like to fill out form fields. It minimizes the chance that the submission will get blocked because one of the fields is required. This is something else we can exploit.

The trick is simple. Make a field that is as attractive to the bot as possible, then tell the humans not to fill it out in natural language which the bot can’t parse. The things to keep in mind are:

  1. Don’t hide your honeypot field from humans using display: none in your CSS. Bots are getting good at parsing CSS.

    Instead, push it off the left edge of the viewport using position: absolute; so the bot has to assume that, by filling it out, it’s taking a shortcut around clicking through some kind of single-page wizard.

    (Under that rationale, you could also try hiding it using JavaScript. The important thing is to recognize that good spambots are as smart as screen readers for the blind… they just can’t understand natural language like the human behind the screen reader can.)
  2. Name your honeypot field something attractive, like url or phone or password. (url is a good one for e-mail contact forms, because you’re unlikely to need an actual URL field and that’s what WordPress’s blog comment form uses.)
  3. Set autocomplete="off" on the field so the browser won’t accidentally cause legitimate users to fail the test.
  4. Set tabindex="-1" or, if spambots start to get wise to that, explicitly put it after everything else in the tabbing order including the submit button. That way, if it becomes visible (eg. you’re hiding it using JavaScript and JavaScript is disabled) or the user’s screen reader allows them to get into it despite it being hidden, it won’t interfere with filling out the form.
  5. Use a <label for="name_of_the_field"> to provide the message about not filling it in so that assistive technologies can reliably present the message to the human.
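The matching server-side check is a one-liner. A sketch, assuming a dict-like mapping of submitted field names to string values (as most web frameworks provide) and a honeypot named url per point 2:

```python
def honeypot_tripped(form):
    """Return True if the honeypot field was filled in, which only a
    bot (or a badly misbehaving autofill) should ever do."""
    return bool(form.get('url', '').strip())
```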

Also, consider going light on the HTML5 validation in your other fields. I’ve heard people say that it helps to stop spambots, but I’m not sure how long ago that was and it’s never good to expose the rules defining valid input for a bot to learn from when you could be keeping them server-side and only explaining them to legitimate users in natural language.

I’ve seen multiple suggestions to scramble up the field names for your form fields, so name="url" actually expects a valid e-mail and so on, but this harms maintainability for your code and could scramble up the form auto-fill in browsers like Chrome, so I’d only do it if necessary.

5. Do some simple sanity checks on the user agent

I haven’t needed to do this on the sites I wrote myself (the previous techniques were enough) but, if you need more (or if you’re using something PHP-based like WordPress where you can just hook up Bad Behavior and call it a day), here are some other things that bottom-of-the-barrel spambot code might get wrong:

  1. Still using the default User-Agent string for whatever HTTP library they use. (eg. cURL, Python’s urllib, etc.)
  2. No User-Agent string.
  3. Typos in the User-Agent string (eg. whitespace present/missing in the wrong places or a typo’d browser/OS name)
  4. Claiming to be some ancient browser/OS that your site isn’t even compatible with
  5. Sending HTTP request headers that are invalid for the HTTP protocol version requested (added in a later version, only allowed in earlier versions, actually a response header, etc.)
  6. Sending the User-Agent string for a major browser but sending request headers which clearly disagree. (eg. not Accept-ing content types that the browser has had built-in support for since the stone age.)
  7. Not setting the Referer header correctly (but be careful: extensions like uMatrix may forge this to always point to your site root to prevent tracking, so accept either the expected value or the values that known privacy extensions forge.)
  8. Sending request header values that aren’t allowed by the spec
  9. Sending custom headers that are only set by unwanted user agents
  10. Obvious signs of headless browsers.
  11. Adding/removing unexpected GET parameters on a POST request. (When you submit via POST, it’s still possible to pass things in via query parameters, so sanity-check that… just be careful, if you’re verifying on the GET request which loads the form, to account for things other sites might add on the off chance that you use something like Google Analytics.)
  12. Adding/removing unexpected POST parameters. (If a bot is trying to take shortcuts, you might see it missing or filling things a real user wouldn’t.)

…and, of course, sanitize and validate your inputs. (eg. WebAIM points out that spambots might try e-mail header injection, which would be a sure-fire sign of a malicious actor that you can block.)
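As a sketch of the first two checks (the list of default library User-Agent prefixes is illustrative and far from complete):

```python
# Default User-Agent prefixes for common HTTP libraries; extend as
# you observe more in your logs.
DEFAULT_LIBRARY_UAS = ('curl/', 'wget/', 'python-urllib',
                       'python-requests', 'libwww-perl', 'go-http-client')

def suspicious_user_agent(ua):
    """Flag requests with no User-Agent at all (check 2) or the default
    string from a common HTTP library (check 1)."""
    if not ua or not ua.strip():
        return True
    lowered = ua.lower()
    return any(lowered.startswith(prefix) for prefix in DEFAULT_LIBRARY_UAS)
```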

I’m reluctant to suggest rate-limiting or IP blacklisting as a general solution, since rate-limiting requests is more for protecting against scraping and it’s easy for spammers to botnet their way around IP blacklists while leaving a minefield of blacklisted IPs for legitimate users to receive from their ISP the next time they disconnect and DHCP gives them a new IP address. (Plus, I can’t be the only person who middle-clicks one link, waits for it to load, middle-clicks 10 in rapid succession, and then reads the first while the other ten load.)

However, rate-limiting HTTP POST requests probably is a good idea. I may do a lot of things in parallel, but I’m not sure I’ve ever submitted multiple POST forms on the same site within a five-second window. Heck, even “Oops. I typo’d my search. Let’s try again.” may take longer than five seconds. (And that’s usually a GET request.)
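A minimal in-memory sliding-window limiter for POST requests might look like this (per-process only, so treat it as a sketch rather than production code; the one-submission-per-five-seconds default follows the reasoning above):

```python
import time
from collections import defaultdict, deque

class PostRateLimiter:
    """Allow at most `limit` POST submissions per client per `window`
    seconds, tracked in memory with a sliding window."""

    def __init__(self, limit=1, window=5.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(deque)  # client_id -> recent timestamps

    def allow(self, client_id, now=None):
        now = now if now is not None else time.time()
        recent = self.hits[client_id]
        # Drop timestamps that have aged out of the window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.limit:
            return False
        recent.append(now)
        return True
```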

Speaking of crawling, bots have to find your form somehow. While I doubt rate-limiting is going to be useful enough to be worthwhile, what I would suggest is to blacklist robots from your forms using robots.txt and then, using an identically-structured rule, also blacklist a link which immediately blacklists any IP which requests it. This will stop bots which are not only ignoring robots.txt, but using it to find forms.

I’d also suggest adding a link to a “Click here to blacklist your IP address”-style page so spambots which don’t read robots.txt at all can still get caught but curious users who find the link don’t blacklist themselves by accident. (Just remember that the same guidelines apply as for the honeypot field. Don’t display: none or visibility: hidden to hide it because spambots may be wise to that. Thanks to fleiner.com for this idea.)

Measuring the time between loading the page and posting can also be helpful, but you have to be very careful about your assumptions. Measure how long it’ll take a user to load/reload the page (on a really fast connection with JavaScript and external resources disabled) and then paste some text they wrote previously. (eg. I tend to compose my posts in a separate text editor because I haven’t found a form recovery extension I like.)

If you decide to do that, you’ll want to make sure that the bot can’t just change the page-load timestamp. There are two ways I can see to accomplish that:

  1. If your framework supports it, regenerate the CSRF token every time the page containing the form is loaded and, when the form gets submitted, check that the token you receive was generated at least X amount of time ago. (3 seconds is a good starting value)
  2. If you can’t do that for some reason, use something like HMAC to generate a hash for the timestamp and then send both the timestamp and hash to the client in a hidden form field. Without the secret key you’re holding, the bot can’t change the timestamp without invalidating the hash.
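The second option can be sketched with nothing but Python’s standard library (SECRET_KEY here is a placeholder; use a real random secret in practice):

```python
import hashlib
import hmac
import time

SECRET_KEY = b'replace-with-a-real-random-secret'  # placeholder only

def sign_timestamp(ts=None):
    """Generate the (timestamp, hash) pair for the hidden form field."""
    ts = str(int(ts if ts is not None else time.time()))
    mac = hmac.new(SECRET_KEY, ts.encode(), hashlib.sha256).hexdigest()
    return ts, mac

def verify_timestamp(ts, mac, min_age=3, now=None):
    """Check the hash is genuine and the page was loaded at least
    min_age seconds before submission."""
    expected = hmac.new(SECRET_KEY, ts.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mac):
        return False  # the timestamp was tampered with
    now = now if now is not None else time.time()
    return (now - int(ts)) >= min_age
```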

Another trick similar to a CSRF token is to serve up an image (like a tracking pixel, but served locally so it doesn’t get blocked) from a dynamic route. When the route handler gets called, have it make a note of the current CSRF token for the session. Then, when the form is submitted, and after checking that the CSRF token is present and valid, verify that the image was loaded and the CSRF token at that time matches the current CSRF token.

That’ll block any bot that tries to save time and bandwidth by not attempting to load images. It’s similar in concept to some of the JavaScript checks, but the odds that a legitimate user who disables JavaScript will also disable the loading of images are minuscule. (Thanks to Alain Tiemblo for the idea)

6. Prefer Structured Input

If you’re accepting submissions for a custom site, rather than just slapping up a basic comment form, structured input isn’t just a way to let submitters do some of the legwork for you.

Every additional field is another opportunity to trip the bot up by expecting it to auto-fill something that can’t be satisfied by randomly generated garbage or plagiarized snippets of someone else’s blog and has requirements only explained in human-readable text.

Structured input also makes your form look less like a blog comment or forum reply form, which may help to deter some smarter spambots.

7. Use Multi-Stage Submission

This one was suggested by WebAIM. The idea being that, if your form enters the submission into the database in some kind of draft form which will time out if not confirmed, and then returns a “Here’s a preview of how your submission will look. Please check it for errors” page that doesn’t contain the submitted fields but, rather, a submission ID and a “Confirm” button, the spambot may not be smart enough to complete the process.

I like this idea because it doesn’t feel like a CAPTCHA or an anti-spam measure to the end user… just a reasonable thing to ask the user to do to make life a little more convenient for whoever’s going to see what was received. (Plus, I find that having a preview separate from the editor helps me to notice my mistakes more readily.)

Human-Oriented Bot Spam

If you’ve ever actively followed a large site that uses Disqus for its comments, you’ve probably noticed that, before the moderators get to them, spam comments which slip through are trying to outwit spam filters by using look-alike characters. Unfortunately, due to limitations in how WordPress handles Unicode, I can’t show you an example of such a thing. (See here)

Now, if the spammer is still keeping the URLs in a form that can be clicked or copied and pasted, you may not need this… but if you can’t afford to require users to fill out a CAPTCHA every time they post, the Unicode people have developed what’s known as the TR39 Skeleton Algorithm for Unicode Confusables.

The basic idea is that, with the help of a big table, the algorithm can be implemented in your language of choice (and people have done so… usually under some variant of the name “confusables”; PHP’s intl extension includes one named Spoofchecker) and you can then go skeleton(string_1) == skeleton(string_2) to compare them without the obfuscation.

That said, it’s not quite that simple. The skeleton algorithm intentionally does not duplicate the process of normalizing uppercase vs. lowercase or ignoring combining characters, so you’ll need to do those first as preprocessing steps.

While I haven’t exhaustively tested it, my intuition is that this is the best way to skeletonize your text for spam detection:

  1. Normalize to NFKD and strip combining characters. (Eevee’s The Dark Corners of Unicode has a Python example and explains why you normally don’t want to do this, but the same issues apply to the TR39 skeleton algorithm itself, so it should be fine here.)
  2. Lowercase/uppercase the strings to be skeletonized (Do this after normalizing in case there exist precomposed glyphs with no alternative-case forms in the locale you’re operating under)
  3. Strip out all whitespace characters (To prevent things like “m a k e  m o n e y  a t  h o m e” and remove hidden spaces such as zero-width joiners)
  4. Run the TR39 skeleton algorithm on both strings.
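Steps 1 through 3 need nothing beyond the standard library; step 4 still requires a confusables implementation, so it’s only indicated in a comment here:

```python
import unicodedata

def pre_skeleton(text):
    """Apply preprocessing steps 1-3 before the TR39 skeleton (step 4)."""
    # 1. Normalize to NFKD, then strip combining characters.
    decomposed = unicodedata.normalize('NFKD', text)
    stripped = ''.join(c for c in decomposed
                       if not unicodedata.combining(c))
    # 2. Case-fold after normalizing, in case a precomposed glyph has
    #    no alternative-case form until decomposed.
    folded = stripped.casefold()
    # 3. Strip whitespace, plus zero-width characters (ZWSP/ZWNJ/ZWJ)
    #    that str.isspace() does not cover.
    hidden = {'\u200b', '\u200c', '\u200d'}
    result = ''.join(c for c in folded
                     if not c.isspace() and c not in hidden)
    # 4. (Not shown) feed `result` to a TR39 skeleton implementation,
    #    e.g. a "confusables" package or PHP's Spoofchecker.
    return result
```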

Your strings should now be ready for use as input to whatever system you want to use to assess the probability of spam. (Check out this StackOverflow question if you want to train your own classifier and don’t have a spam corpus handy.)

Posted in Geek Stuff | Leave a comment

Getting Over It With Bennett Foddy: A Somewhat Belated Commentary

Yes, I’m sure that anyone who cares has probably already seen a million rage reaction compilations on YouTube, but this isn’t about that.

Rather, it’s about how different my experience has been so far, since I decided to try the copy I got in a Humble Bundle, and the observations that stem from that.

For people who aren’t familiar with it, Getting Over It With Bennett Foddy is a game where you have to climb a mountain while being hamstrung by odd controls, and there are plenty of opportunities to lose a lot of progress to a single mistake along the way.

While you’re doing it, you hear periodic narrated commentary from the game designer. Now, what everyone probably remembers best in reaction videos is the bits of commentary which are pretty obviously designed to troll players who are prone to raging. Calm encouragements purpose-built to provoke responses such as “I’d like to see you try!” and useless advice such as a reminder that you’ve already done this once, so just do the same thing again.

However, sprinkled among those, as rewards for reaching new progress milestones, are bits of philosophical commentary on the nature of Internet culture which I find surprisingly engaging… but I’m getting ahead of myself…

My first encounter with this game was when it showed up on my subscription to James & Mike Mondays… It looked like the kind of game I’d hate, but I was curious enough that I decided to watch a few other videos before I put it out of mind as just another “not in GOG.com’s guaranteed DRM-free catalogue. Not something I care about.”

That was in March of 2018. Around the beginning of June, 2019, I remembered that I’d obtained a copy in a Humble Bundle and decided to try it out of curiosity.

Now, the first thing that’s very important to understand is the mindset I went into this with. I’m not a competitive person, I’m not the “spend a ton of effort training to get that perfect play-through” type, and I strive to not let myself get riled up. When I started playing this, it was purely a matter of curiosity. I just planned to see how my abilities compared to the YouTubers I’d watched, try it until the novelty wore off, and then set it aside to play something more worthwhile.

The first thing I noticed was that, the first few times Bennett tried to make me rage, I actually laughed out loud. The second, that I was doing better than the YouTubers I’d watched. The game seems to reward patience and methodical, carefully measured use of the mouse… something which probably doesn’t make for good LPing when made into a habit.

At the same time, as you progress farther, you’re introduced to more and more obstacles which require fast reactions. It reminds me of VVVVVV in that VVVVVV is exhilarating if you’re well-rested but frustrating if you’re not and it all comes down to whether you can walk the tightrope of having to move quickly, but without haste. (In VVVVVV, there are areas which seem tuned so that they must be traversed at a very specific speed that sits just in between the two speeds your tired mind prefers to gravitate to.)

In fact, when I really get into it, there’s a meditative quality to it. The more I play it, and the more I think about the blend of philosophical insight and trollish comments, the more I get the impression that the game is specifically designed to test the player in a more philosophical sense than usual… to “separate the boys from the zen”, so to speak… that it’s not a game meant to make people rage, but, rather, that getting up the mountain is secondary, and the primary challenge is one of mental discipline.

That would also fit with the dual meaning of the title. To win the game, you must achieve a state of emotional distance from it… you must “get over it”.

(Though, from the commentary, I also get the impression that it’s intended to be an homage to the design principles that went into arcade games and the early console games they inspired.)

In that sense, I don’t see it as a game that you’re supposed to try to beat but, rather, an exercise which you do a little of every day and then, when you finally find yourself on top of the mountain, it takes you by surprise. (While I won’t look up a spoiler, it does leave me curious about what note the game ends on. Does it acknowledge that potential to find yourself feeling lost and adrift after “arriving at the horizon, to find that nothing is beyond it”?)

For that reason, I think the “I’ll understand if you have to take a break” early on is actually a subtle hint that, like classic point-and-click adventure games, a wise player is supposed to play it in short stints. (In the case of a point-and-click, to sleep on the answers to puzzles. In the case of a game like this, because you need to maintain the patience and tranquility necessary to play well… and everyone starts to get sloppy and impatient sooner or later.)

That said, the game’s not perfect. Whether it’s a bug, a bad interaction with my system, or Bennett deciding to go a little too far, I’ve noticed that the game’s mouse sensitivity seems to be variable… or at least purposefully counter-intuitive.

Sometimes, I have to move the mouse a lot to get a small amount of motion when the cursor is close to the centre of the character model but, on other occasions, I find it difficult not to flail around when I’m using almost no mouse movement at all. Given that it seems to stay consistent for long periods of time, for all I know, it’s just some kind of input translation bug related to my running the Linux version fullscreened to 1920×1080 on a three-monitor desktop that’s 4480px wide. (It wouldn’t be the first time a game hadn’t been properly tested on multi-head Linux systems.)

I seriously hope that it’s not intentional, as a way to turn your ability to form muscle memory against you, because intentionally programming in such variable mouse sensitivity (so that I sometimes see the hammer whip around faster than anticipated at just the right time to knock me out of position while, other times, I see it lag just in time to make me miss)… that would be a step too far. I don’t mind the difficulty and subtle trolling, but a game’s mechanics should be fair.

In the end, I don’t know whether the game will hold my interest long enough to reach the end, given the stable of other games I can turn to when I just want a moment of “focused calm” with no hurry to “beat the whole game” (Like Hexcells, Sudoku, Tetris, Dr. Mario, and Shisen-sho), but I certainly feel richer for having played it.

Posted in Geek Stuff

Game – Lumo

I just finished playing Lumo, so I suppose I might as well review it.

When I was a kid, these were always the kinds of games I was curious about but never had (aside from Mario RPG), so I can only critique from a modern perspective… overall, it’s a charming little isometric puzzle-platformer and it worked flawlessly for me on Linux.

The game lets you choose between a modern mode with maps and infinite lives and an old school mode, but I found myself never using the maps because it leaves it up to you to figure out where you are on each one and it was easier to just remember which rooms I’d already passed through based on their appearance and what they connected to. I was, however, very thankful for the infinite lives at some points. I also appreciated the very generous choices for where I respawn on some of the longer rooms.

Playing with an Xbox 360 pad, the controls are about as good as can be expected, and I like how it lets you configure how the 45° axes of the isometric perspective get mapped to the 90° inputs of a keyboard, D-pad, or analog stick. The movement speed is OK but, given the amount of backtracking, I do wish there were a Run button, or that the game were running in an emulator so I could hold down “unlimit emulation speed” (A.K.A. fast-forward) to simulate one.

As with classic pixel-based isometric games, the perspective is locked, which makes gauging certain jumps difficult. It’s retro-authentic in a game that’s got various 80s references sprinkled throughout it, so I won’t hold that against it. If you’re not used to controlling isometric games, my advice is to use un-mapped “up is north” directions until you get to tricky jumps, then switch to “up is north-west” temporarily for those.

That said, there’s one block-pushing+hopping puzzle in the ice area (about 2/3rds of the way through the game) where the slipperiness when you’re trying to hop on the blocks, the ease of accidentally pushing them in a direction they’ll shatter, and the delay before you can respawn a new ice block combine to make some Angry Video Game Nerd-level bad design… and I’m not one to judge a game’s controls quickly. (I actually have a post on the way about the zen of Getting Over It with Bennett Foddy.)

Also, in the final area of the game, it starts to rely too heavily on spike-block mazes, which drive home how frustrating it can be when you can’t rotate the camera, the spike block is preventing you from seeing your feet/shadow, and simply brushing against a spike will kill you… as well as the occasional optional puzzle which drives home why you don’t mix locked cameras with 3D environments which don’t follow isometric grids.

Beyond that, I’m not a huge fan of how, if I miss a collectable in a secret area, I’ll have to start a new game to get it because I can’t backtrack past certain points. I do know THAT is retro-authentic, however, so I’ll excuse it. (Even if it didn’t give “??” as the total count for certain types of collectables, I wouldn’t start a new game to achieve 100%. I’ve got far too many games on my backlog to humour a cheap excuse for replayability from a more entertainment-starved era.)

It does a nice job of keeping the puzzles varied as things go on, but a few of them stray far enough to feel ill-fitted to the genre (though nowhere near as bad as in Fez), such as suddenly having to play an easier variation on Lights Out to progress.

Overall, the main glaring flaw is the storytelling, which has a very “I get the impression there’s a story, but it’s making me guess at what it is and I’m just here for the puzzles” feel to it. First, the intro has you pick a gender and color for a generic looking kid, then spend maybe a minute walking to the Tron scanner and then the actual game starts. It’s pointless, feels very tacked on, and makes a very poor first impression. Second, once you’re in the game proper, you occasionally have some mysterious Black Mage-y characters who feel like they should have significance, but instead just serve as props to set up puzzles.

More subjectively, I’d also have preferred if it were pixel-art rather than 3D. Whatever it is that my childhood has left me wanting from these games is intimately tied to the distinctive isometric look that I would sometimes glimpse. (And, given that some of the collectables are clearly referencing 8-bit micros, but it’s not aiming for a retro-authentic color palette, why not do 320×200 at 256 colors?)

Finally, it’s not a very long game; I finished it in maybe 12 hours.

All in all, I enjoyed it, but it’s nothing special so definitely wait for a discount.

Posted in Geek Stuff

GUI Error Handler for PyQt 5.x

When I was developing programs with PyGTK, one of my favourite little things to include to make life better for users was a drop-in helper named gtkexcepthook.py which adds a GUI traceback handler for uncaught exceptions.

Well, I finally got around to porting it to PyQt 5.x for one of my more recent projects, and I’ve named it qtexcepthook.py. (what else?)

The original was under “The license is whatever you want.” terms, so, out of respect for the original author’s intentions, I’m releasing the port into the public domain rather than putting it under a permissive license (eg. MIT) like I usually would for something like this (simple, and I want everyone to use it).

I’ve also done a lot of refactoring to make it more maintainable.

Admittedly, there’s still a little more I’d like to do, and it doesn’t have any automated tests yet, but manual testing seems to give it a clean bill of health. I also added a fallback so that, if the most complicated code does contain a bug and that bug triggers an exception, it’ll fall back to a more primitive exception-formatting mechanism (plus a traceback for the more advanced code) rather than failing entirely.

Finally (and, from a user’s perspective, most importantly), I took the liberty of splitting out the old email-based option for one-click reporting of bugs into a callback so you can swap in something more modern (eg. like an HTTP POST) if you so choose.

The code contains a working if __name__ == '__main__' example which can be switched between no callback and localhost-based e-mail reporting just by swapping some comments, so it should be pretty self-explanatory. Enjoy. 🙂
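To give a rough idea of the shape of such a hook, here’s a stdlib-only sketch (not the actual qtexcepthook.py code — the function names are my own, and the dialog is stubbed out with stderr so it runs headless):

```python
import sys
import traceback


def format_report(exc_type, exc_value, tb):
    """Build the report text. If the fancier formatting itself raises,
    fall back to the most primitive rendering rather than failing."""
    try:
        return "".join(traceback.format_exception(exc_type, exc_value, tb))
    except Exception:
        # Last-ditch fallback: never let the reporter crash the hook.
        return "%s: %s" % (exc_type.__name__, exc_value)


def install(reporter_callback=None):
    """Replace sys.excepthook. `reporter_callback` stands in for the
    one-click reporting hook (e-mail, HTTP POST, etc.)."""
    def hook(exc_type, exc_value, tb):
        text = format_report(exc_type, exc_value, tb)
        # The real thing would show a dialog here (e.g. a QMessageBox
        # with the traceback); stderr keeps this sketch headless.
        sys.stderr.write(text)
        if reporter_callback is not None:
            reporter_callback(text)
    sys.excepthook = hook
```

Installed early in your `main()`, any exception that escapes the event loop’s Python callbacks then lands in the hook instead of only printing to a terminal the user may never see.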

Posted in Geek Stuff

GIMP Plugin to Automate Setting up to Colorize Manga Pages

I’ve been making a push lately to try to get things cleaned up around here, and I came across an old GIMP plugin I slapped together when I decided to try my hand at colorizing manga pages.

The approach you take will depend on whether you’re dealing with line art (blacks stay black and white regions become color) or photo-like grayscale (whites stay white and black becomes color).

I won’t go into too much detail on the approach for photo-style images, but a common technique suggested in tutorials is to set a layer to the “Screen” blending mode and place it above the original image. You can then paint into it to change the hue and saturation of the pixels while leaving their intensity alone.

For line art, where you want the blacks to stay black, but fill in the white and half-toned regions with good-looking color, the technique I learned involves applying “Color to Alpha” to the source image, then painting under it. The blacks stay black, anti-aliased edges blend cleanly, and halftones Just Work™ as you paint in the colors.

…but setting up to do it “correctly” (ie. non-destructively, so you can easily go back and correct oversights) gets tedious when you have to do it for more than a page or two.

This GIMP plugin will set up all the layers necessary so that you can just start selecting regions and filling in colors as you please. Just install it, restart GIMP, and choose “Start Colorizing…” from the Image menu.

I also attempted to eliminate as many sources of annoyance as I could while using it:

  • The plugin will automatically switch the image to RGB mode if it started as Grayscale or Indexed color.
  • The plugin automatically runs “Color to Alpha” on the line art, then sets up a Colors layer beneath it.
  • The original image is kept, unmodified, hidden under an all-white background layer as an alternative target for selection-defining operations which don’t like transparency.
  • All layers except Colors start out locked to minimize the chance that I’ll wind up having to undo some edits because I wound up modifying the wrong layer without noticing.
  • A half-opacity “Fluids” layer is added so that fluid colorizing good enough for all the cases I ran into is as simple as painting some pure white on top of your existing color.
  • A separate “Blush Lines” layer is provided and a decent default red is provided in the “Blush Lines Color” layer using the Screen-based approach. Just cut-paste the blush lines into it and they’ll turn red.
  • For adding a soft glow to the blush, just select the blush lines, grow and feather the selection as appropriate (I think I feathered to 20 pixels back in the day), and then bucket-fill the “Blush Lines Color” into the “Blush Glow” layer.
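For the curious, the core of a plugin like this looks roughly as follows under GIMP 2.8’s python-fu API. This is a sketch of the central steps only (it can only execute inside GIMP, the procedure and layer names are illustrative, and the Fluids/Blush layers, white background, and layer locking are omitted for brevity):

```python
from gimpfu import *

def start_colorizing(image, drawable):
    # Colorizing needs RGB, so convert Grayscale/Indexed images first.
    if image.base_type != RGB:
        pdb.gimp_image_convert_rgb(image)

    # Knock the white out of the line art so color can show through...
    pdb.plug_in_colortoalpha(image, drawable, (255, 255, 255))

    # ...then add a "Colors" layer beneath it to paint into.
    colors = gimp.Layer(image, "Colors", image.width, image.height,
                        RGBA_IMAGE, 100, NORMAL_MODE)
    image.add_layer(colors, 1)
    gimp.displays_flush()

register(
    "python_fu_start_colorizing",
    "Set up layers for colorizing a manga page",
    "Set up layers for colorizing a manga page",
    "", "", "2018",
    "<Image>/Image/Start Colorizing...",
    "*", [], [],
    start_colorizing)

main()
```

The blacks in the line art survive “Color to Alpha” untouched, so anything painted into the Colors layer shows through the formerly white and half-toned regions.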

Windows users will have to look up where to put it, but, on Linux, I installed it at
~/.gimp-2.8/plug-ins/coloring_helper.py. It shouldn’t do anything platform-specific though.

The script is up on GitHub Gist if the embed doesn’t work.

Posted in Geek Stuff