Open Source: The Scientific Community in Technology

tl;dr Free/open-source software is to producing software as science is to accumulating knowledge… science just had a “different upbringing” and more time to build up formal structures.

Note: This was originally written in October of 2010 as an essay for my scientific reasoning course. The citation format has been modified, moving URLs into hyperlinks, and additional hyperlinks have been added to elaborate on terms discussed primarily in my textbooks, but otherwise this is identical to what was submitted and marked.

When most people look at the open source software community, their reactions typically take one of two forms: Some react with puzzlement at how such a thing can possibly work while others react with wonder at how this marvellous new form of collaboration arose, seemingly out of nothing. The truth is that, while the expression is new, the open source community itself is merely a new manifestation of the same drives, behaviours, and goals already present in a much older and more formalized institution: the scientific community.

Perhaps the biggest obstacle to recognizing this relationship is the fundamental difference in goals. While, in science, the goal is to gain the most accurate representation of the universe possible, the purpose of software development is simply to satisfy the target user group as well as possible. This difference is crucial because it completely changes participants’ motivations for seeking generality. While scientists seek generality because it improves their ability to comprehend the universe, programmers seek general appeal for their creations because it spreads the maintenance burden among a larger group of volunteers and, in the absence of formal awards, gives them an avenue by which to gain community acclaim. This is a critical difference because it means that, at some point, a project reaches a “terminal velocity” where the effort saved by a larger developer group is balanced by the effort expended managing social friction among developers with varying personalities and potentially conflicting visions. Furthermore, from a purely goal-oriented standpoint, perfect generality is impossible in the world of software as an unavoidable side-effect of human nature. Name any design element of any piece of software, and you’ll be certain to find users who disagree over it. This pattern occurs at all scales, and large-scale disagreements akin to those between supporters of the caloric and kinetic theories of heat have occurred frequently enough in the history of software to gain a typically tongue-in-cheek geek moniker from less tribal users: holy wars.i

To illustrate the commonalities between the scientific community and the open-source world, consider, for a moment, the open source phenomenon known as forking. Forking occurs when, all other avenues of conflict resolution having failed, a portion of the developer base for a project exercises their license-granted rights and leaves to form a competing project, taking a copy of the source code with them. Because of the sudden requirement to duplicate all activities performed by the existing project, forking is not undertaken lightly, but it serves as a necessary safety mechanism, ensuring that development on a software project behaves like theoretical research, tied to field-leaders only so long as their benefit to the field outweighs the cost of dealing with them.ii

As an example, compare the history of heliocentrism with that of the GNU Compiler Collection, one of the core components of any free operating system.1 By 1997, Richard Stallman had been shepherding his GNU project for over a decade and what he desired more than anything else was a stable compiler that the Free Software Foundation (FSF) could show to the world. This conservatism frustrated various developers who wanted to implement more experimental improvements to the compiler and, in the end, several nascent forks were begun.iii These forks quickly coalesced into a single fork named EGCS (Experimental/Enhanced GNU Compiler System) which, like the heliocentric model of the solar system, was initially less useful, but attracted many new participants interested in the plethora of opportunities to explore and experiment. By April 1999, EGCS had proved so successful that the FSF officially retired their fork, accepted EGCS as the new, official GCC, and adopted EGCS’s more open model for contributions.iv

It should, however, be noted that forking does not always end this way. Just as general relativity did not stop us from using Newton’s equations for calculating motion on Earth, not all forks end with one branch withering or being absorbed into the other. If the community for a project is large enough and the circumstances are right, a fork may lead to two stable projects with common ancestry serving slightly different user bases with neither group willing to expend the effort to reconcile the two ever-diverging codebases. This is the fundamental difference between science’s slowly-unifying tree and open source’s more Darwinian ever-branching, extinction-pruned one.

Typically, forks which don’t unify are formed due to irreconcilable differences between the management of the original project and a contributor or group of contributors large enough to easily maintain their own fork. Probably the most well-known example of this is the split between the GNU Emacs and XEmacs programmers’ text editors, begun when the GNU Emacs maintainers refused contributions Lucid Inc. had developed to make GNU Emacs suitable as the base for one of their products.v vi More recent examples include the forking of the Joomla web content management system from Mambo CMSvii and of the LibreOffice project from OpenOffice.orgviii when their respective developer communities felt that Mambo Inc. and Oracle weren’t acting in their best interests. As the OpenOffice.org/LibreOffice fork is still young, reconciliation à la GCC/EGCS is still a possibility.

This illustrates another commonality between the scientific and open source communities: Their social mores and the personality traits necessary for a good project leader. In the open source world, a good project manager is expected to be willing to consider new ideas and accept worthy contributions, yet have the vision and drive to complete the project alone if necessary and the wisdom to accept only contributions which won’t detract from the whole.ix In this sense, projects are more akin to fields of study than individual research projects, due to the amount of effort necessary to refactor existing work to fit into new projects. A program is a long-term endeavour and, just as science is ill-suited to fad-like behaviour, open source software development has yet to find a solution to the problem of producing “disposable software” like video games, which cannot be refined over the course of several years of use and adjustment.

Furthermore, open source has its own “pseudoscience” to which participants react poorly. Just as work done in isolation without input from the greater scientific community is one of the most common indicators of pseudoscience, so too are “patch dumps” an indication of bad actors in the open source community. Simply put, a patch dump is a large set of modifications to a project’s code, dumped on the management without warning… often accompanied by a “take it or leave it” attitude.x Most patch dumps are simply ignored, but occasionally, one comes along which can’t be accepted, but also can’t be simply rejected. The most recent example of this is probably Google’s changes to the Linux kernel for their Android smartphone platform which languished in the so-called staging tree until eventually being removed for lack of maintainershipxi. Attempts are ongoing to reconcile the two branches, but the Linux maintainers see Google’s non-trivial changes as a flawed solution to the problem being addressed and Google, having the development muscle to maintain their own codebase, has made little effort to reach a compromise. Given the tremendous interest in maintaining a single kernel against which everyone can work, a solution is inevitable, but it may take a very long time.xii

Possibly less visible but just as wasteful in the long run are users who, whether due to hubris or a lack of confidence, keep their projects secret like Darwin did until it’s almost too late. Unlike the events surrounding the theory of natural selection, keeping an eventual open-source project secret serves no useful purpose because public exposure takes the place of empirical testing… a flaw revealed recently by the Diaspora project, an attempt to produce a decentralized alternative to Facebook which, having raised sufficient funding for a summer of full-time work, only released its source at the end of the season, long after many flawed assumptions about system requirements had already been incorporated into the code.xiii For the world of software, the effects Darwin experienced are still possible, but come from publishing early yet being unable to commit to an implementation. Projects like GNU HURDxiv and Duke Nukem Foreverxv have come to epitomize the concept of “vaporware”, software which is promised but never delivered… in both cases, because the project leads insist on chasing a moving target. Of course, the traditional danger of being beaten to publication also still applies as, despite the ability of the software market to support multiple contenders, network effects tend to heavily favour earlier arrivals.

With that, we come to peer review. Perhaps surprisingly, this familiar idea helps to demonstrate that many of the most familiar formalisms of science are only incidental to the process. Rather, scientific formalisms as we know them are, like so much else, organically grown responses to a specific set of circumstances: in this case, a procedural carapace protecting the search for a single, unified truth from attacks both external (religion, pseudoscience) and internal (hubris, expectations, bias). Just as there is no single scientific method, peer review is, when examined, equally nebulous. To the world of open source software, peer review means public development and frequent releases: the former protects against nefarious contributions and improves the chances of bugs being noticed and fixed, while the latter ensures that user feedback on new changes is swift and varied.xvi If you’ve ever tried software while it was still in beta, you’ve helped to peer review it. The court of public opinion is a fickle master, even among those who don’t report bugs, and a developer’s skills in quality control and release management are essential parts of their reputation.

Reputation also takes on a surprising role in the world of open source software as, in concert with copyright, experience, social pressures, and the aforementioned threat of forking, it composes the core of open source development’s current analogue to tenure. This relationship cuts both ways. Outspoken, abrasive people with vision and skill like Theo de Raadt have had funding cut, only to be defended by the community and rescued by group fundingxvii while, simultaneously, people who write good code have been cut down for still being a detriment to the project as a whole. This further reinforces the analogical relationship between fields of study and programming projects. A change in reputation can be extremely sudden, as when the XFree862 management, spooked by a failed fork named Xouvert, changed the project’s license to one incompatible with the GNU General Public License3 and, within mere weeks, found themselves deserted by the majority of their developers, who left to form X.org. X.org’s X11 has now supplanted XFree86 as the option of choice for graphics on Linux. However, change can also be slow, as with the ongoing migration by various projects from GLIBC4 to a variant5 named EGLIBC, prompted by the borderline abusive behaviour of GLIBC’s lead developer and maintainer, Ulrich Drepper, in refusing patches he feels are unnecessary (including one to fix a shuffling function so it no longer gives skewed distributions).xviii xix
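As an aside, the kind of shuffle bias at issue is easy to demonstrate. This is only an illustrative sketch (the function names and toy enumeration are mine, not the actual GLIBC code or the rejected patch): the classic mistake is to swap each element with an index drawn from the entire list, whereas the Fisher-Yates algorithm draws only from the not-yet-shuffled positions and produces every permutation with equal probability. The Python sketch below enumerates every equally likely outcome of both approaches for a three-element list:

```python
import itertools
from collections import Counter

def naive_shuffle_outcomes(n):
    """Enumerate every equally likely outcome of the common *biased*
    shuffle, which swaps each position with an index drawn from the
    whole list. There are n**n outcomes but only n! permutations, and
    n! does not divide n**n for n > 2, so some permutations must come
    up more often than others."""
    counts = Counter()
    for choices in itertools.product(range(n), repeat=n):
        items = list(range(n))
        for i, j in enumerate(choices):
            items[i], items[j] = items[j], items[i]
        counts[tuple(items)] += 1
    return counts

def fisher_yates_outcomes(n):
    """Enumerate outcomes of the Fisher-Yates shuffle, which swaps
    position i only with an index in [i, n). The n! equally likely
    outcomes map one-to-one onto the n! permutations, so the result
    is uniform."""
    counts = Counter()
    for choices in itertools.product(*(range(i, n) for i in range(n))):
        items = list(range(n))
        for i, j in enumerate(choices):
            items[i], items[j] = items[j], items[i]
        counts[tuple(items)] += 1
    return counts

# For a 3-element list: 27 biased outcomes spread unevenly over the
# 6 permutations (counts of 4 and 5), versus 6 uniform outcomes
# (each permutation appearing exactly once).
naive = naive_shuffle_outcomes(3)
fair = fisher_yates_outcomes(3)
```

Because 3³ = 27 isn’t divisible by 3! = 6, no assignment of the 27 biased outcomes to the 6 permutations can possibly be uniform, and the same counting argument applies to any list longer than two elements.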

This focus on sharing over personal gain also expresses itself in the ideological and political facets of the ecosystem, to significant effect. The Software Freedom Law Center, a pro bono legal organization, has litigated many cases over violations of the GPL family of licenses and, in public appearances, its chairman, Eben Moglen, has noted that their “we don’t want money, we just want compliance” stance on violations has resulted in a significant reduction in legal deadlock. Moglen has also clarified his stance by analogizing software licensing to a hypothetical “math licensing” regime under which companies must pay a per-seat fee for each field of mathematics they need, always having to budget and ration.xx xxi Even the GNU GPL, itself a legal document, is written to be just as much a philosophical statement, enshrining what Richard Stallman refers to as “the four software freedoms”: the freedoms to use the software for any purpose, to study and customize the program, to help your neighbours by sharing, and to help the community by distributing any improvements you make.xxii

However, perhaps the most illustrative example of the deep commonalities between the scientific and open source communities is the history of the open source movement itself. Proposed by Christine Peterson at a strategy session in 1998, the term “open source” was intended to describe Netscape’s release of the source code to their waning Navigator web browser without the moral baggage and linguistic ambiguity of the existing term, “free software”.xxiii Free software, as a term and a movement, in turn has its roots in Richard Stallman’s GNU project, an attempt to produce a free clone of the UNIX operating system with the express purpose of bringing back the academic culture of sharing Stallman had grown accustomed to at the MIT computer labs… a culture under threat in the late 1970s and early 1980s from corporate software vendors like Microsoftxxiv. That threat was most famously stated in 1976, when a young Bill Gates accused the majority of computer hobbyists of “stealing their software” in his Open Letter to Hobbyists.xxv

Fundamentally, the open source ethos is the scientific ethos. Software freedom is peer review, collaborative research, and open data, stripped of their formalisms and opened to all willing participants. Open source is the world’s first large scale experiment in organic, ad hoc, massively collaborative problem-solving and, while it may have its flaws, it has succeeded beyond our wildest dreams despite still being in its infancy. The core principles of the scientific community are alive and well and companies like Google and IBM are already recognizing the value of funding grant and mentorship programs like the Summer of Code and hiring full-time programmers to work on non-proprietary projects. While the formalisms that will develop as the community matures are yet to be determined, open source has proven that scientific principles aren’t just for scientific problems and that many of the most fundamental elements of the scientific community may be an emergent property of human nature itself.

Footnotes

1. A compiler translates human-readable source code into executable machine code. Without a compiler, modern computer programming would be impossible.

2. The graphical subsystem used by all Linux distributions which offer a desktop interface.

3. The GNU General Public License (GPL for short) is the most popular license used for open source software and using a GPL-incompatible license is considered equivalent to patenting your research and enforcing your patents very actively.

4. The GNU implementation of the standard library of functions for the C programming language, required by every program written in C… which happens to be the most common language used to write free software.

5. A variant is similar to a fork, but attempts to retain as much commonality with the parent project as possible. Variants are usually started when a set of important contributions are rejected but the parent project is otherwise progressing in a healthy fashion.

References

i Raymond, Eric. holy wars. (December 29, 2003). The Jargon File version 4.4.7. Retrieved October 9, 2010.

ii Hill, Benjamin. (August 7, 2005). To Fork or Not To Fork. Retrieved October 9, 2010.

iii A Brief History of GCC. (January 1, 2008). GCC Wiki. Retrieved October 9, 2010.

iv Bezroukov, Nikolai. The Short History of GCC development. Portraits of Open Source Pioneers. Retrieved October 9, 2010.

v Zeth. History of Emacs and XEmacs. (March 25, 2007). Command Line Warriors. Retrieved October 9, 2010.

vi Stallman, Richard. The FSF Point of View. XEmacs vs. GNU Emacs. Retrieved October 9, 2010.

vii Joomla. Wikipedia. Retrieved October 9, 2010.

viii Nitot, Tristan. Welcome to Document Foundation and LibreOffice. (September 28, 2010). Standblog. Retrieved October 9, 2010.

ix Srijith, Krishnan. (October 2002). Study on Management of Open Source Software Projects [PDF]. Retrieved October 9, 2010.

x Collins-Sussman, Ben. The Risks of Distributed Version Control. (November 10, 2005). iBanjo. Retrieved October 9, 2010.

xi Kroah-Hartman, Greg. Android and the Linux kernel community. (February 2, 2010). linux kernel monkey log. Retrieved October 9, 2010.

xii Vaughan-Nichols, Steven. Android/Linux kernel fight continues. (September 7, 2010). ComputerWorld Blogs. Retrieved October 9, 2010.

xiii Zer-Aviv, Mushon. Diaspora’s Kickstarter $$$,$$$ success endangers both Diaspora, Kickstarter & you. (May 14, 2010). Mushon.com Networking Loose Ends. Retrieved October 9, 2010.

xiv Hillesly, Richard. GNU HURD – Altered visions and lost promise. (June 30, 2010). The H Open Source. Retrieved October 9, 2010.

xv Kuchera, Ben. The death and rebirth of Duke Nukem Forever: a history. (September 7, 2010). Ars Technica. Retrieved October 9, 2010.

xvi Raymond, Eric. Release Early, Release Often. (September 11, 2000). The Cathedral and the Bazaar. Retrieved October 9, 2010.

xvii Brockmeier, Joe. DARPA Cancels OpenBSD Funding. (April 23, 2003). Linux Weekly News. Retrieved October 9, 2010.

xviii Jarno, Aurelien. Debian is switching to EGLIBC. (May 5, 2009). Aurelien’s weblog. Retrieved October 9, 2010.

xix Ark Linux switches to eglibc (May 13, 2009) Ark Linux: Development and planning of Ark Linux. Retrieved October 9, 2010.

xx Moglen, Eben. (November 21, 2006). Software and Community in the Early 21st Century. Retrieved October 9, 2010.

xxi Glass, Geof. Eben Moglen on Free Software and Social Justice. (December 10, 2006). a whole minute. Retrieved October 9, 2010.

xxii The Free Software Definition, version 1.92. (October 10, 2010). Philosophy of the GNU Project. Retrieved October 9, 2010.

xxiii History of the OSI. Retrieved October 9, 2010.

xxiv Stallman, Richard. (October 3, 2010). The GNU Project. Retrieved October 9, 2010.

xxv Gates, Bill. (February 3, 1976). An Open Letter to Hobbyists. Retrieved October 9, 2010.


“Tangled” and Disney’s Big Weakness

Since rediscovering and posting my little essay on the problem with having a big bad, I’ve been thinking about how this relates to the problems in Disney “cheapquels” and their more modern animated works like The Princess and the Frog and Tangled.

The Princess and the Frog has pretty much the same flaw as I already explored in Don Bluth’s Anastasia, but Tangled is a little more interesting.

For those who haven’t seen the film, Tangled is Disney’s take on the fairy tale Rapunzel. It’s a 3D-animated film, executed with a degree of panache and flair that makes me think of the second Shrek movie and, in some ways, makes it a candidate for their best animated feature ever… except for one small thing: Mother Gothel, the witch who kidnaps Rapunzel.

While the movie’s characterization is excellent everywhere else (including Rapunzel, who does a pretty good job of being clever and resourceful, and is no more a damsel in distress than her circumstances demand), Mother Gothel is conspicuous for how uncomplicated her character is.

In fact, compare her to Lady Tremaine (Cinderella’s wicked stepmother), who has two daughters of her own to care for and who, it’s possible, harbors some sort of personal resentment for Cinderella, or to characters like Maleficent or Ursula, who not only have no reason to care for the heroine but are also inhuman enough to at least get the benefit of the doubt. Mother Gothel is so circumscribed by the needs of her role that, for someone used to seeing nuance in entertainment, the shallowness of her character can be downright grating next to the rest of the cast.

Despite raising Rapunzel for 18 years and, supposedly, spending that time alone with her, Mother Gothel is still portrayed as caring only for the youth-giving magic of Rapunzel’s hair. Now, while there are certainly many people in the world that sociopathic, casting one as the villain in a story that is otherwise so well-polished just doesn’t work.

Rather than (admittedly very clever) lines like “You want me to be the bad guy? Ok. Now I’m the bad guy” and Disney’s signature cliché “villain falls to their death” scene, a more fitting end could have at least involved some sign that raising Rapunzel alone in the woods had affected her.

But, in the end, the plot calls for an evil witch of a villain, so Mother Gothel is forced to remain nothing but a “big bad”. She can’t grow as a character; she can’t have a “rock and a hard place” moment where her fear of age and death battles with a desire for the happiness of the girl she raised and unintentionally grew attached to. She can’t even hesitate or show regret that things could have played out differently. She’s just an evil old witch, hurting and lying to others for unexplored selfish desires because, if she’s human, then the viewers might not want her to die… and obviously a villain’s only purpose is to be defeated in a dramatic fashion (and to serve as a blind attempt at continuing a pattern that worked in the past).

This interpretation becomes even more obvious with all of the themes and twists which feel borrowed from other Disney successes, like the post-death save by an admission of love from Beauty and the Beast. I worry that, if this trend continues, Disney may take the dubious honor of being the first studio to not just milk a franchise dry, but combine so much character talent and so little talent for plots that they milk an entire class of plots dry. (Their excessive investment in female royalty, as one person put it, and the associated preconceptions about what plots to use don’t help)

Fundamentally, Disney’s big problem seems to be that their skills lie very firmly in the ground-level details like how to write entertaining characters and good comedy while, when they have to construct plot details on their own, they fall flat.

This is most clearly shown in their “cheapquel” sequels like The Lion King 2: Simba’s Pride or The Little Mermaid 2: Return to the Sea. When borrowing a good, time-tested plot, they can enhance it to classic status by replacing the traditionally simplistic characters with more entertaining, personable ones. However, when they need to cook up their own plots or, as with Tangled or Peter Pan 2: Return to Never Land (more on that in a moment), expand an existing plot too simple for a feature film, the results disappoint.

This may be because they put up a wall between their high-level plot and their low-level details through which ideas can only flow in one direction. They do an excellent job on the little details and characters can affect plot points that are unimportant… but when push comes to shove, the plot has veto power over the characterization. If you look at something like Lion King 2, there’s almost a direct inverse correlation between a character’s importance to the plot and how interesting they are… The less the plot is yanking the character around, the more interesting they are.

Peter Pan 2 is actually an interesting example because, while it is a “cheapquel”, like Tangled, it wasn’t created from whole cloth. Rather, a few years after the original Peter Pan play had gained success, J.M. Barrie wrote an additional scene titled An Afterthought (essentially an epilogue) in which Peter Pan returns when Wendy has children of her own. It is that scene which, like the Rapunzel fairy tale, Disney inflated into a feature-length film and, with only the budget of a cheapquel behind it, Return to Never Land feels as if the Disney magic is going “putt-putt sputter, putt-putt cough” as the mix of borrowed and created plot elements constantly changes.

I’m very happy to see that Disney is still very capable of producing good films and I’m also happy to see them receptive to the idea of realistic female leads for the 21st century, but a truly good story has to be character-driven, with a free flow of influence back and forth between the plot and characters, not locked-down so the characters become little more than marionettes. If Disney wants to continue their tradition of casting a nasty, selfish woman as their main antagonist, they’re going to have to work at least as hard on her as they do with everyone else in the cast.

Note: The portions of this regarding Peter Pan 2: Return to Never Land were originally written near the end of October of 2009 and were rediscovered alongside my previous two posts on writing, but weren’t complete enough to be posted on their own. Thankfully, my brother’s less wordy commentary several months ago on Mother Gothel’s motivations and death scene came to mind and, once I’d pulled out Tangled and watched it, the rest was easy.

Update 2011-08-21: The Noah’s Ark segment of Fantasia 2000 is a good, simple example of how the Disney magic works when it really works: a proven plot spiced up with distinctive, memorable art, characters, and music as appropriate to the type of project.


Why Disney Cartoons Grow Up With You And Looney Tunes Don’t

Note: I originally wrote this in April of 2009, but I forgot about it until recently when I started tidying up my notes on how to write better fiction.

Many of the most iconic Warner Brothers and Disney cartoons were produced in the 1930s through 1950s. However, there’s a subtle, but often very noticeable difference in how these two sets of cartoons feel and a very simple reason for it. Warner Brothers cartoons primarily center around basic, adversarial slapstick. The problem with this is that, as we grow up, we start to notice the flaws in this approach, illustrated by this hypothetical conversation:

“These two thinking, self-aware characters don’t like each other. They fight amusingly.”
“Why?”
“What do you mean _why_?”
“Why don’t they like each other?”
“That’s just what rabbits and hunters (or cats and birds or whomever else) do. Stop over-thinking things.”

By contrast, Disney generally based their cartoons around caricatures of daily life. When adversarial conflict did occur, it felt more natural. Everyone knows dogs often don’t get along with smaller furry animals like chipmunks and cats and who hasn’t felt like Donald Duck getting into a fight with an inanimate object at some point?

We can enjoy both kinds of cartoons, but Disney’s approach doesn’t tarnish as we grow up. Pluto is still a dog, but Bugs Bunny is a person in different clothes, just like Mickey Mouse and Donald Duck. Why, then, should we expect it to feel natural to see Elmer Fudd or Yosemite Sam chasing him with a gun?

Of course, this isn’t to say the classic-era Warner Brothers writers were bad at what they did. The originality and variety seen in their gags makes them highly memorable. (eg. Wile E. Coyote’s tiny umbrella, portable holes, Roadrunner’s ability to run through painted scenes to outwit Coyote, and the elevation of anvils from “just another heavy object to be used for gags” to an icon of cartoon slapstick, just to name a few.) They simply didn’t realize the importance of certain cartoon design decisions.

This perceptual shortcoming (possibly stemming from successive generations of animators learning by “getting a feel for it” rather than by analyzing the “why” of prior successes) is in no way limited to one company or one time period. On Disney’s “The Chronological Donald, Volume 4”, Leonard Maltin finishes by introducing a modern Donald Duck cartoon which he presents as evidence that Donald Duck is still going strong. However, my first impression was that it was disappointingly boring. I later realized that this was because it felt like Donald didn’t deserve what he was receiving. In the modern cartoon, the Aracuan bird relentlessly torments Donald Duck who just wants to take a picture for Daisy. In the classics, whether by having Donald throw the first punch (eg. smashing Huey, Dewey, and Louie’s snowman or putting lit firecrackers in their trick-or-treat bags and then pouring water on them) or by having him fight an inanimate object, the impression is given that Donald is in the wrong… or at least an innocent victim of bad luck with no thinking entity behind it.

I think this is also why I always enjoyed Road Runner and Coyote cartoons more than other types. As with Donald Duck, many of Wile E. Coyote’s failures are his own fault and, for the rest, the guy’s just obsessed. Go eat some easier food for cryin’ out loud! It certainly also helps that Wile E. Coyote draws on the same fount of humor via expressions without speech that have proven so successful with Pluto.


Why A Big Bad Is A Bad Idea

Note: I originally wrote this at the beginning of December of 2008, but I forgot about it until recently when I started tidying up my notes on how to write better fiction.

Update 2011-08-14: I now also have a more instance-specific post which touches on this in the context of Disney’s Tangled as a side-effect of exploring Disney’s biggest weakness as a source of creative endeavors.

A big bad is a character, usually poorly defined, shallow, and explicitly evil, whose quest for power/revenge/evil/pizza dominates and drives the story, forcing character development and any other subplots to fit themselves in around it.

These days, having a “big bad” has almost attained cliché status. While you can still get away with it, it takes a great deal of skill and, as such, should not be attempted by amateurs. (Sadly, the people who do it most) Doing so will most likely cause your story’s perceived quality to take a nose-dive. (Especially if the “big bad” isn’t introduced until part-way through the story, because it causes the reader’s first impression of the story to be disproven after they’ve gained an emotional attachment to it. I often find myself feeling a twinge of resentment when authors do this.)

This problem is exacerbated by the fact that all fiction is, more or less, the same stories being re-told with different details. Regardless of personal taste, the core of the matter is that people read stories for the characters and the ideal story for a given reader at a given point in their life is a story which provides them with just enough depth (both of characters and of plot) to comfortably entertain them and no more. (I say “comfortably” rather than “effectively” because even if a reader *can* follow the plot of an overly complicated story, their taste in complexity will vary with their mood. At some points, they may feel like challenging themselves to a brain-bending mystery story while, at others, they may just want to rest their fatigued mind in the company of a good book)

Keep in mind that not all complexity is the same. A complex plot with simple characters will be easy to write, but not very satisfying to read. A simple plot with complex characters will be difficult to write but, all other things being equal, will satisfy the readers very well. This trait is also what helps to distinguish classic fiction like Frankenstein from pop culture. Society changes and technology changes, but human nature remains eternal1. It takes practice to write truly deep characters, but just keeping this in mind should help you to quickly find an acceptable mix of complex characters and complex plot for your current skill level.

To illustrate how others have achieved this as well as the importance of avoiding the lure of a “big bad”, I’ll compare several popular animated feature films and deconstruct the tendency for Disney films to have higher-quality plots.

A good place to start would be Disney’s Beauty and the Beast. At its core, it’s a romantic story of two characters who are unsatisfied with their lives and end up finding the solution in each other. The plot is very simple, made more so by the limitations of presenting a story as a film, yet the end result definitely earns the title of “animated classic”. Cursory examination is all that’s needed to conclude that “memorable characters” play a large part, but you can’t add them as icing on the cake and, as a crucial tell, you can’t immediately point to one character and say “that’s the bad guy.”

For example, Belle isn’t just your ordinary female lead. She’s a literate woman in the pre-industrial French countryside, misunderstood by everyone but her father, and she dreams of living beyond the bland existence of country life. The Beast is a prince, spoiled and immature, who matures and learns kindness over the course of the story. He also provides a crucial tell as to the story’s quality in that, in his initial appearance, one could easily mistake him for a villain despite his status as the lead male protagonist.

Gaston is probably the most telling character though, because he’s the closest thing the story has to a villain. Selfish, egotistical, and popular, Gaston is every bit a pre-industrial, country-bumpkin jock and, most importantly, nothing more. At most, his subplot shares the story equally with the developing relationship between Belle and Beast.

The lesson to be taken from this is that, unless you’re either writing an epic or skilled enough to not need advice at all, you shouldn’t use a “big bad”. They’re two-dimensional, boring, and tend to make for simple, predictable stories. Also, keep in mind that “epic” is a commonly misunderstood word. In common conversation, it can be used merely to mean impressive or grand (e.g. an epic voyage) but, in the context of literature, it refers to a tale where events on a grand scale (e.g. the fate of Middle-earth) are determined by small, seemingly ordinary protagonists. (The heroes, e.g. Frodo Baggins and friends.)

Now, let’s contrast this with Don Bluth’s Anastasia, produced by 20th Century Fox. As with Beauty and the Beast, there are two subplots and one is romantic, the characters are memorable, and the production quality is high. However, in this story, the non-romantic subplot is driven by a madman (Rasputin) bent on killing the main protagonist (Anastasia), and this is to the story’s detriment because, despite some clever character design work, his motivations are suspect and his personality falls flat. Most importantly, despite attempts to recast him as a “basket-case wanting out of limbo”, Rasputin never overcomes his role in the prologue as a cookie-cutter “madman out for revenge” whose reasoning is neither explained nor justified. In all fairness, though, solving that problem would require a prequel of its own; anything less would do more harm than good to the story as a whole. Hence my argument that, except in very special circumstances, using a “big bad” is a Catch-22, and the only solution is not to do so in the first place. More significant in this instance, however, is Rasputin’s role in the story: the happy ending can only occur once he’s dead, which means his subplot is dominant, hence his status as a “big bad”.

Finally, to clarify the importance of that, I’ll examine a slightly more subtle case: Disney’s Aladdin. At first glance, it would seem to disprove my argument, since it has a big bad (Jafar) and the story can’t end until he’s defeated, but his role is key. In Anastasia, the romance is subordinate, dancing to the tune of Rasputin’s plot; in Aladdin, the romance is dominant. The main conflict is that Aladdin is a commoner and Jasmine is a princess and, despite his best efforts, Jafar’s subplot merely places additional obstacles in the way of their love rather than commanding the flow of the story as a whole. This distinction between a dominant and a subordinate villain’s plot can be illustrated further by the sequels: in “The Return of Jafar”, Iago’s subplot dances to the tune of the dominant “Jafar, the big bad, returns and must be defeated for good” story and, in “Aladdin and the King of Thieves”, the “Aladdin and his father reunite and reconcile” subplot is driven by Cassim’s greed and eventual defeat. If it weren’t for the quality of characters like Genie, and the emotional connection to them already developed by the original film, the sequels would be ordinary, unexceptional Saturday-morning-cartoon fare.

In essence, in a good story, it’s the characters’ story and the bad guy is in the way; in a bad story, it’s usually the bad guy’s story… and not in a good way. You can write a good story about a bad guy, but then it’s still the characters’ story, and the (good) bad guy is in the way.

1. For example, there is an ancient Greek play in which a father complains about how his teenage son loves to sleep in and lie around all day listening to the bards. If that doesn’t convince you, I doubt anything will.

Posted in Writing

Notes on notification-daemon

Just a few brief things I’ve learned about notification-daemon while trying to make mine less annoying:

  • It’s been five years and notification-daemon is still effectively undocumented as far as I can tell. (Aside from the odd example, the best you can do is work sideways from Ubuntu’s guide to the API it shares with NotifyOSD.)
  • You can change the theme via the notification-properties command
  • You should be able to select which screen corner plays host to notifications via notification-properties, but that setting is apparently broken and current versions of notification-daemon always force the top-right corner.
  • There’s no setting for controlling which monitor you’re using it on. It will pop up on the one your mouse happens to be occupying, even if that means covering your browser’s Close Tab button and annoying the hell out of you.

Time to expedite my switch to Awesome and tweak Naughty to get what I want, I guess.
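For what it’s worth, a minimal sketch of that Naughty tweak (an assumption on my part, not tested: in recent Awesome releases the table is `naughty.config.defaults`, while 3.4-era versions used `naughty.config.default_preset`, and exact key names may vary between versions) might look like this in `rc.lua`:

```lua
-- rc.lua: pin notifications to a fixed corner and monitor,
-- instead of the corner/monitor the mouse happens to occupy.
local naughty = require("naughty")

naughty.config.defaults.position = "bottom_right" -- away from the Close Tab button
naughty.config.defaults.screen   = 1              -- always use the first monitor
```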

Posted in Geek Stuff

Game Review: VVVVVV

With the Humble Indie Bundle 3 set to end in less than a day (bad timing, I know), I finally got around to writing my review of VVVVVV.

VVVVVV is a puzzle platformer with a special twist. Unlike other platformers, you can’t jump… you can only reverse the direction you fall. (I’m guessing the main character is supposed to be weightless and wearing magnetic boots)

The first thing I have to say is that this is a game that’s firmly targeted at the “nostalgia niche”. Its graphics are an intentional knock-off of games from the Commodore 64 era, the soundtrack is composed of chiptunes, and gameplay, while simple and intuitive, is about as far from “push button to win” as you can get while still keeping me as a satisfied player.

However, you don’t have to be an old Commodore gamer to enjoy it (the SNES was the first gaming system I owned) and it’s actually a very refreshing game as long as you aren’t impulsive or impatient. Some levels require you to stop and think before you attempt them while others are centered on “more speed, less haste”. This is definitely not a game to be played while tired. However, if you’re well-rested and can keep thinking clearly in levels designed to make you hurry (and to kill you if you get lazy), I found it not only satisfying, but rather relaxing. (Just make sure you’re playing the newer C/C++ rewrite. The old Flash version’s collision detection is painfully inadequate.)

I especially enjoyed the scrolling “The Tower” segment… which is a bit of a surprise, given that I hate self-scrolling levels everywhere else in gaming. There was a sort of “DDR Zen” to it (plus catchy music) that appealed to me… aside from that jerk move of putting that second trinket far enough from its branch point to keep me from getting everything in the first run.

In fact, aside from when I got tired and started making mistakes, the whole game had a sort of relaxing flow to it. All in all, it’s a very well-designed game built around alternating between stopping to think one moment and staying cool and collected under pressure the next.

While I wouldn’t want that in all my games, in moderation, it’s very refreshing. I suppose I’m starting to understand what some reviewers see in the first Castlevania game. It also helps that there’s so much catchy music in the soundtrack.

Not a long game, but well worth it if you can spare the cash.

Posted in Geek Stuff, Reviews

A Few Suggestions/Pleas to Authors of Mystery-oriented Fiction

Just a quick couple of suggestions for people looking for something a little more original than The Loch Ness Monster, Atlantis, or Alien Abductions to work into their creative endeavors. (Because they occurred to me while I was wandering Wikipedia and this is my blog)

First, think long and hard before writing stories about lost technology or an Advanced Ancient Acropolis. It’s far too easy to get caught up in the excitement of writing them and misjudge your ability to do them well. (And, more subjectively, I’m getting tired of reading about them too)

Second, please please please don’t write a You Go Too Far! story. Science Is Bad stories are far too common and, given how you probably wouldn’t have time to write without its fruits, it’s, at best, frustratingly hypocritical and ungrateful to add to the pile of fiction implying that all progress is bad. I see your Frankenstein and raise you Plato’s Cave.

Finally, if you’re going to build your story on unsolved mysteries, vive la différence. Open up Wikipedia’s Open problems category and drill down to something like Undeciphered writing systems or Uncracked codes and ciphers or List of unexplained sounds. The more little-known, the better. A good story is fascinating because it’s not too familiar, and a good mystery even more so.

If you must go to something like Category:Mysteries which has a high incidence of entries covering likely hoaxes, paranormal events of low credibility, and so on, please try to either stick to the sub-categories that are less hokey (like Lost works and Missing ships) or proceed with extra caution.

I have nothing against a good contemporary sci-fi or fantasy story… but I’m getting really tired of stories where the author has an engaging writing style or captivating characters, but the plot depends on me seriously considering the real-world existence of fairies or unicorns or the big bad wolf. Magical thinking has its place, but don’t over-do it. (I love The Dresden Files, but ridiculously improbable things like Bigfoot bore me when it’s implied that the author believes they’re probably real and expects me to believe likewise.)

As for general writing advice, pick something you’ve never seen used before (or if you don’t, resolve to do whatever you pick better than what you’ve seen) and, as Mark Twain said, “Get your facts first, then you can distort them as you please.”

Posted in Web Wandering & Opinion, Writing