Complexity is the mind-killer

Chess moves

Last week I read an article from MIT Technology Review that looked into what causes people to make mistakes. The approach was one I hadn’t seen before: it mined data from a large set of chess games. The researchers’ rationale was that:

  • There’s a huge database to feed your model from,
  • Chess is a deterministic domain where you can objectively evaluate good and bad moves,
  • There are clear criteria for the skill level of the participants, even when those levels are disparate, which lets you control for skill in the analysis.

As spoiled by the title, complexity was the main factor whenever a player made a mistake.

*The bottom line is that the difficulty of the decision is the most important factor in determining whether a player makes a mistake.*

So what’s the big deal?

The conclusion shouldn’t be all that surprising. Complexity makes things more difficult. Software engineering is a complex endeavor. Still, years of experience teach us how to deal with that complexity in ways a junior developer couldn’t. Plus we have best practices, like object-oriented design patterns, to help us tame this complexity and benefit from prior design knowledge.

Well… The authors aren’t done yet.

*In other words, examining the complexity of the board position is a much better predictor of whether a player is likely to blunder than his or her skill level or the amount of time left in the game.*

Read that again. Let it sink in.

We are not chess players

What we do is a lot more complicated.

Even if you don’t play chess, chances are you know it has some basic characteristics.

  • There is no hidden information - anything you need to know is visible on the board.
  • There is a limited, well-defined set of rules and possible states.
  • There’s a gigantic historical database of games and best practices, which has been fine-tuned for centuries.
  • There isn’t any randomness involved.

We could argue that software shares similar characteristics - in theory. Then again, in practice…

  • There is no way to predict the data or input you will have to deal with, so while there might be no hidden information during a post-mortem, there is during development.
  • We have a nice set of design patterns for some specific situations, but their applicability is local. They also require judgment calls, and they haven’t been battle-tested for generations.
  • The number of possible states you have to deal with, once you consider the combinations of real-world data and events, might as well be incalculable (see the sketch after this list).
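
To put a toy number on that last point (a Python sketch, purely illustrative): even independent boolean flags multiply into more states than anyone can reason about, and real-world inputs are far richer than booleans.

```python
import itertools

# Three boolean flags already yield 2**3 = 8 distinct states.
for state in itertools.product([False, True], repeat=3):
    print(state)

# Sixty-four independent flags -- modest for a real system -- yield
# 2**64 states, roughly 1.8e19: far more than you could ever test.
print(2 ** 64)
```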

More importantly, as a chess game progresses, the number of potential moves (and possible future states) goes down. With software, the longer it lives, the more environment-specific data and variables accrete.

And yet we continue to hand-wave complexity during development. We treat it with no small amount of arrogance. We can deal with it, right? We’ve been doing this for a while. It’s just a matter of discipline.

Sources of complexity

There are two sources of incidental complexity in most applications that we just take for granted.

The first is mutable data. I won’t tire of saying this: if you can’t be 100% certain of the state of your data at any given point, you are not in control. If you need to keep a mental map of which objects in your application might be holding or modifying any particular piece of data, you are adding unnecessary complexity. That will lead to mistakes.

We can deal with it through immutable data. I’ve written enough about it as is, so I’m not going to harp on it too much here.
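
As a minimal sketch of the difference (in Python, with made-up names purely for illustration): with mutable data, any alias can change a value under you; with immutable data, a “change” is a new value, and every existing reference keeps meaning what it meant.

```python
# Mutable: two names silently share one list, so a change through
# one reference surprises any code holding the other.
defaults = ["verbose"]
session_opts = defaults              # an alias, not a copy
session_opts.append("debug")
print(defaults)                      # ['verbose', 'debug'] -- who changed this?

# Immutable: tuples can't be modified, so "updating" builds a new
# value and the original stays exactly as it was.
defaults = ("verbose",)
session_opts = defaults + ("debug",)
print(defaults)                      # ('verbose',)
print(session_opts)                  # ('verbose', 'debug')
```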

The second is the language itself. Whether we like it or not, we have limited mental capability. Remembering the syntactic sugar and edge cases of your language takes brain cycles. Any brain cycles you’re using to remember how you’re supposed to be saying things in a particular case are not used in solving your problem.
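
As a concrete example of the kind of edge case that eats those cycles, here is a classic Python gotcha (illustrative only): closures capture variables, not values.

```python
# Every lambda closes over the same loop variable, so they all see
# its final value once the loop has finished.
callbacks = [lambda: i for i in range(3)]
print([f() for f in callbacks])    # [2, 2, 2], not [0, 1, 2]

# The idiomatic workaround binds the current value through a default
# argument -- a rule you simply have to remember.
callbacks = [lambda i=i: i for i in range(3)]
print([f() for f in callbacks])    # [0, 1, 2]
```

None of that knowledge helps you solve your actual problem; it only keeps you from tripping over the language.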

A compiler’s syntax checking will help, and tools may point out potential mistakes. But these only help after the fact, once you have already spent those brain cycles: they reduce the mistakes that get through, but they don’t reduce language complexity, and they don’t bring your brain cycles back.

If instead we use languages with a simple, consistent grammar, and with forthright semantics, we are freeing up our attention to focus on what matters: the problem we are actually trying to solve.

A little humility goes a long way

I get it. We sit at a desk, we bang on some keys, and we make magic happen. It feels amazing. On the other hand, there’s intense competition in the market. When everyone else is bragging about how they monad their Malbolge on Node, it can make you feel like you are falling behind.

It’s easy to try and cover up imposter syndrome with piles of cockiness, and to judge your skill by the number of chainsaws you’re juggling. It’s other people’s brains that can’t handle complexity; yours is just fine.

If you suspect you might be doing this, I suggest you put your tools down and go out for a walk. When you come back, look at them with fresh eyes. Question whether libraries or frameworks you take for granted might be hairballs. Try solving a similar problem with a different approach, starting from basic principles.

Instead of valuing the complexity of your tools, take pride in the complexity of the problems you’re solving.

Remember: even masters in a well-studied, deterministic domain like chess make most of their mistakes when dealing with complexity. Whatever automagic method-interception dependency-injection ball of wax you’re relying on will eventually trip you up as well.


Published: 2016-07-11