Divide by Zero

Simple solutions

Simple solutions are the best, aren’t they? When things just work, have a small amount of easy-to-test code, and are understandable. Isn’t that what we all want? I think a resounding “yes” would be muttered if you asked that question of any software development team. Unfortunately, systems are almost always far more complicated than necessary, so there appears to be a discrepancy between desire and practice.

Let’s start with a question: what makes a simple solution simple? It’s when a problem has been distilled to its fundamentals and the solution has been encoded using the right tools. The right tools will depend on many factors, including your environment. For example, for regulatory reasons, client data on my current contract must be hosted in both the UK and the US (but nowhere else). This rules out AWS and GCE because they don’t have UK regions yet. In this environment self-hosting is currently the name of the game, and that rules out certain types of solutions.

So how do you distill a problem to its fundamentals? By the process of distillation of course! The dictionary actually gives a nice definition that aids in this metaphor: “the process of purifying a liquid by successive evaporation and condensation”. If we throw some software related terms in there then we have a pretty neat tag line: “the process of simplifying problems by successive understanding and refactoring”.

It’s hard—and sometimes impossible—to have all the required information upfront when undertaking all but the most trivial programming tasks. That isn’t just about the information given about the feature either; it’s the current state of the system, the direction it’s taking, your colleagues, the business domain, etc.

Start by encoding what you have and what you understand, with the intention of refactoring (or rewriting) it in a short while. See what’s missing, what patterns appear, and step away for a while to let your subconscious take over. Ask for answers to anything that was ambiguous whilst coding. Look at the environment you’re coding in; are there as-yet-undiscovered relationships between what you’re building and what’s already there (keeping in mind that similarity doesn’t necessarily mean a relationship exists)? An additional challenge I set myself that often helps in the understanding phase is to target that this feature/bugfix/whatever should make the codebase smaller, not bigger. You would be amazed at how often this is actually achievable.

Don’t confuse simple with easy. An easy solution might be to add a couple of if statements throughout a workflow; a simple solution might be a more polymorphic design. It usually makes sense to start with the easy solution before going through the distillation process.
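To make the contrast concrete, here’s a minimal sketch (all names hypothetical): the easy change branches on a flag wherever behaviour differs, while the simple design gives each variant its own type, so the workflow never changes when a variant is added.

```python
# Hypothetical notification workflow; the names here are illustrative.

# Easy: an if statement per channel, repeated in every workflow step.
def notify_easy(channel: str, message: str) -> str:
    if channel == "email":
        return f"EMAIL: {message}"
    elif channel == "sms":
        return f"SMS: {message}"
    raise ValueError(f"unknown channel: {channel}")

# Simple: each channel owns its own behaviour. Adding a channel means
# adding a class; the workflow function below never changes.
class EmailChannel:
    def send(self, message: str) -> str:
        return f"EMAIL: {message}"

class SmsChannel:
    def send(self, message: str) -> str:
        return f"SMS: {message}"

def notify_simple(channel, message: str) -> str:
    return channel.send(message)
```

The easy version is a fine first pass; the polymorphic version is what the distillation process tends to arrive at once the same branching shows up in a second or third place.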

Conversely (and something I see more often from experienced programmers), don’t confuse simple with complex. Your polymorphic design may be built on false relationships, your abstractions may be difficult to use or leaky, and so on. The most common way I see this happen is a programmer trying to make the project more flexible by assuming how they think the software will change.

For example, a feature is requested to call an external service to get information about a client’s history. The programmer builds a library of interfaces and adapters that allows the service to be easily replaced, or more services to be added to provide the information. But instead of more backends being added, new features are constantly added to the existing one, so the adapters, interfaces, and concrete classes are all modified to accommodate them. In almost every example I’ve seen, when a new backend is eventually added it provides an entirely separate suite of functionality, so the whole suite of adapters and interfaces needs to be scrapped in favour of consuming both backends at once for different features. The code became much more rigid and difficult to work with, and the backends turned out to have no relationship at all.

This is a real example that I’ve seen a painful number of times. I’ve seen other, much more subtle and nuanced versions of the same story, wherein it’s assumed the business will pivot on data in one direction, so extra complexity is added to easily support changes in that direction; the business then pivots in another direction entirely, and the code is now more rigid when trying to adjust that way.
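A sketch of that story, with hypothetical names throughout: the speculative interface-and-adapter layering next to the distilled starting point.

```python
from abc import ABC, abstractmethod

# Speculative design: an interface and adapter for backends that may
# never exist. Every new feature now touches all three layers.
class HistoryBackend(ABC):
    @abstractmethod
    def fetch_history(self, client_id: str) -> list: ...

class AcmeHistoryAdapter(HistoryBackend):
    def __init__(self, http_get):
        self.http_get = http_get  # injected HTTP client function

    def fetch_history(self, client_id: str) -> list:
        return self.http_get(f"/clients/{client_id}/history")

# Distilled starting point: one plain function against the one real
# backend. An interface can be extracted later, if and when a second
# backend actually appears and the real relationship is known.
def fetch_history(http_get, client_id: str) -> list:
    return http_get(f"/clients/{client_id}/history")
```

The plain function does everything the feature asked for; the abstraction cost is only paid once it has proven necessary.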

On the level of overall system architecture the same sort of mistakes have far worse repercussions. We’ve all seen easy and simple confused at the architecture level, where a system effectively has no architecture: no boundaries, lots of global state, and so on. But what about confusing simple and complex? I’ve seen a few bad examples of this, especially in the name of scalability, which adds an insane amount of hidden complexity. If Stack Exchange and GitHub can run on the order of ten servers each, then you probably don’t need to think about how you’re going to run on 1,000 (unless you do, in which case more power to you). Google has a rule of writing software for ~10x growth, with a rewrite before ~100x. Disaster recovery, backups, and so forth aside, most reasonably designed web applications can serve millions of users from a single laptop, and you can scale that vertically to get 10x more on a single server. Adding CAP trade-offs, at-least-once and at-most-once message delivery, supervision, orchestration, and so on is adding complexity which in this case isn’t needed.

To simplify the architecture of your system, distill it to its simplest solution as well. Again, this is achieved through successive understanding and refactoring, usually at a much bigger and much slower level than in the code. Because these changes are usually larger and higher risk, you must ensure you have a decent test suite so changes can be made confidently. The good news is that if you have a good test suite and you’re applying distillation in the small, it becomes much easier to do in the large as well. Without information on how the software is expected to grow, don’t try to guess. Let the software grow in the way it needs to here and now, and successively refactor and break parts out as necessary. Hold regular checkpoints to see which directions the system is being pulled in, and focus your effort on modularizing for those directions. Apply sacrificial architecture, especially when there are a lot of unknowns.

Like design patterns, there are lots of architectural patterns to suit various kinds of problems, and you often don’t see the correct pattern until after a few distillations. Boundaries tend to take a while to form as well, but it’s important to try to discover and encode them, or else you’ll end up with a ball of mud.

Working towards simplicity is a simple process, but it isn’t easy. It takes time, discipline, and sometimes a hard reflection on your own work. But by applying these principles a system will stay healthy for much longer than it otherwise would, and it will stay more relevant to the business it’s built for.

Written by Matthew Hotchen on