There's something that's been on my mind a lot lately: the significance of meaningful boundaries. It's a theme that keeps coming up wherever I look. Not just in software engineering either. The absence of meaningful boundaries means trouble in just about every area of life.
Relationships without healthy boundaries encourage nosy friends and meddling family members, often bringing an endless supply of anxiety along with them.
Poor boundaries with news sources can easily give you a skewed sense of the world.
A business without meaningful boundaries is likely an aimless yet frantic organization where no one knows exactly what's going on or who's responsible for it. Everybody is responsible and nobody is responsible, all at the same time. Need to know what’s going on with a particular initiative at the company? You’ll have to ask somebody in every corner of the organization and piece it together yourself.
It’s no different in software engineering. When a piece of software looks like that, we call it a Big Ball of Mud.
A Big Ball of Mud, in Brian Foote and Joseph Yoder's memorable description, is a haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle. These systems show unmistakable signs of unregulated growth and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated.
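To make "promiscuous sharing" concrete, here's a contrived toy sketch of my own (none of these names come from any real system): distant parts of a program all reading and writing one global blob of state, each silently depending on the others having run first.

```python
# Hypothetical illustration: auth, cart, and billing code all reach into
# the same global dict. Nothing is explicit about who depends on what.
STATE = {"user": None, "cart": [], "tax_rate": 0.08, "last_error": None}

def login(name):
    STATE["user"] = name            # auth code writes global state

def add_item(price):
    STATE["cart"].append(price)     # cart code mutates the same dict

def checkout_total():
    # Billing silently assumes login() and add_item() already ran,
    # and that tax_rate still means what it thinks it means.
    if STATE["user"] is None:
        STATE["last_error"] = "not logged in"
        return None
    return sum(STATE["cart"]) * (1 + STATE["tax_rate"])

login("ada")
add_item(10.0)
total = checkout_total()  # works only because of invisible ordering
```

Every function here can break every other function, and you can't tell from any one of them. That invisible coupling is the mud.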
If you've been building software for long, you've likely had a tour of duty in one of these monstrosities. Every change is another opportunity for failure to ripple throughout the system. You start to seriously question why you ever got into this field.
A piece of software with boundaries in the wrong place is a whole other kind of pain. The wave of interest in microservices in the 2010s brought about a whole generation of what we've come to call the Distributed Monolith: a system with "distinct" services that are actually deeply tangled up with each other. You often can't make even a simple change to the system without updating multiple services and coordinating a simultaneous release. Every inch of progress is exhausting.
Separation of Concerns isn't merely some overwrought principle peddled by purists too busy fussing over theoretical rightness to ever actually ship something. (Though without a doubt the purists do enjoy fussing over it.) It is fundamental to building effective, resilient, malleable, performant, secure, maintainable software. In other words, high-quality software.
In software engineering, there's a pair of factors we must confront to prevent the system's natural trend toward disorder.
Interdependency is the degree to which a part of the system relies on or is relied upon by other parts of the system or other systems, especially in non-obvious ways.
Comprehensibility is the degree to which a part of the system can be easily understood for the sake of making accurate changes with predictable results.
These two key factors have an inverse relationship. As interdependency increases, the amount of the system you must account for to make any change grows, and comprehensibility plummets. As a result, the risk of failure climbs sharply.
Striking the right balance is all about meaningful boundaries. How do you reduce interdependency? Separate the system into distinct parts and make their interactions explicit. That’s the core idea behind Separation of Concerns.
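As a sketch of what "distinct parts with explicit interactions" can look like in practice (again, a toy example of my own, not anyone's prescribed design), here is the same cart-and-billing idea with a boundary drawn: each part owns its own data, and the only way information crosses the line is through arguments and return values.

```python
# Hypothetical illustration: the cart owns its items; billing depends
# only on what it is explicitly handed. No shared global state.
from dataclasses import dataclass, field

@dataclass
class Cart:
    items: list = field(default_factory=list)

    def add(self, price: float) -> None:
        self.items.append(price)

    def subtotal(self) -> float:
        return sum(self.items)

def checkout_total(cart: Cart, tax_rate: float) -> float:
    # Everything billing needs is visible in its signature.
    return cart.subtotal() * (1 + tax_rate)

cart = Cart()
cart.add(10.0)
total = checkout_total(cart, tax_rate=0.08)
```

The logic is the same, but now the interdependencies are spelled out at the boundary: to understand `checkout_total`, you read `checkout_total`.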
(We’ll dig into how to identify and organize those distinct parts of the system in an upcoming post.)
You can often make tremendous progress in a greenfield project by throwing caution to the wind, but the cost of mixing everything together will inevitably catch up with you. We spend most of our time working within existing code, and once a software system becomes difficult to reason about, every aspect of quality takes a major hit when someone makes a change. To ignore Separation of Concerns is to invite the sclerotic disasters discussed earlier.
At the end of the day, what does Separation of Concerns mean for the business and the people involved?
- Faster feature development because changes don’t require a complete, expansive understanding of the entire system.
- A far better chance of hitting estimate windows because you won’t discover halfway through that your changes are tangled up with and complicated by other parts of the system.
- A much tidier codebase when things need to be removed or refactored, which contributes to future comprehensibility.
- Protection against costly failures since problems in one spot are far less likely to ripple throughout the system. Meaningful boundaries help contain the blast radius when something goes wrong.
- A quicker recovery when things do go wrong because the issues are likely already isolated, and bug fix testing doesn’t need to be comprehensive.
- Less turnover thanks to happier developers who spend their mental fuel on the fun work of shipping new features instead of constantly getting pulled away to troubleshoot and untangle the knots in the system.
- Quicker onboarding, which means new developers are meaningfully contributing in days, not months.
Sure sounds to me like it’s worth the cost of being a little more diligent about system design.
Many thanks to Ryan Singer for the enlightening Synthetic A Priori podcast episode Design as a Multi-Scale Network, Risk as Network Structure which was instrumental in clarifying my thinking here. If you’re interested in getting into the weeds on the theory behind all of this, I’d highly recommend giving it a listen.