This post is part of an ongoing series on the books that I have read as part of my continual professional development (CPD). All of my CPD posts are available at the following link: Continual Professional Development
This is not a software book. Donella Meadows was an environmental scientist and systems analyst who spent her career studying how complex systems behave, fail, and adapt. Her book Thinking in Systems: A Primer is about ecology, economics, and global resource flows. And yet it’s one of the most useful things I’ve read for understanding how software teams, architectures, and organisations actually work.
The timing feels relevant. We’re in a moment where generating code is cheaper than it’s ever been. A good Large Language Model (LLM) can produce working functions in seconds. But the skill that hasn’t been automated, and won’t be any time soon, is understanding why that function exists, where it belongs in the broader system, and what happens when it interacts with everything else. That’s systems thinking.
I’ve spoken with developers who resist AI-assisted tooling because “I like the typing part.” I understand the instinct, genuinely. But it’s a bit like a worker on a car manufacturing line resisting robotic welding systems because they enjoy the sparks. The welding was never the job. The job was building a car that works. The typing was never the job either. The job was building a system that solves a problem. Meadows’ book helped me articulate why that distinction matters more now than ever.
You Set Up the Conditions
The flu virus does not attack you; you set up the conditions for it to flourish within you.
This single sentence reframes how you think about failure. When a production incident happens, the instinct is to find the person who caused it. But Meadows’ framing asks a different question: what conditions did the system create that allowed this failure to flourish?
A developer who pushes a breaking change to production didn’t “cause” the incident in isolation. The system allowed it. Missing tests, absent code review, no staging environment, no deployment gates. Those are the conditions. Fix the conditions and you prevent the next incident. Blame the individual and you’ve just taught everyone else to hide their mistakes. I wrote about this dynamic in depth in my post on psychological safety in development teams; the connection between systemic conditions and team behaviour runs deeper than most leaders realise.
The same applies to AI-assisted development. When an LLM generates code that introduces a bug, the question isn’t “why did the AI produce bad code?” It will sometimes produce bad code; that’s a given. The question is why the system allowed that code to reach production. Missing tests, no human review step, no fitness functions. The conditions, not the individual keystroke, determine the outcome.
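To make “fitness functions” concrete, here’s a minimal sketch of one expressed as an ordinary test, so the condition is checked by the pipeline rather than by memory. The package layout (`src/domain`, `src/web`) and the dependency rule are hypothetical, and it assumes a pytest-style runner; the shape of the idea matters more than the specifics.

```python
import ast
from pathlib import Path

# Hypothetical rule: the domain layer must never import from the web layer.
FORBIDDEN = {"domain": {"web"}}


def imported_modules(path: Path) -> set[str]:
    """Return the top-level module names imported by a Python source file."""
    tree = ast.parse(path.read_text())
    names: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names


def test_domain_does_not_depend_on_web():
    """Fitness function: fail the build if the dependency rule is broken."""
    for layer, banned in FORBIDDEN.items():
        for source in Path("src", layer).rglob("*.py"):
            overlap = imported_modules(source) & banned
            assert not overlap, f"{source} imports from a forbidden layer: {overlap}"
```

A check like this doesn’t care whether the offending import was typed by a person or generated by an LLM; the condition is enforced either way.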
Judge by Behaviour, Not by Stated Intent
If a frog turns right and catches a fly, and then turns left and catches a fly, and then turns around backward and catches a fly, the purpose of the frog has to do not with turning left or right or backward but with catching flies. If a government proclaims its interest in protecting the environment but allocates little money or effort toward that goal, environmental protection is not, in fact, the government’s purpose.
The frog analogy is one of those ideas that, once you’ve heard it, you can’t stop applying it. Judge a system by what it does, not by what it claims to do. If your organisation says it values code quality but ships untested features under deadline pressure every quarter, code quality is not its actual purpose. Shipping features is. The stated intent and the observed behaviour are two different things.
This is why DORA (DevOps Research and Assessment) metrics matter. If you want to know what your team actually optimises for, look at what it measures and rewards. If you measure velocity but not stability, stability isn’t the purpose. If you measure lines of code but not the outcomes those lines produce, you’re optimising for typing, not for systems.
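As a rough illustration, here’s what measuring both sides of that trade-off could look like, using two of the DORA metrics. The `Deployment` record and the sample data are hypothetical stand-ins for whatever your deployment pipeline and incident tracker actually record.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Deployment:
    day: date
    caused_incident: bool  # did this change trigger a production incident?


def deployment_frequency(deploys: list[Deployment], days_in_period: int) -> float:
    """Throughput: deployments per day over the period."""
    return len(deploys) / days_in_period


def change_failure_rate(deploys: list[Deployment]) -> float:
    """Stability: fraction of deployments that caused an incident."""
    if not deploys:
        return 0.0
    return sum(d.caused_incident for d in deploys) / len(deploys)


deploys = [
    Deployment(date(2024, 6, 3), caused_incident=False),
    Deployment(date(2024, 6, 5), caused_incident=True),
    Deployment(date(2024, 6, 10), caused_incident=False),
]
print(f"Deployment frequency: {deployment_frequency(deploys, 30):.2f}/day")
print(f"Change failure rate: {change_failure_rate(deploys):.0%}")
```

Track only the first number and you’re watching the frog’s stated intent; track both and you’re watching what it actually catches.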
The same logic applies to team structure. If the system produces more managers than makers, then management overhead is the system’s actual output, regardless of what the org chart says the purpose is. Meadows would tell you to stop listening to the org chart and start watching the frog.
Start Simple
Life started with single-cell bacteria, not with elephants.
One of the shortest quotes in the collection, and one of the most useful. Every complex system that works evolved from a simple system that worked. This is the strongest argument against “big bang” rewrites, over-engineered architectures, and trying to build the perfect microservices platform before you’ve shipped your first feature. As I explored in my CPD review of Fundamentals of Software Architecture, the value of iteration in architecture cannot be overstated.
When developers use AI to generate entire applications from prompts, the result often looks like an elephant that was never a bacterium. It has complexity without the evolutionary pressure that would have made that complexity necessary. The modules exist because the LLM generated them, not because the system’s growth demanded them. Real systems earn their complexity through iteration; they start simple, encounter problems, and add complexity where the problems require it. Skipping that process produces something that looks sophisticated but collapses under real-world pressure.
Build the Feedback Loop into the System
‘Intrinsic responsibility’ means that the system is designed to send feedback about the consequences of decision making directly and quickly and compellingly to the decision makers. Because the pilot of a plane rides in the front of the plane, that pilot is intrinsically responsible. He or she will experience directly the consequences of his or her actions.
The pilot feels the consequences of their decisions immediately. In software, we often build systems where the people making decisions are insulated from the consequences. The architect who designs the system doesn’t operate it at 3am. The product manager who sets the deadline doesn’t debug the incident it causes. The further the decision maker sits from the feedback, the worse the decisions get.
“You build it, you run it” is the DevOps version of intrinsic responsibility. When developers are on call for the systems they build, the feedback loop tightens. Quality improves, not because anyone mandated it, but because the system’s structure creates the incentive. You write better code when you know you’ll be the one woken up if it breaks.
Teams that use AI to generate code but have no feedback loop are flying a plane from the back seat. The AI produces output; the system tells you nothing about whether that output works in context. Without tests, without monitoring, without on-call ownership, you’ve removed the intrinsic responsibility that keeps the system honest. The AI doesn’t care if the code breaks at 3am. You need to make sure someone does.
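As a sketch of what building that feedback loop into the system might look like, here’s a minimal deployment gate. The individual checks and thresholds (`run_tests`, a one percent error-rate ceiling, a single required approval) are hypothetical placeholders; the point is that the checks live in the pipeline, where they apply equally to human-written and AI-generated code.

```python
import subprocess


def run_tests() -> bool:
    """Gate 1: the test suite must pass before anything ships."""
    return subprocess.run(["pytest", "-q"]).returncode == 0


def error_rate_acceptable(current_rate: float, threshold: float = 0.01) -> bool:
    """Gate 2: don't deploy while production is already unhealthy."""
    return current_rate <= threshold


def reviewed_by_human(approvals: int) -> bool:
    """Gate 3: at least one person has read and approved the change."""
    return approvals >= 1


def may_deploy(current_error_rate: float, approvals: int) -> bool:
    """The system, not an individual's diligence, decides what reaches production."""
    return (
        run_tests()
        and error_rate_acceptable(current_error_rate)
        and reviewed_by_human(approvals)
    )
```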
Know When to Change Course
‘Stay the course’ is only a good idea if you’re sure you’re on course. Pretending you’re in control even when you aren’t is a recipe not only for mistakes, but for not learning from mistakes. What’s appropriate when you’re learning is small steps, constant monitoring, and a willingness to change course as you find out more about where it’s leading.
Sunk cost is the enemy of good systems thinking. Teams that have invested two years in a migration are reluctant to admit it’s going wrong. Leaders who championed a technology choice defend it long after the evidence has turned against them. “Stay the course” becomes a mantra for avoiding the uncomfortable conversation about whether the course was ever right.
Meadows’ prescription is straightforward: small steps, constant monitoring, willingness to change. This is iterative development in a single sentence. It’s also the strongest argument for treating AI-generated output as cheap experiments rather than finished artefacts. When you can prototype a solution in hours rather than weeks, the cost of throwing it away and trying a different approach drops dramatically. Systems thinkers recognise this; they treat speed not as a way to produce more output, but as a way to learn faster and change course sooner.
Redraw the Boundaries
Mental flexibility – the willingness to redraw boundaries, to notice that a system has shifted into a new mode, to see how to redesign structure – is a necessity when you live in a world of flexible systems.
In software, boundaries are service boundaries, team boundaries, module boundaries. The teams that struggle most are the ones that treat these as permanent. Conway’s Law tells us that system structure mirrors organisation structure; changing one often requires changing the other. But recognising when to change is the harder skill.
When a system shifts into a new mode (scaling from startup to growth stage, adopting AI tooling, moving to cloud-native infrastructure), the old boundaries may no longer serve. The team structure that worked for five developers doesn’t work for fifty. The service boundaries that made sense for a monolith don’t make sense for a distributed system. Meadows’ point is that flexibility isn’t optional; it’s a survival skill. The organisations that thrive in changing conditions are the ones willing to redraw their boundaries rather than defend them.
You Can’t Just Say ‘Now All Change’
Social systems are the external manifestations of cultural thinking patterns and of profound human needs, emotions, strengths, and weaknesses. Changing them is not as simple as saying, ‘now all change,’ or of trusting that he who knows the good shall do the good.
Teams are social systems. You can’t mandate cultural change by sending an email or updating a wiki page. Changing how a team works requires understanding the underlying needs, emotions, and incentive structures that produce the current behaviour. That understanding is emotional intelligence, and it’s the skill most tech leaders were never formally taught.
This is why “we’re adopting AI-assisted development starting Monday” fails. The social system has inertia. Developers who identify with the craft of writing code aren’t being stubborn; they’re responding to a deep professional identity that has served them well for years. Telling a welder “you don’t weld anymore; the robot does” without addressing what their new role is, what skills they’ll develop, and how their expertise still matters is how you lose your best people. The same applies to developers and AI. Effective adoption means understanding what developers actually value about their work (problem-solving, craftsmanship, the satisfaction of making something work) and showing them how AI amplifies those things rather than replacing them.
Watch the Whole System
When you’re walking along a tricky, curving, unknown, surprising, obstacle-strewn path, you’d be a fool to keep your head down and look just at the next step in front of you. You’d be equally a fool just to peer far ahead and never notice what’s immediately under your feet. You need to be watching both the short and the long term – the whole system.
This is the tension every tech leader navigates: sprint-level delivery versus long-term architectural health. Focus only on what’s immediately in front of you and you’ll accumulate technical debt that eventually stops you moving at all. Focus only on the long-term vision and you’ll miss the obstacles that trip you up today. Systems thinkers hold both views simultaneously, and that’s the skill that separates effective leaders from busy ones.
The Job Was Never the Typing
Meadows wrote this book about ecosystems and economies, not about software. But the principles transfer because software systems are systems, governed by the same dynamics of feedback, purpose, boundaries, and adaptation that govern everything else.
Car manufacturing didn’t eliminate the need for skilled workers when robotic welding arrived. It changed what “skilled” meant. The workers who thrived were the ones who understood the whole production line, not just their station. They could see how a change in one process affected quality three steps downstream. They could identify when the system needed restructuring, not just when a single weld needed redoing.
Software engineering is going through the same transition. Code generation is the robotic welding arm. It does one part of the job faster than any human. But it can’t tell you what to build, where the feedback loops should be, whether the system’s stated purpose matches its actual behaviour, or when to change course. Those are systems thinking skills. They were always the real job. AI just made that harder to ignore.
If your team is navigating the shift from code-first to systems-first thinking, or if you’re trying to integrate AI tooling in a way that actually works, that’s a conversation I have regularly with clients. Let’s talk.
