There’s something missing.
The newly released UK apprenticeship standard for Systems Thinking locates systems practice “in arenas where complex problems exist that cannot be addressed by any one organisation or person, but which require cross-boundary collaboration within and between organisations.” This is great. It neatly identifies why the language of systems is coming back into vogue: as the world becomes more complex, we’re waking up to the fact that problems can’t simply be pinned down to one person, one team, one organisation, one population.
But I want to look at this quotation more closely, because it highlights what for me is a big gap in the systems world. In particular, I want to pull out the two concepts of “complex problems” and “cross-boundary collaboration”. Firstly, let’s just pause to appreciate what a wonderful thing it is to be able to read these two phrases in the same sentence in a government-backed standard! So how do systems practitioners actually create “cross-boundary collaboration” to address “complex problems”? How does it actually work in practice? Well, one of the things a systems intervention will invariably involve is some form of collaborative modelling. I’m using modelling here in a very generic sense; even if nothing is written down, and the intervention simply amounts to a series of conversations across organisational boundaries, this will still have the effect of shaping the mental models of those involved in the conversation.
But that’s not most people’s experience of systems modelling. Across most of the major methodologies, the model will be co-constructed into an explicit, usually visual form by the participants. Here are a few examples:
Now, even to the uninitiated, it’s obvious just from looking at these kinds of models that they are addressing complex problems. In fact, that’s often the point; their creators want them to appear as convoluted as possible in order to emphasise that there are no easy solutions. For most people though, the complexity is just off-putting – the most obvious thing the examples above have in common is that they all look a bit like spaghetti1.
And this is where I think the systems world is missing a trick, because to my mind, once you want to tackle a complex problem with cross-boundary collaboration, and you try to articulate the complexity through a visual model of some kind, then surely you have to address the issue of legibility.
Étienne Wenger, in his work on communities of practice, popularised the concept of boundary objects – the artefacts that span the gap of meaning between different communities. When you are involved in a systems intervention, you are typically trying to get representatives of as many different parts of the system as possible (the communities involved in or affected by the complex problem) into the room at the same time in order to co-create the model. So for the duration of the session, the model is a boundary object, a focus of shared meaning for those taking part. And in my experience, that’s exactly what happens. When you start with a blank sheet of paper and slowly build up a map of the system (obviously assuming skilled facilitation that keeps everyone attentive and involved), the model becomes a record of the shared insights that emerge. The group listens to perspectives they haven’t heard before, overlays them into a composite form, and watches as patterns emerge that no one group by themselves would have predicted. When run well it’s a hugely rewarding experience, and you end up with a map that is rich in shared meaning.
So what’s the problem? The problem is that the meaning stays in the room! The boundary object only holds the shared meaning for as long as the group that creates it stays in one place. It’s practically illegible to everyone else. Even if it’s laid out in a relatively clear form, with clear handwriting and not too many tangled lines, the simple appearance of complexity is enough to put off virtually everyone who wasn’t in the room from even attempting to navigate it. And even if they do, how well equipped are they to determine what the group meant by each of the words they left behind?
So there’s a gap. I think systems practice is missing a whole sub-discipline, which is how to make models of systems meaningful across the system. How to make models that cross organisational boundaries, where you don’t need to have been present at their creation to understand what they are saying, but which nevertheless spark systemic conversations because they don’t just reflect one group’s perspective of the whole. How to help the whole system see itself, if you like, not just the representatives who turn up to systems interventions. What sort of things might fill this gap?
I’ll not try to answer the whole question here, as the main point of this post is just to draw attention to the fact that the gap exists, and to be honest, I don’t think anyone is remotely close to understanding how to fill it. But just to give an indication of the kinds of things that might be involved, I would say we need a much greater appreciation of the extent to which humans naturally make sense of the world through stories, rather than systems. The two are not independent, though. The brain is effectively a pattern-spotting device, tuned to recognise and respond to different situations. A system is an inter-connected set of situations that recur in a predictable way. A story is a collection of these situations told in sequence. So here’s at least one rabbit hole to jump down: What if the next step you took with the kinds of models I showed above was not to turn them into PowerPoint slides, but to turn them into sets of visually interconnected stories? What if instead of just using the output to show everyone how very complex the situation is, you built up to the complexity by laying multiple narrative threads on top of one another, until the overall pattern became unmistakeable?
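To make the system/story relationship concrete, here is a minimal sketch (my own illustration, not anything from the systems literature or from an actual intervention): if you treat a systems map as a directed graph of recurring situations, then a “narrative thread” is simply one path through that graph, and laying the threads on top of one another recovers the full map. All the names and the example map below are invented.

```python
def narrative_threads(edges, start, end, path=None):
    """Enumerate simple paths from start to end in a situation graph.

    Each path is a candidate narrative thread: a sequence of situations
    that can be told as one story.
    """
    path = (path or []) + [start]
    if start == end:
        return [path]
    threads = []
    for nxt in edges.get(start, []):
        if nxt not in path:  # don't revisit a situation within one story
            threads.extend(narrative_threads(edges, nxt, end, path))
    return threads


# Invented example: a tiny fragment of a hypothetical public-services map.
system_map = {
    "funding cut": ["longer waiting lists", "staff burnout"],
    "staff burnout": ["longer waiting lists"],
    "longer waiting lists": ["late diagnoses"],
    "late diagnoses": ["higher treatment costs"],
}

for thread in narrative_threads(system_map, "funding cut", "higher treatment costs"):
    print(" -> ".join(thread))
```

Told one at a time, each printed thread is a legible story; overlaid, they rebuild the “spaghetti” of the original map, which is the build-up-to-complexity move described above.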
If this sounds like an interesting idea, you might want to read some more thoughts on it here.
- Some people criticise these kinds of diagrams from the other direction, i.e. that they are reductionist, fail to adequately capture the emergent nature of the system’s behaviour, treat the complex as if it were merely complicated, imply that the practitioner is not part of the system, and so on. While there’s validity in these concerns, I’d repeat my earlier point about modelling: even the forms of words people use to voice these criticisms of the model are themselves models. There’s no escaping the need to model – as ever, the question is whether or not the model is useful.