Dorner bases his work on a series of computer simulations of real world challenges (such as managing a drought-prone region of Africa).
The advantage of simulations is that they can be run multiple times to compare many different people’s approaches to managing the same system. Dorner uses this to identify the patterns of the majority of players, who end up causing catastrophic droughts and crop failures. He compares these with the minority who learn to manage successfully.
We can only summarise here, but we see useful ideas for facilitators: we too have to engage with complex systems (people!) and try to avoid catastrophes.
One of the key reasons people fail is that they make assumptions about what will work. They get attached to rigid goals and focus on implementing their idea rather than testing it. As a result, they tend to ignore the feedback they get from the system.
In Dorner’s simulations, for example, some participants focussed on reducing tsetse flies. The idea was to allow stronger cattle populations to feed the people. This led to unsustainable cattle growth, which eventually caused a collapse of the ecosystem. Others focussed on solving water shortages by drilling more wells – but these eventually exhausted the water table, leading to a drought.
By fixating on simple outcomes, unsuccessful players missed the subtle signals of the system’s complexity.
The more successful players understood that they were generating hypotheses about the system and then testing them – rather than formulating “truths” and executing them. They were more attentive to feedback, and thus tended to make more decisions, adapting as they learnt.
In our facilitation practice, we often talk about the power of tweaks. We’re interested in how small changes can lead to interesting consequences. We are not operating groups like a machine; instead, we are working with living systems.
We’re often asked if a process is “working” as if there is some simple pass/fail test. We’d argue that it’s more important to notice how things are working, and not to get wedded to a simple outcome. It’s easy to get attached to a list of deliverables and miss the richer learning that can happen when a group of people interact.
One example: many clients get attached to events ending on a high. This can create a lot of pressure to generate a list of actions or end with a boisterous game. We often prefer a more open feedback process that encourages people to share the full range of their experience, including what they are struggling with. This may not create a high, but it usually provides a much greater sense of the aliveness and diversity that happens among groups that work together.
We’ll be exploring these ideas more at our forthcoming workshops in Melbourne and Cambridge.
(We’ve shared this article as part of our twice-monthly newsletter – you can subscribe here.)