“Everything I’ve ever done has ultimately failed”

One of the positive experiences to come from the pandemic is the chance to connect with people across the world who share an interest in systems change. In many webinars and video conferences, it's pleasing and affirming to find a similar passion to do things differently and to experiment with new ways to tackle some of the more intractable issues faced by public services.

A recurring refrain is the importance of foregrounding learning. This suggests significant changes to how we think about evaluation, untangling it from the trap of performance management that undermines useful data and ‘corrodes learning’. In parallel, there is a need to think differently about accountability. 

Nobody I've heard argues that accountability isn't important; after all, we are talking about spending public money. But true accountability requires learning. In a recent Centre for Public Impact/ANZSOG webinar, Dr Subho Banerjee, a former Deputy Secretary in the Australian Public Service, suggested that being reflective and having data feedback loops produces better advice. He went on to suggest that a better form of accountability than targets and outcomes would be to ask, "did you learn anything?", acknowledging along the way that the 'politics of experimentation are murderously difficult'. Getting to the point where this kind of question can be asked is not easy, but it's worth thinking about how we might begin to answer it. That would require a running narrative of the decisions we make as we go along: how our original assumptions are being challenged or affirmed, what difference this makes to our actions, and with what results.

I call this action inquiry: an embedded, evaluative learning practice. It is not simply reflection on our experience, but a more ambitious and challenging reflexive process of thinking with others about our own ways of thinking: our assumptions, purpose, values and actions.

Resisting the notion that 'it's not the proper work', we need to create space and time for very different forms of learning practice as an integral part of our experimentation. Toby Lowe of CPI makes the helpful observation that the people who are 'doing the accountability' are very often not included in the experimentation. In such situations, the old ways readily impose themselves through performance frameworks of KPIs that bear very little relation to the actual work in hand, and that aren't measures of the changes in perspective, power and participation needed for system change. We need to co-create meaningful measures that provide timely feedback to support learning in action, so that we track what matters because it matters, rather than simply because the data is available.

Across the world, people are keen to hear examples of success in changing ways of working and adaptive practice.  Yet, when asked for examples of mainstream or large-scale system change, commentators might seem to dodge the question.  They fairly suggest that there’s lots of innovation happening, but it’s very often hidden.  Some is small scale, below the radar, where people have managed to carve out a small ‘permission space’ to try something different.  There’s an ‘asymmetry of risk’, such that credit for success will be shared, whilst we fear that failure will lead to individual blame.  Who wouldn’t be cautious in this culture?  Yet, perhaps this small-scale operation is inevitable and not at odds with our big ambitions for change at scale.  System change becomes less daunting when we accept that we have to start where we are, do some good things at a smaller level, allied with a ‘coalition of the willing’, be prepared to show people the difference that’s being made and invite them to join in.  You might call this ‘nurturing emergent development’ to achieve both scale and sustainability. 

But there's another aspect to this in situations 'where nothing is clear, and everything keeps changing'. Myron Rogers recently gave a fascinating answer to the 'dreaded question' about examples:

“Examples?  Yes – and everything I’ve ever done has ultimately failed. Failed in the sense that it just doesn’t persist, through time and space. It changes, it adapts, it evolves, it moves into a different direction. You can’t say ‘well, this is what we intended to do’, and four months later, ‘this is what we got’, because if you’re not paying attention, at five months later, the way in which it changes itself, the very change you’re trying to do, changes.”

He goes on to propose that the most important thing we can do is nurture the capability to have 'learning-full conversations', in which we look at what's actually gone on, how it measures up to what we are trying to do, whether it works or not, and what happens as a result. This ability to learn is what will persist through time and space.

In this way we will be able to learn from successes (however small) and failures (however fleeting) as we study 'the articulations, workarounds and muddling-through' of our work as it unfolds over time. We need to go beyond saying 'yes, we have insights', or talking of 'lessons learned', to being able to say how our thinking and practice have changed in the light of our learning, and with what consequences for those we seek to serve: a true accountability for learning and action.
