Making the right errors
How Precision Estimation helps our teams pay attention to the right errors
When I was at university I started going to life drawing classes. I remember being struck by the richness of the visual world that I hadn’t noticed before. When you’re trying to make a drawing look realistic and represent your subject, you have to capture subtle shades, colours, contours and textures. These qualities are always there in our world, but unless you’re an artist, for most of your life you won’t pay much attention to them.
At the same time, when you’re trying to work out whether your rice has turned bad, or you’re foraging for mushrooms, your sensitivity to colour and shade naturally becomes more finely attuned. How does this work? How does my mind decide what is important one moment and unimportant the next?
Another question - as someone who wants to build an Intelligent Organisation, what should I help my team pay attention to? I have an infinitely rich landscape of information; where should I look?
The answer to both sets of questions may lie in the topic of Precision Estimation.
Precision Estimation is part of the Predictive Processing model.
[A quick refresher on Predictive Processing:
We use models and predictions about the world as the starting point of our cognition (e.g. this cup of coffee is hot)
When our sensory experiences don’t match the model (this cup feels cold to my touch), this is called prediction error
When we experience prediction error we update our model (the coffee and cup are cold)
Cognition works by quickly sampling the world, then testing and updating our internal models whenever we experience prediction error. Our minds work to reduce prediction error, both by updating our models and through our actions.]
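If you like to think in code, here’s a toy version of that loop. To be clear, this is my own illustrative sketch, not a formal model - the scenario, names and numbers are all made up:

```python
# A toy predict -> sense -> update loop: hold a prediction, compare it
# against what is sensed, and use the prediction error to update the model.
# Purely illustrative - the scenario and numbers are made up.

def update(model, observation, learning_rate=0.5):
    """Nudge the model's prediction towards what was actually sensed."""
    error = observation - model["predicted_temp"]     # prediction error
    model["predicted_temp"] += learning_rate * error  # update the model
    return error

coffee_model = {"predicted_temp": 60.0}  # prior belief: "this coffee is hot"

# Successive sips of a coffee that has been cooling on the desk:
for sensed_temp in [58.0, 40.0, 22.0]:
    error = update(coffee_model, sensed_temp)
    print(f"sensed {sensed_temp:.1f}, error {error:+.1f}, "
          f"new prediction {coffee_model['predicted_temp']:.1f}")
```

Each pass through the loop shrinks the gap between the model and the world - which is exactly the ‘reduce prediction error’ story above.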
How do we know which information to pay attention to? Sometimes my senses register something that should trigger prediction error and change my model, but the update doesn’t actually happen.
Here are a couple of examples of how this can happen:
You know somewhere in your mind that the coffee has been sitting out for an hour; nonetheless you have a model and a prediction that the coffee is hot. You pick it up and your fingers feel the cold, but you still expect it to be hot when you drink it. It’s only when it reaches your mouth that you properly pay attention to the cold coffee, experience Prediction Error and update the model.
You are 5 years old and lost in the supermarket aisles, you think that the person wearing your mother’s trousers but not her top and with a different face is your mother. No prediction error. You run up to them and hug their leg only to discover to your horror that it’s just a stranger wearing similar trousers. Prediction Error. Model updated.
In his 2015 paper Andy Clark describes how part of predictive processing involves the relative weighting of the usefulness of different prediction errors. If we are confident in our predictions about a particular sense, we turn down the ‘volume’ on errors coming from it; if our predictions are uncertain, we turn the volume up. In a dark room my predictions about what I’m seeing are shaky, so I pay lots of attention to the visual information I do get. So not only do you start with predictions and minimise prediction error, you are also constantly adjusting which errors to pay attention to.
This method allows us to use lots of existing prior information about our world to help us process our experience. The fancy term for this is Bayesian inference, which boils down to ‘I take into account past information when I think about the information in front of me’.
If most of my experiences with picking up my coffee mug involve it being hot, my body down-weights the sensory information from my fingers that it is in fact cold. This is Precision Estimation. It is an efficient and useful strategy the vast majority of the time. It’s only when we experience the peculiar and sometimes embarrassing errors of hugging a random leg in Tesco that this approach feels costly or noticeable. We are unaware of all the other times when our predictions and Precision Estimates have served us effectively - which is nearly all the time!
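If you’d like to see the weighting in numbers, here’s a toy sketch of the standard Bayesian way of formalising it, where a prediction error is weighted by the relative precision (inverse variance, i.e. reliability) of the signal versus the prior. Again, the function and all the numbers are my own illustration, not anything taken from Clark’s paper:

```python
# A toy precision-weighted belief update (Gaussian/Kalman-style).
# Precision = 1 / variance: high precision means a confident belief
# or a reliable signal. All numbers are made up for illustration.

def precision_weighted_update(prior_mean, prior_precision, obs, obs_precision):
    """Move the belief towards the observation, but only in proportion
    to how reliable the observation is relative to the prior."""
    error = obs - prior_mean
    weight = obs_precision / (prior_precision + obs_precision)
    posterior_mean = prior_mean + weight * error
    posterior_precision = prior_precision + obs_precision
    return posterior_mean, posterior_precision

# Strong prior that the coffee is hot (60), cold signal (22) from the
# fingers, which we treat as unreliable: the belief barely moves (~56.5).
print(precision_weighted_update(60.0, 10.0, 22.0, 1.0))

# The same cold signal from the mouth, treated as highly reliable:
# the belief jumps most of the way to 'cold' (~28.3).
print(precision_weighted_update(60.0, 10.0, 22.0, 50.0))
```

Same error, very different updates - the only thing that changed is how much precision we assigned to the signal. That, in miniature, is turning the ‘volume’ of a prediction error up or down.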
Probably right now as you read this blog post you are down-weighting sensory information coming to your body in order to make yourself sensitive to the prediction errors coming into your eyes from these words. And these words. And these words.
This answers the question I asked at the beginning about shade and colour. We down-weight prediction errors related to the colour or shade of things unless they are very relevant to the task at hand. In this way, Precision Estimation explains how attention works.
As we get older, getting better at paying attention to the right prediction errors for our goals becomes intelligence and wisdom. We learn to be more sensitive to which information is important and relevant in the situations we find ourselves in.
From a team point of view, this is a great model for thinking about teams at different stages of their development and what they need. A startup trying to build a new online game will be very sensitive to prediction errors coming from their potential customers about the essence of the gameplay, and less sensitive to those about the cost and efficiency of running the team.
Equally, if I’m a manager at Toyota and I hear that people don’t like cars any more, I might down-weight that signal because it’s at odds with the overwhelming majority of my experience, and I’ll be much more sensitive to the prediction errors relating to the costs of importing or manufacturing different parts of the cars I make.
This can be both adaptive and maladaptive. If Toyota spent their time worrying about and listening to signals about whether people really like cars, this would be a wasteful strategy, coming at the cost of their ability to pay attention to other prediction errors relevant to the problems Toyota ought to be solving. Equally, being grounded in this large set of historically useful assumptions creates a kind of institutional inertia. That makes it easier for upstarts more sensitive to outlying trends and data to build things Toyota couldn’t, and to snap up market share through innovation.
So how do you take this insight to your team? As a starter, I try to understand with my teams how known or unknown their domains are. I then help them tune their sensitivity to different sources of information and prediction errors accordingly.
For example, if my team is a startup or a Government alpha team, they are operating in an area where the very basics of what they are trying to do are unknown and uncertain. They need to discover whether there’s any need for the service or product they are building and whether anyone would use it. The tools you use in a situation like this are Business Model Canvases, Customer Interviews and low-fidelity prototypes.
They should be highly sensitive to Prediction Errors relating to the basic need and use case of their product. I wouldn’t expect them to pay much attention to later-stage concerns like a super slick software development process, or optimising the cost of user acquisition.
On the other hand, there are more mature teams working in a more known domain. Sometimes I’ll work with a team developing a website or app that people are already using, where the basic assumptions about user needs are well understood. In this case I might help the team pay attention to different information and have sensitivity to Prediction Errors coming from other sources.
Whilst still paying some attention to basic user needs and signals from the market, they may also be paying attention to different concerns - making their app faster, or working out how to run their service more cheaply or efficiently. eBay doesn’t need to worry about whether people want to auction things online any more, so they can down-weight prediction errors coming from that area.
In a future post I’ll map out how precision estimation and predictive processing could map to the Cynefin Framework and even help you choose between sense-making frameworks and tools. With that teaser in place, I’ll suggest that you subscribe to make sure you get these Agile On The Minds straight to your inbox.
If you got this far please give this post a like or a share, it helps with findability and my personal human motivation to keep writing :D. If you don’t like it leave a comment and excoriate it.
A couple of other things that are happening that you might like:
I’m running an online Meetup group to talk about this kind of thing. I’ll be presenting at our next meeting on 29th Feb about a Cognitive Science perspective on knowledge. Maybe you’d like it.
If you’re an agilist looking to up your game, I’m running a retrospectives Masterclass on March 5th. We’ll be looking at using the team’s lifecycle stage to plan the right retrospective for the right time. I’d love to see you there.


