Since I managed to break my writers’ block on decision-making models last week, I want to follow up with a brief discussion of the use of narrative in presenting decision models to an audience.
In my first article on decision-making models I emphasised that a model must serve a purpose. In explaining our models to others, I want to highlight that there are two purposes: the first is to convince the audience; the second is to convey insights into the model. This is the opposite of how scientifically trained modellers typically think about communicating results, but it is far and away the prioritisation of most top scientific communicators around the world.
Convincing strategies
In the human arena, there are two extremes of strategy often used when trying to convince people that a decision has been made correctly. The first involves invoking seniority: “Hi, I’m the boss; this is how it is.” This strategy sometimes leverages a history of past performance: “I’m not the boss, but I’m the PhD in X who has spent 10 years working on this.” We all have to choose whether to trust this person, but once we do so we are offered no further insight into their decision-making process. The other extremal strategy, which I label the TED strategy in honour of TED talks, involves constructing a simple narrative. You convince people with a story, invoking multiple layers of metaphor and simile, calming their doubts with the sense of domain mastery that your beguilingly simple explanation invokes in your audience.
Like all things, extremes have problems. Having gone through years of formal education, I am naturally somewhat more sympathetic to the expert strategy. That is, until I’ve seen it in action one time too often. I do a lot of work advising medical start-ups and conducting due diligence on their deep tech. As my own expertise grows, I find it increasingly hard to watch people who would (hopefully) never lie in their clinical practice knowingly lie to the public about their product. I have seen it too many times now for it to be mere anecdote. A large number of expert medics are so used to relying on their position in a hierarchy to convince their own subordinates that they reflexively rely on this authority even when it is completely inappropriate. I don’t think they set out to lie; perhaps they are less experienced than the rest of us at working in environments where they are not expected to know everything, and they lack the appropriate coping skills.
At the other extreme, narrative is something I have always instinctively mistrusted. Children’s stories are designed to make you feel good and to instil basic lessons, and then you’re supposed to grow out of them. The world is messy and complicated, and I was raised to question everything. That said, I spent 3 years working for an (academic) boss who relies enormously on narrative for his career, and I learned a lot from him. I will even go so far as to say that, in some cases, a simple narrative allows expert practitioners to take home a much more solid take-away without diluting the depth and quality of the work.
I do think that in recent years we’ve crossed a threshold at which storytellers believe it is more important to convince their audience that they are right than for the audience to take away a true impression of what they were talking about. Reputable organisations such as TED and NPR place so much emphasis on narrative nowadays that I feel they have moved into the political sphere – where convincing people probably should matter. I think that some ideas should not be understandable by your grandmother. Giving a version of the story which, through deliberately deceptive means, leaves everybody with the impression that they have understood the issue leads to a devaluation of information and expert knowledge in our society.
Both of these extremal convincing strategies say nothing about how evidence is actually accumulated; they focus on convincing the audience of the rightness of a particular decision. Being by nature a moderate, I think a middle approach is possible. I believe it is entirely appropriate to sometimes tell a junior team member that we don’t have time to delve into aspects of a decision that are beyond their skillset, thus invoking the seniority argument. Balancing this, I also believe that time must be set aside on occasion (i) to contribute to the development of those team members by teaching them how decisions are reached, and (ii) to explain decision-making processes, in general, to appropriate stakeholders.
Similarly, I have enormous respect for good storytellers. It is clear that people love a good narrative structure. As long as the narrative is faithful to the underlying model, I will usually prefer a well-explained model to a highly complicated one with a fractional improvement in relative performance. I spent two years attending as many visiting lectures at the University of Chicago as I could find time for. I made it my mission to understand not just the basic science but also why the person standing in front of me was so successful. Every single visitor – all of whom were top-flight academics – wove remarkable narratives around their work. This was the key to their success. One or two sacrificed accuracy on the altar of a good story, which seems to me a horrible waste of my time as well as their own, but in general they combined good scientific ability with exceptional storytelling.
Explaining models
Narrative helps to convince an audience to believe your decision making model is correct. But does it help to give insights into the model?
This is a harder question than it first appears. As an expert modeller, I can say that we practitioners spend years practising explaining our models to one another. Some degree of narrative definitely helps move things along faster than simply handing over the equations and coefficient values. In some cases, the specific wording of the narrative also conveys considerable additional information about the appropriate use-cases and known limitations of the model.
Strong narratives are particularly useful for remembering models. It is much easier to learn and remember broad classes of models via simple narratives about them than by deeply studying their mathematical details.
This entire discussion – which I am deliberately not going into in too much detail – is essentially mirrored in the AI community by the debate between interpretable AI (iAI), explainable AI (xAI) and black-box models. Interpretable models are the closest to a narrative structure. The black box is the expert practitioner, where past performance is a proxy for future behaviour. And the explainable model is where we have a process explanation rather than an interpretation of how the model works.
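To make the contrast concrete, here is a minimal sketch in Python. The function names, rules and thresholds are all my own invention for illustration, not drawn from any real library or lending model: the point is only that an interpretable model can hand its audience the narrative behind each decision, while a black-box model offers just a verdict backed by its track record.

```python
def interpretable_approve(income, debt):
    """A toy loan decision whose logic reads like a story."""
    ratio = debt / income
    reasons = []
    if ratio > 0.4:
        reasons.append(f"debt-to-income ratio {ratio:.2f} exceeds 0.4")
    approved = not reasons
    # The narrative travels with the decision.
    return approved, reasons or ["all checks passed"]

def black_box_approve(income, debt):
    """The same decision, but the internals are opaque to the audience."""
    # (Imagine a trained ensemble here; all we are shown is the verdict.)
    return (0.3 * income - 0.9 * debt) > 0

approved, story = interpretable_approve(income=50_000, debt=30_000)
print(approved, story)                                # decision plus its reasons
print(black_box_approve(income=50_000, debt=30_000))  # verdict only
```

The interpretable version can be explained to a stakeholder in one sentence per rule; the black-box version can only be trusted on the strength of its past performance.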
I try to fit my explanation to my audience. I doubt that I get it right every time. I would certainly like my audience to appreciate my models in as much detail as I do, but this is not realistic: if I spend considerable effort building a model, I cannot expect others to ‘get it’ from a simple explanation. So I have a tendency to over-explain. The mark of professionalism is to recognise this tendency and try to mitigate it, which is why I practise presenting my models. If my audience cannot be expected to understand my models in all their nuance, then they must at least be able to understand the aspects most relevant to their own purposes.
I hope that I get this right more often than I get it wrong.
Lesson #2: in decision models, narrative explanations must serve the purposes and abilities of the audience, not the original modeller.
Something I forgot to include in the main text of this article is how much the process of explaining and re-explaining my models has contributed to my development as a modeller. This process has accelerated since I began this blog: formalising my thoughts helps me understand my own work better.