Since I managed to break my writer’s block on decision-making models last week, I want to follow up with a brief discussion of the use of narrative in presenting decision models to an audience.
In my first article on decision-making models I emphasized that a model must serve a purpose. When explaining a model to others, there are two purposes in play: the first is to convince the audience; the second is to convey insights into the model. This is the opposite of the ordering scientifically-trained modellers typically assume when communicating results, but it is far and away the prioritisation of most top scientific communicators around the world.
This is my third attempt, over the course of nine months, to write this article. The first attempt foundered on my desire to go into detail on whether explainability is a good characteristic of a model or not. I confess this was overly motivated by my personal frustration at having worked with somebody who “never let the facts get in the way of a good story.” The second attempt got lost in a forest of anecdotes from previous projects; I was trying so hard to knit them together that I failed to make a point. Today, I want to focus on the single most important thing I have learned about developing decision-making models.
I am often asked how I see Data Science in the biomedical industry. I have, of course, many answers, each of which is context dependent. However, one theme I find frequently recurring is a sort of straw-man debate which seems inherently to attract technical practitioners.
The debate is usually structured as follows:

Q: How do you see the validation of medical AI products working in practice?
A: Clinical trials, test-validation sets, blah, blah.
Q: But doesn’t this lead to enormous overheads?
A: Yes, but there are shortcuts.
Q: But if you take these shortcuts, then don’t you run the risk of running into costly failures when you finally run the clinical trials?

It goes on….
I recently had the opportunity to talk with a relatively advanced researcher in machine learning methods. The conversation turned briefly to the study of embeddings when he mentioned that most of his work involves things that can be embedded in Euclidean space. Since I’ve been spending a bit of time thinking about embeddings recently, I asked him some questions to get the official ML take on the subject. I was reasonably gratified to learn that – although most ML engineers don’t think much about embeddings – the research on this topic considers the embedding to be tightly bound to the network architecture. It is not possible to study abstract embeddings, divorced from applications. I fully agree with this point of view.
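To make the point concrete, here is a minimal sketch of what an embedding is in this sense: just a lookup table of vectors in Euclidean space, whose geometry only acquires meaning through the network and training objective it is bound to. The vocabulary, dimensions, and vectors below are invented for illustration (the vectors are untrained, so the distances are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical vocabulary of 5 items, embedded in 3-dimensional Euclidean space.
vocab = ["drugA", "drugB", "drugC", "placebo", "control"]
embedding = rng.normal(size=(len(vocab), 3))  # the learnable lookup table

def embed(token: str) -> np.ndarray:
    """Look up a token's vector. In a real model, this table is trained
    jointly with the rest of the network, which is why the embedding
    cannot be studied divorced from the architecture."""
    return embedding[vocab.index(token)]

def euclidean(u: np.ndarray, v: np.ndarray) -> float:
    """Distance between two embedded items."""
    return float(np.linalg.norm(u - v))

# With untrained (random) vectors, distances carry no meaning yet;
# the training objective is what shapes the geometry.
print(f"distance(drugA, placebo) = {euclidean(embed('drugA'), embed('placebo')):.3f}")
```

The same table trained under two different architectures or losses would induce two different geometries over the same vocabulary, which is the researcher’s point in miniature.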
There are three basic business models in bioinformatics:
Consultancy
Licensing of insights
Selling a tool
In the consultancy model, you are being paid for your time and expertise. The risk lies with the payer (employer) in this case: there is no guarantee that you will come up with anything useful. Because the payer bears the risk, your margins are correspondingly low.
Randomised controlled trials (RCTs) have been the gold standard for statistical evidence of treatment effect for over 100 years. Their strength lies in their attempt to avoid major sources of bias when comparing the evidence. However, they are costly to run, particularly in the domain of personalised medicine, to which medical AI products typically belong.
There is a growing awareness in the field of immunology of the potential of mathematical techniques. The wedge issue here is the cascade of data appearing via new cytometry techniques; large data looks like a math problem to most people. I, of course, come from the other side of the spectrum, where everything looks like a math problem, which is why I founded my company: to stimulate drug development that engages with immune-system dynamics.
I have a short thought to share, stemming from a combination of projects that I’m working on at the moment.
The current trend towards Causality in AI is very attractive to people like me. It matches our personal biases and views of the world. However, it lacks a natural heuristic: how do we decide how much of our resources to devote to alternative models of the world as we gather evidence about their accuracy?
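One simple (and by no means the only) way to formalise the question is Bayesian: treat each candidate model of the world as a hypothesis, update its weight as evidence arrives, and allocate resources in proportion to the current weights. The models, likelihoods, and budget below are invented purely for illustration; this is a sketch of the bookkeeping, not a recommendation.

```python
import numpy as np

def update_weights(weights, likelihoods):
    """One Bayesian update: posterior weight is proportional to
    prior weight times the likelihood of the new evidence."""
    posterior = np.asarray(weights, dtype=float) * np.asarray(likelihoods, dtype=float)
    return posterior / posterior.sum()

# Three rival models of the world, initially weighted equally.
weights = np.array([1 / 3, 1 / 3, 1 / 3])

# Each round of (hypothetical) evidence: the likelihood of the
# observation under each of the three models.
evidence = [
    [0.6, 0.3, 0.1],
    [0.7, 0.2, 0.1],
    [0.5, 0.4, 0.1],
]
for likelihoods in evidence:
    weights = update_weights(weights, likelihoods)

# Allocate a (hypothetical) budget in proportion to current belief.
budget = 100.0
allocation = budget * weights
print({f"model_{i}": round(a, 1) for i, a in enumerate(allocation)})
```

The normative version of this quickly runs into trouble (model spaces are open-ended, likelihoods are hard to elicit), which is exactly why a heuristic from outside the formalism is appealing.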
As I say, I have a number of parallel projects, many of which address exactly this question on both technical and biological levels.
There is something from the world of business, specifically the study of entrepreneurship, which might be a better heuristic than any normative model I can come up with. Effectual entrepreneurship is a perspective derived from studying highly successful repeat entrepreneurs (e.g. Elon Musk), which places control, rather than planning, at the core of entrepreneurial activity.
First, a mea culpa: I have a huge backlog of relatively heavy articles that I really want to add to the blog, but I’ve been busy getting married – congratulations to me – and haven’t had enough time. I strongly believe in following relatively strict guidelines on writing and editing articles: I set myself deadlines and avoid over-writing on topics – it is just a blog, after all – but for deep insights I also have a minimum standard of quality I want to reach before I’m willing to hit the Publish button.
I am beginning a new project this week; the topic is Causal Inference. This is something I have been reading about, and wrestling with, for quite some time. Now seems a good point to take some time out, form a project, and see what I can get done on the topic.