Last week I published an article about Artificial General Intelligence. This week I want to follow up with my second of three attempts to predict the future. As I said last week, this was part of a game commonly played in incubators when trying to draw insights from deep-tech founders. This week I want to talk about the Virtual Patient for drug development.
I founded my first company over three years ago. We made no secret of our interest in using in-silico methods to build a Virtual Patient for drug development. We didn’t succeed that time, but our lack of success had little to do with either technical issues or a lack of a commercialisation option; it was entirely our own fault.
I have been working on medical AI, in some form or other, for most of my adult life. For the past 12 months I have taken the opportunity to pause from racing forwards with my own start-ups and to look again, partly as a researcher, at the tools at my disposal and their intended applications. What I have seen worries me.
Part of my efforts to improve things have taken the form of a number of peer-reviewed scientific articles. A few more such articles are still under review or exist only as work-in-progress. Today I want to summarise the five greatest problems which I see facing medical AI systems. For some of them I think that there are clear mitigations. For others, I suspect that we will need to rethink the entire system.
One year ago, I left the start-up where I had been working on an AI-driven companion to accompany patients through their cancer treatments.
When I left, I was deeply frustrated with the start-up environment surrounding AI in Healthcare. I was still convinced that AI could help in this space, but all I was seeing was teams going down what I considered to be the wrong paths.
Since I managed to break my writer’s block on decision making models last week, I want to follow up with a brief discussion on the use of narrative in presenting decision models to an audience.
In my first article on decision making models I emphasized that a model must serve a purpose. In explaining our models to others I want to highlight that there are two purposes behind explaining a model: the first is to convince the audience; the second is to convey insights into the model. This is the opposite ordering of how scientifically-trained modellers typically think about communicating results, but it is far and away the prioritisation of most top scientific communicators around the world.
I had quite a nice spring season of talks planned for 2020. I was invited to deliver a keynote on AI in Healthcare at Biovaria. And I was one of the invited speakers for the Dynamics of Immune Repertoires conference in Dresden, where I would also have given a workshop. Covid-19 struck and the rest is history.
Emergencies lead to quick changes of plans. Anthony Kelly from AI in Action reached out to me asking me to take part in a special on AI in Healthcare.
I am asked quite often how I see Data Science in the biomedical industry. I have, of course, many answers, each of which is context-dependent. However, one theme which I find frequently recurring is a sort of straw-man debate which seems to inherently attract technical practitioners.
The debate is usually structured as follows: How do you see the validation of medical AI products working in practice? Answer: clinical trials, test-validation sets, blah, blah. But doesn’t this lead to enormous overheads? Answer: yes, but there are shortcuts. But if you take these shortcuts, then don’t you run the risk of running into costly failures when you finally run the clinical trials? It goes on…
Apparently, it’s that time again. I just gave my second invited keynote at a conference at Charité Berlin. It was really fun.
The audience were dentists – academic dentists. I confess that I struggled to understand why they thought I would be a good fit for their conference. My previous keynote was at the BIH Digital Health Forum – a much more obviously appropriate audience. But, perhaps strangely, the fit was very good.