I have been working on medical AI, in some form or other, for most of my adult life. For the past 12 months I have taken the opportunity to pause from racing forwards with my own start-ups and to look again, partly as a researcher, at the tools at my disposal and their intended applications. What I have seen worries me.
Part of my effort to improve things has taken the form of a number of peer-reviewed scientific articles. A few more such articles are still under review or exist only as works in progress. Today I want to summarise the five greatest problems that I see facing medical AI systems. For some of them I think there are clear mitigations. For others, I suspect that we will need to rethink the entire system.
Something I’ve struggled with on and off over the 20 years that I have been making mathematical models is explaining those models to others. I have tried to bring people along and develop their understanding. But mainly what I observed was that some people just got it and others did not.
I have certainly improved my own skill at explaining. This comes down to having streamlined stories and simpler take-home messages. Telling a clearer story certainly improves my audiences’ self-satisfaction, but ultimately some of them get the whole message and others do not.
This is my third attempt, over the course of nine months, to write this article. The first attempt foundered on my desire to go into detail on whether explainability is a good characteristic of a model or not. I confess, this was overly motivated by my personal frustration at having worked with somebody who “never let the facts get in the way of a good story.” The second attempt got lost in a forest of anecdotes from previous projects. I was trying so hard to knit them together that I failed to make a point. Today, I want to focus on the single most important thing that I have learned about developing decision-making models.
I had the opportunity to talk recently with a relatively advanced researcher in machine learning methods. The conversation turned briefly to the study of embeddings when he mentioned that most of his work involves things that can be embedded in Euclidean space. Since I’ve been spending a bit of time thinking about embeddings recently, I asked him some questions to get the official ML take on the subject. I was reasonably gratified to learn that – although most ML engineers don’t think much about embeddings – the research on this topic considers the embedding to be tightly bound to the network architecture. It is not possible to study abstract embeddings, divorced from applications. I fully agree with this point of view.
I have a short thought, stemming from a combination of projects that I’m working on at the moment, and I want to share it.
The current trend towards Causality in AI is very attractive to people like me. It matches our personal biases and views of the world. However, it lacks a natural heuristic: how do we decide how many resources to devote to alternative models of the world as we gather evidence about their accuracy?
Like I say, I have a number of parallel projects, many of which address exactly this question on technical and biological levels.
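As a concrete baseline for the question above, here is a minimal sketch of Bayesian model weighting: keep a posterior weight on each candidate model of the world, re-weight by how well each model predicted the latest observation, and allocate resources in proportion to the weights. This is my own illustration, not a method from any of the projects mentioned; the model names and probabilities are made up for the example.

```python
# Two hypothetical world models for a binary outcome: each assigns a
# probability to "event happens". The numbers are illustrative only.
models = {"model_a": 0.8, "model_b": 0.3}

# Start with a uniform prior over the candidate models.
weights = {name: 0.5 for name in models}

def update(weights, models, outcome):
    """Bayesian update: re-weight each model by the likelihood it
    assigned to the observed outcome (1 = event, 0 = no event)."""
    unnormalised = {}
    for name, p in models.items():
        likelihood = p if outcome == 1 else 1.0 - p
        unnormalised[name] = weights[name] * likelihood
    total = sum(unnormalised.values())
    return {name: w / total for name, w in unnormalised.items()}

# Observe a stream of outcomes mostly consistent with model_a.
for outcome in [1, 1, 0, 1, 1]:
    weights = update(weights, models, outcome)

# Allocate resources (compute, experiments, attention) in
# proportion to each model's posterior weight.
allocation = {name: round(w, 3) for name, w in weights.items()}
print(allocation)  # model_a ends up with ~0.935 of the budget
```

The appeal of this baseline is that the allocation rule falls out of the evidence automatically; its weakness, and perhaps why a different heuristic is needed, is that it presumes the set of candidate models is fixed and that each one makes explicit probabilistic predictions.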
There is something from the world of business, the study of entrepreneurship, which might be a better heuristic than any normative model I can come up with. Effectual entrepreneurship is a perspective on entrepreneurship, built on studying highly successful repeat entrepreneurs (e.g. Elon Musk), which places control, rather than planning, at the core of entrepreneurial activity.
This week has been a really big week for me. I finally uploaded the first paper from my time as a postdoc to the preprint server bioRxiv. I did three major pieces of work during my postdoc; this is the first, and potentially the only one, of these to see the light of day.
I am not usually so tardy in getting work out. I published two papers from my PhD – a record for working with my PhD supervisor – the work for both of which was finished before I ever defended the thesis. My postdoc work was a bit special: I ended up directly proving that the previous work of my collaborators was mistaken.