I began this blog almost exactly three years ago. My goal was two-fold: first, to demonstrate thought leadership in an area in which I was then founding a company; and, second, to have a demonstrated track record rather than just a picture-perfect pitch deck. The latter is clearly a sub-goal of the first.
A few weeks ago I published a futurecast about using virtual patients for pharma drug development. Buried in that article was a key idea which, in essence, is my current perspective on the synthesis of mathematics and biology. I want to highlight this idea here and flesh it out slightly.
I have been working on medical AI, in some form or other, for most of my adult life. For the past 12 months I have taken the opportunity to pause from racing forwards with my own start-ups and to look again, partly as a researcher, at the tools at my disposal and their intended applications. What I have seen worries me.
Part of my effort to improve things has taken the form of a number of peer-reviewed scientific articles. A few more such articles are still under review or exist only as works in progress. Today I want to summarise the five greatest problems that I see facing medical AI systems. For some of them I think there are clear mitigations. For others, I suspect that we will need to rethink the entire system.
I had the opportunity to talk recently with a relatively advanced researcher in machine learning methods. The conversation turned briefly to the study of embeddings when he mentioned that most of his work involves things that can be embedded in Euclidean space. Since I’ve been spending a bit of time thinking about embeddings recently, I asked him some questions to get the official ML take on the subject. I was reasonably gratified to learn that – although most ML engineers don’t think much about embeddings – the research on this topic considers the embedding to be tightly bound to the network architecture. It is not possible to study abstract embeddings, divorced from applications. I fully agree with this point of view.
Randomised controlled trials (RCTs) have been the gold standard of statistical evidence for treatment effects for over 100 years. Their strength lies in their attempt to avoid major sources of bias when comparing the evidence. However, they are costly to run, particularly in the domain of personalised medicine, to which medical AI products typically belong.
There is a growing awareness in the field of immunology of the potential for using mathematical techniques. The wedge issue here is the cascade of data appearing via new cytometry techniques; large data looks like a math issue to most people. I, of course, come from the other end of the spectrum – everything looks like a math issue to me – which is why I founded my company: to stimulate drug development that engages with immune-system dynamics.
I have a short thought, stemming from a combination of projects that I’m working on at the moment, and I want to share it.
The current trend towards Causality in AI is very attractive to people like me. It matches our personal biases and views of the world. However, it lacks a natural heuristic: how do we decide how many resources to devote to alternative models of the world as we gather evidence about their accuracy?
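One candidate heuristic – a minimal sketch, not a claim about what the answer should be – is to allocate effort in proportion to each model's posterior probability given the evidence so far. The function and the log-likelihood values below are illustrative assumptions, not taken from any of my projects:

```python
import math

def model_posteriors(log_likelihoods, priors):
    """Posterior probability of each candidate model given the evidence.

    log_likelihoods: summed log-likelihood of the observed data under each model.
    priors: prior probability assigned to each model.
    """
    # Unnormalised log posteriors: log p(data|model) + log p(model)
    log_post = [ll + math.log(p) for ll, p in zip(log_likelihoods, priors)]
    # Normalise via log-sum-exp for numerical stability
    m = max(log_post)
    weights = [math.exp(lp - m) for lp in log_post]
    total = sum(weights)
    return [w / total for w in weights]

# Two hypothetical world models with equal priors; model A fits the data better.
posteriors = model_posteriors(log_likelihoods=[-10.0, -14.0], priors=[0.5, 0.5])
```

Under this scheme, resources would shift towards model A as its likelihood advantage grows – which is exactly the kind of normative answer the effectual perspective below pushes back against.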
Like I say, I have a number of parallel projects, many of which address exactly this question on technical and biological levels.
There is something from the world of business, specifically the study of entrepreneurship, which might be a better heuristic than any normative model I can come up with. Effectual entrepreneurship is a perspective on entrepreneurship, drawn from studying highly successful repeat entrepreneurs (e.g. Elon Musk), which places control, rather than planning, at the core of entrepreneurial activity.
First, a mea culpa: I have a huge backlog of relatively heavy articles that I really want to add to the blog. But I’ve been busy getting married – congratulations to me – and I haven’t had enough time. I strongly believe in following relatively strict guidelines when writing and editing articles: I set myself deadlines and avoid over-writing on topics – it is just a blog, after all. But for deep insights I also have a minimum standard that I want to meet before I’m willing to hit the Publish button.
I am beginning a new project this week; the topic is Causal Inference. This is something I have been reading about, and wrestling with, for quite some time. Now seems a good point to take some time out, form a project, and see what I can get done on the topic.
This topic occurred to me following my recent talk at a dental conference at Charité Berlin. Upon hearing that I have a strong interest in inference, my fellow keynote speaker mentioned that it drives him crazy that random forests, and similar algorithms, work so much better than DNNs on genomic data. He challenged me to come up with a reason why this is the case.
I think that I know why. The problem is that I suspect I can never prove it. The issue of not being able to prove things in machine learning is probably an equally interesting topic for a future article, but here I want to address my theory of why random forests work better than DNNs for analysing genomic data.
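For readers who want to poke at the phenomenon themselves, here is a minimal sketch of the kind of comparison in question, using synthetic tabular data as a crude stand-in for genomic features (many features, few informative ones). The dataset parameters are my assumptions for illustration; real genomic data behaves differently:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Many features, few informative ones: a rough proxy for sparse genomic signal.
X, y = make_classification(n_samples=500, n_features=200, n_informative=10,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Random forest vs a small fully-connected network on the same split.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(X_train, y_train)

rf_acc = accuracy_score(y_test, rf.predict(X_test))
mlp_acc = accuracy_score(y_test, mlp.predict(X_test))
```

A single synthetic run like this proves nothing, of course – which is rather the point of the article.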
How do I really feel about this topic? I think that I can only work out the answer to this question by writing about it.
My suspicion is that those who shout loudest about personalised medicine know the least about it. I fear that the promises being made publicly are categorically not possible. My hope is that I am wrong about this.