ML Embeddings and the Neuronal Code

I recently had the opportunity to talk with a fairly senior researcher in machine learning methods. The conversation turned briefly to the study of embeddings when he mentioned that most of his work involves things that can be embedded in Euclidean space. Since I’ve been spending a bit of time thinking about embeddings lately, I asked him some questions to get the official ML take on the subject. I was reasonably gratified to learn that – although most ML engineers don’t think much about embeddings – the research on this topic considers the embedding to be tightly bound to the network architecture. It is not possible to study abstract embeddings divorced from their applications. I fully agree with this point of view.
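To make that coupling concrete, here is a minimal PyTorch sketch – my own illustration, not anything from that conversation. The embedding table is just another parameter of the network, so the geometry it learns is shaped entirely by the architecture and loss it is trained under; there is no task-free “abstract” embedding to be had.

```python
import torch
import torch.nn as nn

class TaskModel(nn.Module):
    """A toy model: the embedding is trained jointly with its readout."""
    def __init__(self, n_items: int, dim: int, n_classes: int):
        super().__init__()
        self.embed = nn.Embedding(n_items, dim)   # learned jointly ...
        self.head = nn.Linear(dim, n_classes)     # ... with this readout

    def forward(self, item_ids):
        return self.head(self.embed(item_ids))

model = TaskModel(n_items=100, dim=16, n_classes=4)
loss = nn.functional.cross_entropy(
    model(torch.randint(0, 100, (32,))),   # a batch of item ids
    torch.randint(0, 4, (32,)),            # dummy labels for illustration
)
loss.backward()  # gradients reach the embedding only *through* the head
```

Change the head, the loss, or the depth of the network and the embedding that emerges changes with it – which is exactly why it cannot be studied in isolation.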

I have been thinking about the issue of learning task representations on and off for five years now; my entire postdoc was consumed by it. The ultimate public result of that postdoc was a preprint which demonstrated that a bias in reward prediction was not sufficient to explain human multiple-task learning experiments. Behind the scenes, that preprint rested on a whole body of work on both Actor and Critic learning from a task perspective. I did come up with two different, almost completely abstract representations for critic learning, but ultimately, to couple either of them to a model, there needs to be an actual embedding. That was one of my first hands-on experiences with embeddings as encodings.
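The models from that work aren’t reproduced here, but a generic linear TD(0) critic illustrates the coupling. The update rule below is abstract over features, yet nothing actually runs until you commit to a concrete embedding phi(s) – that commitment is the model. Everything in this sketch is a toy stand-in, not the preprint’s critic.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(state: int, n_states: int = 10) -> np.ndarray:
    """One possible embedding: a one-hot code. Swapping this function
    changes what the critic can generalise, without touching the update."""
    v = np.zeros(n_states)
    v[state] = 1.0
    return v

w = np.zeros(10)            # critic weights over the embedding
alpha, gamma = 0.1, 0.95    # learning rate, discount factor

for _ in range(1000):
    s = rng.integers(0, 9)
    s_next, r = s + 1, float(s == 8)   # toy chain task: reward at the end
    td_error = r + gamma * w @ phi(s_next) - w @ phi(s)
    w += alpha * td_error * phi(s)     # the same rule, for any embedding
```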

This past summer, I wrote an off-the-cuff article about Personalised Medicine which contained two basic insights: congruency classes and function. I had spent the previous two years thinking a lot about how to do medical AI ‘the right way’, and the article was my own bottom-up reimagining of the problem. My ultimate takeaway was that the most interesting problem in this space is finding an appropriate embedding for medical AI problems; the next most interesting is then doing something useful with that embedded landscape. You cannot really separate these two problems.
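A purely hypothetical sketch of those two halves – every name and number below is illustrative, not anything from the original article: first project patients into an embedding (a random projection stands in for a learned one), then do something useful with the landscape (nearest-neighbour retrieval stands in for congruency classes).

```python
import numpy as np

rng = np.random.default_rng(2)

patients = rng.normal(size=(200, 50))    # 200 patients, 50 raw features
projection = rng.normal(size=(50, 8))    # stand-in for a learned embedding
embedded = patients @ projection         # the "embedded landscape"

def congruent_patients(query_idx: int, k: int = 5) -> np.ndarray:
    """Patients nearest the query in embedding space -- a toy notion of a
    congruency class. A better embedding directly yields better classes."""
    dists = np.linalg.norm(embedded - embedded[query_idx], axis=1)
    return np.argsort(dists)[1:k + 1]    # skip the query itself

print(congruent_patients(0))
```

The point of the sketch is that the second step is only as good as the first: improve the embedding and the congruency classes improve for free.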

A lot of ink has been spilled in my former field of Computational Neuroscience about the Neuronal Code. In recent times this is often distilled into a question as to rate-based vs. spike-timing-based codes. I think that this is a straw-man argument. The Neuronal Code is an embedding: it cannot be divorced from the functional properties of the network context in which it operates. The brain probably uses different codes, in different sub-networks, at different times. Indeed, a friend of mine has demonstrated that pyramidal neurons might even be able to multiplex between a sparse spike-driven mode and a burst-driven mode.
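A toy decoder pair makes the point – this is my own illustration, not my friend’s result. The same spike train reads out differently under a rate decoder and a latency decoder, so the “code” is a property of the encoder–decoder pair, i.e. of the embedding in its network context.

```python
import numpy as np

rng = np.random.default_rng(1)

def spike_train(stimulus: float, t_max: float = 1.0) -> np.ndarray:
    """Toy encoder: stronger stimuli fire earlier and more often."""
    n = rng.poisson(5 + 20 * stimulus)
    first = 0.2 * (1 - stimulus)          # latency shrinks with stimulus
    return np.sort(first + rng.uniform(0, t_max - first, n))

spikes = spike_train(stimulus=0.7)

rate_readout = len(spikes)                # a rate code sees only the count
latency_readout = spikes[0] if len(spikes) else np.inf  # a timing code sees the first spike
print(rate_readout, latency_readout)
```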

It seems that, on the one hand, Computational Neuroscience feels the need to investigate Neuronal Codes – which it is largely ill-equipped to study – in order to justify its existence as a field. Meanwhile, the study of embeddings somewhat languishes in machine learning – despite the capacity of improved embeddings to greatly improve the quality of results.
