This week has been a really big week for me. I finally uploaded the first paper from my time as a postdoc to a pre-print server, bioRxiv. I did three major pieces of work during my time as a postdoc; this is the first, and potentially the only one, to see the light of day.
I am not usually so tardy in getting work out. I published two papers from my PhD – a record for working with my PhD supervisor – the work for both of which was finished before I ever defended the thesis. My postdoc work was a bit special: I ended up directly proving that the previous work of my collaborators was mistaken.
As you can imagine, this was a politically unfortunate discovery. Nobody likes being proven wrong (although you can read about my own opinion on the topic in a previous article), and when it occurs at a particularly tense period in a person’s career, they have a tendency to lash out.
This is what happened to me, and is the direct reason for which I am not currently working at a university. C’est la vie.
The Unsupervised Bias Hypothesis
Learning in the brain is currently believed to result from the coincidence of associative learning rules (typically referred to as Hebbian) and broad-spectrum neuro-modulation, such as reward. The neuro-modulation is typically assumed not to be specific to any brain region, although this belief is slowly being rolled back, and it dictates the rate and direction (sign) of learning. The basic idea is that associative rules allow the brain to repeat desirable behaviours and stop performing non-desirable behaviours. The neuro-modulators signal when learning should occur.
My collaborators came up with a beautiful hypothesis in 2012 which suggested that the mathematical implication of these types of rules is that synaptic weights (the embodiment of learning) will be subject to a constant bias when the brain tries to learn two tasks at once. This is an outcome of the basic mathematical formula for covariance when applied to the synaptic update rule. They even found an experimental system (humans learning a visual acuity task) which appears to be vulnerable to this bias.
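The flavour of the argument can be sketched numerically. In the toy below (made-up numbers and names, not the model from the paper), two interleaved tasks have different mean activity and mean reward, and the law of total covariance splits what a joint covariance-based rule sees into a within-task term plus a between-task term – the latter playing the role of the unsupervised bias:

```python
import numpy as np

# Illustrative sketch (assumed numbers, not the paper's model): two interleaved
# tasks with different mean presynaptic activity and mean reward.
rng = np.random.default_rng(0)
n = 100_000
task = rng.integers(0, 2, n)                                   # task label per trial
x = np.where(task == 0, 1.0, -1.0) + rng.normal(0, 0.5, n)     # presynaptic activity
r = np.where(task == 0, 0.8, 0.2) + 0.1 * x + rng.normal(0, 0.1, n)  # reward

def cov(a, b):
    """Population covariance."""
    return np.mean((a - a.mean()) * (b - b.mean()))

total = cov(r, x)                       # what a joint covariance rule responds to
p = np.array([(task == t).mean() for t in (0, 1)])
within = sum(p[t] * cov(r[task == t], x[task == t]) for t in (0, 1))
between = sum(p[t] * (r[task == t].mean() - r.mean())
                   * (x[task == t].mean() - x.mean()) for t in (0, 1))

# Law of total covariance: total = within-task term + between-task "bias" term.
assert abs(total - (within + between)) < 1e-6
print(within, between)  # with these numbers, the between-task term dominates
```

The decomposition is an exact algebraic identity; the specific constants merely make the between-task term large relative to the within-task one.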
I cannot emphasise enough how elegant I found this original hypothesis. As a mathematician, it was beautiful to see such a simple formula applied to a biological system, apparently successfully making such a ground-breaking prediction. The idea becomes even more beautiful when you begin to visualise the entire system in high dimensions and applied to even more complex tasks. It undermines everything we hypothesise about reward-driven learning in the brain.
However, the theory was mostly wrong, as was the demonstration in the original paper.
Why doesn’t it work?
I will admit that my preprint is badly written. It is an internal argument with people who were deeply invested in the other side of a debate. I would avoid reading it if I were you. It would be nice if I could publish a more reader-friendly version of it in a peer-reviewed journal some day, but right now I am fully occupied with other things.
For now, let me summarise the argument here.
- I showed, using mathematics and logic, that it is impossible for the unsupervised bias ever to overcome a reward-modulated covariance rule on a single synapse. This proof is invariant to multiple forms of transformation and rests on the fact that a single synapse has only a single dimension: its strength can only increase or decrease.
- I showed, mathematically, that the major effect of the unsupervised bias is actually on tasks which share a neuronal representation, rather than on non-overlapping tasks. This is the opposite of the experimental system demonstrated in the previously published paper.
- The mathematics of an unsupervised bias relies heavily on the ability to take averages and linearisations. I showed that my results are robust in essentially all such systems.
- I showed that adding population-level encodings, rather than single neurons, does not change my result.
- My counterparty in this debate typically resorted to appeals to higher-dimensional encodings in a final attempt to defend the theory. I used word-arguments (very much an accepted approach in mathematical circles) to defeat this argument, combining (i) the fact that synapses have only a single dimension with (ii) new results from Surya Ganguli, porting ideas from high-dimensional physics to neuroscience, which show that in higher dimensions an optimisation is less likely to get stuck in a local minimum than in lower-dimensional representations.
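The first point above has a one-dimensional caricature (made-up constants, and very much not the paper's proof): when a synapse is pulled toward an optimum by a reward-following update, a constant additive bias can only shift the equilibrium, not reverse the direction of learning.

```python
# One-dimensional caricature (assumed constants, not the paper's proof):
# a synapse pulled toward an optimum w_star by a reward-following update,
# plus a constant unsupervised bias b.
w_star, eta, b = 2.0, 0.1, 0.01
w = 0.0
for _ in range(2000):
    w += eta * (w_star - w) + b   # reward-driven pull + constant bias
# Fixed point: eta * (w_star - w) + b = 0  =>  w = w_star + b / eta
print(w)  # ~2.1: offset from w_star = 2.0, but never driven away from it
```

The bias displaces the fixed point by b/eta; it cannot overcome the reward-driven term as long as that term pulls along the synapse's single dimension.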
Even in summary, these are heavy arguments. There are a lot of corner-cases which are also taken care of in the paper.
The basic result is that the unsupervised bias hypothesis cannot explain the failure to learn during task roving.
Can you fix what you broke?
Sometimes, as a mathematician, it is easier to prove that something is wrong than it is to find the actual solution. However, in the case of failure to learn during roved visual acuity tasks, the solution is actually quite simple, and it was in many ways implied in the thinking which led to the erroneous 2012 hypothesis.
The correct insight is that, for certain tasks (i.e. the ones of interest, in which people fail to learn), you have only a single critic in the brain (look up actor-critic methods for an understanding of this vocabulary). However, instead of this critic giving you an unsupervised bias in your synaptic learning rule (note: there will be some bias, it’s just not the dominant factor here), you instead have a critic which is unsure of what is going on. On task 1 it thinks the subject is doing really well; on task 2, really badly. It can’t tell the two tasks apart, so it averages them. The variance on the predicted performance is therefore terrible.
It turns out, from work by a related group of collaborators, that there is evidence the brain’s reward prediction system normalises its reward prediction by the variance in the prediction. This means that when the prediction is expected to be bad, there is effectively no signal.
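This suppression can be sketched numerically (illustrative numbers, not the collaborators' model): a single critic that cannot separate two tasks predicts their joint average reward, so its prediction errors have a large variance, and any teaching signal scaled down by that variance is effectively silenced.

```python
import numpy as np

# Sketch of the variance-normalisation idea (assumed numbers, not the
# published model): task 0 yields ~0.9 reward, task 1 yields ~0.1.
rng = np.random.default_rng(1)
n = 10_000
tasks = rng.integers(0, 2, n)
rewards = np.where(tasks == 0, 0.9, 0.1) + rng.normal(0, 0.05, n)

# One shared critic: predicts the joint mean reward for both tasks.
shared_var = (rewards - rewards.mean()).var()

# Two task-specific critics: each predicts its own task's mean reward.
split_var = np.concatenate(
    [rewards[tasks == t] - rewards[tasks == t].mean() for t in (0, 1)]
).var()

# If the teaching signal is divided by the prediction variance, the shared
# critic's signal is weaker by roughly this factor:
print(shared_var / split_var)
```

With these numbers the shared critic's prediction variance is dominated by the gap between the two tasks' reward levels, so its variance-normalised teaching signal is tens of times weaker than that of two task-specific critics.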
Over time, the reward prediction system (the critic) tries out different things to improve its predictions. The subject performs this task over 10,000 times in the experiments where a transition to learning eventually occurs, so the brain eventually throws some resources at it to try to improve performance. The critic manages to identify statistical differences between the neuronal representations of the two tasks, and correctly separates the identities of the two tasks rather than combining them. At this point each task is correctly associated with its respective performance level, and reward-learning can proceed without further delay.
The development of this critic and how it performs this task – in both a neuronally accurate manner, and using a mathematically generic approach – is one of the other projects which I have completed but am highly unlikely to get around to publishing.
The answer as to why in some cases humans cannot learn two tasks at once, despite being able to correctly identify the two tasks, is as follows. The neuronal architecture for the critic, most likely housed in the orbito-frontal cortex, is not the same structure responsible for responding verbally or physically to questions regarding task identity. It is, however, the bit responsible for tracking performance. When it is incapable of separating the identities of two tasks then it monitors them both jointly. If the two tasks have very disparate performance levels this leads to a large variance in the reward prediction. Due to the normalisation, this leads to an absence of a neuro-modulation signal. With enough repetition of the task, the critic eventually learns that it is both possible and worth separating the representations of the two tasks and does so. At this point you have two accurate critics for the two tasks and suddenly learning proceeds at a rate comparable to learning the tasks individually – which is what is shown in experiments!
Why am I writing about this?
I am inordinately proud of this piece of work. My work on Lactate Analysis has transformed a field, but it required less insight and effort than this piece of work. Here, we have a piece of analysis which I would much rather have set aside but I was effectively forced to follow through on. Then, I was strongly discouraged from publishing since it was supposedly a negative result.
But it is not negative. I have positively proven that a previous, very attractive and widely respected, hypothesis is wrong. This is how science works.
More importantly, I don’t want another postdoc or PhD student from another lab to spend a few years on a dead-end hypothesis (unless they feel that they can do something better with it than I did).
I had a lot of support in both carrying out and publishing this work. One of the authors of the original mistaken hypothesis is brave enough to be co-author with me. He has read my analysis and thinks it needs to be published. Friends, who I have met through my work in neuroscience, have given me private reviews of the work; and, apart from complaining about the technical weight of the writing, they are very convinced by the arguments. I am eternally grateful to the people who supported me through the more difficult days of this work, when even I didn’t have full faith in my own analysis.
Link to the BioRxiv preprint.