Lyle Muller is among the most recent in generations of American neuroscientists who have trekked north of the border to push the boundaries of our understanding of human cognition. “Growing up, it was all about computers in my spare time, especially writing programs,” muses Muller. “But one fateful day, I took a course on neural networks and it was all over.” He credits the open undergraduate program at his alma mater, Brown University, with encouraging him to broaden his course load and look beyond his major.
One of his other interests? Classics. At first glance, computational neuroscience couldn’t be further from studying the ways of the ancients, but that isn’t how Muller sees it. “Classics is about decoding. Translating from Latin to English, deciphering semantics, then going through a stage of debugging to see if what you did was correct.” With a passion for computing and an analytical approach to his Latin declensions, perhaps the combination is not as radical as it might seem.
Nowadays, Muller is shaking up the world of neural networks, where researchers attempt to replicate features of the neurons, and the networks they form, that make up the brain. “To get a solid understanding of how our brains work, we need to start with their individual parts,” he explains. Muller’s passions now focus on bringing together the worlds of theory and practice as his group integrates computational modelling into experiments on human cognition.
“Historically, computational modelling of the brain would be completed, then compared to how the brain actually worked on a given task. But the theory that goes into the model comes from a different world than the real, practical cognitive studies of the brain,” explains Muller. He and his research group are trying to bring these two worlds together by including computational models within the very experiments that test human cognition.
In a ground-breaking collaboration with BrainsCAN, Muller and his colleagues have been given unprecedented access to study the human brain at the individual-cell level while patients are conscious. The group has a subject play a video game while recording their brain function at the highest resolution possible. Built into the video games are algorithms that analyze neural activity in real time and then dictate what kind of stimulus appears on the screen, and when. Applying a visual stimulus at just the right moment gives the researchers a front-row seat to the rapid-fire sequences of activity along the nerve cells that handle vision in the brain. The result is that Muller and his colleagues can decipher how the brain responds, or fails to respond, to the computer-selected stimuli on the screen. “We want to learn how vision is working at the moment-by-moment level, and now we have the tools to do this for the first time,” he explains.
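For readers curious how a real-time trigger of this kind might work in principle, the logic can be sketched in miniature. Everything below is illustrative: the threshold rule, the refractory period, and the synthetic "neural" signal are assumptions for the sake of the sketch, not details of the BrainsCAN experiments.

```python
import random

def closed_loop_session(samples, threshold, refractory):
    """Scan a stream of neural-activity samples and record the time
    steps at which a stimulus would be presented.

    A stimulus fires whenever the signal crosses `threshold`; the loop
    then waits `refractory` steps before re-arming, mimicking a
    real-time trigger that waits for "just the right moment"."""
    triggers = []
    cooldown = 0
    for t, value in enumerate(samples):
        if cooldown > 0:
            cooldown -= 1        # still in the post-stimulus window
            continue
        if value >= threshold:
            triggers.append(t)   # stimulus presented at this step
            cooldown = refractory
    return triggers

# Synthetic "neural activity": Gaussian noise standing in for a recording.
random.seed(0)
signal = [random.gauss(0.0, 1.0) for _ in range(200)]
events = closed_loop_session(signal, threshold=2.0, refractory=10)
print(events)
```

In a real closed-loop experiment the detection rule would of course be far more sophisticated than a fixed threshold, but the overall shape, analyze each incoming sample and decide on the spot whether to present a stimulus, is the same.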
As Muller and his group delve deeper into the brain than ever before, down to the level of individual nerve cells, their findings will help define our understanding of our most mysterious organ.