
Neural Computation, Neuromodulators, Serotonin, Psychedelics, Subjective Experience | Zach Mainen |

Updated: Dec 15, 2022


Full auto-generated transcript below. Beware of typos & mistranslations!

Nick Jikomes

Thank you, nice to be here. Can you start off by just telling everyone a little bit about who you are and what your scientific background is?


Zach Mainen 3:46

Okay, I'm a neuroscientist working here in Lisbon, at the Champalimaud Centre for the Unknown. It was nice of you to come all the way here. I've been here since about 2008, and before that I was in the US; I'm an American. For about 30 years now, I've been working my way through various parts of neuroscience, trying to understand how the brain works. I frequently look at this through the lens of computational neuroscience, which tries to understand the brain not literally but metaphorically, using tools from computer science and artificial intelligence as a way to think about how the brain works, while always imagining that it's far more complicated than we can actually conceive at this moment.


Nick Jikomes 4:39

Yeah. So actually, I want to ask you about computational neuroscience, just as a field, I guess. A lot of times you see computer analogies getting made to try and explain or understand what brains are doing. Can you talk a little bit, for people that don't have a background in this field, about, you know, to what extent are our brains like digital computers? And to what extent are neurons in the brain actually computing things? And what does that mean?


Zach Mainen 5:04

Okay, that's a good and fair question. So I would say the computer metaphor is a metaphor, and all ways of explaining the brain are metaphors. The computer metaphor is probably the best one we have, in a very general sense, because computing is the best technology we have outside of the human brain, which is the most complicated object we know of in the universe; the next would be the digital systems that we build. Now, if we get into the specifics, is the brain a digital computer in the style that we use in my cell phone or in your laptop? Of course it's not. But the computational metaphor, which goes along with terms like information, functions, and networks, has developed through the 20th century and into this one, with increasingly sophisticated forms of computation that have come to resemble, to some degree, the way the brain works, more so than they did in, say, the 1980s. Even your cell phone now might have a chip whose hardware design is inspired a bit by neural networks, as opposed to the traditional central processing unit and memory that are the bread and butter of computers. And at the software level, as everyone is now more than aware, AI has come to mean machine learning, which is basically another way of saying a kind of neural-network type of processing. So it's all a metaphor. But from the technology side, it's become clear that it's a very powerful way of processing information, and from the neuroscience side, or maybe the biology side, it's the most sophisticated way we have to think about what might be going on inside the brain.


Nick Jikomes 7:32

And so, you know, another interesting thing here is, you mentioned how, when people talk about AI and machine learning, they're often talking about neural networks. And these are pieces of software which are kind of inspired by what we've learned about how the brain is structured and architected. Can you maybe give people a very basic sense of how that developed? What exactly does it mean to have AI software that's inspired by the architecture of the brain? And what's a coarse-grained description of what kind of architecture we're actually talking about there?


Zach Mainen 8:06

Yeah, so when we say neural networks, we mean something like this: there are a bunch of units which abstractly resemble the units of the brain, which are neurons. So the units in a neural network are somewhat similar to neurons. There are many of them, and they pass very simple messages in a kind of parallel fashion. In the brain there would be billions of them; in an AI model there are usually not billions, but there are many, many, many. And they're generally organized into layers. In the brain there are not strictly layers, but there are areas that pass information from one to another, and each area is composed of a whole bunch of neurons, or units. So you could think, in the most simple terms, of there being an input, say visual information coming into the visual system through the retina and then through the visual cortex, and each stage processes, in parallel, an array of information. Like the pixels in a camera correspond to units in your retina, and that information then goes through a series of connections, which in the brain would be called synapses and in a neural network are simply an array of weights: the information in, say, one vector is multiplied by a set of weights, which gives you another vector. So there's a structural analogy and a mathematical simplification, but you picture things being done massively in parallel rather than serially, using very, very simple math, basically linear algebra plus a few simple nonlinearities, but with massive numbers of neurons. And then what was sort of the revolution in the last few years was large numbers of layers. In the old days a network would have an input layer, a hidden layer, and an output layer; now it might be dozens of layers.
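
To make the "one vector multiplied by a set of weights gives you another vector" picture concrete, here is a minimal sketch in Python with NumPy. All of the sizes, names, and random weights are illustrative assumptions, not taken from any particular model:

```python
import numpy as np

def relu(x):
    # A simple element-wise nonlinearity, loosely analogous to a neuron's
    # thresholded response.
    return np.maximum(0.0, x)

def layer(inputs, weights):
    # One "layer": each unit takes a weighted sum of all inputs (the weights
    # play the role of synapses), then applies the nonlinearity.
    return relu(weights @ inputs)

rng = np.random.default_rng(0)
x = rng.random(784)               # e.g. the pixels of a small image (the "retina")
W1 = rng.normal(size=(128, 784))  # connections: input layer -> hidden layer
W2 = rng.normal(size=(10, 128))   # connections: hidden layer -> output layer

hidden = layer(x, W1)             # a vector times a matrix of weights gives another vector
output = W2 @ hidden              # output: one number per category
print(output.shape)               # (10,)
```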


Nick Jikomes 10:36

Yeah, let's actually dwell on that for a second. So what's the biological equivalent of these layers? When we talk about an input layer, an output layer, and hidden layers, what does that look like in terms of how the brain is structured?


Zach Mainen 10:48

So just to pause: let's start by saying where things are similar, and then we can say endlessly how none of this is really literally true. When I was taught this stuff, I was an undergrad in the late 80s, and there was a period of excitement about neural networks, but there wasn't the computing power and there wasn't the data, so what they could do was not amazing. But they were effectively very, very similar to today's networks. And what I was told at the time, or the way it was phrased, was: neural networks are exactly like the brain except for three things, space, time, and probability. Otherwise it's all the same, right? So it's a very, very coarse approximation. But you asked specifically about layers. We generally think in neuroscience of the brain as having areas, and each area, say in the cortex, has several cell layers. So that would be an example: if we go to the visual cortex, there are cells distributed across the cortex and through the depth of the cortex, and those are structured into denser layers of cells and sparser layers of cells. Cells within a layer tend to be more connected to cells in that layer, and there are specific patterns of connections between layers that constitute some kind of canonical processing algorithm. However, all of that detail about what's going on in the cortex is not something that's typically put into the kind of artificial neural networks we're talking about when we talk about what's in your cell phone, or what's running a large language model, or what have you.


Nick Jikomes 12:56

Right. But when we talk about, say, an input and an output layer, and we're talking about a brain or an animal, what's the input layer?


Zach Mainen 13:03

So if we talk about that larger scale: the input layer would be the sensory periphery. It could be the retina, it could be the receptors in the skin, it could be the cochlea for sound. The output layer would be the pattern of muscle contractions; it could be limb movements.


Nick Jikomes 13:32

I see. So the input layer is where sensory information enters the system, the output layer is the system generating a movement or behavior from that, and in between there's a bunch of layers that are super complicated and that we don't really understand. But that's the basic idea: it's coming in, and it's eventually going out. And that basic structure we see in the brain has an anatomical reality to it; even though it's super complicated, there are literally layers and parts of the brain. The software that AI people create is sort of inspired by that structure, and they've kind of made it look the same way.


Zach Mainen 14:08

That's right. The thing to emphasize is this parallelization; the old term used to be parallel distributed processing. In an ordinary computer, you could think of information lining up to get into the CPU and then get distributed; it was a serial process. In neural networks, because there are many neurons in a single layer, information is flowing in parallel. That's kind of the big trick, the way we think of it. But there are a couple of other things, and from there we can start expanding to make things a little more realistic or interesting. The first thing to mention briefly is what makes all of this work, and why we call it machine learning, a synonym most people have probably heard. Why machine learning? The simple reason is that rather than orchestrating or architecting these connections beforehand, based on some kind of abstract ideas or guesses, there's a learning rule. The connections between neurons are dynamically adjusted to be stronger or weaker based on what's called a learning rule, which is just a way of changing the connections. The learning rule can be fairly simple and still result in a fairly complicated pattern of changes, because the learning rule depends on the input and on the output. So you apply a fairly simple learning rule to a machine digesting data. Say we're feeding an AI images from the internet in order to teach the machine, the neural network, about those images. What's happening is you're inputting images at the input layer, the network processes them, the information flows through the network to the output layer, and then the learning rule says: the output is not correct, let's change the connections so that the output is more like what is desired. This is an example of a supervised learning rule. Say we're trying to teach the network how to categorize dogs and cats, and we feed it a lot of images. Each time the network gets an image, information flows through, and depending on the pattern of connections, out comes "cat" or "dog". If every time you tell it the right answer, you can propagate that correct answer back through the network, updating all the weights, so that what comes out the next time is more likely to be correct. In this way it's possible to train these networks to do cat-dog categorization. But it turns out really incredible things happen with very few ingredients of this type. You end up with something as complicated as the language models like GPT-3, which can take text and give you responses that sound like a human, using somewhat more complicated algorithms but essentially the same sort of strategy: a very large, simple network with a very interesting kind of learning rule.
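
As a minimal sketch of what "apply a simple learning rule to labeled data" can look like in practice, here is a single trainable unit (essentially logistic regression, not a deep network) learning a made-up cat-versus-dog rule. The data, labels, and numbers are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "images": 100 feature vectors with made-up labels (1 = cat, 0 = dog).
X = rng.normal(size=(100, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # the hidden rule the network must discover

w = np.zeros(20)    # the connection weights, initially blank
lr = 0.1            # learning rate

def predict(x, w):
    # Squash the weighted sum into a probability that the input is a "cat".
    return 1.0 / (1.0 + np.exp(-(x @ w)))

for epoch in range(200):
    for x_i, y_i in zip(X, y):
        p = predict(x_i, w)
        error = y_i - p         # supervised signal: correct label minus the network's guess
        w += lr * error * x_i   # learning rule: nudge each weight in proportion to the error

accuracy = np.mean((predict(X, w) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```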


Nick Jikomes 17:49

I see. So the idea is information comes into the system, there's a bunch of units that are either literally neurons, if we're talking about a brain, or something metaphorically like a neuron, if we're talking about a piece of AI software. Information goes in, and there's some notion of correct or incorrect at the end. So to use the cat-dog analogy, you might give it a bunch of images labeled cat and a bunch labeled dog, the computer makes some kind of model of those things, and then you show it new images. If it guessed cat and it was right, you say correct, and if it was wrong, you say incorrect. And then the whole thing sort of updates itself when it gets things wrong, in a way that doesn't require a human being to go in and specifically engineer the changes; it happens automatically based on this general learning rule.


Zach Mainen 18:38

That's right. So what we just described is one kind of learning rule, a supervised learning rule, because we said we know the answers. That's a way to get the network to classify complicated patterns in the data. But oftentimes the correct answer isn't known, or correct answers aren't available, and for those cases there are other sorts of learning rules. Another form of learning is unsupervised learning. There you're saying, interestingly: here, network, here's some data, just come up with some way of dealing with it. Typically that means forcing the network to abstract. You give it, say, very complicated images with millions of pixels, but you ask the network to compress that information and reproduce the images again on the other end. By compressing them into some very small format, the network learns, without supervision, to come up with a good way of re-representing the images in order to reconstruct them. That's an example of an unsupervised learning process, and it will give you a network that in some sense knows about images. You can tell that the network knows about images because, for example, we could look at the middle layers and ask what's going on there, and you'd start to see things that in some cases resemble what you would see if you recorded from a neuron in a brain, which is kind of remarkable. Or you'd see things that were meaningful. This space in the middle is sometimes called the latent space of the network; these hidden layers are where the magic is happening. But like the brain, with these networks we sometimes don't really know what's in these latent spaces, these hidden layers, because they weren't hand-crafted; the learning rule is doing the work. We know the learning rule, we know the dataset, but we don't know what solution was found. So sometimes the machine learning person is in the weird situation of not being able to explain why the answer is what it is, which is a funny way in which the network, the machine, becomes more like a person, though not necessarily in the way we would like: it becomes inscrutable.
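
Here is a minimal sketch of that compress-and-reconstruct idea: a tiny linear autoencoder trained only on reconstruction error, with no labels anywhere. The data, sizes, and learning rate are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Unlabeled data: 200 samples of 30-dimensional inputs that secretly live on a
# 3-dimensional structure, so compression is possible.
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 30))

W_enc = rng.normal(scale=0.1, size=(30, 3))   # encoder: input -> small latent layer
W_dec = rng.normal(scale=0.1, size=(3, 30))   # decoder: latent layer -> reconstruction
lr = 0.02

for step in range(5000):
    Z = X @ W_enc                  # compress: the hidden / "latent" representation
    X_hat = Z @ W_dec              # reconstruct the input from the compressed code
    err = X_hat - X                # unsupervised signal: reconstruction error only
    # Gradient descent on the mean squared reconstruction error.
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

print("reconstruction MSE:", round(float(np.mean(err ** 2)), 4))
```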


Nick Jikomes 20:59

I see. Is that why, like... so it was a few years ago now when Google DeepMind created the AI that could play Go, and it was beating world-champion Go players. And as they were watching all of this happen, I think they had expert Go players watching what was happening, and if I remember correctly, a lot of them would watch what the AI was doing and they couldn't discern any strategy; it wasn't obvious what was going on. And yet the AI was winning, reliably. Is that what you mean when you say we don't know what's going on in the hidden layers, that the AI is coming up with strategies that basically no human being would come up with in their own head?


Zach Mainen 21:40

Yeah, something like that. I guess where this comes up is, let's say we're using AI to decide who gets a loan. So there's data, your credit application is being processed, and instead of a person deciding, you're feeding it into a machine. You're teaching it something, but what you're teaching it is not explicit criteria for why it should reject this application or accept that one; you're just feeding it, say, the reliability scores of previous applications. So the network is learning something in the hidden layers in order to do that task, and in the end it's a statistical model of what happened in the past that is probably predictive of the future. But if you ask it why it suggested rejecting application X, you can look at what pattern that application produced in the network, and it isn't necessarily interpretable even though you can see it. In other words, you can look at the network, the weights, the hidden layers and what happened in them. It's a bit like going into the brain with a recording electrode or an MRI machine and trying to look at what's going on. There's no guarantee, in either case, that there's something human-interpretable. The problem gets split up in ways that are just complicated, because that's not the way it was architected; it was architected by specifying the rule. The engineer behind the neural network can discover things about the networks they create, because what you get out is not just what you put in. And this is in contrast to the old way of doing AI, which was to engineer it by hand with rules, to explicitly have if-then statements. If you go to the previous generation of chess-playing computers, or Go-playing computers, they had knowledge that was essentially input by chess experts, or experts in other domains, trying to tell the machine, in machine-speak...


Nick Jikomes 24:10

...how to do the task, by specifying, like: yes, this, then this.


Zach Mainen 24:14

Explicitly, yes. And in the brain? You know, there is no creator of the brain who explicitly said there are going to be areas, or there are going to be simple answers. So coming back to neuroscience, because I'm not an AI engineer, I'm a neuroscientist, I think a lot of these things are relevant: we shouldn't necessarily expect the architecture of the brain to make sense in terms of simple hypotheses. We try to look to neural networks for inspiration, and what we see there is not always hope-inspiring, right? Even in the case where we have a network that some engineer designed, where we know exactly what the architecture is, exactly what the learning rule is, and exactly what the data was, that doesn't mean the network is understandable to a person who has all of that information. And in the case of the brain, we don't know what the learning algorithms are, we don't quite have the entire architecture, although there's some progress in mapping the connectivity of the brain, and we certainly don't have the ability to record from all the units, all the neurons. If we did, we still might not be able to give simple answers to how any particular behavior works. That sounds pretty pessimistic, but another way we describe this is, you know, job security, right? There are a lot of puzzles that will keep us busy for a long time as researchers.


Nick Jikomes 26:01

Yeah. And there's another thing that might be worth unpacking up front. We talked about input layers, output layers, and hidden layers with stuff happening in between. We talked about how the input layer for the body or the brain is your sensory organs, where information comes in, and the output layer is the behavior that's generated from all of the information that gets processed in all of these layers of complexity. There's also a notion in neuroscience that you often hear, which is feedforward versus feedback, or top-down versus bottom-up. Can you explain what that is for people, and how neuroscientists use those terms?


Zach Mainen 26:37

Yeah, that's a good point. So if we think about feedforward and feedback from the perspective of input and output, feedforward refers to connections that pass from the input toward the output, and feedback is information coming the other way. In a very simple artificial network, the activity is entirely feedforward. In the example of the image network that was getting pictures and doing cat-dog classification, the activity passes forward, but the learning process passes information backwards. In that type of supervised learning, if you think about it, how are the connections toward the input side going to learn about the output? Something needs to convey information backwards. In the brain, there's a lot of feedback. There's a lot of connectivity, sometimes more backwards connections than forwards connections. Because there are so many layers and so much complexity in between, sometimes it's not even entirely clear whether a connection is forward or backward; there's a very complicated pattern of connections. But if you look at a sensory system where you're one step from the retina, or one step from the olfactory epithelium that connects the nose to the brain, it's quite clear what is feedforward and what is feedback. And you can ask: what are all those feedback connections doing? That was a hot question 30 years ago and it's still a hot question 30 years later. I think we don't really know, at the level of architecture, how to think about feedback. But one conjecture would be that it has something to do with a kind of learning process. People are trying to map some of the algorithms used in artificial neural networks onto the brain, and some of those learning rules involve passing information about the correct answer backwards. It's tempting to try to figure out how the brain might be doing the same thing, but it's a very interesting and, I think, unsolved problem what exactly all that feedback is doing.
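
As a concrete illustration of "activity flows forward, the teaching signal flows backward", here is a minimal two-layer network trained with backpropagation on a toy problem. This is a sketch of the standard textbook algorithm, not a claim about how cortical feedback works, and all of the data, sizes, and rates are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy regression data: 200 inputs and an arbitrary target function to learn.
X = rng.normal(size=(200, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

W1 = rng.normal(scale=0.3, size=(10, 16))    # input -> hidden
W2 = rng.normal(scale=0.3, size=(16, 1))     # hidden -> output
lr = 0.05

for step in range(2000):
    # Feedforward pass: activity flows input -> hidden -> output.
    H = np.tanh(X @ W1)
    y_hat = (H @ W2).ravel()

    # Feedback pass: the output error is sent backwards so that the
    # input-side weights can learn about what happened at the output.
    err = (y_hat - y)[:, None]                 # error at the output layer
    grad_W2 = H.T @ err / len(X)
    err_hidden = (err @ W2.T) * (1 - H ** 2)   # error propagated back to the hidden layer
    grad_W1 = X.T @ err_hidden / len(X)

    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("final mean squared error:", round(float(np.mean((y_hat - y) ** 2)), 3))
```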


Nick Jikomes 29:21

And when we record from neurons and try to figure out what they're doing, and we're thinking, at least metaphorically, about brains being like computers in some ways, are there any clear examples of neurons in the brains of animals doing a very clear computation? And what does that look like, if it's there?


Zach Mainen 29:46

That's an interesting question. So a clear computation... an example where we have a clear way to think about what a particular brain area is doing?


Nick Jikomes 29:59

Yeah. Like one that I think you hear about often, or that's maybe a famous potential example of this: there are neurons in the brain that release dopamine, and some of these neurons do something called a reward prediction error. My understanding is that this was conceived before it was actually found in the brain. I'm just wondering if you could explain what that kind of thing is, and what we actually mean when we say we've recorded neurons that do a computation like that.


Zach Mainen 30:28

Okay, let me give you a brief abstract answer, and then let's get into dopamine and so on. So the way to ask that question is basically to say: I have a function, or a task, that I think the brain, or the organism, is doing. That might be detecting moving dots on a screen; say I've trained the animal to detect whether the dots are moving left or right. Then I try to come up with some description of the optimal way to do that, if I were a computer or an engineer, and I look to see whether the behavior of the animal is optimal, and then whether the pieces of the brain involved in that behavior are doing the things they ought to do in order to support it. That is, in a way, the holy grail of neuroscience, or bits of the grail: to find what the brain is optimizing, what function it's performing, and trace that down to the substrates. But because there are many, many possible functions, there isn't just one story like that; for each task or function you came up with, there would potentially be another story. Some of the simple ones work out. If you go to a relatively simple organism like a horseshoe crab and look at its eye, you can tell a story about how the eye of the horseshoe crab evolved to discriminate visual input. That was one of the simpler eyes you could deal with in biology, and you can look at the wiring of the eye and get a fairly satisfying answer about certain things. If you go to the human brain and ask what the function of the visual system is, and what the architecture is, the problem explodes in complexity, and you kind of have to take it little piece by little piece. But that is what many of us in computational neuroscience would say is one of the goals: to understand a particular function really well, whether it's being computed optimally, and what exactly all the pieces are.


Nick Jikomes 33:02

I see. But I guess the point is there are many examples in the brain, in many different systems, where the neurons are computing things, where I think we can say they're doing something akin to subtraction, or akin to things that our computers are doing at a very basic level.


Zach Mainen 33:19

Hmm, it's interesting, I hesitate to say that. It's hard to say you've really explained any particular part of the brain. Let's say we have a recording from a part of the brain called the hippocampus, which is involved in functions like navigating the environment. It's deep in the brain, but we have really good recordings from the hippocampus, and we see some interesting activity: that activity correlates with where the animal is in the environment. That is, a particular cell fires, is activated, when the animal is in one corner of its environment and not anywhere else. We're pretty sure the hippocampus is computing something, and that's a nice way to think about it, but is it really just computing the place? We don't know; it's probably more complicated. If we looked at more features of the environment, we'd find more complicated things going on. So I'm just saying we don't know. Saying that it's a computation is more like an assumption of this field; it's the way we're committed to thinking about the brain. It's not a conclusion. The conclusion is more: what is the computation? Computational neuroscience is just the field that thinks this is a useful way to think about everything, right? So it's more like the layer of assumptions. Now, to get into dopamine: this area is pretty interesting, because you were mentioning reinforcement learning, I think. To get there, one step I wanted to put on the table that I didn't mention: we talked about supervised learning, in which there was an answer, and we talked about unsupervised learning, in which there's no answer and the network has to figure it out all by itself. A third class of learning, generally speaking, is reinforcement learning, where there's kind of a hint. It's not a full answer, but imagine the environment is saying good or bad. You could think of this like training an animal to do a trick: if you give it a food reward, you're not telling it what it has to do, you're telling it that it just did something useful.


Nick Jikomes 35:41

Yeah, there's some intrinsic property of the animal. We all experience this, right? We don't have to learn to have emotions that we like or don't like; our body sort of just creates these things. And that helps us learn directionally where to go, because we naturally want to reproduce the good feelings and avoid the bad feelings.


Zach Mainen 35:58

Yeah, you might say evolution has built in some kind of hardware that doesn't need much interaction with the environment, and that is telling the animal: you need to avoid extreme temperatures, you need to find food when you're hungry. Those things are not learned in a strong sense; they're innate, they're evolutionary necessities. And the signals that satisfy those basic organismal needs are leveraged by the brain to learn more complicated things. You don't need to learn that it's good to be fed when you're hungry (although animals can be surprising here, and need a little bit of experience to learn even things like the fact that water is good for quenching thirst, almost at that level). But more to the point: can you use the reward you got when you finally succeeded in getting the food to understand what you did right to get there? Imagine there's a mouse in a maze, and the goal for the animal is to get out of the maze. Every time that happens it gets rewarded, because it likes to be free and not in the maze, for example. Then there's a kind of learning rule that's very useful, which allows you to use just that last reward to propagate that conclusion backwards, through what happened before that, and before that, and before that, and to construct, based on just that simple answer, the kind of model that is needed to get there. To say: in this case I went left, right, left, and then I got a reward; the next time I went left, right, right, and I didn't get a reward. Through this process of abstract feedback, it's possible to tune the network to come to understand what sequence of things should be done, or even to create a more abstract model of the world. To go back to machines: another example is what DeepMind trained. Go is a good example, but I think of the video games. Training a network to play video games, or Go, can be done entirely based on that simple reinforcement signal, as it's called, which just tells it how well it's doing, without having to tell it exactly how to play.
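
Here is a minimal sketch of that kind of backward reward propagation, using tabular Q-learning (a standard reinforcement learning algorithm, chosen for brevity rather than biological realism) on an invented corridor "maze" where only escaping delivers a reward:

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy "maze": a corridor of 5 positions; moving right from the last one escapes.
n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # learned value of each action at each position
alpha, gamma = 0.1, 0.9              # learning rate and discount factor

def step(state, action):
    # Environment: only escaping the maze is rewarded; every other move gives nothing.
    if state == n_states - 1 and action == 1:
        return None, 1.0
    nxt = min(max(state + (1 if action == 1 else -1), 0), n_states - 1)
    return nxt, 0.0

for episode in range(500):
    s = 0
    while s is not None:
        a = int(rng.integers(n_actions))           # behave randomly; Q-learning still learns
        s_next, r = step(s, a)
        target = r if s_next is None else r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a])      # the exit reward propagates backwards over episodes
        s = s_next

print(np.round(Q, 2))   # "go right" ends up valued higher at every position
```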


Nick Jikomes 38:59

I see. So you just bake in some notion of "you scored high" or "you did good", and the computer "likes" that, scare quotes, or the opposite, "you did bad". And then the strategies that emerge from that don't have to be explicitly programmed into the machine; it just sort of happens automatically by giving these coarse-grained signals of goodness and badness.


Zach Mainen 39:23

Yeah. Generally, goodness, or even just the absence of goodness, is enough. But yes, essentially: goodness signals, reinforcement signals. Well, go ahead.


Nick Jikomes 39:37

I was just going to say, to think about it from the perspective of food: none of us had to learn to feel hungry, we just get this feeling, and you don't like feeling hungry, you want to shut that off. And there are many, many different foods you can eat to shut it off, and many different ways you can acquire those foods. I guess what this approach buys you, so to speak, is that you can have something that's very coarse-grained and crude, like this general feeling of "I'm hungry and I don't like this", and with just that seed of a signal, you can learn an almost infinite number of ways to get food, and strategies to deploy to get there, just from that very simple feeling, that very simple, crude signal.


Zach Mainen 40:26

I think so. That is the hope: if reinforcement learning theories are borne out, then it's possible to create machines that learn from these types of signals. But the three types of learning probably all have some kind of place. There are times when we are supervised, like when we go to school and get explicit teaching, or learn by observation of other people; even animals can do observational learning in some cases. And there's the unsupervised type, where there is no good or bad at all. For example, language learning is not done on the basis of reinforcement. There are corrections, right, in school, about whether you're speaking correctly, but most learning of language doesn't require explicit grammatical corrections; it's done by a kind of self-supervised and observational process. So reinforcement learning is a super interesting and important topic, but it's pretty clear there are multiple forms of learning going on, and probably, as we discover more about how to build systems that learn interesting things, we'll come up with even more gradations and variations on those.


Nick Jikomes 41:47

So as we're thinking about things like learning, AI, and reinforcement learning, we've talked about sensory information coming in, we've talked about the idea of an output, or a behavior that you generate, and we've talked about this notion of feedforward and feedback. There's also a distinction in neuroscience that gets made between fast neurotransmission, the point-to-point communication between individual neurons, and something called neuromodulation. Can you talk a little bit about what neurotransmission and neuromodulation are, and how that starts to tie into some of these things?


Zach Mainen 42:22

Yeah, so neuromodulation is one of several ways that neurons communicate with each other. What I'm about to say is a little bit simplistic, I think, but if you go to the textbook you see a story something like this. There are fast transmitters, the bread and butter of passing messages between layers: glutamate, the excitatory transmitter, and GABA, the inhibitory transmitter. On the other end of the spectrum there are hormones, which are very broad and often slow; they may circulate through your entire body. And in the middle you have what's called a neuromodulator, which is broad. In contrast to the fast connections, which are point to point, one neuron to one neuron through a synapse, and to the hormones, which are super general, the neuromodulators are very broadly distributed, or broadly acting. The neuromodulators tend to come from neurons that are clustered in a group, often in the part of the brain called the brainstem, which is a more primitive part of the brain, and they project very widely; their outputs, their axons, go to many parts of the brain, like the cortex, the hippocampus, or the basal ganglia. Even a single neuromodulatory neuron might cover a reasonable percentage of the entire brain, I won't give you a number, but 20% or so; a single axon could spread all the way through the entire cortex from front to back. On the other hand, what we now know is that those signals are also somewhat fast, or at least some of the signaling is as fast, or almost as fast, as the classical excitatory and inhibitory signaling. Those neurons may release a neuromodulator (we're about to talk about serotonin and dopamine and so on) and also release glutamate or GABA, or even another chemical. So some of the signals coming from those neuromodulatory neurons are slow and some are fast, but it's the feature of being very widely distributed that's particularly interesting, because having a potentially fast signal that reaches much of the brain at the same time is a relatively uncommon thing in the brain. It gives those neuromodulatory neurons the power to distribute information rapidly, and that makes them, just on this anatomical and simple physiological basis, special. I think that's one thing we can say we know; their capabilities are probably more than we know, but we know they're interesting in that respect. And if we map this onto the topic of reinforcement learning, it was noticed 20 or 30 years ago, probably earlier, that there's an interesting compatibility here. A reinforcement signal is also something in a neural network that needs to be widely distributed: everyone needs to know that something good just happened. And it would benefit from being temporally precise; the signal should say it just happened now, not a vague sense that it happened in the last hour. So there's been this idea that neuromodulators perform this special computational role, and there are various theories about what those roles might be, for the several different neuromodulators. Two of the pioneers in this were Peter Dayan and Kenji Doya, who made theories for dopamine, serotonin, norepinephrine, and acetylcholine, which are the four big, well-known neuromodulators.


Nick Jikomes 46:59

I see. So these neuromodulatory cells in the brain are interesting and sort of special, and different from many other neurons in the brain, in that the cell bodies live in these small little neighborhoods, typically deep in the brain, like in the brainstem, but even though there are relatively few of them, they send out their connections over large chunks of the brain. So they're capable of broadcasting signals very widely across many different parts of the brain, and that's different from what a lot of other neurons in the brain look like. You mentioned some of them, so dopamine, serotonin, and acetylcholine, and I want to talk about some of them in a fair amount of detail. But before we get there: when you hear the cartoon description of these things, people often talk about them as if each one has a very well defined set of roles that are distinct from the others. So you will often hear dopamine basically equated with pleasure or motivation, and you will often hear serotonin described as being for mood, and things like that. Is that too much of an oversimplification? How much do we know about what they're doing, and how do you start to think about that?


Zach Mainen 48:15

So it's definitely an oversimplification. Let's look at it this way. On the one hand we have glutamate and GABA, and the way we think about those is as passing information from place to place. To ask "what does glutamate do behaviorally?" is a bit like... the answer is "yes", or something; the question doesn't quite make sense. With neuromodulators, why do we think differently? Why not think it's just another chemical that transmits information in a different way? Why are we tempted to say, oh, it has some function? Well, the answer is because it turns out that most psychoactive drugs, not all, but many of the important ones, have some target which implicates one of the four neuromodulators. Typically a psychoactive drug is going to interfere with or mimic the signaling of dopamine, serotonin, norepinephrine, or acetylcholine, or some combination. And because drugs have specific effects, or at least have effects, let's say: if I gave a drug that I knew activated a dopamine receptor, and the drug had a particular effect, I could then infer, or I might infer, that dopamine was somehow involved in that function; or if I blocked the dopamine receptor, I might block that function. So there's been a long history of pharmacology of the brain. Giving drugs to animals or people allows you to ask that question. It's a kind of perturbation of the brain, which you can assay at any level, but mostly it's been asked at the behavioral level, not at the neural level. So we know quite a bit about how drugs affect behavior, and that literature let people start imagining what the neuromodulators would be doing endogenously, intrinsically. But it's a leap of induction, or of hypothesizing, to do that mapping. In some cases it's worked remarkably well, or at least we've come up with some interesting ideas, and in other cases less so.


Nick Jikomes 50:50

Let's talk about dopamine for a little bit.


Zach Mainen 50:53

Yeah, dopamine is our poster child for success.


Nick Jikomes 50:57

So there's a couple of things I want to discuss here. The first is: why does dopamine have a reputation for being tied up with pleasure, addiction, and motivation? And if you could do your best to describe what the state of the art of understanding is in terms of dopamine: why do some people say dopamine equals reward, or dopamine is important for motivated behavior, things like that?


Zach Mainen 51:26

Okay, there's so much data and so many things to go through that we've got to try to keep it high level. So first, there are a bunch of drugs that are familiar to people. An example would be amphetamine, or cocaine, which act on the dopamine system. Those drugs tend to produce a kind of wanting of the drug experience; they also tend to be pleasurable, at least initially, although that part seems to be dissociable from the actual wanting part. That gives this idea that if we activate the dopamine system, we produce something like a reinforcement, or a wanting, which goes back to that reinforcement signal we were talking about. There are a number of other lines of evidence. One would be that if you stimulate electrically in the area of the brain where those dopamine neurons are located, you activate the neurons, and that electrical stimulation is something that animals find compelling to do. It's even been done in people (it hasn't been done, I think, for a while), and we know that it somehow mimics a kind of reward signal in some kind of subjective sense. So, keeping it with animals: animals will keep pressing a lever to receive cocaine, but they'll also keep pressing a lever to receive stimulation of those neurons.


Nick Jikomes 53:13

Yeah, and so that starts to look a lot like reinforcement learning in terms of how we theorize it might happen. One thing I wanted to ask about, too: people have studied dopamine in the context of natural rewards, drug rewards, and things like that. Earlier we were talking about reinforcement learning, these general teaching rules, and whether or not neurons are computing things. One of the interesting signals in the brain that was discovered quite a while ago now is that these dopamine neurons, at least some of them, do something called a reward prediction error. So what is that, and how does it start to tie the biology of what the cells are doing to this theory about how learning might be happening in a brain?


Zach Mainen 54:00

Yeah, so it's interesting. I was around in Terry Sejnowski's lab as a PhD student when Peter Dayan and Read Montague were working with ideas like this. They were actually coming from the computer science perspective of reinforcement learning, but they were looking at data, particularly data coming from Wolfram Schultz's lab, and together these guys came up with this idea. The learning theory, of which there are now many variants, is temporal difference learning, and the observation is that dopamine neurons display a signal that's very much what you would want from this particular kind of reinforcement learning algorithm called temporal difference learning, or TD learning. To explain that, the thing we haven't gotten to, which I should put into the background, is this: we've been talking about the brain as a kind of feedforward system, with input flowing to output. But the way we should think about the brain, and the way you'd see this diagrammed if we were at the board, is more of a loop. The idea of the loop is basic to what was called cybernetics back in the middle of the last century, and is sometimes now called something like predictive coding, or variations thereof. It's a very core idea about how to think about organisms and the environment. The organism produces an action, the motor output, that acts on the environment, and the environment in turn presents feedback in the form of sensory information to the organism. So: agent and environment, an action and perception loop. The reward signal is part of the perception loop; it's part of what the environment provides to the agent. And in the cybernetics way of looking at it, the problem was a control problem. You can think of the animal's job, the organism's job, as being to get reward, or to keep things in a desired state. In a very simple version, think of a thermostat as the organism and the room as the environment. The thermostat acts by controlling the heating element, say, in a heating system, and that somehow acts on the room; depending on whether the windows are open or closed and how many people are in the room, different things happen. The temperature of the room is sensed, so that's the perception part of the thermostat. Then the computation the thermostat has to do is to compare the desired temperature, the predicted temperature, with the actual temperature, and change its motor output, the heating controller, accordingly. So in that thermostat example, what's critical in this cybernetic loop is to have an expectation of the temperature and to compare it to the actual temperature. For the thermostat, the goal is to keep the room at that desired temperature. But more generally, the way you might think of it is that the thermostat needs to have a model of the room in order to do its job: it needs to know how the room behaves in order to know how much to turn the heater up, how big the room is, how many people are in it.
A really intelligent thermostat, if Google were designing one right now, wouldn't just sense the temperature, it would model the whole room. It would know you're probably going to come in at this time of day, and it would anticipate turning up the heater before you got there, and so on. So this is the way we think about the brain: the brain is the thermostat. The brain is trying to keep the environment, you know, keep the body fed and at the right temperature and all these things, and its job is basically to produce motor output, behavior, to do that. To achieve it, it's building a model of the environment, to allow it to know, when it does something, what's going to happen: what do I do now to keep the temperature stable, so to speak. So the reinforcement is fundamentally part of this loop, and it's being compared to the expected reinforcement.
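
Here is a minimal sketch of that cybernetic loop, with a thermostat as the "agent" and a leaky room as the "environment". The numbers and the simple proportional rule are made up purely for illustration:

```python
# The perception-action loop described above: sense the environment, compare it
# with the expectation (the set point), act, and let the environment respond.
desired_temp = 21.0      # the thermostat's expectation / desired state (deg C)
room_temp = 15.0         # actual state of the environment
outside_temp = 5.0

for minute in range(60):
    # Perception: sense the room and compare it with the expectation.
    error = desired_temp - room_temp
    # Action: a simple proportional controller, clipped to the heater's range [0, 1].
    heater_power = max(0.0, min(1.0, 0.3 * error))
    # Environment: the room gains heat from the heater and leaks heat outside.
    room_temp += 0.8 * heater_power - 0.05 * (room_temp - outside_temp)

print(f"room temperature after an hour: {room_temp:.1f} C")
```

A "really intelligent" thermostat in the sense described above would replace the fixed proportional rule with a learned model of how the room responds, which is essentially the job the reinforcement-learning picture assigns to the brain.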


Nick Jikomes 59:32

Yeah, so an important thing here is that there's this notion of expectation baked into the system, and there's an ability to compute a difference between what you expect to happen, or what you expect or want things to be, and what you're actually sensing in the environment.


Zach Mainen 59:48

Exactly. So in the reinforcement learning field (I'm putting this in a broader context because I think it's helpful for some of the things we'll discuss), you could say quite simply: we know that the brain, when given a reward in a predictable way, can anticipate that the reward is going to come. And the error, what's happening in this TD learning rule, this thing we think dopamine neurons are doing, is that the expected reward has been subtracted from the actual reward. I think the way you should think about that is that the brain is in a loop with the environment, trying to predict what's going on in the environment, and there are many signals that are important for the brain which are basically the environment not doing what the brain thinks it's going to do. Actually, another way to look at it: if the network were being trained in a supervised manner, if there were right answers, then the difference between the right answer and the answer currently being given is another error signal. So this notion of surprise, or error signals, as a learning signal is super important, and even more general than reinforcement learning. The brain is constantly forming expectations about what's about to happen, and when things are as expected, it's generally not as interesting or important as when things are not as expected. If things are as expected, there's no need to worry, no need to change anything. When things are not as expected, that may be that, all of a sudden, glucose levels are too low, or something unexpected happens. We don't expect dark objects to be coming out of the sky; if that happened to us right now, sophisticated as we are, a big shadow looming over us from outside, we would freak out. Those kinds of unexpected things turn out to be really important activators of neuromodulators, and just generally make the brain go crazy. If we step back from the theory, one thing that's clear about neuromodulators, if you record from those neurons, whether it's dopamine, norepinephrine, serotonin, or acetylcholine: if something happens suddenly, unexpectedly, loud noises, what have you, those neurons go crazy. So I think we know really well that one of the design features of the brain is that it's really tuned to have expectations of some sort, and to respond to things that are out of the ordinary. And the response is to learn, to adapt, because the job of the brain is to make the out-of-the-ordinary ordinary. That's sort of the essence of why brains are so important from an evolutionary perspective: they're able to deal with things that never happened before in the history of the organism. So something that's totally out of the ordinary is when learning is important.
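
In textbook notation, that "expected reward subtracted from the actual reward" is the standard temporal-difference error (a general formula from the reinforcement-learning literature, not something specific to this conversation):

```latex
\delta_t = r_t + \gamma \, V(s_{t+1}) - V(s_t)
```

Here r_t is the reward actually received, V(s) is the learned expectation of future reward in state s, and gamma is a discount factor. The error delta_t is positive when things turn out better than expected, negative when worse, and near zero when everything unfolds exactly as predicted, which is the pattern being attributed to the dopamine neurons here.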


Nick Jikomes 1:03:24

So at a very high level, neuromodulators seem to be very intimately connected with this notion of learning, changing your behavior, and adapting to things that are unexpected, which is exactly what enables animals to move around and exist in variable environments that aren't constant.


Zach Mainen 1:03:40

That's right, learning is super important. And the other side of it, generally speaking, would be attention. In the environment there are many things one could potentially pay attention to, and at any moment an organism has to select something to respond to; some things cannot be responded to. Imagine an animal in a forest or a jungle: there are many noises, many plants, the sounds of all sorts of other animals. Even if it were possible to process all that information, it's not possible to direct an adaptive response to everything. So there's a selection that has to go on, in terms of what to learn about, what to study in more detail, where to go; that forces a selection. And again, surprise, or sometimes it's called salience, is typically a function of how unexpected the thing is, at least for the large part of the brain that seems to be devoted to intelligent behavior, the things that require intelligence. There are things that may be routine but still important. Walking is always important, and there are a lot of things going on in walking that the brain obviously needs to carry out effectively. But just walking isn't something that neuromodulatory neurons need to worry about, because it's just walking; you don't need to think about it. So neuromodulators seem to be important in situations when the resources of the brain need to be reallocated in order to deal with something, by focusing attention, and then learning from the outcome of that interaction, you know, learning new lessons.


Nick Jikomes 1:05:35

And so to finish off this part and give people a very concrete sense of what some of these individual neuromodulatory neurons do in certain situations: in, I guess, the classic example of a reward prediction error in a dopamine neuron, you train an animal, like a mouse or a rat, so that whenever it hears a beep it's going to get a snack, a piece of cheese or whatever it likes. When people record from dopamine neurons, what are the neurons doing in terms of their activity when these rewards come as expected versus unexpected?


Zach Mainen 1:06:10

So that's the classic experiment, this type. The classic experiment was with monkeys, but it could be done with the mouse and the cheese. If you deliver the reward, the cheese, unexpectedly, then, as I've been motivating, you'd see a dopamine response: the animal wasn't expecting a good thing to fall out of thin air, and suddenly it happened, and you see the dopamine neurons excited at that moment. If you then put some structure into the environment, and you always precede dropping the cheese with a tone, or a light, three seconds before you deliver the cheese, then after a few occasions of that association the dopamine neurons do something interesting, which should be expected based on the story I'm telling: they stop caring so much about the cheese, because the cheese, though it's good, is not unexpected anymore; it's now expected. The dopamine neurons would now start to find the tone very interesting, because the tone is, let's assume, unexpected: I'm delivering the tone whenever I want, but whenever I do, three seconds later I give the cheese. So what the neurons do is stop responding to the cheese itself, even though it's good, and start responding to the thing that predicts the cheese. That is the essence of how the error flows from the thing that was initially unpredicted to something that, because of the structure in the environment, allowed that unpredicted thing to become predictable. Now the animal that knows the tone precedes the cheese has a kind of model of the environment, of the statistical structure of the experimenter in this case, and the dopamine now starts worrying about when the tone is coming. The last piece of the classic experiment is what happens if you then take away the cheese from the tone-cheese pairing. Here again the mouse has an expectation, because the tone has always been followed by cheese, for example, for the last hour. If you now present the tone but take away the cheese, the dopamine neurons are again surprised, and here's where it gets more interesting: when they're surprised in this case, they're actually disappointed, at least the classic dopamine neurons, and they respond with a small suppression of activity. So in the case where things are as expected, where the tone is followed by the cheese, there's no response from the dopamine neurons at the time the cheese is delivered; everything is good, but no response. When the tone is presented but no cheese is coming, it's a surprise and a disappointment: I was expecting cheese. The dopamine neurons actually shut up a bit; they pause. And this makes the dopamine signal look like what we call a value signal, or a valence signal: it cares about good or bad. The dopamine neurons fire when the reward is delivered unexpectedly, and they're a little bit disappointed when they did expect a reward and it's not forthcoming. So the computation we think they're doing is computing the expected value at this moment and the actual value, subtracting the expected from the actual, and producing a signed prediction error.
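
Here is a minimal simulation of that tone-then-cheese experiment using the TD error shown earlier. The trial structure, timing, and parameters are all invented, and the pre-cue period is simply held unlearnable to stand in for the tone arriving at unpredictable times:

```python
import numpy as np

n_steps, cue_t, reward_t = 25, 5, 15   # time steps in a trial; tone at 5, cheese at 15
alpha, gamma = 0.2, 0.95

V = np.zeros(n_steps)   # learned value of each moment in the trial; pre-cue moments stay
                        # at zero because the tone itself arrives unpredictably

def run_trial(V, reward_present=True):
    delta = np.zeros(n_steps)
    for t in range(1, n_steps):
        r = 1.0 if (t == reward_t and reward_present) else 0.0
        # Prediction error at time t: what actually arrived (reward plus the newly
        # expected future) minus what was expected a moment earlier.
        delta[t] = r + gamma * V[t] - V[t - 1]
        if t - 1 >= cue_t:                    # only post-cue expectations are learnable
            V[t - 1] += alpha * delta[t]
    return delta

for trial in range(300):
    delta = run_trial(V)

print("trained, error at tone time:   ", round(delta[cue_t], 2))        # positive burst at the tone
print("trained, error at cheese time: ", round(delta[reward_t], 2))     # ~0: the cheese is predicted
omission = run_trial(V, reward_present=False)
print("omission, error at cheese time:", round(omission[reward_t], 2))  # negative dip
```

Run trial after trial, the same simple update reproduces the three observations described above: the error migrates from the cheese to the tone, disappears once the cheese is fully predicted, and goes negative when the predicted cheese is withheld.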


Nick Jikomes 1:10:20

I see. So it's as if something like math is being done by the neurons, in relation to whether or not good things are happening as expected.


Zach Mainen 1:10:30

Like all the stories we're telling here, it's a simplification of what's now known about dopamine neurons, but those experiments have been pretty widely reproduced and, in certain circumstances at least, it looks very much like that. Where this is heading is: when we look at serotonin, what do we see? Do all the neuromodulators respond the same way in that situation? We still don't know the full answer, but for serotonin there's an interesting difference from dopamine that I think is quite important. In a very similar experiment, which we can go into if you want, where the dopamine neurons carry this value-oriented signal, the serotonin neurons seem to treat disappointments and unexpectedly good things similarly. They care more about the degree of surprise, or salience, and they fire even for things that are unexpectedly bad.


Nick Jikomes 1:11:52

I see, so they basically just care about whether or not something surprising happens.


Zach Mainen 1:11:56

More or less, yes. In computational terms, you could say one seems to be a signed prediction error and the other an unsigned prediction error, where you take something like the absolute value of the deviation from expectation. But right now we really don't know all the details of what serotonin neurons are doing, and there are a lot of variations on these concepts when you dig into the modeling. One issue is whether these neurons, dopamine or serotonin, really carry single numbers. We've been talking as if there's one number for the goodness of the reward or the amount of surprise, but there isn't just one serotonin neuron or one dopamine neuron, and it looks likely that the story is not a single scalar value but some kind of distributed representation of more than one value. We've also been talking as if these neurons blanket the entire brain with a single signal, but it's not that simple: there seems to be some degree of specificity in the projections, which gives structure to where the reinforcement signals are delivered and received.
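As a shorthand for the distinction just described, under the still-debated reading that dopamine looks signed and serotonin unsigned, the two quantities can be sketched like this; the function names are just for illustration, not standard terminology.

```python
# Signed vs. unsigned prediction errors on the same underlying comparison.
def signed_error(actual, expected):      # dopamine-like: better or worse than expected
    return actual - expected

def unsigned_error(actual, expected):    # serotonin-like: how surprising, either way
    return abs(actual - expected)

print(signed_error(1.0, 0.0), unsigned_error(1.0, 0.0))   # unexpected reward: +1 and 1
print(signed_error(0.0, 1.0), unsigned_error(0.0, 1.0))   # omitted reward:   -1 and 1
```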


Nick Jikomes 1:13:28

So before we get into some of what we're learning about what serotonin neurons are actually doing in animals: historically, why has there been this association between serotonin and mood? Where did that come from?


Zach Mainen 1:13:46

Good question. As I understand it, serotonin became associated with mood primarily through the development of a class of drugs called selective serotonin reuptake inhibitors, the SSRIs most people are familiar with, Prozac and so on. These drugs target the serotonin system and are thought to increase the availability of serotonin in the brain, although we don't actually know what they do nearly as well as you would think. For some people they are effective antidepressants. And largely the story of serotonin and mood came out of the selling of that class of drugs, as a kind of justification: since they are antidepressants, serotonin must therefore have something to do with mood. It was more a reverse inference than a discovery about serotonin that then led to the development of an antidepressant.


Nick Jikomes 1:15:07

I see. And if you're a psychiatrist, say, thinking about depression and SSRIs, one thing you notice is that when people are depressed they feel sad, their mood is low. But another thing you might point out is that people with depression often ruminate, they're behaviorally inflexible, unwilling to try new things; they get stuck in a rut and don't believe anything is going to change. What have we started to learn about the potential link between serotonin and this notion of behavioral flexibility, of wanting to keep doing the same thing versus trying something new?


Zach Mainen 1:15:52

That's an interesting topic. There's a line of studies that has implicated serotonin in things like cognitive flexibility, to use one term, or impulsivity. Rumination is a pretty different concept, but you could see rumination as a kind of lack of flexibility, or in some ways the opposite of impulsivity; impulsivity is something like not being able to withhold acting, not being able to delay gratification. But notice that we're now transitioning away from computational terms defined by algorithms and AI, and away from physiology and circuits, into a world built largely on pharmacology and behavior: give a drug of a certain type that activates certain receptors, run a behavioral test, extract a couple of features of the behavior, and see what you get. In all of this we're on much shakier ground, because there are many ways to map these behavioral experiments onto the brain or onto models of the brain. That caveat said: cognitive flexibility. One type of task you can set up to illustrate cognitive flexibility is a reversal task, and a reversal task is actually the situation I described in the mouse example: a person or an animal has for some time learned a particular association, say a tone and cheese, which I'd also call a structure of the environment, and then, after having learned it, suddenly encounters a situation that violates its expectations. That calls for forming a new association, or more generally for flexibility.

Here's why the problem is interesting. Imagine a mouse in the forest that is used to foraging at a particular site, the analogue of the tone, and at that site it finds, let's say, seeds; acorns are probably too big for most mice, and mice in the wild don't forage for cheese or hear tones, so we'll make it a bit more ecologically minded. Every day the mouse goes to this location and finds seeds, so it forms the expectation that that tree is a good source of food. Then one day it wakes up, goes to the tree, and there are no seeds. What is the mouse to do? Should it start unlearning, just forget what it learned? Suppose it goes back a couple more days and there are still no seeds. Should it simply relearn the association, decide that the location is no good anymore, and go try somewhere else? Or should it be more sophisticated and say: maybe something else is going on here, something else about the world has changed so that this tree is currently no good, but who knows, maybe in two weeks it will be a good source of seeds again. The difference, in the second case, is: let's not erase what we knew about this association. Let's keep it, put it in a different compartment, and learn something else. Maybe there's another sign that things are different: maybe the season has changed, maybe there are signs of another type of mouse, a competitor, and that's why the seeds are gone. This kind of situation, where expectations have been formed and then violated, calls for something more than just relearning. It calls for cognitive sophistication, or a kind of flexibility, for building more complicated representations of the world, representations in which things can change from place to place and from time to time. And this is what theorists think, computationally, is going on. Even when the mouse unlearns the tone-cheese association, there's evidence that the brain has not forgotten the original association; it's still there. If you bring the association back, the mouse quickly relearns the old information. So it seems rather that the brain has figured out there are multiple contexts here: one context in which the association holds true, and a new context in which it doesn't. The brain is starting to build a map of the world, not just in the sense of space but in a higher-order sense: the seasons, for example, are a thing that changes the availability of resources; I can be in the same place at a different time of year and things will be different. So I don't need just one map of the whole forest, I need a different map depending on which season it is, or which time of day, and so on. This kind of cognitive map making is something we don't know much about; it hasn't been studied that much in animals, and it's not easy to study in people.

Now, I haven't yet explained why we think serotonin is involved in this, so let me fill that in, sorry for giving it backwards. The evidence is that you can do pharmacological manipulations to inhibit serotonin, and animals are slower to do this kind of relearning when the association changes; it's pretty much that simple. And when you look at the signals from the serotonin neurons, whether the outcome is worse than expected or better than expected, you still get serotonin signals. So it's a signal appropriate for saying your map is wrong, your model is wrong; it doesn't matter whether you're disappointed or elated, your model is wrong, time to fix the model. Fixing the model might be a subtle adjustment, like I need to be a little bit faster next time, but it can also be the wholesale "oops, I'm in the wrong context": I thought it was the right time to get the seeds, but maybe my timing is off. And once you start thinking about this, you realize we're doing it all the time; our behavior is completely different depending on the situation. What's right in the office at this moment might be completely inappropriate in the middle of the cafeteria, and what's appropriate to do in your bathroom is completely inappropriate on a Zoom call, as you would not want to find out. That's exactly this kind of thing: the same stimuli, the same immediate set of objects, isn't always processed to produce the same output; it's very context dependent. And if we start thinking about mental illness, it's said of schizophrenia, for example, that a lot of it is not doing the wrong things but doing the right things at the wrong time. Or with depression, a lot of the behavior is also appropriate, but inappropriately prolonged, or inappropriately expressed or intense, and so on.


So if serotonin has to do with these sorts of things, the first thing that comes to mind from these observations is that we're looking at a signal that has to do with learning, or adapting, or changing the rate of learning. But it's really even more complicated than that, because sometimes it's not about when to start learning, it's about when to divide your world into different boxes. And that makes it more complicated to think about what would happen if it wasn't working right: a person might no longer be making the right boxes for their experiences, not landing in the situation they should be in, splitting situations too much, or merging them.
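One toy way to sketch this "boxes" idea in code, purely as an illustration and not a model anyone has fit to behavior, is an agent that keeps a separate expectation per context and, when its predictions keep failing, switches to or opens another context instead of overwriting the old one. The threshold, patience, and learning rate below are arbitrary assumptions.

```python
# A toy sketch of "putting experience into different boxes" rather than
# simply unlearning: one expected outcome per context, with a switch when
# predictions keep failing. All parameters are illustrative assumptions.

class ContextLearner:
    def __init__(self, alpha=0.3, threshold=0.6, patience=3):
        self.alpha, self.threshold, self.patience = alpha, threshold, patience
        self.contexts = {0: 1.0}   # context 0: "this tree gives seeds", already learned
        self.current = 0
        self.streak = 0            # consecutive big surprises

    def observe(self, outcome):
        error = outcome - self.contexts[self.current]
        if abs(error) > self.threshold:
            self.streak += 1
            if self.streak >= self.patience:
                # Don't overwrite the old map: reuse a context that fits the
                # new outcome, otherwise open a new one for the new situation.
                fits = [c for c, v in self.contexts.items()
                        if abs(outcome - v) <= self.threshold]
                self.current = fits[0] if fits else max(self.contexts) + 1
                self.contexts.setdefault(self.current, outcome)
                self.streak = 0
        else:
            self.streak = 0
            self.contexts[self.current] += self.alpha * error  # small adjustment
        return self.current

agent = ContextLearner()
for _ in range(3):
    agent.observe(0.0)                 # the tree is suddenly empty, day after day
print(agent.contexts, agent.current)   # {0: 1.0, 1: 0.0}, now in context 1; old map kept
for _ in range(3):
    agent.observe(1.0)                 # the seeds come back
print(agent.current)                   # back in context 0: nothing had to be relearned
```

The point of the design is the last line: when the seeds return, the agent jumps straight back to the old context rather than relearning the association from scratch.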


Nick Jikomes 1:26:49

I see. But in general, it sounds like what we know reasonably well experimentally so far is that when serotonin neurons are less active, that tends to be associated with keeping on doing the same thing, with an animal preserving whatever behavioral strategy it's using, and when they become more active, that's associated with changing what you're doing.


Zach Mainen 1:27:14

That's exactly the tendency. Going back to the error signals: serotonin neurons, like other neuromodulators carrying error signals, are generally saying something like "the model is wrong, so switch, do something; what was expected didn't happen, you need to change." Absence of serotonin, or of other neuromodulators, tends to mean: don't worry, keep doing what you're doing. It's a very crude picture, probably way too crude at a certain level, but it's a way of thinking about serotonin that's on the same plane as the way we're thinking about dopamine. Rather than thinking of serotonin as another signal for good, a positive mood signal, which doesn't seem to be the case, since the neurons also fire for things that are not good, it seems serotonin should be thought of more in terms of flexibility, as having the effect of flexibility, among other things presumably, but being driven by signaling errors, by signaling surprise.


Nick Jikomes 1:28:41

Yeah, interesting. And that does start to get you thinking about why different serotonergic drugs have the kinds of effects they have. To the extent that SSRIs are effective, someone in a depressed state needs to get out of it, and it would make sense, based on what you were just telling us, that elevating serotonin levels would, at least for some people, help them change their behavior and remap how they respond to stimuli in a different context, or different contexts, plural. Or when you think about the classical psychedelics and what they're being investigated for therapeutically, you're stimulating certain serotonin receptors, and in general the kinds of experiences people report, and the depression studies that have been done, at least directionally involve updating your behavior, becoming more flexible, less rigid, less stuck where you're at. Is that how you start to think about why some of these serotonergic drugs have the general kinds of effects they do?


Zach Mainen 1:29:51

Yeah, I think that's pretty much right. This idea that serotonin has this function is by no means a consensus in the field; I'm not even sure it's a consensus in my lab. But if we go to psychedelics, there's actually a bit of history here with serotonin, this idea of change promotion, of behavioral change. I think Michael Pollan's book was titled How to Change Your Mind, and the idea goes back to at least the 60s, so it's there as a thread. With SSRIs there are also experiments showing, over time, a kind of increase in certain types of neural plasticity. During development there are forms of plasticity involved in large-scale changes to the nervous system, like when an animal develops with one eye closed, the classic experimental paradigm, where you see changes in the visual system that are irreversible or hard to reverse. Serotonin seems able to unleash the ability to rewire ocular dominance in the visual cortex again. And this led to one model that I thought was quite early and influential: the idea of simply undirected change as the function of this class of drugs. Perhaps the system isn't literally just that, but think of it as a null hypothesis, a very simple hypothesis: undirected change. If you're kind of stuck and not really in the right place, then turning up the knob on these error signals, turning up the knob on change, can be good. If you're in a really, really bad place, then random change is, on average, maybe better than nothing. To me, now that we're getting into depression, this recalls the fact that a bunch of the therapies that work best for depression are still ones we don't like to talk about much, things like electroconvulsive therapy. ECT is a super effective antidepressant, just a


Nick Jikomes 1:32:51

crude sort of shake the system out of wherever it's at right now.


Zach Mainen 1:32:55

Yeah, ECT and TMS are among the most effective antidepressants; they're just not the first line because they have side effects. Pretty much all these drugs have side effects in some sense. I have a colleague who is a practicing psychiatrist, and I know others, who would say: if I were severely depressed, this is the thing I would go try. And yet ECT has a super bad reputation.


Nick Jikomes 1:33:27

Yeah, naturally, when you say it, I think of the 1950s, these very crude, primitive-looking psych wards where they're basically just electrocuting people.


Zach Mainen 1:33:38

Yeah, exactly. And there's a whole context there. If we talk about patients being subjected to ECT, possibly against their consent, we get a kind of horror story, One Flew Over the Cuckoo's Nest. But likewise, if we go back to the early days of psychedelics, we see prisoners being experimented on, largely against their consent, or the CIA experimenting on consenting and non-consenting subjects with psychedelics; it's all a horror story. So with ECT, personally, I agree it seems like something I would tend to avoid, but I think that's my bias, the sense that electrical shocks are going to destroy something that should be left as it is. We don't really know. We know there are occasionally some memory issues, but benzodiazepines also cause memory issues. So I don't know. Since we don't really know what any of these things are doing, saying that psychedelics are better involves a whole lot of considerations that need to be taken into account; nothing is perfect. My point in bringing this up is certainly not to say that ECT is better than psychedelics, I don't think it is, but to say that the theory is about the same: you shake the snowglobe, however it's put, and if you're in a bad place, you will more likely than not end up in a better place, which is a pretty low bar. From a science perspective, maybe that sounds a bit bleak; we would like these things to be doing something more interesting behaviorally, we would like psychedelics to be giving mystical experiences, not just shaking the snowglobe. But for a neuromodulator, I think a better way to think about it, to begin with, is something closer to a very crude shock to the


Nick Jikomes 1:35:59

system, just like a reboot. Yeah, have you tried turning it off and on again?


Zach Mainen 1:36:03

Yeah, yeah. And it forces us not to put so much weight on the drug itself; what matters is the drug's ability to interfere


Nick Jikomes 1:36:16

with a sort of broadcast system that's naturally providing a very general, widespread signal to the brain, whatever that


Zach Mainen 1:36:25

is, exactly. And since we can only interpret that broadcast signal insofar as we understand what the machinery it's interacting with is doing in the first place, we've pushed the problem from the drug to the neuromodulator, to the neuromodulatory system, to the brain itself. And then we're back to: what is this process I've been talking about, cognitive remapping? How does that work? Is it really going on? Some of these questions also become: is this a case where we know what the right thing to do is? Is it possible, for example, to design an artificial system that suffers from the same problems a human suffers from? This is kind of the flip of what we talked about before, and I find it an interesting thing to push on. Normally we think of computers as a metaphor for understanding the brain, but if we're right, the brain should be a metaphor for understanding computers too. We get inspiration from neurons and how they work to design better computer systems, but why not from behavior? Why do we get depressed? Is that just a bug that couldn't be worked out? We can throw the problem onto all sorts of incidentals, that we weren't adapted to this or that, but can we also ask whether depression is an inevitable consequence of the way such a system has to be designed, something that cannot be avoided if we build a system that is as adaptive as possible? It's not clear that that's a false statement. We tend to think of building computers that can do intelligence, that can do all the features of human behavior that we love; we don't think of wanting to design a system that gets depressed. But is it possible that some of these maladaptive features of the brain are inevitable,


Nick Jikomes 1:38:37

likely consequences of something that is designed well for reinforcement learning or whatever?


Zach Mainen 1:38:42

Exactly, yeah. Or they're due to compromises for which there cannot be a better solution; the best of all possible systems, one that's as flexible as possible, optimized in a certain way, might still have them. That's something we can't say all that much about, but it's an interesting way to look at the problem.


Nick Jikomes 1:39:10

Yeah. And it is interesting to think about what these psychedelics are doing in terms of interfacing with the serotonin system, insofar as it relates to this potential story about behavioral flexibility. Have people done experiments such as recording from the cortex, or somewhere else in the brain, when you give an animal psychedelics, or when you fire up the dorsal raphe serotonin neurons that live in the brainstem? Do we know very much about what happens in terms of large-scale brain activity when you either give a serotonergic drug like a psychedelic or make the serotonin neurons themselves fire more?


Zach Mainen 1:39:56

So, two topics. One is what do psychedelics do to brain activity. And the other, which I should distinguish, is how they actually interact with the endogenous system, yeah, what's


Nick Jikomes 1:40:12

the endogenous response of the brain when serotonin neurons become more or less active?


Zach Mainen 1:40:20

Okay, sorry, yes. And there's also the question I thought you were asking, which I think is quite important: how do psychedelics modulate, or interfere with, or amplify the endogenous function? I'll give a slightly sheepish answer: I don't think we really know that much yet about any of that. Certainly there are experiments in humans, with psychedelics and MRI, but from a systems or computational point of view these are fairly 30,000-feet-up kinds of experiments, and I don't know what to make of them in terms of how the brain is computing. I can interpret them in terms of, say, change, or


Nick Jikomes 1:41:11

like the randomization ideas in humans and stuff like that. Yeah, I


Zach Mainen 1:41:14

don't know that those constrain very much how we should think about neuromodulatory function and how serotonergic drugs interfere with it.


Nick Jikomes 1:41:26

I mean, is anyone doing the equivalent experiments in animals, where we can zoom in and see in more detail how the brain is responding? Is anyone doing imaging or recording?


Zach Mainen 1:41:37

Various labs have started on this since, I guess, the recent psychedelic renaissance, so there are more studies coming out, and our lab is involved in some of them; with psychedelics per se, that's something we're just starting. Let me try to give a short answer to what we know. I would say we're just starting to scratch the surface of what's happening. In our lab we're looking at things like: is the brain actually more activated or less activated when serotonin is released? That's the sort of level, almost the fMRI level. There was a recent fMRI rodent study, for example, from Kenji Doya's lab, which I mentioned earlier has been working on serotonin for some time, and they showed that the response when the rat was anesthetized was inhibitory, but when it was awake it was no longer inhibitory. So it's at that super crude kind of level. We're also asking questions like: if we look more statistically at single neurons, or populations of neurons, using technologies that let us record hundreds of neurons at the same time across several brain areas, what's going on with the statistics of activity that we think encodes information? And this is mostly being done without the animal actually doing a task, so it's largely spontaneous activity. It's very early days. There are other studies out there; I don't mean to say there's nothing going on, studies are coming out more frequently than ever now. But I think we're at a point where we don't yet have any super clear take-home messages from those experiments. What I can give you, based on the theories we were discussing and this architecture of the brain, is the kind of thing people are going to be trying to look at. For example, if we take the idea that the brain is modeling the environment, that it has expectations, and that serotonin is involved in updating those models or beliefs, then what could we isolate? Could we isolate activity representing models, and see whether the effect on expectations as they're encoded is different from the effect on purely sensory information? We did one experiment like that, stimulating serotonin neurons. It was done by Eran Lottem and Magor Lőrincz, postdocs in my lab, in the olfactory system. They were able to separate the spontaneous firing of neurons in the absence of sensory stimulation from activity driven by odors in the olfactory cortex. The odor-evoked information is more feedforward, more purely sensory; the spontaneous activity is harder to assign an origin but is not primarily sensory driven. And what they found, in anesthetized animals, was that stimulating serotonin neurons affected the spontaneous activity but not the sensory-evoked activity. So it differentiated between what might be associated with the model and what might be associated with the sensory input, as if it reduced or suppressed the animals' expectations. So that's cool.
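For orientation, here is a schematic of the kind of spontaneous-versus-evoked comparison just described. It is not the published analysis, and the numbers are fabricated; it only shows the bookkeeping of comparing a pre-odor window and an odor window with stimulation off versus on.

```python
import numpy as np

# Fabricated spike counts per window, just to show the comparison: under the
# result described, spontaneous activity drops with serotonin stimulation
# while odor-evoked activity barely changes.
rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 40
spont = {"off": rng.poisson(5.0, (n_neurons, n_trials)),
         "on":  rng.poisson(2.5, (n_neurons, n_trials))}
evoked = {"off": rng.poisson(12.0, (n_neurons, n_trials)),
          "on":  rng.poisson(12.0, (n_neurons, n_trials))}

for label, data in [("spontaneous", spont), ("odor-evoked", evoked)]:
    print(label,
          "stim off:", round(data["off"].mean(), 1),   # mean over neurons and trials
          "stim on:",  round(data["on"].mean(), 1))
```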
I like that study, but it's a relatively small number of neurons in anesthetized animals, so we're trying to do similar experiments in behaving animals, in awake mice. And it'll


Nick Jikomes 1:46:20

be interesting to see. When I think about what you just said in the context of feedforward versus feedback, and expectations rather than sensory information, and then about the psychedelics and the fact that they act principally through the serotonin 2A receptor, which is really strongly expressed on certain neurons in the cortex that we generally like to think of as top-down, feedback-type neurons, it'll be really interesting to see what happens when people start to do imaging and recording in response to those drugs and look at things like spontaneous versus evoked activity.


Zach Mainen 1:46:59

Yeah, I agree, that's going to be super interesting. The trick is that a lot of the experiments we can do, say about layer 5 neurons with 2A receptors, usually can't be done at the same time as the animal is behaving and doing its natural thing, so it requires putting together a bunch of pieces. But the experiments are doable; there are a lot of potential pieces to fit together. What's also interesting is to think about how those kinds of results would inform human experiments, or our understanding of psychedelics as a therapy, or of psychedelics as a tool for better understanding the mind, consciousness, cognition. It's quite interesting to try to bounce back and forth between those things, because, for depression too, but particularly for psychedelics, the essence of the interest is that they provide a kind of window onto one's own experience of the world, not just evidence that there's stuff going on in the brain. The thing that has been fascinating about them for so long is their ability to change perception, to change very fundamental things about what people are experiencing. So it's an opportunity: we have a manipulation that we know has these fascinating effects on subjective experience. That's a lot of the reason people are interested in psychedelics, and a lot of why people got interested in serotonin to begin with, since it was understood quite a long time ago that these drugs act on the serotonin system. And then you can bounce that back and forth with ideas about computation and data from physiology, trying to make sense of it. I'd say that's the other Holy Grail of neuroscience. We talked about the first Holy Grail as defining functions objectively, trying to understand how neurons compute certain functions; that's the problem of intelligence, of problem solving, of how the brain computes. The other fundamental Holy Grail is trying to make sense of our own experience, of why we are the way we are, why we have these types of experiences. And in that mode, a drug that has an effect on subjective experience and can be given to a willing human subject, with all the appropriate consent and protocols, is a kind of unique opportunity. Well, not unique, but the cases in which we can put recording electrodes in human heads, or do optogenetics or electrical stimulation in people, all the things that could be fascinating to know, are going to be very difficult or a long way off. Drugs have been around for a long time and have that side taken care of, so they're a relatively usable way to explore subjective experience from the inside. You can ask people what they experience, you can take them yourself, and then that can be compared to the biology, the neuroscience, the models, and so on. There are not too many other perturbations like that which are so accessible.
And psychedelics have always been thought of as something with very complicated, fascinating, sometimes ineffable, impossible-to-describe effects of the highest possible order. So they're incredibly fascinating from this pole of self-understanding.


Nick Jikomes 1:51:52

Yeah. One of the interesting things to think about, in the context of the clinical results we've seen with psilocybin and other things in the past few years, is that these drugs affect perception and conscious experience in profound ways. That's the first thing people always talk about: how profound and life-changing the experience was, how distorted their perceptions were, how they experienced things they had never experienced before. When we think about that in conjunction with the therapeutic effects that have been seen for major depression and the like, from your perspective as a basic scientist rather than a clinician, there's this question of how relevant the content of that experience is for the therapeutic outcome. When you talk to someone who's had their depression treated and they describe the visions they were seeing and what the symbolism meant for them as it relates to their depression and their life, the experience seems very important to them; they say it's very important. But there's also a perspective that maybe it's not, and that all of that is just a side effect of the therapeutic biology happening under the hood. Some people think we can probably get a lot of the therapeutic outcome we've seen by engineering drugs that give you that without the subjective change in experience, and others think that's unlikely. How do you think about that? Do you think we're likely to see a similar magnitude of therapeutic outcomes if we engineer so-called non-psychedelic psychedelics?


Zach Mainen 1:53:43

Yeah, super interesting topic, and I have about sixteen different answers, none of them from the perspective of a clinician, because I'm not one. I think my number one answer is that there's something about that framing that doesn't completely make sense to me from a basic perspective, which is the following. If someone has been treated with a drug and they're getting better, then this is a form of experience-dependent brain plasticity, almost by definition. What I mean is that any change that happened, we attribute to the brain, because we're not going to be mystical about it, and, maybe to put it a bit strongly, all the important types of plasticity that have been studied seem to be activity dependent, and activity is basically a function of experience. What does experience mean? When we talk about subjective experience and psychedelic trips, you mentioned the kinds of visions the person saw, but from a more basic perspective experience means everything that happened. So it's hard to imagine a drug with an experience-independent effect that would be anything other than kind of random,


Nick Jikomes 1:55:18

a random change in the synaptic weights in all


Zach Mainen 1:55:20

parts of the brain, exactly. So I guess we're back to the ECT conundrum. If psychedelics are nothing more than ECT, electroconvulsive therapy, then, if you believe ECT can work, I think it's not implausible that psychedelics could do something comparable. So I agree on that pole. But I'm really painting a mix of two perspectives here. One perspective is: yes, it's plausible you could get some kind of useful brain change on the idea that things couldn't get any worse, so they can probably get better by randomizing things. On the other side, everything that actually happens in a therapeutic process depends on some facet of the context, the preparation of the patient, the experience they go through, the follow-ups afterwards, and all of that is going to inform the plasticity that happens along the way. You can certainly imagine a kind of clinic into which patients go, nobody speaks to them, and the brain is shaken up by whatever means you like, and I do believe something could come of that. But is that what we want to be serving up to people? I'd say it's comparable to saying everyone who's really depressed should get ECT as soon as possible. It will sometimes help, and for some people it will be worth it, and I don't want to dismiss the urgency a lot of people feel, but it's a really low bar for therapy. And on the other side, I think trying to go down that route risks throwing the baby out with the bathwater. Rather than saying we cannot afford to treat people humanely, that it's too expensive or too difficult to figure out the protocols, so we need a drug that doesn't have experience involved, which is how I see that route, we can say that we should be optimizing experiences together with drugs. That would include the selection of who would receive them, their preparation, the experience itself, and the aftercare. The argument would be that we can't afford to do that and that we're stuck with something much less; I find that rather bleak. So I think what we know about the brain says we should be trying to understand how psychedelics modify therapeutic processes. We shouldn't take on either biological or psychological absolutism; we should recognize that a person has particular circumstances and is going to go through a particular experience, and do our best to guide that, and to make the placebo effect, which may actually be a lot of the therapeutic effect, part of the treatment. For example, there are not yet randomized controlled trials to find out how to optimize the therapeutic process relative to the drug. There are dozens of companies finding new compounds and trying to engineer the compounds every which way, because that's what the pharmaceutical industry knows how to do, and so it kind of makes sense.
There's a business model and a structure in place. But pharmaceutical companies are not in the business of optimizing clinical care. So instead of a unified approach we're getting either/or thinking. For example, in the COMPASS psilocybin trials they're trying to minimize the amount of care required, almost taking it out completely, whereas in the Hopkins trials there's much more elaborate psychotherapy around the process. And the effect sizes in the COMPASS trials were much smaller than people might have expected, which one might suspect had to do with minimizing that kind of placebo component. But I think we have to think in this plasticity mode: the placebo is something we need to work with, something we need to understand how to make work for the process.


Nick Jikomes 2:00:52

I think you and I are thinking about this in a similar way. There probably is, inevitably, going to be some therapeutic efficacy for some people if you think of this as just the shaking-the-snowglobe thing. Even if you're not directing someone's experience by having them go through therapy sessions and think about their depression or their alcoholism or whatever it is, if you're in a really bad place, if you're super depressed and your brain is just in a state that's producing that, then taking some kind of random walk out of it has good odds of being better than nothing. And maybe that's also why you see effects in animals, where we're not giving the rats psychotherapy but we are seeing antidepressant effects, or the rat equivalent. But the way I interpret the result that when people go through therapy sessions, where they specifically talk about their ailments and then have experiences they report as significant, often involving them thinking about the elements of their depression or alcoholism, is that that experience is the experience-dependent plasticity component being directed. If you're thinking about the actual problem in your life during the experience you're having while you're tripping, I interpret that to mean you're engaging the networks and the synapses that encode the associations you may want to break. And because you're doing that in the presence of the drug, you're going to have a larger effect than if you maybe do, maybe don't activate those networks, because you're not being directed to.


Zach Mainen 2:02:28

I think that's right.


Nick Jikomes 2:02:30

Yeah, so it'll be interesting to see how that plays out. But my prediction, and it sounds like yours too, is that you'll probably see an effect in both cases: with an undirected administration of the drug, with no psychotherapy component, and in the other case as well. But the magnitude of the effect will differ; more people will be helped more if their experience involves thinking about the relevant associations.


Zach Mainen 2:02:56

Yeah, and I think we're going to see that soon, because with ketamine, although it's not a classic psychedelic, we've ended up in exactly that situation: there are a lot of clinics administering ketamine without any particular psychological assistance, and others that are doing it with. What would be nice to see is trials; ketamine is probably going to be easier because it's already out there, and it's clear that ketamine has some efficacy even in a kind of bare-bones, non-therapeutic context. For example, it's said that ketamine treatments don't last very long. Does that improve if one changes the therapeutic context?


Nick Jikomes 2:03:49

Yeah, yeah, that's a pretty simple, answerable question too.


Zach Mainen 2:03:53

Right. And I would like to see more emphasis in the field on understanding the interaction of the drug with the experience of the patient, whether in a clinical context or not. Somebody with eyeshades on, listening to music on a journey, isn't the optimal experimental context for a scientist trying to understand what's going on. It may or may not be the optimal therapeutic context, I'm not sure, but the problem with that context from a basic curiosity standpoint is that it's really hard to know, moment by moment, what the person is going through. If you don't disturb them and you don't collect any signatures of their behavior, you don't really have the resolution to say much. So what we have so far is a very limited number of studies, almost exclusively in healthy subjects doing some kind of cognitive task, like the reversal learning tasks we talked about, which by the way are a bit all over the place in terms of results, not super consistent. Or you have patient studies where, rightfully, the main purpose of the psychedelic session is the care of the patients, so nobody tends to bother them with "wake up, please, could you do this game?" But in between, at least anecdotally, psychedelics are something that a lot of people who are not severely ill think helps them. So one would expect, if things progress, that under some circumstances relatively healthy people will also be going in to have psychedelic experiences, as they already are in the current situation, in which those experiences are illegal. People are using millions of doses of psychedelics every year in an out-of-science context, and what they're doing there is optimized, maybe haphazardly, but optimized for their enjoyment. People take these drugs because they get something out of it.


Nick Jikomes 2:06:25

Positive psychology: taking something to feel better even though you're not feeling bad, rather than to get rid of some problem.


Zach Mainen 2:06:33

Exactly. And consider the contexts in which they're taking them. Yes, some people sit with eyeshades and music, but a lot of people take them in social contexts, and historically a typical use of a psychedelic is apparently a kind of ritual involving a number of people. A lot of the uses of these things are culturally contextualized in different ways. So what we're doing in the clinic, as in the Hopkins studies or many studies, is not necessarily a typical use


Nick Jikomes 2:07:08

it's almost like a non-ecological use, it's a


Zach Mainen 2:07:13

currently not completely ecological use, exactly. So, for example, what do we know scientifically about the interaction of psychedelics and social interaction? We would not have very many papers to count if we went into the literature. We know anecdotally that there's a lot going on; I think social contexts are particularly interesting. But what about, I don't know, athletic contexts? We know psychedelics are used for dancing. Why do people associate them with dance? Even before we talk about modern dance culture, rave culture, as a thing, if you go back to


Nick Jikomes 2:07:56

hunter-gatherers, they would do drumming ceremonies, this


Zach Mainen 2:08:00

is exactly it. The earliest mescaline made it into the scientific literature from a context where it was being used with drumming; it's not something we invented with techno music, it goes way back. Why is that? Maybe it's happenstance. But what's interesting is that with the medical model we've come to prioritize, for some good reasons, at least initially, helping people with the most severe illness go from bad to good, and we're not really caring about the experience itself, which, even if the outcome weren't necessarily always better afterwards, might still make drugs like this worth taking. Many of the drugs people prefer to take, alcohol, caffeine, especially alcohol, we don't take because they help us be less depressed afterwards; we take them because we find the state pleasurable. So there's a lot we don't know about psychedelics. They're being used in a lot of contexts that people find useful that don't have to do with just introspectively trying to solve one's problems. They're not just a torture you have to go through, like ECT, in order to get better afterwards; used appropriately, they're presumably something that can help people have not only better outcomes but better experiences, or to explore their own minds. So we shouldn't forget all of that, as if it's all just instrumentally useful to the antidepressant cause.


Nick Jikomes 2:09:56

Yeah, and it's been fascinating to watch in the literature how people talk about this as the results get translated into science-speak for journals. I've watched it evolve over the last two years, and now you're starting to see, say, the New England Journal of Medicine publish a paper that reads: we gave patients psilocybin, and in addition to the psychedelic side effects, these were the therapeutic outcomes. There's this segregation in people's minds that puts the subjective side over here, as not even potentially relevant to the therapeutic side. And I don't find it bizarre, I don't even find it too surprising that many people do that. But I do think that if people aren't mindful that they're making that separation automatically, they're going to miss things, and we might not get outcomes as big as we hope.


Zach Mainen 2:10:59

Completely agree. This kind of disciplinary or reductionist thinking, where it has to be one thing, or where, if it's approached by a company, it has to fit into that format, isn't necessarily because the company or the people involved want it that way. The world has its tracks: drug development has an outcome-based track, randomized controlled trials, a pretty well-defined way of doing business, for better or for worse. It's kind of a straitjacket from an understanding perspective, and maybe from a health perspective it's ultimately both necessary and slowing us down. Something happened over the course of the 20th century in particular. If you go back to the time of Freud, the early 1900s, you did not have this level of mind-brain separation. Even Freud, before he went full-scale psychodynamic, studied neuroscience and wrote about brain mechanisms; William James was equally comfortable talking about neurons and consciousness and treating the varieties of religious experience. It was much less fractured in many ways. So what we need to keep in mind is that we're going to have to fight the forces of doing things really well from a fairly narrow perspective and then getting stuck thinking that perspective is the only one, that everything else has to come down to it. Neuroscientists, whether they like it or not, tend to put neurons ahead of everything else. And okay, that's what I know best, that's my job; I've spent the last two hours talking with you mostly about computers and neuromodulators. But I should keep reminding myself, my students, and the public that that's not because I think everything reduces to neurons and will all be solved that way. It's like me going into a conversation with an economist and trying to convince him that we should all forget about macroeconomics and start talking about neurons; he'll laugh me out of his office, if he even lets me in, and it's not going to change. The economists are going to be there for a while. So we have to be interdisciplinary in a kind of tough way. We have to accept that the thing that may get us money, or our promotions, or our papers in, which is going to be disciplinary and on a track, is never going to be the be-all and end-all of what we need if we're trying to help people, where all these complexities are still going to be present. The neurons are there, yes, but so are the social conditions, the political conditions, the psychological conditions. We cannot keep approaching this as if one of those is suddenly going to undermine and get rid of the need to take all the others into account. That's our challenge.


Nick Jikomes 2:14:53

One somewhat vague question I want to get your take on ties a couple of threads together. If it's true that serotonin is somehow involved in this idea of context switching, of allowing the brain to shift strategies or re-associate things in a context-dependent manner, one thing that strikes me as important is that doing so is essential when the context actually shifts, but it's equally important not to do it when it isn't called for. So I'm wondering what your take is on acute versus chronic use of serotonergic drugs, whether psychedelics or SSRIs. If the point of getting someone out of depression is to get them to form some new associations or make some new context assignments, that strikes me as a good thing to do until you've remapped your associations and contexts. But if you then stay on the drug chronically for a very long time, is it possible that it becomes maladaptive, because you actually want to stay with a stable strategy once you've found it?


Zach Mainen 2:16:17

Yes, I think that's right. If you've built up a set of strategies, a set of contexts, compartmentalized your life in a certain way that's gotten out of control or gone wrong, then you want to shift those things, and more flexibility will help you. If you keep going, though, you may have a hard time keeping a steady perspective. In this metaphor, the compartments should be stable to some degree; they're the guidance for your longest-term thinking, how am I doing on the 20-year timescale, which young adults don't even have yet, a super long-term perspective. If you're constantly meddling with your metaphysical beliefs for two decades, how could you possibly have steady guidance? Or if you do, you're forcing things to become more and more meta. There's going to be some attempt to keep making sense of the world, but if you're taking a drug that allows you to re-compartmentalize, or let's say rethink everything, how long can you continue to be in that state, and how long would you want to be? That would be the implication of this way of thinking about it. That's not an easy study to do. Three weeks is already a lot to ask for a study on depression, let alone long-term psychedelic use; anecdotally there are no grave dangers, but how do we compare people's lives at such an abstract level? It's not going to be very soon that anyone has answers to that. What I think is interesting is that we should be trying to contextualize treatments and studies to look at those things, and to hone our tools for understanding lived experience on a rich timescale, a short one and a long one. How do people do this kind of thinking? We get at it through conversations, through novels, through the informal tools we have as people in cultures; we do podcasts and whatever, but we don't have a science of life in that way, which is admittedly a lot to ask. So it's not clear there will soon be studies that can answer the question you're raising, which I think is super valid. It's going to be more about raising awareness that these are things we should be thinking about. If you are taking psychedelics, think about that kind of issue. If you know somebody who's going to go for therapy with a quick-fix mentality, there are long-term effects; maybe not horrific flashbacks, but there's going to be some effect, some directionality at some level to how it changes you as a person. You can't go through "the most intense experience of my life" yet again and not have some kind of long-term consequence. And then that's a choice: the more awareness people have, the better they can make those decisions for themselves.
Because the important point here is that we will not be able to outsource and mechanize all of our mental health care decisions to RCTs, to clinical trials. We will get rid of the worst,


Nick Jikomes 2:20:51

there's never going to be a clinical trial that speaks to everything we need to answer,


Zach Mainen 2:20:55

exactly, to everything that is about you. At some level you need to take responsibility for your own brain, your own mind; you can't ever expect your therapist or psychiatrist to know you better than yourself. And this podcast is a kind of mechanism for people to better inform themselves about these issues where we don't have answers. I'm certainly not giving answers for any particular person, but by raising these kinds of discussions I think people can start to ask the questions of themselves, about their own lives. Introspection is a full-time job; for a neuroscientist trying to understand the brain, it's something that fascinates us. And thinking about why you're depressed, if you're depressed, with rich, complicated ways to think about it, can be helpful. If you reduce it to "this drug is good, this drug is bad, this therapist will sort me out," it doesn't give you very many tools to work on yourself. So I really appreciate the need for people to inform themselves, and, whether you're a scientist or not, these topics are not so complicated that you cannot start to have your own opinions about your own mental health. Far from it.


Nick Jikomes 2:22:30

Yeah. Well, we've been talking for a while now. Is there anything you want to leave people with, any final thoughts, or anything you want to reiterate from our discussion?


Zach Mainen 2:22:39

I think that's a great place to end.


Nick Jikomes 2:22:41

Excellent. Well, Professor Zach Mainen thank you for your time. I really appreciate it. Thank you.
