Full episode transcript (beware of typos!) below:
Nick Jikomes
Professor Terrence Deacon, thank you for joining me.
Terrence Deacon 2:53
Thank you. Hi,
Nick Jikomes 2:54
Can you briefly tell everyone where you are and what you do, scientifically?
Terrence Deacon 3:00
So I'm at the University of California in Berkeley. I am a biological anthropologist by training, though much of my work has been in the neurosciences. I got my PhD from Harvard in biological anthropology, though I worked in the neurosciences at MIT during most of that time, and it was an anatomical background. In fact, all through the 1990s, I was doing neuroscience research, mostly associated with fetal neural transplantation. It was associated with medical procedures using transplantation to maybe repair neurological damage, some of which even led to human trials. But much of my work was interested in using cross-species fetal neural transplantation to ask questions about species differences, and about how the brain develops its connectivity, how neurons find their targets, so to speak.
Nick Jikomes 4:02
And you've done a lot of work and a lot of thinking around the origins of human language, which is a subject that has fascinated me for a long time. I think it fascinates a lot of people, because in many ways, language is arguably the quintessential human capability, the thing that makes us special in some sense. And so I want to start out just by asking, for those that don't know: you have a book called The Symbolic Species, with the subtitle The Co-evolution of Language and the Brain. It was written approximately 25 years ago, and I've gone through it a number of times. I've got it right here. It's one of those books that I have filled with so many highlights that they're almost counterproductive at this point. But can you just start out by describing for people, what is language? How would you define it? And more importantly, what are the key ways that human language differs from other forms of animal communication?
Terrence Deacon 5:03
In fact, the question is a good one, because I think not having a clear answer to it has blocked our thinking about this, and blocked, I think, careful research into it. So if you think about language as just vocalization, just vocal communication, lots of species engage in that. If you think of language as communicating about something that is outside of the body, many species do that as well. Or communicating about, you know, states of the body, like, am I afraid? Am I aggressive? Am I interested in sexuality? Those sorts of things, lots of species do that. That's clearly not what language is special for. I titled my book The Symbolic Species in part because I argued that, in effect, there's something special about the way language represents things in the world, and represents our own states, our own beliefs, and our own intentions. And that difference, I think, is also troublesome, because the word symbol has been used in so many different ways. It's sometimes used as just sort of the arbitrary term for any sign, anything that stands for something else. But I think mostly we use it to talk about a very special kind of communication, which is sometimes described as arbitrary representation. Whereas representations that communicate by virtue of likeness are called icons, and things that communicate by virtue of their sort of symptomatic correlation with things we call indices. Symbols lack those features most of the time. And it's not just that they're conventional, not just that they're set up by agreement, sort of, between individuals, whether they're different organisms or different people. It's that both their form and their way of referring are conventional. That is, it's set up by virtue of some kind of shared interpretive agreement. And that makes it much, much different. And as a result, language refers to things in the world in a very different way. And because symbols are not connected directly with things in the world, they can refer to things that have happened in the past, things that are possible, things that are impossible, things that don't exist; we can communicate about all of that. Whereas if the way of communicating is always associated with something that has the same form, or is somehow physically related to something else, it's sort of stuck in the present, which makes it hard to communicate about things that are in the past or could happen in the future.
Nick Jikomes 7:52
So you mentioned these three different categories of representation: icons, indices, and symbols. Can you unpack that, maybe give some examples of what icons and indices are, as distinguished from symbols?
Terrence Deacon 8:06
Right. And I want to be clear that you can have conventionalized icons and conventionalized indices; we have them all the time. So a conventionalized icon might be that sort of smiling face we make with colons and parentheses in our texts. It's made with components that are symbolic, but it communicates by virtue of its similarity to something, to a smiling face. And a lot of perception is iconic: we recognize things because they share form, and one thing can stand for something else by virtue of shared form, including, for example, the smiling face. That's an icon. And so, in a sense, it's the simplest and probably the most common form of representation. All perception works this way. Indices are things that represent other things by virtue of their connection to them in some form or other. So a simple form might be that a hiccup or a cough communicates something about my state, because it's physically associated with it. The smoke that we might smell is associated with something burning, and therefore it communicates to us by virtue of its correlation with something that burns. I like to use the example of smoke for other reasons, because, you know, smoke looks like clouds, so it's sort of iconic of clouds. But smoke can also be an index: it can indicate fire, indicate something burning. And I like to think about the smoke that comes out of the Vatican during the Church's choice of a pope. If it's dark smoke, it means that a decision has not been made yet, that votes have been taken but it's not final. But if it's white smoke, that somehow announces that the choice is finished. Now it's playing a symbolic role; it indicates at the same time as it symbolizes. And so this is another interesting feature: things that are symbols will also oftentimes have iconic features, they're like other things, and indexical features that correlate with other things. But they also require something else in interpretation. The thing itself is not enough, its correlations are not enough, its likenesses are not enough; you need some sort of agreed-upon interpretation, shared interpretive features. And that's the difference between the white smoke and the dark smoke coming up from the Vatican. With symbols, notice how much more is there than what you get by looking just at the smoke: all of the information carried by the smoke itself, whether light or dark, is not sufficient. There is nothing in that sign vehicle that provides the symbolic meaning. That's something that is in the interpreter alone.
Nick Jikomes 11:19
I see. So icons refer to something because they literally look like that thing: the smiley face emoticon that we make with a colon and a parenthesis literally looks like a smiling face. An index is something that indicates something because it's very tightly correlated with it in space and time: smoke indicates something's burning because something's burning right now, and the smoke is always there with it. Whereas symbols have this more arbitrary quality; they don't need to directly resemble or directly correlate with something, and so they can be used to refer to things that aren't immediately present in space or time. And you talk a lot in the book about the importance of this quality that words and symbols have, that they don't need to be proximate in space or time to, or resemble in any direct way, what they refer to. So can you unpack that a little bit more? And maybe also tell people, who was Charles Sanders Peirce, and how did he influence your thinking on this?
Terrence Deacon 12:18
So let's start with Charles Sanders Peirce. He was a philosopher writing at the end of the 19th century and the beginning of the 20th century. He introduced this concept, actually it's an older concept, but he gave it a really precise formal analysis, this concept of semiotics, or semiotic relationships, or semiosis: a way to talk about the process of producing signs. 'Sema' is the classic root for the notion of the sign, as in semaphore, in naval communication using flags, for example. So it refers to any kind of sign vehicle and how it communicates, any kind of representational relationship. Charles Peirce tried to formalize this. And he tried to formalize it first by using the three concepts we've just been talking about, icons, indices and symbols, but then also by trying to analyze the process: okay, what's necessary to communicate with these, what's necessary to interpret them as meaning something? So for example, many species can interpret smoke in terms of fire, in part because they've maybe had an experience of it. And smoke might be, in that sense, frightening or attractive; or, you know, the smell of sweat might be attractive to mosquitoes. These are things that tell you about what you might call interpretive competence. A fly, on the other hand, is not going to be able to interpret a whole lot of things that we can interpret, even a whole lot of icons and indices that we can distinguish but it cannot. And that's because its brain did not provide it with the interpretive competence. It can't even learn new interpretive competencies. And one of the things that I've been focused on is, what kind of interpretive competence is necessary to interpret symbols? And it's a very complicated question, because this is, of course, what we want to say about what's different about human brains. Why can I interpret things in certain ways, when my dog hears the same word and interprets it indexically? When I say, you know, 'walk,' my dog immediately knows that 'walk' is associated with something that's likely to happen. It doesn't know whether I'm talking about that device that I cook things in, a wok; it doesn't know if I'm talking about something that happened a day ago, two days ago, or might happen tomorrow. It's interpreted indexically, like it's associated with something that's likely to happen right now, and so my dog gets excited about it. However, when we're talking about walks, it may have nothing to do with anything that's likely to happen today between us, or that I'm about to do some time in the near future; I've now been able to pull the word out of that sort of immediate context, in part because now the word 'walk' doesn't have that indexical association. My argument is that, for some reason, species other than ourselves tend to interpret words that we produce only indexically. They can't, for some reason, cross that threshold to see them as referring to things in the sort of abstract way that a word like 'walk' can refer.
Nick Jikomes 15:48
Yeah, I think we'll talk about this quite a bit, this idea that symbols and words, as we use them in language, don't need to refer to anything that's immediately present, and yet we don't extinguish their meaning. So with most animals, when you teach them to associate A and B, a word or whatever stimulus with something else, if you then present stimulus A over and over again, but it's not immediately associated in space and time with thing B, that association will actually disintegrate. But that's not the case with words. We can think of many examples in our own use of language where we use a word perfectly fluently, and it's never or almost never associated with the thing it refers to in our own sensory experience, and yet that association never degrades. And that's very interesting.
Terrence Deacon 16:38
In the first case, the case of animals, we're talking about basically conditioning; it's about indexical relations, it's creating an indexical relation. Now, what's interesting is, think about a rat in a Skinner box that sees the light go on. And once it learns the association between the light going on and 'I can push a button to get a drink of water,' that's really an arbitrary association. There is nothing about a light going on that says it's about water. But what the rat has learned is that the two are correlated; it's learned an indexical relation. Now, the experimenter created that association. It's not a sort of natural association in the world, but rats can learn it, and their learning is of an indexical relation. And much of learning is about indexicality. In fact, it's one of the things that evolution has built brains to do well: brains recognize correlations and learn them, because that's how we get along in the world, we need to know what causal features are linked with other causal features. And in that respect, it's really well developed in other species. But that also means that, in a sense, it's kind of the opposite of what you were just describing in terms of words. If things are going to be learned by virtue of correlation, you know, we want to make sure that the correlations are good and strong. So if that's how we learn, what that tells you is that learning symbols must be different, must be different from that, because it's precisely that correlation which we don't want to be the driver, the thing that holds words to their meanings.
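To make the contrast concrete, here is a minimal sketch, in Python, of the standard Rescorla-Wagner model of associative conditioning. It is not from the conversation, and the parameter values and scenario are illustrative assumptions: the indexical association the rat learns strengthens while light and water stay correlated, and extinguishes when the pairing stops.

```python
# Minimal Rescorla-Wagner sketch of indexical (associative) learning.
# Illustrative only: parameter values and the scenario are assumptions,
# not anything specified in the conversation.

def rescorla_wagner(trials, v0=0.0, alpha=0.3, lam_paired=1.0, lam_alone=0.0):
    """Update associative strength V across trials.

    trials: sequence of booleans, True = light paired with water (reinforced),
            False = light presented alone (no water).
    """
    v = v0
    history = []
    for paired in trials:
        lam = lam_paired if paired else lam_alone
        v += alpha * (lam - v)          # prediction-error update
        history.append(round(v, 3))
    return history

# Acquisition: 20 light+water pairings; association strength climbs toward 1.
acquisition = rescorla_wagner([True] * 20)

# Extinction: 20 presentations of the light alone; the association decays back
# toward 0, i.e. the indexical link fades once the correlation disappears.
extinction = rescorla_wagner([False] * 20, v0=acquisition[-1])

print(acquisition[-1], extinction[-1])  # ~0.999 then ~0.001
```

Word meanings, by contrast, survive indefinitely without this kind of reinforcement, which is Deacon's point that symbol learning cannot simply be stronger correlation learning.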
Nick Jikomes 18:20
Yeah, and I think we'll come back to that, this idea that there's a different kind of learning necessary for forming symbolic representations. And I'm going to ask eventually about, you know, one of the things that's so interesting and mysterious and seductive about language: it seems so complicated, and yet we learn it so easily, in ways that we have no conscious awareness of, and we do it at a very young age. We're going to unpack a lot of that stuff. But first, I want to ask you: if we were to do a thought experiment and think about some of the original languages that were out there in humans, what might the simplest possible language imaginable look like? And is there anything we could describe in that way? I mean, could we look at any extant species of nonhumans, like, say, cetaceans, and could that be a simpler form of language? Or is there still a difference there?
Terrence Deacon 19:17
I think it's a good question. In fact, it was one of the questions that motivated me to write the book. I was actually talking to kids in a classroom, I think they were second graders or third graders, and I was trying to make exactly this point: that humans have this thing we call language, and other animals communicate, they can even communicate vocally and transfer information from one to another, but it's not language. And a smart kid recognized the problem and said, well, you know, isn't there dog language and cat language and bee language and that sort of stuff? And I said, well, no, it's different, for all the reasons we've just described. And that didn't satisfy this kid. A little time went by, and then this same little girl broke in, and she said, well, okay, but they do have simple languages, right? Those are just simple languages, and ours is a complicated language. And I realized that, no, it's not just simple and complicated. It's not just more of the same put together, more parts put together with more stuff. There isn't something like simple grammar and syntax, not even a two-word or three-word grammar and syntax or something like that, in other species. So the question I had to ask myself was, why not? Why isn't there this sort of graded effect? Clearly we have this graded effect in terms of our ability to articulate sounds; it's probably gotten better and better over the course of our evolution, whereas other species may not have quite as much of an articulatory capacity. But the way we use it to represent is really quite different. And realizing that there were no simple languages in the world now became a problem. And it became a problem for two reasons. One, it said, look, there's something fundamentally different, that there's a real difference, and this is the symbolic difference in my thinking. But also, it undermines our ability to use this standard strategy that we biologists love, which is to have a graded sequence of some process where we can say, oh, here it's getting better and better and better, and we can see that it's adapted to this new function, and it was just this sort of incremental transition. Now, I think, historically, or, you know, evolutionarily, there had to have been an incremental process in our evolution, in which we got to become better interpreters of this, internalized some of these capacities more effectively. But clearly, there had to be a point also when there were simple languages, something like a simple language. I think that we need to get away from the ways we think about language now to answer that question, and I think it would probably be much more like what today we would call a ritual. Rituals have iconic and indexical features. You pantomime things, you act things out, so they have, you know, iconic features of what they represent. We might need to bring people together, or people and objects together, in certain ways that have indexical features. Or, in fact, you know, I might act as though I'm going to strike someone, or do something, which indicates something that might have happened or might happen. And yet we also use ritual to communicate things that are abstract, about the future. One of my favorite examples is, of course, the marriage rituals that we find throughout the world, where what we have to do is communicate something to a whole population. And that's not easy to see if we just do it on paper.
But in most parts of the world, where we actually see ceremonies involved in this, what you're doing is communicating a sort of change in status, something that is not visible, that you can't put your hands on, but that is going to affect social interactions on into the future. It's about something that is happening at the moment, but it's changing this abstract relationship that people have to each other, and somehow the ritual has to communicate that as well. And so I think about early symbolic communication as much more like I would think about ritual today.
Nick Jikomes 23:41
Interesting. I'm going to go off script now based on what you said. It strikes me that we can talk about icons versus indices versus symbols as separate things, but you often describe them in the book as having sort of a nested relationship. And what you were just describing here about a simple language, perhaps resembling something like a religious ritual where stories are being told: it's not pure symbolism, it's symbolism that's tied very closely to indexical and iconic forms of reference. And I'm immediately thinking of how difficult it is, even for fully modern, fully educated humans, to think in purely symbolic terms. So, for example, we could think about algebra: almost no one finds algebra to be completely intuitive. Most people find it to be quite hard, and a lot of people find it almost insurmountably difficult to understand. So it almost sounded like you were describing the earliest forms of language that we could imagine as necessarily involving these multimodal forms of communication that use icons, indices, and symbols together, perhaps because pure symbolism is actually probably too much for the average person, especially the average person at that time in history, to handle and comprehend.
Terrence Deacon 25:01
I think that's a good way to put it, because I want to think about the beginnings of this process, and this has to do with the co-evolution approach. That is, early on there would not have been time for the competence to have evolved; we would not have had a much more sophisticated capacity than chimpanzees and bonobos have today. Would they, or would close relatives of theirs, as maybe our distant ancestors were, be capable of some degree of this, and what would be necessary to bootstrap into it? Again, recognizing how complicated it is, what the interpretive competence must be to do this: the interpretive competence requires a lot of things that are not quite the same as what's necessary for acquiring, for example, indexical relationships about cause and effect in the world; it's a quite different feature. And so this idea that icons, indices and symbols are sort of linked together in some respects is actually about what I would like to call the infrastructure that you need to build symbols. You need to build symbols up using iconic and indexical means. When we think about, you know, agreements, coming up with symbolic relationships by agreement, we use the word convention oftentimes to talk about this, a conventional habit or something like that. What we recognize is that if we don't have symbols capable of helping us do this, if we didn't have language, how do we establish conventions? But we do, and this is what rituals often do: rituals establish conventions, oftentimes in contexts that would be hard to handle with just symbols. And one of the things about marriage, the example I just used, is recognizing that for everybody else out there in your community who are close friends, there are going to be some sexual tensions, there are going to be obligations that have to change. These are not things that are easily communicated, particularly about the future. So it may take a lot more support, lots more iconic and indexical support, so that you get this stuff, so you really understand how it works. And this tells us that, in fact, in building symbols, in building our capacity for symbolic interpretation, even as young children acquiring language for the first time, we have to do so by building an infrastructure first of icons and indices: showing things, sharing things, using words initially not as symbols but as indices. Young children a year of age and so on may be using words, but they're not using them symbolically right away; they may have to develop this slowly.
Nick Jikomes 28:03
So I want to discuss this difficulty we have, where on the one hand we know, as evolutionary thinkers, that there must have been some sort of graded and continuous change in animal and primate lineages that eventually led to language. And yet we do have this apparent discontinuity, where humans can do this symbolic thought that is related to language, and it appears to not be present in other lineages. And I think many thinkers historically have looked at that discontinuity and said things like, ah, well, there must be a special part of the brain that evolved suddenly, there must be, you know, a language acquisition device. People who took a Psych 101 or Neuroscience 101 class may have learned about Chomsky's theory of universal grammar, or maybe you've read books like The Language Instinct. But the idea is, well, we have this discontinuity, so maybe there was some sort of big mutation or big single thing that, more or less magically, or at least suddenly, gave us this new ability. So how do you think about the idea that there's something specifically built into the brain, like a language module?
Terrence Deacon 29:16
So first of all, I don't think there's something like a language module. In fact, much of my early work was to look at human neuroanatomy, compare it to the anatomy of other species' brains, and ask the question: do I find anything new there? Are there new structures? Are there new connections? And of course, one of the things that's obvious is that our brains are pretty big for our bodies; there are some quantitative changes, and internal quantitative changes, that I focused on. But one of the surprises of my early work, and this was throughout the 1980s and early 1990s, was that there aren't new parts that are particularly associated with this new capacity. In fact, what it means is that this capacity recruited old parts that still do what they evolved to do in chimpanzees and gorillas and so on, but are now also associated with language. That is, these brain systems were recruited to do something new. And the recruitment was really complicated, because it involves not just one area or two areas. We are now pretty clear that most of the cerebral cortex is involved, in one way or another, with word meaning. That's a really startling feature, because what it means is that, you know, none of these areas evolved specifically to do language. So, first of all, we're not going to find the simple answer to this question by finding the special part. That's not going to work. Nor are we going to find the special gene that does this. There are lots of genes that have been changed that make it possible. So it's a multi-gene, multi-regional recruitment process, which first of all makes it pretty difficult to come up with a nice, neat answer that says, okay, this part does it, and this is why we are where we are. And this is what led me to think about this co-evolutionary logic. In other words, an early use of symbolic communication, which is difficult, because symbolic reference is not easy to acquire in the first place. And your point about mathematics is a good one: as it becomes more abstract, we have increasing difficulty, because building that infrastructure, knowing why a certain equation looks a certain way, why the iconism of it, why the structure of the equation itself is not the structure of the mathematical relation, which is not made up of symbols on a page but is in fact a relational feature, an abstract relational feature; at these levels of abstraction we get lost pretty easily. So it shouldn't surprise us that early on, this was not an easy thing to do. But if it had been going on, as I think, for about 2 million years, then there was time for that demand, to do symbol decoding more easily, to drive changes in the brain: subtle changes in memory, in ways of acquiring associations, in ways of representing and communicating with each other, using social cues about others' attention, and so on. All these things had time to develop in response to communicating symbolically. So I think of this as a sort of ratchet-like effect, in which a slight demand to use symbolic communication produced selection for people who did it slightly more easily, which produced the capacity to produce more complex symbolic relations, which produced selection on people who did that slightly more easily, crossed that threshold slightly more easily. Over the course of about 2 million years, I think we developed to where we are now.
So, you know, having pulled the ladder up after ourselves, after 2 million years of evolution, it looks like this huge, fundamental difference in cognition. But I think it was generated not by selection for a single language mutation, but in fact by this accumulation of biases that make it easier and easier over time, distributed over many, many brain areas, over probably changes in neurotransmission, subtle changes in connectivity, and so on. All that simply made the unusual process of acquiring symbols just slightly easier over time. But after 2 million years, it could make quite a difference, quite a discontinuous-looking difference.
Nick Jikomes 33:57
So in the book, related to this area, we're thinking about co-evolution: how language is, in a sense, adapting to the human brain in the same way that the human brain is changing and adapting to its circumstances. And also the idea that there's selection pressure for language to be learned as early as possible. And of course, we know that children are somehow special in terms of learning languages; it's very difficult for an adult to learn a language the way a child can. And there's a quote I want to read, where you say 'children's minds need not innately embody language structures, if languages embody the predispositions of children's minds.' So can you unpack exactly what you mean by that passage?
Terrence Deacon 34:43
My point here is that languages themselves have to be passed on, have to be passed from person to person, generation to generation. In that process there's a bottleneck, and the bottleneck is, of course, acquiring it, and then producing it so that somebody else can acquire it. The bottleneck has to do with learnability. The languages that exist today had to be learnable by human brains. Now, this is one of the reasons why mathematics is not a language. Mathematics is symbolic, but it's not something that we can do spontaneously. And there's a good reason for this: in language, we just have to communicate, and a little bit of slop is okay. We just need to get certain things across; it may take us repeating the same thing many times, or in slightly different ways, to do that. So what's necessary there is just communication. But in mathematics, precision is critical. In language, we have to do it in real time; I have to be able to do it on the fly, I have to be able to decode the symbols and the relationships, the grammatical and syntactic relationships, on the fly. So I have to be able to do it pretty automatically. Whereas in mathematics, I may have to stare at an equation for weeks or months to understand what it's about, redo it, recalculate it, understand it in different circumstances, use it to produce a graph or something like that to help me understand it. So in one sense, language is something that has to be done on the fly, has to be done now, whereas mathematics does not. As a result, they have very different constraints. Mathematics does not have to be easily learnable. And in fact, what we can do is make it a little bit more sensible by finding better notational systems; as our notational systems have gotten better, it becomes easier to pass it on. So there is even still a little selection pressure on mathematics, but that selection pressure is primarily on precision of reference, whereas in language it's on, in a sense, the ability to refer quickly and easily, and to get at the symbols and understand their meaning, you know, in real time. So yeah, there are different selection pressures. But one of the things that we can now say is that we shouldn't have expected mathematics to be easily learnable at a young age, because it doesn't require that; its future transmission isn't, in a sense, based upon how quickly the facility is acquired and passed on. Whereas with language, those languages that exist today must be those that were effectively, easily passed on and acquired. That means that you have better facility with language if you learned it at a younger age. And once you've committed more of your nervous system to this process during a time of great neuroplasticity, then you're going to have more facility to produce it, and children who have, in a sense, this ability to pick it up easily will be advantaged. But that means also that we should expect that languages have adapted, in some sense, to human learnability. And, in fact, if they're passed on more effectively when they're acquired younger, then we should expect that language structures themselves have been selected by virtue of being learnable at an early age: at an age at which you can't do mathematics, an age at which you're not yet able to remember, you know, the names of the streets that you're living on. A lot of other things are going to be impossible.
So that tells us also that languages have adapted to be learnable at a young age. And that, I think, is one of the interesting reversals of the idea that somehow we have, at a young age, a special language acquisition capacity. We probably do have all kinds of things that make it easier to acquire language, but languages have also adapted to us. And this is also another reason why artificial languages don't get passed on, when we create artificial languages and try to get them learned. Universal languages like Esperanto, for example, just didn't seem to catch on, whereas the languages that have spontaneously evolved and changed over time, because they get passed on easily and acquired easily, are languages that have adapted to us.
Nick Jikomes 39:41
Yeah, this was one of the many sort of reversals or perspective shifts in this book for me, where we're looking at a familiar phenomenon, the facility with which children learn a language, and we're actually reinterpreting it in a new way that seems to make sense once you sort of get it. So instead of the standard way of thinking, which is really that children have this special critical period that's been engineered by evolutionary processes for language, you might think of it instead, as you said, as language itself having adapted to the learning biases of the young brain. And, you know, as a brain develops, it changes its structure. And so as we go from infancy to childhood to adolescence to adulthood, the brain of an individual is going to have a different set of learning and perceptual biases. So if we were to reverse the arrow of time and go back all the way to the beginning, whenever language first evolved, we would see it being acquired at later and later ages. And presumably the earliest speakers of the thing that we might call the earliest language would have been adults, and they would have been speaking something shaped by the learning and perceptual biases that those adult brains had. So does this mean that the earliest languages probably had a very different structure and were passed down primarily from adult to adult, and it was only over time that you have this phenomenon of it getting pushed to earlier and earlier developmental ages?
Terrence Deacon 41:19
I did not make that claim in my book, but indeed, that's what follows from this. And it's another reason to think that early symbolic communication was not like spoken language, but in fact more like ritual. If you think about the early language, the early symbol acquisition and communication, as having a kind of ritual structure to it, then think about the individuals acquiring it the way you and I acquire higher mathematics. It happens at a later stage, brains have to be more mature to acquire it, and it doesn't get passed on so well; it's difficult to pass on. It's going to mean that as that system becomes more acquirable at a younger age, those versions will be passed on more effectively than the ones that are only acquired and passed on at older ages. It's for the same reasons that we don't try to teach children to read and write and do arithmetic until, you know, past age five, six, or seven. I think of the very earliest symbolic communication as being a lot like this problem of reading and writing today. It's, you might say, a slightly displaced version of language: we've taken language and abstracted it onto the stuff on the page. And then we've taken the stuff on the page and abstracted it to things like numbers and symbols on the page that are really opaque in some respects, in which the syntax of an equation, if you think about an equation as having syntax, it has an iconic form, it has a form. But its form is so abstracted, it's not like a picture. But it does have form: some things are close to others, and some things are distant from others, some things are next to others, some things are repeated. The same letter, the same number, might be repeated somewhere, or the same operation sign might be repeated somewhere; those are iconic features, likeness features. And it's structured by its indexical features: next-to features, operations. Some things operate on other things because they're close by, or because we have parentheses that tell us what operates on what first, second, and third. In some respects, what we've done with mathematics is abstract it many steps away from language. That makes it much more powerful in some sense, because of its abstraction, but it still has some of the same problems. And so I like to think about early symbol users as being like those of us who are struggling to acquire mathematics as young adults.
Nick Jikomes 44:08
So one of the stories that you tell in the book, that has to do with how we should think or rethink the idea of a critical period for language in early childhood, is the story of Kanzi. So can you tell the story of Kanzi, what was special about that ape, and what it means for this general topic?
Terrence Deacon 44:29
So, this requires putting it in context a little bit, because Kanzi, a bonobo, was an ape related to chimpanzees that acquired language at a fairly young age. Before this, a number of studies had taught chimpanzees to use a simple system, a keyboard system in which they push buttons that have what they call lexigrams on them, arbitrary squiggles that were meant to be like simple words. And if you push them in the right way, you're sort of communicating symbolically. To train the chimpanzees to do this, they really had to wait until the chimpanzees were into their adolescence. Young chimpanzees just could not learn this stuff, just like we have trouble, you know, learning mathematics at a young age. But when they reached a certain age, they did a pretty good job of acquiring this thing. So there were a couple of chimpanzees named Sherman and Austin that were pretty good at this by the early 1980s, when Kanzi, of a different species related to chimpanzees, the bonobos, was brought into this same research facility down in Atlanta. One thing that happened is that they tried to train Kanzi's stepmother. Now, Kanzi was sort of brought in from the wild as a youngster and was raised by a stepmother named Matata. Matata was now old enough to be taught this language system. At the time, they had updated their computer a little bit, so whenever you pushed the buttons it also spoke; it said the word when you pushed the button. And so they tried to train Matata in the same push-button symbol system. Matata was not a very good learner; she did not acquire it very well. In part she did not acquire it well because she was raising Kanzi at the same time. Kanzi was this little youngster crawling all over her, pushing over the apparatus, getting in the way and messing things up. Matata, as a result, was just simply not learning. But Kanzi was there all the time. At some point, and I like to think about this as Kanzi being frustrated with his stepmother, the questions are asked of Matata and Kanzi just pushes the button to get the answer. In their mind, in the experimenters' mind, he's too young to learn, too young to acquire these symbols. But it looks as though, without actually having been trained, just sort of hanging around with his mom while they were trying to train her, he got it. He figured it out. He was too young to be trained with a sort of stimulus-response kind of training, but he got it. And once they began testing him, without any extra training, he seemed to have this huge vocabulary that he'd already acquired without being trained. He was a sort of passive observer in the process. Now, I think there are a couple of ways to think about this. One is, and this is what they originally thought, well, it must be that bonobos are much better at this than chimpanzees, and that's the whole difference. Now the problem is, of course, they were trying to train Matata, and Matata was also a bonobo. So for some reason it wasn't working with Matata. The other way to think about it is that now they were using symbols in a much more language-like way, a much more naturalistic way, because the system was actually speaking; the keyboard was making the word sounds. Kanzi was picking it up probably a little bit more like a child picks up language. And the structure, of course, of the keyboard system, although it was artificial, was based upon language, how language works: verbs and nouns and things like that, requests and responses to requests.
All of these things were sort of built in; this is the way language is structured. So another way to interpret this is that Kanzi's immaturity was actually an advantage, and that, for us, being immature is actually an advantage. Having a brain that doesn't have quite the same kind of memory system and learning system that an adult brain has was actually an advantage for Kanzi. And I think that's another way to think about our own situation.
Nick Jikomes 48:58
Yeah, I was really interested in this idea that handicaps, more or less the inability to do something, can simultaneously be, or be adjacent to, the ability to learn another kind of thing. You discuss this a lot in the context of children learning language, and you also use the example, which I think makes this point very well, of so-called idiot savants, people that have very abnormal brains. If people have ever seen the movie Rain Man, that's actually based on a real-life person named Kim Peek, P-E-E-K. And there's a fantastic YouTube documentary about this individual, who has a brain problem: they were born with a brain that is structurally very abnormal, and they can't take care of themselves; they're very handicapped mentally in many different ways. And yet they have these superhuman capabilities, like, you know, just unbelievable memory capacity. And you also give examples from the animal world, where you look at certain species and they have this ability to remember things that is extraordinary, and yet they don't seem to be very smart in other ways. And so there's this idea, that I think you make very clear in the book, that handicaps for one thing could actually empower you to be very good at something else, simply because at a certain stage, or being a certain species, you have a brain that's structured to give you certain biases that make you good at the one thing but bad at the other.
Terrence Deacon 50:26
And I think that's a good way to think about us. I think that we, in fact, are not so good at some things precisely because we're so good at this capacity for acquiring symbolic relations.
Nick Jikomes 50:39
What are some of the things that you would say humans are not very good at?
Terrence Deacon 50:44
There's a wonderful study done in Japan with the chimpanzee Ai, in which this chimpanzee has been taught to recognize the numerals one to nine, and to recognize that they're ordered one to nine. And the task was that a bunch of numerals, one to nine, are flashed up, distributed on the touchscreen, and the chimpanzee has to push them in order: 1, 2, 3, 4, 5, 6, 7, 8, 9. And it learned this pretty well. The key is that this same procedure is now done where you flash the numerals up on the screen and then, over the top of each numeral, you just put a blank square. So they suddenly disappear: within a fraction of a second, they're shown and then they're gone. Can you now push the squares in order, one to nine, if you've only seen the numerals for a fraction of a second and they've been flashed up in different positions on the screen? The chimpanzee can do this really well. That is, in a single glance, it has the memory, the eidetic memory, of the screen and where each number was placed, and can do this. Human beings, we try and try and try and just can't do it. There's an ability, in a sense, to take this eidetic information in, sort of like the idiot savant, the Rain Man kind of example, in which it's just suddenly there. And they can do this in a way that we cannot do, because of what we're trying to do: when I look at numbers on the screen, I think of the concept of 1, 2, 3, 4, 5; I'm thinking of all the symbolic relations. If they're flashed up there for a fraction of a second, it just takes time for me to do that. Because I'm biased in a different way, I can't do this; I can't possibly keep up with the chimpanzee's capacity. The chimpanzee is not seeing them symbolically; it's seeing something else. And it's been specialized to make these snap judgments on the basis of this geometric distribution, which of course must be necessary if you're going to go flying through the trees, hand over hand over hand; you need to have this kind of capacity. We seem to have sacrificed it for this other capacity.
Nick Jikomes 53:23
And I want to go back to talking about the structural features of the human brain that give us the kinds of learning and perceptual biases necessary to acquire this ability to perform symbolic representation of things. Earlier you mentioned the idea of size. A lot of people, even in the academic world, who think about brain evolution will first point to the fact that, well, we have really big brains. And then you might say, well, we have big brains, but a whale's brain is bigger than ours. And then typically the response is, well, we have brains that are big for our bodies, and so maybe we have the biggest brain for our body of any animal. And you sort of explore these ideas. So can you just unpack: what is the relationship between human brain size and body size that you think is important for understanding what we've been talking about?
Terrence Deacon 54:18
It's a good question, because there's lots of misinformation associated with this. Number one, we don't have the largest brain for our body size. We have a brain that's about 2% of our body weight, somewhere between one and two percent. A mouse has about 4%. So in terms of brain relative to body size, we do not have the largest brains. Nor, as you pointed out, do we in absolute terms: many whales have much larger brains than ours, sometimes four or five times larger, really large brains. Of course, they have very large bodies. So absolute size and relative size are both, in a sense, red herrings; they lead us to the wrong conclusions. There is a sense in which our brain is unusual for our body size: for an animal of our size, we do have the largest brain. That's an unusual feature. But it's also the case that monkeys and apes, compared to non-primates of the same body size, have larger brains. A lot of my work has been spent trying to understand how that evolved and how it develops. So one of the things that we found, for example, and this is old data, is that in fact there is what we call an allometric relationship. And this is why mice have larger percentages of brain, and elephants and whales have smaller percentages of brain, for their body size: brains seem to enlarge with respect to body size sort of the way that the surface of a ball enlarges with respect to its volume, roughly to the two-thirds power. That is, it's a surface-to-volume relationship. There have been lots of claims about why that might be true, why it is that mammal brains, and bird brains as well, have this sort of scaling relationship with respect to bodies. One way to think about it is that maybe it has to do with how you sort of stay even, you know; the brain doesn't have to expand as fast as the volume does. There are lots of reasons for thinking about that; I won't go into too many theories. One thing we did find is about primates: why is it that primates have about twice as much brain for their bodies as other mammals of the same size? What we found, interestingly enough, and this is work that I began some years ago and have actually improved upon over the last few decades, is that we now know that's because primate bodies grow slower in the womb, but primate brains grow at the same rate as other mammal brains do in the womb. That means that from a very early stage of development, shortly after implantation, bodies are just growing slower in primates. But if we look at brain size over time, brains are growing at the same rate as they do in dogs and cats, and elephants and horses, and whatever.
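A rough worked version of the scaling Deacon describes; the proportionality constant k is illustrative, and only the two-thirds exponent comes from his description:

```latex
% Brain mass B versus body mass M, with the ~2/3 allometric exponent
B \;\approx\; k\,M^{2/3}
\qquad\Longrightarrow\qquad
\frac{B}{M} \;\approx\; k\,M^{-1/3}
```

Because the ratio B/M falls off as M^(-1/3), a mouse can devote roughly 4% of its mass to brain while a human devotes 1-2% and an elephant or whale far less, all while lying near the same curve; what stands out about humans, on Deacon's account, is that we sit well above the value expected for a primate of our body size.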
Nick Jikomes 57:28
So newborns don't exactly have big heads; they have small bodies.
Terrence Deacon 57:33
That's right. And they have a brain that will continue to develop over time a little bit longer, which makes them unusual in that respect. But it's even more troublesome than this, and that is that if you look at some species with small bodies, or some breeds of dogs with small bodies, I use the example of chihuahuas: chihuahuas actually can have a much larger percentage of brain to body than us. And that's because in the womb they're developing like a typical dog, but shortly after birth their bodies stop growing, whereas a normal dog's body keeps growing. A chihuahua in the womb is growing like a typical dog, and as a result its brain seems to be functioning like a typical dog's. We don't think about chihuahuas as being super smart because they have more brain for their body; they seem like a fairly typical dog. On the other hand, there's a difference in primates, and that is that the brain-body disproportion is there all the way through gestation. And brains have to connect with bodies during development: axons, the output branches from neurons, have to find their way into the body, and neurons from the body have to send their information up to the brain, all during development of the brain. In primates and in humans, there's this disproportion of brains and bodies throughout that process. That's not the case for the chihuahua. So the adult disproportion of brain and body seems not to be as important as this developmental process. And what that means is that we need to start looking at how brains develop, and how size might affect how brains develop, in order to begin to answer this question: what does size difference mean for us? Because clearly we have large brains for our smaller bodies, much larger, about three times larger than you'd expect in a chimpanzee of our body size. Size does matter. But how size is produced, how it's generated, and when it's generated during development, when the disproportion is generated, matters a lot. And one of the things that's different about us is that not only do we develop like a typical primate in the womb, but our brains keep maturing as though we're still in the womb for the first year of our life, so that in effect we also diverge from other primates in this respect. We're like primates in the womb, but we have this extended brain development; our brains are developing like we were a much larger primate. In fact, I like to use this example: imagine we were a 10-foot-tall primate. As adults, our brains develop on a pathway as though we're primates that are that big, but our bodies never get there; our bodies stay on this slow developmental track. So in effect, when I was thinking about this, I had to turn my attention to development. How does the brain develop? How are structures within the brain affected? Even though it's just a larger brain, with no new parts, just more of the same, could this developmental difference cause proportional size differences, connection differences, and so on within the brain as a consequence?
Nick Jikomes 1:01:12
And this is the part of the book that was evocative, for me, of ideas that people often describe as neural Darwinism. I'm thinking of thinkers like Gerald Edelman and others, and many people listening probably won't be aware of these ideas. So just like you think of Darwinian natural selection operating in the origin of species, there are sort of Darwinian-like processes in the developing brain, where different populations of neurons are, in effect, competing with each other, and many of them get pruned away. So how does this concept of neural Darwinism, and competition between different brain structures in different locations, start to fit into this?
Terrence Deacon 1:01:57
So it's a really interesting question. And it's unfortunate that most people who think about the evolution of language, and even the evolution of Rames, not being aware of these developmental issues, don't recognize how important it is to look at this as a field that's been called EVO Devo over the last 20 years or so, Evo, referring to evolution, and Devo development, tries to get at these issues. And I became very much focused on this and development, one of the reasons I began doing research in terms of neural transplantation, fetal neural transplantation was to begin to understand how these developmental features might have to do with species differences. So the key is this. And this turns out to be true for lots of structures in embryo logical development in animals, and even plants to some extent, that is one way and I like to think about it in terms of you're going to build a a stone wall, but you want to have a door in the stone wall.
You know, one way to do it is to build the wall, and then knock things out. So you have a door that you can pass through, the stones will then sort of settle into place. But if you try to build the the stone so that they create this arch to begin with, it can be very difficult takes a lot of work. So that in one sense, and this is the sort of evolutionary strategy, just build a lot of diversity, and then select out from that diversity, you know, build your wall and knock things out. Well, it turns out that the brain is built this way, in lots of respects. It turns out that that neurons are way over produced during development. And then the fine tuning takes place. And they compete with each other for function. And those things that have better, in a sense, more correlated function with their inputs and outputs out compete, survive, and others disappear. This happens also, and I was most focused on this in terms of connections. Because what happens is that neurons in our cells that appear and are born in different parts of the brain may need to connect to each other. How do they connect to each other? Well, they grow out these long branches called axons that have to find their way to distant areas in the brain and make connections with certain neurons. They do this by sort of sniffing their way forward. During development. They find their targets, they make connections. But it turns out that in this sniffing process of finding your targets, many more neurons find overlapping targets early on, it's a not not very specific technology. But what happens is that once they're connected, they begin to send signals to each other. And there's this phrase that we like to use when we teach this that neurons that fire together, wire together, that in effect, the correlation of activity determines that those connections will be maintained and connect actions that don't seem to be synchronized with each other don't seem to be, you might say synergistic in their functioning seem to be eliminated over time. So it uses this sort of selection like logic. Now, it's a little different than natural selection, because there's not multiple generations of this just happens in one shot, you generate a lot of variety. And then you select it and you make the fine grained circuits. After the fact, it turns out that not only is this done intrinsically, but also external information, visual information, auditory information, plays a role in this pruning process, that fine tunes connectivity. So in fact, having input of a special kind, and being biased, to sort of take that input in and use it functionally, also plays a role in wiring the specifics of the brain, one of the reasons why young brains are more sensitive to the surrounding than adult brains. And that you can in a sense, it's the that standard story, you know, you can't teach old dogs new tricks. But it's really easy to teach young dogs new tricks. Well, the same thing is true, of course, for languages we found. But it's true for a lot of things, in part, because what's happening is that early on, there is this process of taking a relatively non specifically connected brain. And using information, both intrinsic and extrinsic Lee provided to sort of fine tune the wiring. Once this now becomes clearer, we realize that the relationship between brains and bodies during this period of time matters a lot. 
If, you know, because of an abnormality, an extra finger develops, it won't be that that extra finger doesn't have innervation; that extra finger usually will work. Someone with six fingers will have fingers that all articulate. The brain adapts to the body it finds itself in, in this respect. If we, for some reason, developed an extra limb by some weird mutation, my guess is that the brain would adapt to using that limb. Similarly, if we lose things, if we have fewer limbs, the brain adapts to that as well. But this only happens early on.
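(A minimal toy sketch, in Python, of the overproduce-then-prune logic described above; this is an illustration added for clarity, not a model from the book or the episode, and every name, number, and threshold in it is made up.)

    import random

    def correlation(pre_activity, post_activity):
        # Fraction of time steps on which the two units are active together.
        both = sum(1 for a, b in zip(pre_activity, post_activity) if a and b)
        return both / len(pre_activity)

    def develop(n_pre=20, n_post=20, n_steps=200, keep_threshold=0.15, seed=0):
        rng = random.Random(seed)

        # 1. Overproduce: each pre unit initially contacts many post units at random.
        connections = [(i, j) for i in range(n_pre) for j in range(n_post)
                       if rng.random() < 0.5]

        # 2. Generate activity; units whose indices match modulo 5 fire together,
        #    a crude stand-in for correlated intrinsic and sensory-driven signals.
        def active(unit, step):
            return (step + unit) % 5 == 0

        pre_act = {i: [active(i, t) for t in range(n_steps)] for i in range(n_pre)}
        post_act = {j: [active(j, t) for t in range(n_steps)] for j in range(n_post)}

        # 3. Prune: keep only connections whose pre and post activity are correlated,
        #    i.e. "fire together, wire together"; the rest are eliminated.
        kept = [(i, j) for (i, j) in connections
                if correlation(pre_act[i], post_act[j]) >= keep_threshold]
        return connections, kept

    overproduced, kept = develop()
    print(len(overproduced), "connections overproduced,", len(kept), "kept after pruning")

The point of the sketch is only that the final circuit is carved out of an initial excess by correlated activity, in one developmental pass, rather than being specified connection by connection in advance.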
Nick Jikomes 1:07:21
So, you know, let me put it this way. There's this tendency, I think it's just a natural human tendency, but you see it in people, including neuroscientists, and I find myself constantly doing this naturally, where we just naturally think about the brain in phrenological terms. We like to think about it as: you've got one module in one place, a different module in a different place. And to be sure, different parts of the brain, different areas of real estate, are specialized for different things; if you go look for that, you will find it. But you seem to be coming at this more from the tradition of someone like maybe D'Arcy Thompson, who was thinking about development and proportions, and this concept of allometry, which is a lot of what you were just talking about, really. So very briefly, could you just mention who that was, and what allometry is? You didn't use that term, but maybe define it, because people probably haven't heard it before.
Terrence Deacon 1:08:19
So the term allometry, of course, comes from two parts, allo- and -metry. Metry, of course, means measurement; allo means different. Allometry has to do with the fact that during growth in our lives, as well as comparing animals of different size, things don't grow at the same rate. So we look at young newborns: they have big heads and small bodies. Allometry says that as they mature, their brain will stop growing at an earlier stage, and their body will keep growing. And as a result, when we look at an adult body, they have a big body and a small head; in comparison, babies have big heads and small bodies. That process is an allometric process, that is allo, meaning two different measures of growth, two different rates of growth. When we find animals of different size, we see the same feature, and I describe the difference between mouse brain and body relationships and human brain and body relationships. That's the result of an allometry, a different rate of growth across phylogeny. As mammals get bigger, their brains and their heads don't get as big as fast as other parts of their body. Another way to see this, and probably the first person to see this was Galileo. Galileo was looking at the bones of mice and small animals and the bones of very large animals like elephants and hippopotami, and noticed that the small animals have long thin bones and the big animals have large, fat, stubby bones. And he realized that this had to do with the fact that the strength of a bone that has to hold up a body grows with its cross-section, but bodies are putting on weight to the third power, to the cube, with volume. So it's an area-to-volume relationship: in order to keep up enough strength to support this very, very large body, as things get bigger, bones have to get fatter and stubbier. As they get bigger, they change their relative shape. So you can't just imagine a mouse body blown up like a balloon being able to support itself; it wouldn't be able to support itself, its bones would all crack and break. That's an allometric relationship. So what I'm getting at here is that, in fact, there's a developmental allometry that goes on in the growth of the brain and body in the womb, early on in development. And what's happening is that this process that looks sort of Darwinian-like is a process of keeping the thing, in a sense, organized allometrically. So why is this happening? It's a process that allows the nervous system to adapt to its body without having to know in advance what it's going to find, so to speak. It's, in a sense, the way that natural selection adapts organisms to their environment over time. It turns out that you can think about that in sort of microcosm in the embryology of the nervous system: it's adapting to what it finds out there. And it does so by using this natural-selection-like logic.
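(A rough worked version of Galileo's point, added here for illustration; the scaling argument is standard, but the symbols are my own.) If a body is scaled up by a linear factor L, its weight grows with volume while a bone's strength grows with its cross-sectional area:

    W \propto L^{3}, \qquad S \propto d^{2}

where d is the bone's diameter. Keeping the load per unit of strength constant requires

    d^{2} \propto L^{3} \;\Rightarrow\; d \propto L^{3/2} \;\Rightarrow\; \frac{d}{L} \propto L^{1/2}

so relative bone thickness must increase with body size, which is why elephant bones are stubby and mouse bones are slender. The brain-to-body allometry mentioned above is the same kind of relationship: across mammals, brain mass is often estimated to scale roughly with body mass to the 3/4 power, so bigger animals have absolutely larger but relatively smaller brains.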
Nick Jikomes 1:11:44
And so I want to talk about a couple of structural features of the brain that are probably relevant. And the first is the prefrontal cortex. So the prefrontal cortex, as most people know, is a very large structure in humans compared to other mammals. It's the part of the brain right behind the forehead, more or less. And people generally talk about it being responsible for the sexier aspects of human cognition: executive control, doing complicated things, context-switching, all of that stuff. So does the prefrontal cortex, in your view, play a special or outsized role in symbolic thought and language learning? And if so, what is that role?
Terrence Deacon 1:12:29
That's probably the most difficult question of all, because, in some respects, we have not fully resolved this. And we have not resolved it for surprising reasons: we don't actually have a really definitive understanding of the relative size of the prefrontal cortex. And there's a couple of reasons for it. But let me just sort of give the background behind this, and this has to do with the developmental discussion we were having. Because there are some parts of the brain that are fairly directly connected with the body: our motor system, the part of the cerebral cortex we call motor cortex, which has fairly direct output to the spinal cord, to neurons that connect with muscles. We have inputs about the tactile surface of our skin, from the retina, and so on; things are relatively direct coming back into the brain. Given that the brain is adapting to the body it finds itself in during maturation, one of the things that's going to happen is that the relationship between the periphery and the central part of the nervous system is going to be much more strongly constrained for those systems that have a much more direct relationship to our bodies, to our sensory input, our motor output. But there are parts of the nervous system, parts of the cerebral cortex, that have a very indirect relationship to the body. And one of those is prefrontal cortex; other areas, what we call parietal cortex, also to some extent, but prefrontal cortex probably is the most distal in the sense that it has the most indirect connection to the rest of the body. And it turns out that those things that are most directly connected to the body also are some of the first to mature as the brain matures. So prefrontal cortex is also late to mature, for all of these reasons. That means that as brains and bodies have shifted out of proportion in primates, and then again in us with respect to the rest of the primates, those areas that are most directly connected to the periphery are going to be relatively well sized in terms of their function with respect to the peripheral needs. But those areas that are most distantly connected, most indirectly connected, well, they sort of inherit any of the extra space. And my argument in the book, and it's still an argument that I would make, is that the prefrontal cortex, because of its distal connection with the body, because of its late maturation, inherits a lot of space that's not taken over. Because if this were a brain that had developed in an animal that's eight to ten feet tall and weighs half a ton, then we would expect much more of the brain to be sort of involved with its peripheral receptors and effectors. But given the fact that it's not in that kind of a body, that means that there's less of the brain that's taken up to deal with the body, leaving some out there. Now, the key is: what is prefrontal cortex doing, then, if it's not directly connected to the rest of the body? Well, it turns out to be playing a critical role in lots of orienting and mnemonic features. You mentioned some of them; we sometimes collapse that into this idea that we call executive function. I think that's a little misleading, because it has a kind of homuncular sense to it, that that's where the executive stands and controls the rest of the brain. I think that's a little misleading.
It has, for example, a lot of connectivity with an area called the tectum, or superior colliculus in primates and other mammals, an area involved in orienting, you know, sort of focusing attention, adjusting attention, keeping track of things. What we need to do, when we're orienting on one thing, is we also need to keep track of the background. And when something happens somewhere else, we need to be able to move to that place, bring our attention to that new problem. Prefrontal cortex really plays a role in sort of juggling the possibility of multiple foci of attention, keeping some things in view, and other things sort of on the ready. So it turns out to play a really crucial role in lots of combinatorial-like processes, and that sort of thing, where we really have to keep track of the details, and sometimes suppress some details in order to focus on others. That's why we call it executive sometimes, because it plays this sort of role of deciding. And again, I think there's a homuncular sense to it, but it's deciding what to attend to, what not to attend to, and what to be ready to attend to. And so prefrontal damage oftentimes causes problems of that sort, in which we're sort of driven by peripheral features, we can't suppress tendencies, and so on. Another reason to call it executive, in that respect. My point with respect to symbols is that that's exactly the problem with symbols. Symbols are stimuli that are not directly connected with things.
And in fact, they work by virtue of combinations, by how they affect each other, how they refer to each other. We began this discussion with this troublesome question of why we don't extinguish that association if the association is lost, like other associations. Well, the key to that, of course, is that it's because symbols are related to other symbols. I mean, I like to think about this in terms of a thesaurus or a dictionary. We can think of each of those as a kind of network system in which each symbol, each word, is sort of linked to other words, and that network is distributed in lots of ways. And you can see how words are linked to words linked to words. That linkage is one of the things that keeps them all in our memory; that is, they're not linked to things in the world as much as they're linked to each other.
Nick Jikomes 1:18:25
I often had the image in my mind, when you were explaining some of this in the book, of almost two layers of networks. You've seen, like, the neural network architectures that people show in computer science. You can imagine symbols as, you know, a lattice of connections between symbols, where some symbols are highly connected to other symbols. But then the symbols also point down to a second layer, which would be the layer that contains all of the concrete items in the world they refer to. Such that, you know, if you break the connection between a symbol and the thing it refers to, because it's not highly correlated with that symbol, just like the examples we discussed earlier, the whole structure is still maintained, because there's this other web of symbol-symbol connections. And it started to give me a picture of why it was true that we don't extinguish a lot of these associations, even though the symbols aren't directly correlated with what they refer to.
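(A tiny sketch of that two-layer picture, in Python; it's my own illustration with made-up words and links, not something from the book.) The idea is just that a word whose direct link to its object is cut remains supported through its links to other words that are still grounded:

    # Toy two-layer network: word-word links plus word-object links.
    word_word = {
        "fire":  {"smoke", "hot", "burn"},
        "smoke": {"fire"},
        "hot":   {"fire", "burn"},
        "burn":  {"fire", "hot"},
    }
    word_object = {
        "fire":  "flames present right now",   # the direct, indexical link
        "smoke": "smoke present right now",
    }

    def still_supported(word):
        # A word stays interpretable if it, or any word it is linked to,
        # still has some link down to the layer of objects.
        if word in word_object:
            return True
        return any(neighbor in word_object for neighbor in word_word.get(word, ()))

    # Cut the direct word-object link for "fire": the word is still supported,
    # because the word-word layer connects it to "smoke", which remains grounded.
    del word_object["fire"]
    print(still_supported("fire"))   # True

This is only meant to make the robustness point concrete: breaking one symbol-object correlation doesn't dissolve the symbol, because the symbol-symbol web carries it.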
Terrence Deacon 1:19:19
Right, and think about how many words a typical speaker of a language maintains in relation to each other and to possible referents, oftentimes in the range of, you know, 20,000. This is not like something where you learn a little bit of association. It's held together by virtue of this elaborate network you just described; sometimes linguists describe it as a lexical network or a semantic network. It's those associations that hold it together. But it's precisely because its reference demands association. Once you've sort of cut that link to the thing in the world, or weakened it in some respect, you really need the strength of other connections. And as a result, when we refer to things in the world, we do so with sentences, by putting words together; symbols generally don't work alone. We sometimes have symbols that can work alone in special contexts, but they're in special contexts. So think about, you know, sitting in the theater and suddenly yelling fire. You're not pointing to a particular fire. But what it does is that everybody in the theater recognizes that if a symbol does not show up in a sentence, it has to directly refer to something that's immediately present; it becomes indexical. So when somebody yells fire, without it being in a sentence, we automatically assume that there's a fire right here and now. It takes on this other function. But if I said, you know, let's have a fire tonight, the complicated relationships between these other words allow me to sort of disentangle this, this entanglement from the world immediately that it's in. What's interesting about this is that the sentence, therefore, is also doing iconic and indexical work. Because the structure of the sentence, the grammar and syntax we like to think about, is actually doing iconic and indexical work. Let me give you a simple example. Let me sort of make a noise here that we can hear. You hear that noise as I tap on my computer. In doing so, you know something about what's happened; you can use the noise to know, it's indexical with something happening. When I say that's hard, the index and the word now actually do something: they say the surface of this computer is hard. The index is playing this role. But notice that the phrase, the surface of this computer is hard, that whole phrase is doing effectively the indexical work; it's pointing to something. We don't tend to think about this, but the structure of grammar and syntax, what makes a sentence a sentence, is that it has recaptured and recoded the iconic and indexical features of the world. The sentence itself is iconic of this relationship that I created by saying hard. I've used these other words to do indexical work, and symbolic work. So it's one of the other features that we talked about before: symbols are embedded in this iconic and indexical world, but they also then produce a higher-order iconic and indexical relation.
Nick Jikomes 1:23:10
So given what we've just discussed about the prefrontal cortex and its apparent importance for symbol learning, how do you think about the fact that children, young children, the ones doing the symbolic learning, actually have an immature prefrontal cortex, that being an area of the brain that is one of the last to actually fully mature?
Terrence Deacon 1:23:33
Right, and in fact, it probably doesn't mature until maybe three, four years of age to a level that can do the kinds of things we can do. And that suggests to me that, in fact, a lot of what we call early language learning is very iconic and indexical, and that we over-interpret it: because it's a word, and we know how to interpret words, we take it as symbolic. What we try to do is pull out the indexical features that young children are using and embed them in a larger context.
Nick Jikomes 1:24:09
Immediately, now that you say that, it almost seems obvious. Like, a very young child just trying to use words is always pointing to a thing that's right there and naming it. They're not talking about something tomorrow.
Terrence Deacon 1:24:22
Exactly. And not only that, notice that children begin, and this is something I think is uniquely human and is critical for language learning: we point, we reach, we have hands, we can exchange things, and we can point to things. We can direct each other's attention. And a number of people have sort of pulled out this shared attention problem, joint attention, as a really critical feature. Notice that this happens before we even start acquiring language. And children are very good at it. Other species just don't get it. When I point to something, my dog looks at my hand, not what I'm pointing to. Children are just the other way around. And we look at each other's eyes, and we know where the face is turned, what we're attending to. Joint attention is pretty critical here, because it's doing the indexical work. And the indexical work is important to assigning the reference to the sound that we're producing. And yet early on, it's also just an index; they're correlated with each other, the sound is correlated with the doggy, with the pointing at it. But one of the things that's happening, and I describe this as a sort of un-grounding, is that we need to figure out, as young children, how to take these iconic and indexical uses of words and shift the iconic and indexical features to the relationships between words, as opposed to the relationships between words and objects. So one of the things that's going on is you need to make this shift. I think it happens slowly in children, and I think we over-interpret them as doing language, because we haven't distinguished the iconic and indexical from the symbolic at this stage, because we're so used to words being symbolic. But what it is that we adults do is, of course, we're trying to embed what we see, the indexical and iconic use of the sounds of these words that children are producing, in this symbolic realm. We need to sort of pull them out of the iconic and indexical into the symbolic world. And we're good at it as adults, and children are good at sort of following this, now, because they're very much interested in what adults and their caretakers are attending to.
Nick Jikomes 1:26:45
I'm a little fuzzy on my developmental psychology, but roughly speaking, you know, young children will start to learn words, and they kind of learn them relatively slowly, one at a time; they have a limited vocabulary. But at some point, there really does seem to be a threshold where you have this explosion in their vocabulary. Do you think that maybe roughly corresponds to the brain sort of figuring out the symbol-mapping part of this?
Terrence Deacon 1:27:09
I think so. And, of course, remember that we're never finished. You know, those of us that have struggled with higher mathematics, we just can't do that level upon level upon level of abstraction very well. So children start very concretely, and their use of it is very concrete. But in effect, they have simple sentences to begin with. The point at which word acquisition really accelerates is shortly after they begin to use two- and three-word utterances. That is, it doesn't begin to accelerate until they begin to do this combinatorial work. Now, the combinatorial work is forcing them to sort of parse out the iconic and indexical features and understand the symbol-symbol relationships. As soon as they do that, now the symbols are reinforcing each other. Now you can acquire lots of symbols very rapidly, because they're now acquired as part of this large network, and the network keeps them going. So this transition you're talking about, I think, is very much like this: the sort of associative learning that begins it, in terms of things associated with the world in an indexical way, suddenly begins to shift into a symbolic learning system, in which they begin to shift the mnemonics that they're using to hold words together into this symbol-symbol relationship.
Nick Jikomes 1:28:33
I see. So the other major structural feature of the brain I wanted to talk about in relationship to language learning is the lateralization of the brain, the fact that we have two hemispheres, a left and a right. And to some extent, you know, again, if you think back to a Psych 101 or Neurobiology 101 course, you'll learn that the brain is lateralized: there's a left and a right. Some things that the brain does are not lateralized or specialized into one hemisphere, and some things are. And usually the big example of a thing that has clear lateralization that you learn about is language. And you know, if I caricature and simplify, but really not too much, you often do have it taught to you in school in this fairly simple way, which is: the left side of the brain is usually for language, you learn about Broca's area and Wernicke's area, and the right side isn't so much for language. To what extent is language lateralized in any way to one hemisphere or the other? And more generally, what role do you think lateralization plays that might be crucial for crossing the symbolic threshold?
Terrence Deacon 1:29:41
Right, it's a good question, in part because I think we've overplayed the lateralization story. We do know that people can have reverse lateralization. We know that children born with a disorder that causes them to have most of their left hemisphere missing can acquire language. We also know from a study done many years ago now with simultaneous translators, these are people who listen to somebody speaking and, while they're speaking, say it in another language. So, you know, in the United Nations we have these people, or the person who does, you know, sign language translation in real time while somebody else is speaking. What was found early on is that typically, as students are learning to do this, they develop ear preferences. And oftentimes, the most successful simultaneous translators have lateralized the two languages differently, so they're not competing with each other. And they end up having an ear preference for one language and not the other, and so they have an earphone in one ear and not the other ear as they're translating. So this tells us that even young adults have considerable plasticity in terms of what side of the brain is doing what. The other part is that, again, we're thinking about language in a sort of general sense, as if language is just one thing; we tend to think of it as, language is here or there. What's really going on is that different aspects of language are being fractionated into the two hemispheres. And the things you want to fractionate, just like in the case of the simultaneous translator, are things that are going to get in each other's way. So what's going to get in each other's way? Well, there are, for example, the combinatorial relationships of language, the words getting together and modifying each other, versus this other aspect, which is its relationship to sensory and motor experience, and maybe emotionality. These are two kinds of associations of the same sound that could potentially get in each other's way. So as we acquire language and become more and more efficient at it, as we mature, one way to make it more and more efficient is to begin to fractionate those functions on the two sides. Not completely, not that one side will do one thing and the other side will only do the other, but largely it will, in a sense, do a division of labor. So what we find is oftentimes, in the right hemisphere, there's a lot of understanding of what you might call the referential, and particularly the emotional or attentional, features of language. One of the things that people have noticed is that right hemisphere damage oftentimes causes people to lose the ability to see sort of the big picture. Classic stories are, you know, you tell a story in which there's a lot of things going on, and there's an anomalous event in the story. Right hemisphere damage means that oftentimes you don't see why something is anomalous, doesn't fit. But you interpret all the details well, because your left hemisphere is getting the connections right and following the logic of the story, following what leads after what, but not sort of getting the big picture. The other thing that is often noticed is, of course, that right hemisphere damage causes what we call aprosodia. And this has to do with what we call prosodic features. So the fact that if I get excited, my voice gets higher and faster.
If I'm depressed, you can tell I'm depressed because of the way I'm speaking. We communicate emotionality, or attentionality, or what we think is important or unimportant, by virtue of these changes in tonality and in speed, and so on. This is very strongly both produced and interpreted on the right hemisphere. So right hemisphere damage oftentimes produces very flat speech; speech is maintained in a normal way, but it's very robotic, it doesn't have a lot of character like this. Whereas with left hemisphere damage, we lose a lot of the detail. But oftentimes, an aphasic patient, that is, patients who have damage to some of these language-specific areas, areas that are really specialized for language,
don't get the details of meanings and connections, but get the gist of what's being communicated, get the pragmatic framing of what's going on. Whereas the split-brain experiments, where a patient has their corpus callosum cut so that there's no communication back and forth, oftentimes will separate these two functions. And particularly shortly after the procedure, oftentimes only one side is sort of dealing with one aspect and the other side dealing with the other aspect. Over time, it looks as though most of those patients begin to develop compensation for that.
Nick Jikomes 1:35:00
I want to, again, start to talk about this idea of co-evolution: that, you know, humans are adapted to be able to acquire and use language, but also that language is adapting to the brain, in particular to the child's brain. And you almost, you know, talk about language in the book as if it's this other organism. So the same way that a gazelle and a cheetah co-evolve, because they're two separate organisms that have a deep relationship, language and the human structures, the human phenotypes, are co-evolving with each other. At one point in the book, you have a passage that says, quote, languages are far more like living organisms than like mathematical proofs. And I believe, at the point where that passage comes up, you're contrasting the way that you're thinking about things with the way that a lot of linguists would classically think about language. And so what do you mean when you say that language is more like a living organism than a mathematical proof?
Terrence Deacon 1:35:59
It's a very good question. And of course, it's metaphoric. I mean, I don't mean that languages themselves are like living organisms in that sense. What I am describing is that languages have to persist, historically, across many generations. And their persistence depends upon their learnability, their usability, and their transmissibility. Those are all things that are crucial, so the persistence of a language has something to do with how it fits with its users, and with the brains of its users. So when we talk about language change, we're oftentimes dealing with the fact that language doesn't just change at random. It changes in certain ways, because certain transmission and learning capacities are sort of built into this. The way to think about this has been reconceived since I wrote this book, not just by me but by others, with a phrase in evolutionary biology called niche construction. Niche construction is a recognition that organisms don't just respond to their world, they change their world. And in changing their world, it affects the way they have to respond to it. So the most obvious simple example is beavers and beaver dams. Beavers create an aquatic niche by their behaviors; they've been doing so for millions of years. As a result, beavers are rodents that have become aquatically adapted. They have flat tails, they know how to hold their breath, they know how to swim, they have webbed feet; they've become adapted to a niche that beavers have created. So I like to think about language and culture as our beaver dam, our version of the beavers' aquatic environment. We're symbolic primates. That is, the symbol world is our aquatic world, and we've had to adapt to that world. We create that world, we pass it on; it's not within an individual. It's picked up, it's passed on, it's recreated generation after generation, like beavers build dams generation after generation. It's beaver bodies that have adapted to that physical environment; it's primate brains, our human brains, that have adapted to this abstract symbolic environment. And so in effect, we're living in this niche, we're embedded in this niche, we can't get out of it. We're so embedded in it that we would say that a human being who doesn't have this experience is not a human being; there's something fundamentally missing, they're not human in some sense. And this is the so-called problem of the wild children, the feral children that are raised without language and human interaction. In fact, they're not really human in the full sense of it, because a lot of what is humanness now is something that's distributed in this niche that we're all embedded in, can't get out of, and our brains, you might say, expect this niche. We come into the world expecting certain kinds of social interactions, expecting certain kinds of communication. Our brains are set up to expect it; they've, in a sense, given up some things to be more adapted to this world, to this niche. And it shouldn't surprise us that this niche is not an ecological niche in the same sense, and that so much about our brains and bodies and our behaviors don't look like they're adapted to the kind of world that, say, chimpanzees are adapted to, because we've adapted to this very different, in a sense non-ecological niche, this symbolic, cultural niche.
Nick Jikomes 1:39:55
I think you do a good job in the book of making it clear that, you know, our brains evolved physically somehow to be able to cross the symbolic threshold and engage in symbolic thinking. But once we did that, we've now effectively created this niche, this new environment, which then changes a whole set of selection pressures that cause subsequent brain evolution. So we're sort of somehow evolving, for some reason, to cross the symbolic threshold and be able to do this kind of cognitive trick that's very powerful. But then in so doing, we're effectively creating a new environment that we then have to further adapt to. And I think that was a really, really interesting way of thinking about the evolution here, and one that I had not thought of before. And, you know, as mentioned previously, you very much think about language in an evolutionary and developmental sort of framework. In fact, I think the original way I discovered your work was that, as an undergrad, I was studying evolutionary developmental biology and thinking about how bodies are constructed, and then how brains are constructed. And then I got interested in language and evolution, and somewhere along those lines I discovered you. And I think it's very powerful to have this sort of developmental perspective when thinking about this subject, or a number of others. If people are interested in that, I do have another episode that I did with Sean B. Carroll, who's a famous evo-devo biologist that I studied with, so you can go check that out if you want to learn a little bit more about that stuff. But I want to continue talking about coevolution, and another, I think, developmental evolutionary concept. So at one point, in one of the later chapters of the book, you get into a discussion about something called Baldwinian evolution, which has to do with how behavior can affect evolution. So what is Baldwinian evolution, and how does that tie into some of the things we were just discussing?
Terrence Deacon 1:41:53
It's an interesting question. In part, what I want to say to begin with is that I don't think Baldwin had it right, and I think we still don't fully understand Baldwinian evolution; we assumed that Baldwin got it right, and he did not. But let me sort of lay out the argument. It begins in the late 1890s. And this is a time when there's a very powerful set of debates going on between Lamarckian evolution, that is, acquired characters, acquired during your life, being passed on somehow to the next generation, versus the kind of Darwinian story we often think about, in which those things that we learn and acquire during our lives are only passed on culturally or behaviorally, but not genetically, to the next generation. So a number of people, James Mark Baldwin and in fact two others at the same time, came up with a response that said: maybe what we can do is think about a Darwinian process that is not Lamarckian but produces kind of Lamarckian effects. And it works this way. The argument is that somehow you develop a plastic way of behaving. Where the environment has changed, by virtue of being flexible, you adapt to this change in the environment. Those who are not flexible don't adapt, and they are eliminated. Now, you have adaptations, specializations for the old environment, but you're just able to get by with your flexible behavior. Baldwin and these other theorists said, well, if that's true, then over time there's going to be selection that favors those who have this plastic capacity. But because being flexible and plastic takes effort, and this trial and error, wouldn't there now be natural selection to favor not being so flexible, not being just sort of plastic, but being more automatic? So the argument from Baldwin was that maybe behavioral plasticity can become internalized: what was acquired at one point in time, by sort of active participation and responding, can look as though it becomes passed on genetically. Not because it got passed on genetically, but because those who did it more automatically and more easily were better at passing on this capacity to their offspring. So it's not a direct inheritance of acquired characteristics, but an argument that said: if flexibility sets you up, and there are costs to flexibility, that will create these new kinds of selection. To some extent, this has been argued both by myself and others. Steven Pinker makes an argument similar to this about, for example, grammar. This is a theorist who thinks that universal grammar was sort of built into the brain, that we have this grammar device built in. And he argued that, well, maybe it was acquired initially by trial and error, by learning, and then we just simply got better and better at it; we put more of the grammar into the brain until it's now just in the brain, and we don't need to do much learning. I argued that, in fact, we need to not think about it in terms of something that's language-specific, but that, in fact, all the demands that language places on us, for combinatorial learning, for suppressing certain associations compared to others, for this kind of abstraction, that in effect those things would be selected early on, would be acquired with some effort, and would become better and better.
And so rather than something specific to language, my argument, which I made earlier in this discussion, is that many, many different aspects of neurological learning and behavior would, in a sense, be affected this way, and would just simply make us better.
Subsequently, I think that there are problems with this argument, and let me spell those out. The main thing that we've discovered, as we've learned more about genetics and how genes evolve in these contexts, is that if something can be acquired plastically, without having to sort of build it in, evolution oftentimes does the reverse: it oftentimes allows some genetic support to degrade. If it can be acquired from something outside, that takes less work; we oftentimes follow a kind of less-work principle. I like to call it the lazy gene hypothesis: you know, genes only do what they need to do, and if it's supplied elsewhere, if you can get it from something else, give it up. My favorite example of this is our need for vitamin C in our diet. Almost all other animals make their own vitamin C. It's just primates, beginning somewhere around 60 million years ago, that began eating fruit, where there's a lot of it out there. We actually still have a pseudogene, a non-functional gene, for making the last enzyme that makes vitamin C; it just doesn't work anymore. And it's degraded because this capacity could be acquired from outside more easily. One of the problems with the Baldwinian story is that it basically says that if you don't have to do it yourself, oftentimes you lose the ability; you know, if you don't use it, you lose it. So the Baldwin story is not quite as helpful as I thought it was initially. On the other hand, it's led me to believe that maybe some of the advantages that we have in learning language actually might be the result of loss of function, of loss of specificity. And, in fact, we've done some recent work using a bird example, a bird that, by virtue of domestication, has become a better singer, so to speak, because we think it's lost some of its capacity. It's lost some of its innate bias, and as a result of losing innate bias, it is able to acquire biases from the outside more easily. And so one of the thoughts I had is that our human capacity is not just the result of adaptation that's produced more capacity to be biased towards language, but maybe also a loss of specificity in some directions. So one of the classic examples of this is that chimpanzees have somewhere in the range of 20 to 30 distinct vocalizations that they give, that refer to different aspects of their life, that are built in, that is, they don't have to learn them. They're there from birth, having to do with threats, having to do with food, having to do with sexuality, having to do with solicitation of help, and that sort of thing. We have some innate vocalizations too, that we acquire genetically. There's laughter, there's sobbing, there's groans, there's shrieks. But I'm starting to run out of vocalizations. We human beings have a very small repertoire of these innate vocalizations, surprisingly small compared to other primates. And I think what has happened is that they've degraded, they've degraded in part because our linguistic communication can take it over. But this also carries me back to this notion of prosodic features of language, the fact that when I'm more excited, I'm talking faster with a higher frequency. This is also something that we find in primate calls: when they're more excited, they're also producing the same thing, their calls become faster and higher frequency. When you're soliciting aid, you can tell, the vocalizations are more nasal, like this. We human beings know what it means.
And our vocalizations, our speech, can be much more diminutive and demanding, because what's happened is that much of the, you might say, background autonomic and emotional side of it is still being communicated, but now it's subordinated to the language. It's almost a separate channel, in which we can now actually have much more sophistication, because we can now adapt it to the language use. So we have far fewer innate vocalizations, but a lot of the features of innate vocalizations have been carried forward and are now, in a sense, adapted to language.
Nick Jikomes 1:50:46
So this sort of co-evolutionary dance happens, but it can only kick off after you cross the so-called symbolic threshold, after the brain has evolved the capability of using symbols. And so I'm curious to get your speculation on when, during primate evolution, is the earliest you think we could have crossed the symbolic threshold? And given what you described earlier about Kanzi, is it possible that it goes all the way back to the common ancestor with chimpanzees? Or if not, then, you know, shortly after that?
Terrence Deacon 1:51:20
So what I want to use in this regard is what empirical evidence we might even think about drawing on; obviously, language and brains don't fossilize. The only thing that we can see about brains is that sometimes casts of the inside of skulls give us a sense of how big the brain was, or some trivial surface features. But one of the things that happened in our evolution is that there is a transition about 2 million years ago in which a number of things change at once. It's the point at which brain size compared to body size begins to diverge from what we find in our common ancestors, the Australopithecines that preceded this, with brain size to body size relationships very much like chimpanzees'. But by about 1.8 million years, we begin to see this depart; brains begin to expand. They expand over the course of the next million and a half years, so that it's not until just a few hundred thousand years ago that we're seeing brain sizes like ours. But this transition also takes place at another point. The first stone tools, chipped stones with sharpened edges, begin to show up in the fossil record about two and a half million years ago. They show up, they disappear, they show up, they disappear. But by 1.8 million years ago, we never find them separate from hominids, that is, our early precursors. So something has stabilized at this point in time. Now we see stone tools and slightly enlarged brains happening together. What are stone tools for? Well, they're for butchery. They're not good yet for killing animals, probably, but they're really good for sort of cutting up meat, taking chunks of meat away from other animals. The problem is that if you're not hunting, and all you have is stone tools, you're not so good at catching these animals and eating them, not as good at it as the big cats or the big dogs, you know, the predators that are out there, even the hyenas and the wild dogs. How are you going to get this? Well, first of all, it looks as though probably our early ancestors were scavengers; they stole meat out on the open savanna. And the way to be a scavenger is to be able to go out and grab a little bit of this meat, and then get away from these guys that are dangerous. How might you do that? Well, you probably can't do it on your own. And if you can't do it on your own, that means you have to cooperate in some respect. So now imagine the problem: somebody is down there with this stone tool, trying to cut off a limb with a little bit of meat on it, and there's a bunch of hungry hyenas surrounding you. Obviously, the last thing you want to do is to have your head down in the carcass when these guys are closing in. You want somebody else out there chasing them away, keeping them at bay. You've got to cooperate. I think one of the things that happened in this transition is that there had to be an ability to create stable cooperation, so you can rely on each other in life-and-death kinds of situations to get at a kind of food source that's remarkably powerful, a lot of calories, a lot of nutrients that are not able to be attained in other regards. So I see this transition to a new kind of foraging that requires cooperation, requires passing down this niche of toolmaking, but also requires a kind of communication that allows us to talk about things that might happen, that could happen in the future. That is, we've got to be able to get away from the now, the kind of immediacy that icons and indices
provide, and deal with these other questions. I think it applies also to mate choice and mate exclusion relationships; I think it's a much more systematic relationship. And this points to another thing that happens right at this time as well: something called sexual dimorphism begins to disappear. Sexual dimorphism is that in species where there's a lot of male-male competition for mates, males are oftentimes quite a bit bigger than females. We see this to be the case in baboons, for example, where there's a lot of competition and a single adult male or a small group of males will dominate a group of females. We see this in elephant seals, we see it in lots of different species. We also see it in our early ancestors, the Australopithecines; Australopithecine males were probably two to three times larger than females on average, as adults. We know this by looking at the size of mature bones, particularly mature jaws and maxillae, where we can look at the teeth and say these are mature individuals, and notice that there's a sort of bimodal distribution of sizes here.
Nick Jikomes 1:56:30
What is the ratio today?
Terrence Deacon 1:56:33
The ratio is just a slight fraction; I can't tell you exactly, because I don't know the exact ratio. Men are slightly larger than women, but on average there's a lot of overlap, number one, and it's certainly not two to one, it's a fraction of that. And that fraction shows up and is pretty much established by about one and a half million years ago. And that tells us that the male-female relationships, the, you might say, sexual competition, have had to be modified. Where we see more monomorphism is where we see a lot more male offspring care taking place. And oftentimes we see it associated with exclusive mating, where there's not a lot of competition over mates, because mating is separated off from the social group in some way or another. In fact, most monomorphic species, and by dimorphism I mean two different sizes, different morphologies, and by monomorphism one size, most monomorphic species are isolated pairs. The weirdness about our own ancestry is that here we are, needing to cooperate, but becoming monomorphic. It makes the human situation really unusual, simply looking at it in the context of other animal behavior. Somebody else has described this as Deacon's paradox: there's a paradox in that here is a species that is mostly monomorphic, in which there's extensive male offspring care, in which there's relatively separated male-female bonding, where you don't find a lot of sort of crossing over. You do find cheating, of course; that's something that we're very much aware of. But it's something that we call cheating; we recognize that it isn't the way it's supposed to happen. This normally happens, in the rest of the animal world, in isolated pairs. Gibbons, for example, are pair-bonded and monomorphic, but they are not in troops; they're in separate pairs. Humans live in these large social groups that have to cooperate; we're monomorphic for the most part, and males play a large role in offspring care, providing food and resources that females and babies might have difficulty getting at. Chasing after meat, for example, is not a very safe thing to carry your babies along to. So division of labor may have already begun. So there's this transition somewhere between, what, 2 million or two and a half million and 1.8 million years ago; there had to be a transition into this process. Because after about a million and a half years ago, all of these things are always coexisting, that is, tool use, cooperative groups, larger brains, loss of sexual dimorphism. In fact, there are some other things that have happened as well that are interesting in all of this. But basically it says that somehow things have really changed. And it's at this point that we begin to see this take-off where brain size begins to change radically over the next million and a half years, and we get the sort of modern situation. I think what's happening here is that over that period of time, this language-like communication, symbolic communication, is getting more and more sophisticated, slowly, I think, also moving to the vocal-oral medium from a more ritual-like medium.
Nick Jikomes 2:00:06
So one of the things that's very interesting to speculate on is whether or not other forms of human that are now extinct had language or language-like abilities. And so I want to talk about Neanderthals for just a moment, for a couple of reasons. Neanderthals are really interesting for obvious reasons; I think everyone is intrigued by this idea that there was something very much like ourselves that was walking around, directly adjacent to what we would otherwise call modern humans, for quite a while. I have another episode with the anthropologist John Hawks, if people are interested; he goes into a lot of interesting stuff about Neanderthals and interbreeding and all of that. And I want to read a passage from The Symbolic Species where you talk about Neanderthals and their potential cognitive abilities. One of the reasons I think this passage is striking is because you wrote this book almost 25 years ago, and to this day, my understanding is we really don't know for certain if Neanderthals had the ability to use language. But 25 years ago, most people, including most academics, I think, basically thought of Neanderthals as dumb human cousins. They were cave people, and they were definitely not as smart as us; they certainly weren't speaking to each other. But given what you were talking about before about development and proportions, I thought this passage was interesting, and I'd like you to comment on it. You said that in neurological terms, it seems likely that Neanderthals were fully modern and our mental equals: they had a brain size slightly above modern values, a slightly smaller stature, and so we can extrapolate that the internal proportions of the brain structures they had were consistent with a symbolic capacity equal to anatomically modern humans. So is that the way you still think about it? And can you explain?
Terrence Deacon 2:02:01
Yes. I think that, in part because, you know, we wanted to think of Neanderthals as somehow on the way to us, we have this sort of progress notion of human evolution, in which, you know, we're at the top of the progression, we're the winners in the race. I wanted to think about it in terms of the mechanisms. And this is why the development of the brain by allometry was important in my thinking about this, and why looking at this change in the relative size of the brain, beginning at about 1.8 million years ago, was important for my thinking about the problem. If Neanderthals, at the end of this, you know, almost two-million-year epoch, have a brain size like ours, a body size like ours, a developmental process like ours, there was no reason for me to think that their symbolic ability would be any less than ours. So using those criteria, not some criterion about whether we won and they disappeared, or they went extinct and we didn't, whether they lived in caves or didn't have the same kind of tool sophistication that anatomically modern humans living at about the same time had, using just those evolutionary, physiological, and developmental criteria, it seemed to me that we couldn't make that claim. Interestingly enough, as the years have gone by, the sort of general population of anthropologists and paleontologists have begun to sort of upgrade the Neanderthals, as you might say, recognizing that we interbred with them. You know, I've learned that I have a little over 2% Neanderthal genes; I even have a list of which of the Neanderthal genes are in my genome, provided by these services. But one of the things that happened is that there was a period of time, back in the early 2000s, in which this gene FOXP2 was associated with a family who had a damaged FOXP2 and had some serious articulatory problems. They had difficulties forming words; their speech was harder to understand, an articulatory problem. And they had some problems with regularized verb endings, for example, regularized noun endings in English. And it was thought that, well, maybe this gene has something to do with syntax and grammar, and that maybe that's what makes humans sophisticated and others not. Well, it turns out that when the Neanderthal genome was sequenced, they also had this variant, and this variant was shared by Neanderthals and humans from before the point that they split. The Neanderthal-human split was probably somewhere in the range of 350,000 to 500,000 years ago, a long time ago. Now, if this articulatory capacity was there in Neanderthals, and it's an articulatory capacity that maybe other primates don't quite have, that we clearly do have, it suggests that in effect that articulatory capacity was around maybe half a million years ago, 500,000 years ago, before we split, and that Neanderthals were probably capable of this. I think the very fact that Neanderthals and humans did interact with each other, interbred with each other, suggests that probably they had their own culture and language, probably as sophisticated as ours.
We've also identified subsequently that a bone that was thought to be different in Neanderthals, the hyoid bone, this little horseshoe-shaped bone at the very top of your larynx, at the bottom of your throat, that sort of holds the larynx in place, is also very much like our modern hyoid bone, which sort of showed that our larynx, our vocal tract, was not that radically different between Neanderthals and humans. So they were probably capable of producing the kinds of sounds that we could have produced as well. So for a lot of anatomical and evolutionary reasons, I had no reason to doubt that Neanderthals were our equals. It's sort of looking at the inside as opposed to the outside sources of evidence.
Nick Jikomes 2:06:39
So we've spent most of our time so far talking about the past, more or less, and I want to spend a little bit of time talking about the future of language. And, you know, we'll give ourselves the liberty of just speculating; we can do that. So I'm really interested in the relationship between technology and communication. I think most people are in some sense. Obviously, in our evolutionary history, there's been a relationship there; we were talking about stone tools and cooperation, how social structures and social relationships connect to tool making, connect to this ability to use symbols that we've evolved. I'm curious what you make of modern technology. So I'm thinking about social media apps, I'm thinking about the emoticons and all of the icons that we use more and more to communicate with each other, and, you know, all of the apps that we have. And then again, connecting that to what we were talking about just a few minutes ago, this use-it-or-lose-it principle in evolution. In what ways do you think technology might impact the trajectory of human language evolution? And more specifically, do you think it's possible that our increased reliance on technology to offload our mnemonic strategies (like, I don't need to remember phone numbers anymore; my memory needs now are very different than they were when I was a child, for example), and our increased use of iconographic modes of communication, might actually degrade our ability to use symbolic representation?
Terrence Deacon 2:08:20
It's a really interesting question. First of all, for evolution of the nervous system to respond to this, we're talking about hundreds of generations. So yes, if this capacity persists for the next few thousand years, and I would say we probably have to go to 10,000 years to have any really significant impact on the nervous system, maybe. But, you know, I don't think I can speculate 100 years out, much less 10,000 years out, much less 100,000 years out. So I don't know about that; that's hard to say. But I like to think about something we know about in this regard: what happened with written language, how written language affected us. My favorite example comes from Plato's Phaedrus, in which Plato worries that writing, which is becoming available to the Greeks, and the written word are going to decrease human intellectual capacity. That somehow, now we don't have to remember the great epics, we just look them up, we can just read them; somehow we'll become less intelligent in this process. What's happened is, of course, not quite the same. As we've been able to offload some of these memory capacities, we've gained other things in their place. And I think this has to do with the flexibility of our capacities. The other thing that I think is interesting about written language in particular, and all written forms, including written musical forms, is that to some extent they have evolved as well. Many of the very earliest stages of written language are not phonetic languages; they're logographic languages, that is, they picture what they refer to. Their semantics is iconic, but it's about symbols. And in fact, to make them mnemonic, it's easy to have them look like something. The same is true, you know, in that we have a lot of mnemonic supports even in language, as artificial as it is; words like pop, for example, sound like things that are popping. So we do have a lot of mnemonics in this. But what happened in written language, particularly in the West, beginning in the Middle East, is that we shifted quickly from written symbols, or written tokens you might say, that show their relationship to meaning iconically, as a direct connection, to symbols or tokens that stand not for things but for the sounds of words. So shifting to these phonetic alphabets from iconic systems suddenly made possible a very different way of thinking, a very different way of doing things like mathematics, for example, because we can now make it out of letters. So you can compare it; people have done these comparisons with various populations that were still using some degree of logographic writing systems. I should say that all of those systems, the character systems we think about in Chinese and Japanese and Korean, do have some non-logographic features, some sound features and so on, associated with them, so they're not simple in that respect. But what's happened is that now our writing system is iconic of sound, in the way that those were iconic of meaning. And that allowed us to sort of shift our way of storing information and transmitting information, so that writing became something that was a little bit different. But it also meant that whereas in China, if you can read characters, even if you can't understand somebody else's speech, you can write it and read it, because it's directly linked to its referent, its meaning, whereas I can't read
Russian, Finnish, Swedish, make any sense of it, even though it's using many of the same sound characters. What's happened is because it's iconic of sound, and the sounds of those languages have changed.
now they have become mutually unreadable. So there's this interesting sort of divergence that has happened in the West and the East with respect to the ways in which writing has evolved. Writing has evolved for certain purposes, under certain selection pressures. And we can learn a little bit about our future, I think, by looking at this, because emojis are logographic; that is, they directly refer to something. And yet we're now using them to refer, in a sense, in another, indirect way; it's not quite the same. So I'm curious as to what's going to happen with this, but I'm not sure exactly how to think about it, except in the following sense. One thing that's happening by virtue of the internet is that we're offloading a lot of our knowledge into this sort of public space. Now, that was always the case with language. That is, a language is not something that's inside of a person. It's picked up from the social group, it's passed on in the social group. It's a niche that we find ourselves in; we adapt, we take it in, and we pass it on. So in one sense, I like to think of us as symbolically eusocial. Eusociality is a term that we use to talk about, for example, ants and honeybees. They're so social that they can't exist outside of the social world, because their entire way of being depends upon the continuation of this social organization. But we're symbolically eusocial: symbols are not able to be passed on intrinsically, by genetics, so we don't have innate words. As a result, we're symbolically eusocial, and we have now blown this out of proportion. With these electronic media, we're eusocial not just in the languages we're using, but now in knowledge. You know, like you said, you don't know all the phone numbers of your friends; they're on your phone. The problem with that dependency on something extrinsic is that when we lose it, we are much weaker. We all know that if somehow there were a huge solar flare and pulse that wiped out all electronic communications, we'd be in serious trouble, not just here in the United States but around the world. We are now so dependent on this. This suggests that to some extent we're much more a larger organism than single individuals; there's a kind of shift toward a superorganism, as some people would describe it, going on in our future. The question is, how much of that will we allow to take place? How much will artificial intelligence begin to do some of these things for us, and help us sort of link together? I think the question about what will happen to humans has a lot to do with how we're going to more and more distribute our cognition into the world, amongst each other, and into our things.
Nick Jikomes 2:16:29
I do want to ask you a little bit about artificial intelligence with respect to its current linguistic capabilities. We certainly have forms of machine intelligence, programs that we can make, that are capable of parsing sentences, defining words, and actually constructing sentences in relatively impressive, if sometimes awkward, ways. GPT-3 is probably the latest thing out there that people may have heard about: you can train it on some corpus of human text, and it can construct new sentences, which are usually fluent, technically fluent, but they're sort of awkward and funny in interesting ways. And it's clear that these systems don't quite have what you would call true fluency or true comprehension of the language in the same way that a human, even a human child, does. So, A, do you agree with that? And B, if so, what do you think is missing from the machine intelligence we have today that is preventing it from having that level of fluency?
Terrence Deacon 2:17:34
So first of all, I would say that the machine intelligence we have today, even the best of it, has no symbolic capacity, zero. So how is it able to do what it does, if that's true? And I think the clue here is that we have to feed it a huge corpus. We have to give it lots and lots of sentences. What's happening is that these systems are developing elaborate statistics of how words are related to each other. Remember that I described a sentence as a kind of diagram. A sentence has iconic and indexical relationships within it. Inferences, for example, deduction, are iconic and indexical, in the sense that if all men are mortal, and Socrates is a man, well, there are iconic features there. I mention Socrates, I mention mortality, I mention men, and they're in a particular relationship with each other; I can use the iconicity to sort of make the guess that therefore, you know, Socrates is mortal. There's iconicity and indexicality in the grammar and syntax of language. And to the extent that a large corpus has incorporated these iconic and indexical relations, the kinds of things that are not just in the thesaurus and dictionary but in encyclopedias, so to speak, there's structure there, and these systems capture that structure, and as a result can kick it back to us. But notice that what they're capturing is the iconic and indexical structure. What we would want to say is that they don't quite understand what they're saying. They don't know what they're saying. And as a result, we'd say it's simulating language, and it can answer questions that we might put to it, because that structure is there in our question and it's there in the corpus that it has. On the other hand, a young child can acquire new words in a very short period of time, a lot of words at once, with very few occasions of hearing them being used. When we try to train a network to do this, we oftentimes need millions of utterances to do it right. And that's because it's acquiring it in a very different way. Children are going right to the symbols. The surface doesn't matter so much. In fact, if they don't get the syntax right, so what? They just want to communicate, they just want to understand. And that's why they jump to the symbolic side of it as fast as they can, and learn good grammar and syntax later. Good grammar and syntax, you know, comes in as they go to school, and finally when they have to learn to write. The problem with writing compared to speaking is that a lot of the cues we have for indexical and iconic features in speaking to each other come from a set of assumptions we've already set up, that have been built up by our interactions already, and by interactions with others. In writing, we don't have that pragmatic context to rely on. So all the iconic and indexical cues that are extrinsic in speech, and in one-to-one interactions, have to be poured into the writing; therefore the writing has to be much more precise in its grammar and syntax. So when we look at transcribed speech, if you were to transcribe what we were talking about today, it would look very ungrammatical in lots of ways. But if we then had to write it down, we would want to correct it, fix it, so it's not quite so ungrammatical. Why? Because if it's written, the clues aren't there, the cues aren't there.
So in many respects, there is this thing in artificial intelligence that I would say we precisely have not figured out: how to create machine competence to interpret things symbolically. Now, am I saying that it can't be done? No, I think it could be done. But because we don't think about the problem in terms of icons, indices, and symbols and the differences between them, having collapsed all of that down to just this associational notion of words and meanings, we don't even think that it's necessary. We're building these devices without even thinking about the difference between iconic, indexical, and symbolic reference. So we have these devices that produce things like speech and understand it in a limited sense, like, you know, I can ask Alexa to do things, to turn on my lights and things like that, to order things from Amazon,
by speech, but there's nobody home on the other side; there's no symbolic interpretation. And so one of the excitements that I have is that once we begin to rethink this problem this way, once we get below the level of the surface of just these correlations, we might be able to build very different ways of doing what we call today artificial intelligence. In which case, it probably wouldn't be artificial anymore.
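To make the corpus-statistics point above concrete, here is a minimal, hypothetical sketch in Python of how a system can produce fluent-looking word sequences purely from co-occurrence statistics, with nobody home behind them. It is not GPT-3's actual architecture, and nothing in it comes from Deacon; the tiny corpus, the bigram counts, and the generate function are illustrative assumptions only.

```python
# Toy bigram "language model": it only tabulates which word follows which,
# then parrots statistically plausible continuations. Purely illustrative;
# real systems such as GPT-3 use large neural networks over huge corpora,
# but the point that they exploit relational statistics is the same.
import random
from collections import defaultdict

# Hypothetical miniature corpus (an assumption for illustration).
corpus = (
    "all men are mortal . socrates is a man . "
    "therefore socrates is mortal ."
).split()

# Count which words have been seen to follow which.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length=8):
    """Extend a sequence by repeatedly sampling a word seen to follow the last one."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("socrates"))
```

Run a few times, generate("socrates") yields strings like "socrates is a man . socrates is mortal ." that look like inference but are only statistical echoes of the training text, which is, in a toy way, the gap Deacon describes between capturing relational structure and interpreting it symbolically.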
Nick Jikomes 2:23:01
You know, I'm thinking right now about the fact that you're saying there's no symbolic representation present in these artificial intelligence systems; you also just said, in reference to your Alexa, that no one's home. So I'm interested in your take on the relationship between language and symbolic thought and the idea of having a self and self-reflective consciousness. What do you think the relationship between language and self-reflective consciousness, the type of consciousness that we often uniquely associate with humans, actually is?
Terrence Deacon 2:23:37
Right. I think that's a good way to think about it. It's not the consciousness question, because I have no doubt that dogs are conscious and cats are conscious and so on. But they clearly do not have this kind of what you described as self-reflective consciousness. And I think one of the powers of symbols is this: I don't need symbols if I'm alone, if I'm alone on a desert island, never interacting with anyone else, because all of my experience is available to me. It doesn't need to be translated into another mind that has a totally different experiential background. Symbols are necessary to do this translation from self to non-self, to take my experiences and make them available to you and vice versa. In this process, my experiences, which are iconic and indexical, have to be put in another form, because you don't have the same icons and indices I do. But that means that symbols are, in effect, impersonal. Not only are they acquired from the society, not only am I parasitic on my symbolic culture, but it is a medium of communication that is, in a sense, separated from me, displaced from me and from my experience. The result, I think, is that because of our symbolic ability we have this ability to have a distance from ourselves. If I am saying these things, then they're not just my thoughts; they're my thoughts in public. And that means I can now also begin to think about the public perspective on me, what other people are thinking about me. Why? Because I want to get their knowledge into my head, and I want to pass my knowledge on to other people. Actually, I was just talking to somebody recently who has been studying chimpanzee communication, and who described a chimpanzee that came back from a traumatic experience, separated from the group, coming into contact with another troop, and was now having to interact with his natal troop, very disturbed, very upset. There was no way that chimpanzee could communicate that experience: what had happened to me, what's going on, what was my past like? That is, there was nothing of what philosophers have called intersubjectivity, no ability to get into each other's heads, to know what experiences we've had. But we have this incredible capacity, because language has given us, in a sense, this intermediary form of communication, of representation, that allows me to represent what I'm thinking to you, and vice versa. So we beings are not just in ourselves, in our own experience; we're in each other's experiences all the time. And in this sense we're also a more distributed mind, a more distributed being, a more distributed kind of consciousness, which I think is very different from any other species. And this goes back to work by Vygotsky, back in the 1920s and 30s, who was thinking about children: when they mature and acquire language, they begin to talk to themselves. He suggested that when that happens, children become their own parents. In a sense, their parents would tell them to do certain things and to act in certain ways; as they mature, they repeat these things back to themselves in their mind's eye, so to speak, their mind's ear, and they sort of become people outside themselves communicating to themselves.
So this is that reflective move. If we can now understand this relationship from another perspective, which is what symbols have given us, it gives us perspective on ourselves. Unfortunately, it also gives us knowledge of our own impermanence: the fact that I wasn't around 500 years ago, that I won't be around 50 years from now. It gives us a bunch of knowledge that maybe I'd rather not have; if I were a dog, I'd feel, you know, less worried about this sort of thing. So symbols have given us this incredible gift of intersubjectivity, which I think we see as one of the greatest qualities that we share with each other, and that other species lack. And I think it's responsible, in part, for our moral and ethical traditions and so on: being able to get out of ourselves, see ourselves from another perspective, and get a sense of what other people are experiencing.
Nick Jikomes 2:28:37
So it sounds like you're saying that this ability to use symbols is actually necessary to make self-other distinctions. And so, in other animals that can pass the mirror test, if you buy that as a measure of the ability to recognize self, does that mean that they must then have the ability to deal with symbols, even if it's not quite at the level that we can?
Terrence Deacon 2:29:05
Not exactly. I do think that, first of all, having a little bit of sophistication with, you know, looking at reflections in water and so on probably has some role to play in all of this. Let me try to put it another way: what we're looking at, when I think about self-reflection, is what somebody else sees when they see me, what somebody else is thinking when they're getting my thoughts in words. It's the other's perspective looking at me. Now, the question is, is that what the chimpanzee is getting? If it recognizes a spot on its forehead, is it seeing that in terms of what another sees of me? Or is it seeing it simply as a reflection, like in a mirror or in water? One of my favorite examples of this actually comes from the Christian Bible, from Adam and Eve. I think that the apple that Adam and Eve bite is knowledge of symbols, is symbolic capacity; it's knowledge of good and evil. And one of the things that they do, once they've awakened after eating the apple, is cover their private parts, the fig leaf story, right? They're suddenly aware of what the other person sees of them. So I think in a cryptic, metaphoric sense, it's telling us that somehow Adam and Eve at this point crossed this threshold. But it's not just knowledge of each other's bodies, knowledge of what another sees in them; it's also knowledge of good and evil. I don't think there is knowledge of good and evil in other species, because I think it requires this intersubjective capacity. I think there are degrees of empathy possible. Why? Because we observe, you know, the crying behavior, the screaming behavior of other species, and we experience it in ourselves, we know how it comes out of ourselves; I think there's a sort of reflective empathy in that response. But with the kind of empathy where we say that this person is a good person, or this person has done something evil, done something unkind, now we're already in the realm of recognizing the importance of intersubjectivity. One of the reasons why we don't hold children responsible for some of these things is that they can't quite get into each other's minds, can't quite simulate the experience that they're causing in someone else. It's why we don't hold people in psychotic states responsible for making appropriate decisions like this, because they can't do that simulation well. But for normal people, we hold each other responsible for being able to do that kind of simulation. You hold me responsible for knowing something about what effect I'm having on your experience, and I hold you responsible for that. That piece of it is where the sort of morality and ethics comes from; I think it's something that has to require symbols to begin with.
Nick Jikomes 2:32:46
We're not going to have time to really go as deep as we could into some of your other work, but I want to talk about the concept of emergence. So we talked about how language emerged in the human brain, and that whole story is very interesting. But you have this other, very related body of work. A lot of it is in this book called Incomplete Nature, where you dwell on how mind, how consciousness, emerges from the brain. And you also talk about how life emerges from non-life. At first blush, one might not think those are directly connected, but you seem to think that they are. And so I'm wondering if you could just talk a little bit about the phenomenon of emergence. What does it mean for something to be an emergent phenomenon? And how do you connect the dots, at a high level, between things like the emergence of mind and the emergence of life?
Terrence Deacon 2:33:43
All right, well, you've obviously opened up a huge can of worms. And I should say that the book you're referring to, I apologize for its length, but it's well over 500 pages. So it's not an easy question; it's a question about the very foundations of modern science, I think. To understand this notion, one of the reasons I wrote the book: I actually started to write a book mostly about the problem of consciousness. It was going to be titled Homunculus, as a kind of tongue-in-cheek argument that there is no homunculus in the brain, but that we still need to explain the homunculus sense we have, the agency, the feeling of self. Although there's no place in the brain that does it, somehow we need to explain how it comes about. But I realized, in working on this problem, that in fact we didn't even have that answer for life itself. You know, what makes a mind a mind, as distinct from something else? It's a version of the same problem of why life isn't just mechanism, why thinking isn't just computing. And what I realized is that we need to understand this basic notion of what self is. Every organism has something we would call self, not in terms of self-consciousness like we've been talking about, but in that they're organized around the preservation of self. Even viruses, I can talk about viruses: they're not alive in the sense that even a bacterium is alive, but we know that they're organized around the persistence of themselves and the transmission of themselves. We know that getting a vaccine is working against the virus's self-interest, so to speak. So there's a very, very primitive notion of self that we wouldn't ascribe to mere chemistry. Viruses are not just chemicals; they're chemicals organized with respect to maintaining that organization, preserving that organization against being disrupted. Now, what I was describing when I talked about ethics belongs to a broader category we call normativity. Norms are things with respect to which you can be right or wrong, good or bad, correct or incorrect. There is no good or bad chemistry, no right or wrong chemistry. There is no chemical reaction that is better than another chemical reaction, unless it's in the service of something, usually something alive. But for a virus, there are good and bad environments, there are good and bad hosts, there are toxins that are bad for it. Even a virus, as simple as it is, has normative character. So the transition to life is a transition to something like the very basis of self, and the very basis of normativity. How is it that there's a transition that looks like it's just a chemical transition, where something is at one point in time made up of molecules that have no normative character, and yet, as a collection, the collection is normative? It has a self; it can repair itself if damaged. And the key is that it's working against one of the most general features of the universe, the increase of entropy, the second law of thermodynamics. All living things are organized in such a way that they are, in a sense, resisting this basic tendency of all of nature. And living things have, of course, maintained themselves on the surface of the earth for, you know, billions of years, against this ubiquitous tendency for things to break down. The organization that we call life has been this transition from non-normative to normative chemistry, from non-self to self.
And normativity itself, over the course of three and a half billion years, has just gotten more and more complex, and added level upon level upon level; minds and brains and consciousness and, eventually, symbolic communication are just some of the last layers in this process. So my argument, the reason I wrote the book, in fact, was to say: look, we need to explain how normativity, how end-directedness, how self comes into the world at this very simplest, most basic level, if we want to have even a chance at making sense of it at the level of a mind. So I decided that I had to go back to the beginning and ask these very basic, you might say almost philosophical, questions about how this could happen. What kind of molecular system, what is it about a molecular system, that would show the crossing over from one kind of form to another type of form, one that has end-directedness to it? The term that's historically associated with this, philosophically, is teleology, or purpose, end-directedness. An end is something that doesn't yet exist, but there are things that we do that are organized to achieve ends. Physical causality, chemistry, is not trying to achieve ends. All living things are, if for no other reason than that one of the ends is to keep from degrading, to keep from being eliminated. So what we're seeing in the origins of life is that the transition is not a gradual, slowly evolving thing. I think it's actually quite sudden, and maybe accidental to some extent, in which the organization itself becomes the critical thing, not the stuff. If you think about that in terms of you and me: the stuff that I was made of 50 years ago is gone. It's passed on, but the organization has had continuity. From the last universal common ancestor of all DNA- and RNA-based life forms, there has been an organizational continuity; it's been unbroken. I'm linked to that by this unbroken chain of continuity of form. It's the form that's been maintained, the organization that's been maintained, even though the matter and energy have come and gone, you know, trillions of times. So the question is, what kind of organization is it? Because it's not new matter, not new energy, it's new organization. How did a new organization come into the world, such that the organization, not the stuff, kept itself going? And this is why it oftentimes seems like a disembodied something. It is a material process, a chemical process, but it's the organization that has, you might say, exchangeable matter and energy. In the same way, the information that I'm producing with sound is being turned into electronic signals and popped back into sound on your end, and maybe somebody will take some of this and turn it into text on a page at some point in time. In that process, the form has been maintained, the organization has been maintained, but the embodiment has changed. That feature, the one we associate with knowledge, with information, is what life is about. It begins very, very simply; the origins of life had to be very simple, and yet it had to do something radically different from the rest of chemistry and physics. The origins-of-life question is so fascinating because it had to be ultra simple and ultra divergent from the rest of chemistry and physics. That's the kind of conundrum that's just got to draw us in.
But unless we can answer that question satisfactorily, I don't think we have a shot at understanding things like consciousness.
Nick Jikomes 2:42:43
Well, we've been talking for almost three hours; I think this is going to be my longest episode so far. I want to thank you for your time. So again, we spent most of our time talking about the book The Symbolic Species, which is all about the origin of language, and we sort of did a preview of Incomplete Nature here at the end. I would love to talk to you more about that stuff at some point; we could probably do another three-hour podcast, if you're willing to suffer through it at some point. Thank you again for your time. Are there any thoughts that you want to leave people with? Or perhaps, since The Symbolic Species was written almost 25 years ago, are you working on anything new related to language? Are there any books or thinkers out there working on this today that you might point people to?
Terrence Deacon 2:43:30
Ah, there are some; off the top of my head, it's going to be hard to sort of kick them out. I would say that one of the things I see happening now, and this does have to do with what links these two books together, is that we're beginning to realize that we can no longer think about life, minds, and computation, for that matter, in purely mechanistic terms. Although the Enlightenment was a time in which we tried to say everything could be explained mechanistically, that we can get rid of teleology, we can get rid of end-directedness, we can get rid of value talk, and maybe Darwin gets rid of design, you know, and now we have this purely mechanistic universe, I think now that we've run into these problems with the origins of life, the nature of consciousness, and what our machines are capable of doing, we're forced to realize that maybe we're going to have to come back and deal with these questions that we thought we'd overcome, about meaning and value and teleology, about purpose. I think that's what the next century is going to be about: reintegrating purpose, value, and teleology back into the sciences. And as soon as we do that, I think it's going to be a radical change, because in some sense it's a figure-background change. I like to think about what I did in The Symbolic Species, and many of the things we've talked about, as figure-background shifts: looking not at the external but at the internal process, looking not at the appearance of communication but at the semiotics, the referential, the hidden, the absent, the not-on-the-surface features of it, the referential side of it; turning those questions upside down. I think a lot of what's happening these days is that we're being forced to do figure-background shifts. And this is also being generated by, you know, advances in electronic media, advances in artificial intelligence; it's forcing us to ask these questions. Why is computing not like thinking? And why is thinking not like computing? We need to answer those questions. In fact, that's the subtitle, or roughly the subtitle, of a book that I'm working on now.
Nick Jikomes 2:46:01
Do you have a title yet?
Terrence Deacon 2:46:03
Yes. So, I'm working on two books, and they're related in two ways. The one I just mentioned is called Beyond Bits: Why Brains Don't Compute and Machines Don't Think. It's an attempt to bring what was called information theory together with these theories of reference, of semiotic reference, to try to create a formal theory of that. The other thing I'm working on is a book I'm almost finished with now; it's called Falling Up: The Paradox of Biological Complexity. And it's an argument that says that increased complexity in biology is actually a less-is-more problem; that, in fact, things have gotten more complex not because they've added more parts, but because they've simplified and become more dependent on each other. And this has forced complexity; we're sort of backing into it, we're falling up into complexity. And I even think that the language story can be explained this way. We haven't gone in that direction today, but that's where I'm going with this: we have to rethink the evolutionary process. One way to think about it: I gave the example of vitamin C. We've become dependent upon vitamin C, we're effectively addicted to dietary vitamin C, simply because it's always been there. But because we have to find and eat vitamin C, we primates developed three-color vision. We primates developed taste cells that are responsive to sweet and sour in ways that other species are not. We've developed transporters for ascorbic acid in our blood that are associated with glucose transporter molecules. What's happened is that although we've lost one capacity, as this one gene has degraded, we've now distributed selection onto a whole variety of features in our bodies. Because this thing is now externally supplied, we have to adapt to that externalization; we've become much more complex in that we've had to add all of these new features just to handle what degraded. So something that was handled by one thing is now handled by dozens of things. It's become more complex by virtue of having lost function, not just gained function. So it turns out I've been looking at lots of different features of evolution, what we sometimes call major transitions or hierarchic transitions, that I think are driven in most cases by a degrading feature. That is a kind of less-is-more argument.
Nick Jikomes 2:48:57
Well, Terrence Deacon, thank you for your time. Hopefully I'll be able to talk to you again once you're about to release one of these new books. Have a good rest of your day.
Terrence Deacon 2:49:06
All right, then. Wonderful, great fun. I'm glad that we had this chance.