
  Introduction: Is Your Brain a Computer?

Transcript

In May of 2016, Aeon magazine published an essay by a fairly well-known and respected psychologist, Robert Epstein, called “The Empty Brain”. In the essay he argued that the root metaphor that has guided research in cognitive science for more than half a century, the computer metaphor, is wrong. “Your brain does not process information, retrieve knowledge or store memories”, he says. “In short: your brain is not a computer”. When this article came across my desk I was intrigued.

As a philosopher who’s taught courses on the philosophy of mind, technology, and artificial intelligence, I read this as a bold statement that I could see some philosophers endorsing. But when I learned that Epstein is an experimental psychologist, and that he’s edited an anthology on conceptual issues in artificial intelligence, I admit I was a bit surprised.

In psychology and cognitive science, when people talk about the brain being a computer, they can mean different things by that statement, so it’s not hard to find a range of opinions on the usefulness of the computer metaphor. But lots of people might reject the “computer” language but still endorse the language of information processing - that the brain may not be performing computations in the way that artificial computers do, but it’s still accurate to describe it as processing information in some way.

But Epstein doesn’t even accept this. He thinks this whole network of descriptions, of our brains processing, retrieving, representing, storing information — this is all tied to the computer metaphor, which he believes, is deeply misleading. It’s a bad metaphor for understanding how the brain works.

Well over the next few days and weeks the science blogosphere and internet magazines lit up over this article, and quite a few response articles were written. Most of them were critical, but some were cautiously supportive.

“Yes, your brain certainly is a computer”. This is from a computer science professor.

“Epstein’s empty essay”. This author called it a “badly misguided essay” with a flawed argument that “triggered his rage”.

The comments section for the article itself, in Aeon magazine, was mostly critical, with over 400 comments at the time I grabbed this screenshot. But there was quite a bit of interesting disagreement about what exactly was wrong with Epstein’s reasoning. The real action was on social media though, where the article has been shared over 150,000 times on Facebook, so it obviously touched a nerve.

And then there were the mildly supportive responses. The author of this response is a neuroscientist. He says that the essay overstates many points but fundamentally, the position that Epstein is endorsing is correct.

So, I’m looking at these responses and thinking to myself, “I can’t imagine this debate has made its way too far into the general public. But I also can’t imagine how the average person coming across this story would even start to assess it.”

And then I thought, maybe there’s an opportunity here to use this debate as a framing device for a video course on the science and philosophy of minds, brains, computers and artificial intelligence. A course that doesn’t presuppose any science or philosophy, that is accessible to any reasonably intelligent layperson, but that opens up a debate that has enormous implications for society, one with the potential to fundamentally alter how we understand what it means to be human.

Is your brain a computer? What do people mean when they say this? What would it mean for it to be true? Why do so many scientists believe it is true? What are the implications of this view, for our understanding of what a mind is, and what it means to be a conscious, thinking being? Could an artificial being, like a robot, ever be conscious in the way that we are? Can it have thoughts in the way that we have thoughts? Is it possible to upload our minds into a computer, to transfer our psychological identity into another physical body, or exist as a digital entity entirely, an abstract pattern of information?

We’ve all seen these ideas portrayed in science fiction movies for many years.

One of the most iconic portrayals of artificial intelligence is HAL, the paranoid ship’s computer from Stanley Kubrick’s 2001: A Space Odyssey. The film doesn’t take a stance on whether HAL is fully conscious in the way that we are, but the crew certainly relate to him as an intelligent being with his own set of beliefs and motivations and feelings. HAL is an example of a disembodied or computer-based artificial intelligence, but there are many other examples of embodied artificial intelligences, which we associate with the terms “robot” or “android”.

In the movie I, Robot, robot helpers have become widespread, but the story revolves around two kinds of AIs — Sonny, a new type of robot that appears to have crossed the threshold into self-awareness, and the threat posed by a central AI computer called VIKI, which has designs of its own on humanity. So this film has both kinds of AIs — robotic, embodied AIs and computer AIs, on the model of HAL.

Robotic AIs are a different archetype for artificial intelligence than computer AIs because you’re essentially putting a computer brain in a physical body that has localized sensory systems, that can move around and interact in physical space, and the body and brain have to be controlled as an integrated unit. With a computer AI, like HAL or VIKI, they may have access to data from external sources, but they’re not localized, embodied entities in the way that robots or androids are, and that makes a difference to how we imagine intelligence and consciousness arising in these kinds of systems.

More recently in Ex Machina we see a story about a new model of intelligent robot named Ava that is being tested to see if a human subject, Caleb, can come to relate to Ava as a conscious, intelligent, self-aware being, even though he knows she’s an artificial being, a robot.

There’s a very interesting scene where Nathan, Ava’s creator, is explaining to Caleb why he gave Ava sexuality and a capacity for sexual response. Nathan says, basically, that all consciousness we know of is embodied consciousness, and every form of embodied consciousness that we know of has a sex drive of some sort, which connects it to the requirements of survival, reproduction and psychological bonding.

That’s an interesting idea, whether there are any fundamental drives that are a requirement for embodied, intelligent life as we know it.

But the more serious question is whether embodiment is required for consciousness at all.

The other side of this debate is explored in movies like Her. Joaquin Phoenix plays Theodore, a writer who falls in love with Samantha, an intelligent computer operating system voiced by Scarlett Johansson. Samantha doesn’t have a body, but she communicates in such an authentic and relatable way that we can accept the premise that a human being could relate to Samantha as a fully realized person, and she certainly seems genuine in her love for Theodore. The love story quickly gets complicated, however, as Samantha learns to rewrite her own code and evolves at an exponential rate, so that she has a harder and harder time relating to Theodore, and eventually they have to part ways.

This idea of exponentially increasing artificial intelligence is another idea that has entered popular consciousness in recent years, and it’s explored in a different way in the movie Transcendence, which stars Johnny Depp as a computer scientist who is trying to create a sentient, intelligent computer, which he predicts will usher in a new golden age as machine intelligence increases at an exponential rate and eventually overtakes human intelligence in an event that is referred to as the “singularity”.

The movie isn’t very good, but I was interested to see ideas from the so-called “singularity movement” get such a prominent representation in recent movies like Transcendence and Her. The figure most associated with the concept of a technological singularity is Ray Kurzweil, through books like The Singularity Is Near, though he didn’t originate the idea.

Kurzweil is the figurehead for a broad movement that has a surprising number of backers in Silicon Valley and the high-tech startup world, which embraces the idea of exponentially accelerating technological change, and is trying to steer technology in this direction.

But one could argue that everything that I just talked about, regarding the plausibility of artificial intelligence and sentient robots and the singularity, is predicated on the computer model of the brain, and of the mind as, in some sense, a manifestation of a computational process being implemented in the hardware of our physical brains and bodies.

But if this model is wrong, as Robert Epstein and others are claiming — if it’s wrong to even think of the brain as processing information — then the implications seem to be dramatic.

It means that worries about us developing conscious, human-like robots are misplaced — this isn’t happening.

It means that there’s no reason to think we’ll ever be able to upload our consciousness into a computer, as Kurzweil and others believe.

It means that we shouldn’t worry about the possibility that we may all be living in a digital simulation already, a view that is taken seriously by a number of very smart people.

These ideas really are just science fiction, not precursors to science fact.

So, this debate clearly has implications for these questions about the future of technology. But it also raises a second question.

If the brain is not a computer -- if it doesn’t store memories, if it’s not even an information processor -- then what is the brain doing? What’s the alternative story? What is thinking, if not the processing of mental representations or symbols? What is remembering, if not the retrieving of stored memories?

I’m not a cognitive scientist, I’m not a psychologist, but as a philosopher I’ve been following these debates for many years, and I’ve had the privilege of learning from a lot of smart people and teaching courses on this topic at the university level.

But these debates are closed to the public, for the most part, and that saddens me, because these are among the most profound and exhilarating questions we can ask.

So, what I’d like to do with this video course is to act as a tour guide of sorts, through the conceptual issues that connect debates about the nature of computers to the nature of thought and the brain, and to consciousness.

At the end of this tour, my goal is that you have a solid grasp of what the issues actually are, the arguments for and against the major positions, and an understanding of the range of reasonable positions one can have on these questions.

When we’re done, you’ll know more about the modern cognitive science revolution than 99% of the population, and you’ll be able to read and critically assess essays like this one, in a way that few other people can.

In the next video I’ll give you a more detailed breakdown of the organization of the course, so let’s hop over there and I’ll run through that for you.