Free will, AI, and vibrating vests: investigating the science of Westworld
HBO’s Westworld has returned for its second season at last, complete with bloodletting, espionage, and self-aware artificial intelligence (AI). After humanlike robots at a Wild West theme park gained consciousness and rebelled against their human owners in the first season, David Eagleman came on as the show’s scientific adviser.
Eagleman, a neuroscientist at Stanford University in Palo Alto, California, spoke with Science about how much we should fear such an AI uprising.
This interview has been edited for brevity and clarity.
Q: How did you get involved in the show?
A: I was talking with one of the writers, and I asked who their scientific adviser was. Turns out, they didn’t have one. So that’s how I got on board. Then I went to [Los Angeles, California,] and had a long session with the producers and writers, for about 6 hours, maybe 8, about free will and the possibility of robot consciousness.
I also showed them some tech that I’d invented. I gave a TED talk a few years ago on this vest with vibratory motors on it. That’s now part of the season two plot. I can’t tell you anything about it. The real vest vibrates in response to sound, for deaf people, but in Westworld it serves a different purpose, giving the wearers an important data stream.
Q: What else did you talk about?
A: What is special, if anything, about the human brain, and whether we might come to replicate its important features on another substrate to make a conscious robot. The answer to that, of course, is not known. Generally, the issue is that all Mother Nature had to work with were cells, such as neurons. But once we understand the neural code, there may be no reason we can’t build it out of better substrates, so that it runs the same algorithms in a much simpler way. This is one of the questions addressed this season. Here’s an analogy: For centuries we wanted to fly like birds, so everybody started by building devices that flapped wings. But eventually we figured out the principles of flight, and that enabled us to build fixed-wing aircraft that can fly much farther and faster than birds. Possibly we’ll be able to build better brains on our modern computational substrates.
Q: Has anything on the show made you think differently about intelligence?
A: The show forces me to consider what level of intelligence would be required to make us believe that an android is conscious. As humans we’re very ready to anthropomorphize anything. Consider the latest episode, in which the androids at the party so easily fool the person into thinking they are humans, simply because they play the piano a certain way, or take off their glasses to wipe them, or give a funny facial expression. Once robots pass the Turing test, we’ll probably recognize that we’re just not that hard to fool.
Q: Can we make androids behave like humans, but without the selfishness and violence that appears in Westworld and other works of science fiction?
A: I certainly think so. I would hate to be wrong about this, but so much of human behavior has to do with evolutionary constraints—competition for survival, for mating, for eating. This shapes every bit of our psychology. And so androids, not possessing that history, would certainly show up with a very different psychology. It would be more of an acting job: they wouldn’t necessarily have the same kind of emotions as us, if they had them at all. And this is tied into the question of whether they would even have any consciousness—any internal experience—at all.
Q: In Westworld and Blade Runner, programmers give androids vivid memories to enhance their humanness. Are such backstories necessary?
A: Humans have memory so we can use our experience to avoid repeating mistakes. But memory is also what allows us to simulate the future, as recent decades of neuroscience research have shown. Memory gives us the building blocks we use to construct our model of what happens next. One of the things that is often unappreciated about the famous amnesic patient H.M., who couldn’t form new memories, is that he was also unable to simulate possible futures. If you take someone with a bad case of amnesia and you say, “I want you to picture your vacation to Hawaii next month and what it’s going to be like standing on the beach,” they’ll say, “I’m drawing a blank.” So one advantage of giving robots vivid memories, in theory, would be to steer how they put together futures.
Q: Are there any moments of especially humanlike behavior in the show?
A: In my book Incognito, I describe the brain as a team of rivals, by which I mean you have all these competing neural networks that want different things. If I offer you strawberry ice cream, part of your brain wants to eat it, part of your brain says, “Don’t eat it, you’ll get fat,” and so on. We’re machines built of many different voices, and this is what makes humans interesting and nuanced and complex. In the Westworld writers’ room, I pointed out that one of the [android] hosts, Maeve, in the final episode of season one, finally gets on a train to escape Westworld, and then decides she’s going back in to find her daughter. She’s torn, she’s conflicted. If the androids had a single internal voice, they’d be missing much of the emotional coloration that humans have, such as regret and uncertainty. Also, it just wouldn’t be very interesting to watch them.