2016 Isaac Asimov Memorial Debate: Is the Universe a Simulation?
American Museum of Natural History
What may have started as a science fiction speculation—that perhaps the universe as we know it is a computer simulation—has become a serious line of theoretical and experimental investigation among physicists, astrophysicists, and philosophers.
Neil deGrasse Tyson, Frederick P. Rose Director of the Hayden Planetarium, hosts and moderates a panel of experts in a lively discussion about the merits and shortcomings of this provocative and revolutionary idea. The 17th annual Isaac Asimov Memorial Debate took place at The American Museum of Natural History on April 5, 2016.
2016 Asimov Panelists:
Professor of philosophy, New York University
Theoretical physicist, Massachusetts Institute of Technology
Theoretical physicist, University of Maryland
Theoretical physicist, Harvard University
Cosmologist, Massachusetts Institute of Technology
The late Dr. Isaac Asimov, one of the most prolific and influential authors of our time, was a dear friend and supporter of the American Museum of Natural History. In his memory, the Hayden Planetarium is honored to host the annual Isaac Asimov Memorial Debate — generously endowed by relatives, friends, and admirers of Isaac Asimov and his work — bringing the finest minds in the world to the Museum each year to debate pressing questions on the frontier of scientific discovery. Proceeds from ticket sales of the Isaac Asimov Memorial Debates benefit the scientific and educational programs of the Hayden Planetarium.
In this episode of the Waking Up podcast, Sam Harris speaks with Richard Dawkins at a live event in Los Angeles (second of two). They discuss Richard’s experience of having a stroke, the genetic future of humanity, the analogy between genes and memes, the “extended phenotype,” Islam and bigotry, the biology of race, how to find meaning without religion, and other topics.
Nick Bostrom: What happens when our computers get smarter than we are?
I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other things. Some people think that some of these things are sort of science fiction-y, far out there, crazy. But I like to say, okay, let’s look at the modern human condition. (Laughter) This is the normal way for things to be.
But if we think about it, we are actually recently arrived guests on this planet, the human species. Think about if Earth was created one year ago, the human species, then, would be 10 minutes old. The industrial era started two seconds ago. Another way to look at this is to think of world GDP over the last 10,000 years, I’ve actually taken the trouble to plot this for you in a graph. It looks like this. (Laughter) It’s a curious shape for a normal condition. I sure wouldn’t want to sit on it. (Laughter)
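Bostrom's compressed-timeline analogy can be sanity-checked with a quick calculation. The sketch below (Python) scales Earth's history onto a single year, using rough assumed ages that are not stated in the talk: ~4.5 billion years for Earth, ~100,000 years for our species, ~250 years for the industrial era.

```python
# Scale Earth's ~4.5-billion-year history onto one calendar year and
# see where humanity lands. The input ages are rough, assumed figures.

earth_age_years = 4.5e9
year_seconds = 365.25 * 24 * 3600  # seconds in the compressed "year"

def scaled_seconds(real_years):
    """Map a real duration onto the one-year compressed timeline."""
    return real_years / earth_age_years * year_seconds

human_minutes = scaled_seconds(100_000) / 60
industrial_seconds = scaled_seconds(250)

print(f"Human species: ~{human_minutes:.0f} minutes old")    # ~12
print(f"Industrial era: ~{industrial_seconds:.1f} seconds")  # ~1.8
```

With these inputs the species comes out at roughly 12 minutes and the industrial era at under 2 seconds, consistent with the round numbers in the talk.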
Let’s ask ourselves, what is the cause of this current anomaly? Some people would say it’s technology. Now it’s true, technology has accumulated through human history, and right now, technology advances extremely rapidly — that is the proximate cause, that’s why we are currently so very productive. But I like to think back further to the ultimate cause.
Look at these two highly distinguished gentlemen: We have Kanzi — he’s mastered 200 lexical tokens, an incredible feat. And Ed Witten unleashed the second superstring revolution. If we look under the hood, this is what we find: basically the same thing. One is a little larger, it maybe also has a few tricks in the exact way it’s wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor. We know that complicated mechanisms take a long time to evolve. So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles.
So this then seems pretty obvious that everything we’ve achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind. And the corollary, of course, is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences.
Some of my colleagues think we’re on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence. Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You build up these expert systems, and they were kind of useful for some purposes, but they were very brittle, you couldn’t scale them. Basically, you got out only what you put in. But since then, a paradigm shift has taken place in the field of artificial intelligence.
Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data. Basically the same thing that the human infant does. The result is A.I. that is not limited to one domain — the same system can learn to translate between any pair of languages, or learn to play any computer game on the Atari console. Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has. The cortex still has some algorithmic tricks that we don’t yet know how to match in machines.
So the question is, how far are we from being able to match those tricks? A couple of years ago, we did a survey of some of the world’s leading A.I. experts, to see what they think, and one of the questions we asked was, “By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?” We defined human-level here as the ability to perform almost any job at least as well as an adult human, so real human-level, not just within some limited domain. And the median answer was 2040 or 2050, depending on precisely which group of experts we asked. Now, it could happen much, much later, or sooner, the truth is nobody really knows.
What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits of biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates in the gigahertz range. Signals propagate slowly along axons, at 100 meters per second, tops. But in computers, signals can travel at the speed of light. There are also size limitations: a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger. So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.
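The gap Bostrom cites can be made concrete with a little arithmetic. The sketch below (Python) compares the round, illustrative numbers from the talk; an assumed ~2 GHz clock stands in for "a present-day transistor."

```python
# Rough comparison of biological vs. electronic signalling, using the
# order-of-magnitude figures quoted in the talk.

neuron_firing_hz = 200     # typical peak neuron firing rate
transistor_hz = 2e9        # an assumed ~2 GHz present-day clock

axon_speed_m_s = 100       # fast axonal conduction, "tops"
light_speed_m_s = 3e8      # signal-speed ceiling in a machine

clock_ratio = transistor_hz / neuron_firing_hz
signal_ratio = light_speed_m_s / axon_speed_m_s

print(f"Clock-rate gap:   ~{clock_ratio:,.0f}x")   # ~10,000,000x
print(f"Signal-speed gap: ~{signal_ratio:,.0f}x")  # ~3,000,000x
```

Even with generous numbers on the biological side, the machine substrate comes out millions of times faster on both axes, which is the physical point being made.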
Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: AI starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn’t stop at Humanville Station. It’s likely, rather, to swoosh right by.
Now this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong — pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they’ll be doing so on digital timescales. What this means is basically a telescoping of the future. Think of all the crazy technologies that you could have imagined maybe humans could have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers, all kinds of science fiction-y stuff that’s nevertheless consistent with the laws of physics. All of this superintelligence could develop, and possibly quite rapidly.
Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I. Now a good question is, what are those preferences? Here it gets trickier. To make any headway with this, we must first of all avoid anthropomorphizing. And this is ironic because every newspaper article about the future of A.I. has a picture of this: So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios.
We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It’s extremely good at using available means to achieve a state in which its goal is realized. This means that there is no necessary connection between being highly intelligent in this sense, and having an objective that we humans would find worthwhile or meaningful.
Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins. Another example: suppose we give an A.I. the goal to solve a difficult mathematical problem. When the A.I. becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity. And notice that this gives the A.I. an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats: we could prevent the mathematical problem from being solved.
Of course, presumably things won’t go wrong in these particular ways; these are cartoon examples. But the general point here is important: if you create a really powerful optimization process to maximize for objective x, you better make sure that your definition of x incorporates everything you care about. This is a lesson that’s also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.
Now you might say, if a computer starts sticking electrodes into people’s faces, we’d just shut it off. A, this is not necessarily so easy to do if we’ve grown dependent on the system — like, where is the off switch to the Internet? B, why haven’t the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.
And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn’t find a bug? Given that merely human hackers find bugs all the time, I’d say, probably not very confident. So we disconnect the ethernet cable to create an air gap, but again, merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I’m sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.
More creative scenarios are also possible, like if you’re the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code — Bam! — the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will get out.
I believe that the answer here is to figure out how to create superintelligent A.I. such that even if — when — it escapes, it is still safe because it is fundamentally on our side because it shares our values. I see no way around this difficult problem.
Now, I’m actually fairly optimistic that this problem can be solved. We wouldn’t have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python, that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.
This can happen, and the outcome could be very good for humanity. But it doesn’t happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar context, like where we can easily check how the A.I. behaves, but also in all novel contexts that the A.I. might encounter in the indefinite future.
And there are also some esoteric issues that would need to be solved and sorted out: the exact details of its decision theory, how to deal with logical uncertainty and so forth. So the technical problems that need to be solved to make this work look quite difficult — not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.
So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed. Now it might be that we cannot solve the entire control problem in advance because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well.
This to me looks like a thing that is well worth doing, and I can imagine that, if things turn out okay, people a million years from now will look back at this century and say that the one thing we did that really mattered was to get this thing right.
In this episode of the Waking Up podcast, Sam Harris speaks with Maajid Nawaz about the Southern Poverty Law Center, Robert Spencer, Keith Ellison, moderate Muslims, Shadi Hamid’s notion of “Islamic exceptionalism,” the migrant crisis in Europe, foreign interventions, Trump, Putin, Obama’s legacy, and other topics.
Maajid Nawaz is a counter-extremist, author, columnist, broadcaster and Founding Chairman of Quilliam – a globally active organization focusing on matters of integration, citizenship & identity, religious freedom, immigration, extremism, and terrorism. Maajid’s work is informed by years spent in his youth as a leadership member of a global Islamist group, and his gradual transformation towards liberal democratic values. Having served four years as an Amnesty International adopted “prisoner of conscience” in Egypt, Maajid is now a leading critic of Islamism, while remaining a secular liberal Muslim.
Maajid is an Honorary Associate of the UK’s National Secular Society, a weekly columnist for the Daily Beast, a monthly columnist for the liberal UK paper the ‘Jewish News’ and LBC radio’s weekend afternoon radio host. He also provides occasional columns for the London Times, the New York Times and Wall Street Journal, among others. Maajid was the Liberal Democrat Parliamentary candidate in London’s Hampstead & Kilburn for the May 2015 British General Election.
A British-Pakistani born in Essex, Maajid speaks English, Arabic, and Urdu, holds a BA (Hons) from SOAS in Arabic and Law and an MSc in Political Theory from the London School of Economics (LSE).
Maajid relates his life story in his first book, Radical. He co-authored his second book, Islam and the Future of Tolerance, with Sam Harris.
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.
Sam Harris: Can we build AI without losing control over it?
I’m going to talk about a failure of intuition that many of us suffer from. It’s really a failure to detect a certain kind of danger. I’m going to describe a scenario that I think is both terrifying and likely to occur, and that’s not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I’m talking about is kind of cool.
I’m going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it’s very difficult to see how they won’t destroy us or inspire us to destroy ourselves. And yet if you’re anything like me, you’ll find that it’s fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn’t think, “Interesting. I like this TED Talk.”
Famine isn’t fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I’m giving this talk.
It’s as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?
The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that’s ever happened in human history.
So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an “intelligence explosion,” that the process could get away from us.
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario. It’s not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.
Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.
Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.
The first assumption is that intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called “general intelligence,” an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there are just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines.
It’s crucial to realize that the rate of progress doesn’t matter, because any progress is enough to get us into the end zone. We don’t need Moore’s law to continue. We don’t need exponential progress. We just need to keep going.
The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence — I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer’s and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there’s no brake to pull.
Finally, we don’t stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.
Now, just consider the smartest person who has ever lived. On almost everyone’s shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there’s no question he’s one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.
Sorry, a chicken.
There’s no reason for me to make this talk more depressing than it needs to be.
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can’t imagine, and exceed us in ways that we can’t imagine.
And it’s important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
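The "a week equals 20,000 years" figure follows directly from the assumed million-fold speedup. A quick sketch (Python, using the talk's round numbers) makes the arithmetic explicit:

```python
# If a machine thinks ~1,000,000x faster than its human builders,
# one week of machine time equals a million human-weeks of work.
# The speedup factor is the talk's illustrative assumption.

speedup = 1_000_000          # electronic vs. biochemical signalling
weeks_run = 1
human_equivalent_weeks = weeks_run * speedup
years = human_equivalent_weeks / 52  # ~52 weeks per year

print(f"~{years:,.0f} human-years per machine-week")  # ~19,231
```

The exact result is about 19,200 years, which the talk rounds up to 20,000.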
The other thing that’s worrying, frankly, is that, imagine the best case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It’s as though we’ve been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we’re talking about the end of human drudgery. We’re also talking about the end of most intellectual work.
So what would apes like ourselves do in this circumstance? Well, we’d be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we’re told not to worry is time. This is all a long way off, don’t you know. This is probably 50 or 100 years away. One researcher has said, “Worrying about AI safety is like worrying about overpopulation on Mars.” This is the Silicon Valley version of “don’t worry your pretty little head about it.”
No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.
And if you haven’t noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we’ve had the iPhone. This is how long “The Simpsons” has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.
The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: “People of Earth, we will arrive on your planet in 50 years. Get ready.” Would we just be counting down the months until the mothership lands? We would feel a little more urgency than we do.
Another reason we’re told not to worry is that these machines can’t help but share our values because they will be literally extensions of ourselves. They’ll be grafted onto our brains, and we’ll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, we’re told, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one’s safety concerns about a technology have to be pretty much worked out before you stick it inside your head.
The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don’t destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.
Now, unfortunately, I don’t have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we’ll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you’re talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.
But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with.
Paul Root Wolpe: It’s time to question bio-engineering
Today I want to talk about design, but not design as we usually think about it. I want to talk about what is happening now in our scientific, biotechnological culture, where, for really the first time in history, we have the power to design bodies, to design animal bodies, to design human bodies. In the history of our planet, there have been three great waves of evolution.
The first wave of evolution is what we think of as Darwinian evolution. So, as you all know, species lived in particular ecological niches and particular environments, and the pressures of those environments selected which changes, through random mutation in species, were going to be preserved. Then human beings stepped out of the Darwinian flow of evolutionary history and created the second great wave of evolution, which was that we changed the environment in which we evolved. We altered our ecological niche by creating civilization. And that has been the second great flow of our evolution — the last couple hundred thousand years, 150,000 years or so. By changing our environment, we put new pressures on our bodies to evolve. Whether it was through settling down in agricultural communities, all the way through modern medicine, we have changed our own evolution. Now we’re entering a third great wave of evolutionary history, which has been called many things: “intentional evolution,” “evolution by design” — very different from intelligent design — whereby we are actually now intentionally designing and altering the physiological forms that inhabit our planet.
So I want to take you through a kind of whirlwind tour of that and then at the end talk a little bit about what some of the implications are for us and for our species, as well as our cultures, because of this change. Now we actually have been doing it for a long time. We started selectively breeding animals many, many thousands of years ago. And if you think of dogs for example, dogs are now intentionally-designed creatures. There isn’t a dog on this earth that’s a natural creature. Dogs are the result of selectively breeding traits that we like. But we had to do it the hard way in the old days by choosing offspring that looked a particular way and then breeding them. We don’t have to do it that way anymore.
This is a beefalo. A beefalo is a buffalo-cattle hybrid. And they are now making them, and someday, perhaps pretty soon, you will have beefalo patties in your local supermarket. This is a geep, a goat-sheep hybrid. The scientists who made this cute little creature ended up slaughtering it and eating it afterwards. I think they said it tasted like chicken. This is a cama. A cama is a camel-llama hybrid, created to try to get the hardiness of a camel with some of the personality traits of a llama. And they are now using these in certain cultures. Then there’s the liger. This is the largest cat in the world — the lion-tiger hybrid. It’s bigger than a tiger. And in the case of the liger, there actually have been one or two that have been seen in the wild. But these were created by scientists using both selective breeding and genetic technology. And then finally, everybody’s favorite, the zorse. None of this is Photoshopped. These are real creatures. And so one of the things we’ve been doing is using genetic enhancement, or genetic manipulation — normal selective breeding pushed a little bit further through genetics. And if that were all this was about, then it would be an interesting thing. But something much, much more powerful is happening now.
These are normal mammalian cells genetically engineered with a bioluminescent gene taken out of deep-sea jellyfish. We all know that some deep-sea creatures glow. Well, they’ve now taken that gene, that bioluminescent gene, and put it into mammal cells. These are normal cells. And what you see here is these cells glowing in the dark under certain wavelengths of light. Once they could do that with cells, they could do it with organisms. So they did it with mouse pups, kittens. And by the way, the reason the kittens here are orange and these are green is because that’s a bioluminescent gene from coral, while this is from jellyfish. They did it with pigs. They did it with puppies. And, in fact, they did it with monkeys. And if you can do it with monkeys — though the great leap in trying to genetically manipulate is actually between monkeys and apes — if they can do it in monkeys, they can probably figure out how to do it in apes, which means they can do it in human beings. In other words, it is theoretically possible that before too long we will be biotechnologically capable of creating human beings that glow in the dark. It’d be easier to find us at night.
And in fact, right now in many states, you can go out and you can buy bioluminescent pets. These are zebra fish. They’re normally black and silver. These are zebra fish that have been genetically engineered to be yellow, green, red, and they are actually available now in certain states. Other states have banned them. Nobody knows what to do with these kinds of creatures. There is no area of the government — not the EPA or the FDA — that controls genetically-engineered pets. And so some states have decided to allow them, some states have decided to ban them.
Some of you may have read about the FDA’s consideration right now of genetically-engineered salmon. The salmon on top is a genetically engineered Chinook salmon, using a gene from these salmon and from one other fish that we eat, to make it grow much faster using a lot less feed. And right now the FDA is trying to make a final decision on whether, pretty soon, you could be eating this fish — it’ll be sold in the stores. And before you get too worried about it, here in the United States, the majority of food you buy in the supermarket already has genetically-modified components to it. So even as we worry about it, we have allowed it to go on in this country — much different in Europe — without any regulation, and even without any identification on the package.
These are all the first cloned animals of their type. So in the lower right here, you have Dolly, the first cloned sheep — now happily stuffed in a museum in Edinburgh; Ralph the rat, the first cloned rat; CC the cat, for cloned cat; Snuppy, the first cloned dog — Snuppy for Seoul National University puppy — created in South Korea by the very same man that some of you may remember had to end up resigning in disgrace because he claimed he had cloned a human embryo, which he had not. He actually was the first person to clone a dog, which is a very difficult thing to do, because dog genomes are very plastic. This is Prometea, the first cloned horse. It’s a Haflinger horse cloned in Italy, a real “gold ring” of cloning, because there are many horses that win important races who are geldings. In other words, the equipment to put them out to stud has been removed. But if you can clone that horse, you can have both the advantage of having a gelding run in the race and his identical genetic duplicate can then be put out to stud. These were the first cloned calves, the first cloned grey wolves, and then, finally, the first cloned piglets: Alexis, Chista, Carrel, Janie and Dotcom.
In addition, we’ve started to use cloning technology to try to save endangered species. This is the use of animals now to create drugs and other things in their bodies that we want to create. So with antithrombin in that goat — that goat has been genetically modified so that the molecules of its milk actually include the molecule of antithrombin that GTC Genetics wants to create. And then in addition, transgenic pigs, knockout pigs, from the National Institute of Animal Science in South Korea, are pigs that they are going to use, in fact, to try to create all kinds of drugs and other industrial types of chemicals that they want the blood and the milk of these animals to produce for them, instead of producing them in an industrial way.
These are two creatures that were created in order to save endangered species. The gaur is an endangered Southeast Asian ungulate. A somatic cell, a body cell, was taken from its body, gestated in the ovum of a cow, and then that cow gave birth to a gaur. Same thing happened with the mouflon, which is an endangered species of sheep. It was gestated in a regular sheep body, which actually raises an interesting biological problem. We have two kinds of DNA in our bodies. We have our nucleic DNA that everybody thinks of as our DNA, but we also have DNA in our mitochondria, which are the energy packets of the cell. That DNA is passed down through our mothers. So really, what you end up having here is not a gaur and not a mouflon, but a gaur with cow mitochondria, and therefore cow mitochondrial DNA, and a mouflon with another species of sheep’s mitochondrial DNA. These are really hybrids, not pure animals. And it raises the question of how we’re going to define animal species in the age of biotechnology — a question that we’re not really sure yet how to solve.
This lovely creature is an Asian cockroach. And what they’ve done here is they’ve put electrodes in its ganglia and its brain and then a transmitter on top, and it’s on a big computer tracking ball. And now, using a joystick, they can send this creature around the lab and control whether it goes left or right, forwards or backwards. They’ve created a kind of insect bot, or bugbot. It gets worse than that — or perhaps better than that. This actually is one of DARPA’s very important projects — DARPA is the Defense Advanced Research Projects Agency. These goliath beetles are wired in their wings. They have a computer chip strapped to their backs, and they can fly these creatures around the lab. They can make them go left, right. They can make them take off. They can’t actually make them land. They put them about one inch above the ground, and then they shut everything off and they go pfft. But it’s the closest they can get to a landing.
And in fact, this technology has gotten so developed that this creature — this is a moth — this is the moth in its pupa stage, and that’s when they put the wires in and they put in the computer technology, so that when the moth actually emerges as a moth, it is already prewired. The wires are already in its body, and they can just hook it up to their technology, and now they’ve got these bugbots that they can send out for surveillance. They can put little cameras on them and perhaps someday deliver other kinds of ordnance to war zones.
It’s not just insects. This is the ratbot, or the robo-rat by Sanjiv Talwar at SUNY Downstate. Again, it’s got technology — it’s got electrodes going into its left and right hemispheres; it’s got a camera on top of its head. The scientists can make this creature go left, right. They have it running through mazes, controlling where it’s going. They’ve now created an organic robot. The graduate students in Sanjiv Talwar’s lab said, “Is this ethical? We’ve taken away the autonomy of this animal.” I’ll get back to that in a minute.
There’s also been work done with monkeys. This is Miguel Nicolelis of Duke. He took owl monkeys, wired them up so that a computer watched their brains while they moved, especially looking at the movement of their right arm. The computer learned what the monkey brain did to move its arm in various ways. They then hooked it up to a prosthetic arm, which you see here in the picture, put the arm in another room. Pretty soon, the computer learned, by reading the monkey’s brainwaves, to make that arm in the other room do whatever the monkey’s arm did. Then he put a video monitor in the monkey’s cage that showed the monkey this prosthetic arm, and the monkey got fascinated. The monkey recognized that whatever she did with her arm, this prosthetic arm would do. And eventually she was moving it and moving it, and eventually stopped moving her right arm and, staring at the screen, could move the prosthetic arm in the other room only with her brainwaves — which means that monkey became the first primate in the history of the world to have three independent functional arms.
And it’s not just technology that we’re putting into animals. This is Thomas DeMarse at the University of Florida. He took 20,000 and then 60,000 disaggregated rat neurons — so these are just individual neurons from rats — put them on a chip. They self-aggregated into a network, became an integrated chip. And he used that as the IT piece of a mechanism which ran a flight simulator. So now we have organic computer chips made out of living, self-aggregating neurons. Finally, Mussa-Ivaldi of Northwestern took a completely intact, independent lamprey eel brain. This is a brain from a lamprey eel. It is living — fully-intact brain in a nutrient medium with these electrodes going off to the sides, attached photosensitive sensors to the brain, put it into a cart — here’s the cart, the brain is sitting there in the middle — and using this brain as the sole processor for this cart, when you turn on a light and shine it at the cart, the cart moves toward the light; when you turn it off, it moves away. It’s photophilic. So now we have a complete living lamprey eel brain. Is it thinking lamprey eel thoughts, sitting there in its nutrient medium? I don’t know, but in fact it is a fully living brain that we have managed to keep alive to do our bidding.
So, we are now at the stage where we are creating creatures for our own purposes. This is a mouse created by Charles Vacanti of the University of Massachusetts. He altered this mouse so that it was genetically engineered to have skin that was less immunoreactive to human skin, put a polymer scaffolding of an ear under it and created an ear that could then be taken off the mouse and transplanted onto a human being. Genetic engineering coupled with polymer physiotechnology coupled with xenotransplantation. This is where we are in this process.
Finally, not that long ago, Craig Venter created the first artificial cell, where he took a cell, took a DNA synthesizer, which is a machine, created an artificial genome, put it in a different cell — the genome was not of the cell he put it in — and that cell then reproduced as the other cell. In other words, that was the first creature in the history of the world that had a computer as its parent — it did not have an organic parent. And so, asks The Economist: “The first artificial organism and its consequences.”
So you may have thought that the creation of life was going to happen in something that looked like that. (Laughter) But in fact, that’s not what Frankenstein’s lab looks like. This is what Frankenstein’s lab looks like. This is a DNA synthesizer, and here at the bottom are just bottles of A, T, C and G — the four chemicals that make up our DNA chain.
And so, we need to ask ourselves some questions. For the first time in the history of this planet, we are able to directly design organisms. We can manipulate the plasms of life with unprecedented power, and it confers on us a responsibility. Is everything okay? Is it okay to manipulate and create whatever creatures we want? Do we have free rein to design animals? Do we get to go someday to Pets ‘R’ Us and say, “Look, I want a dog. I’d like it to have the head of a Dachshund, the body of a retriever, maybe some pink fur, and let’s make it glow in the dark”? Does industry get to create creatures who, in their milk, in their blood, and in their saliva and other bodily fluids, create the drugs and industrial molecules we want and then warehouse them as organic manufacturing machines? Do we get to create organic robots, where we remove the autonomy from these animals and turn them just into our playthings?
And then the final step of this, once we perfect these technologies in animals and we start using them in human beings, what are the ethical guidelines that we will use then? It’s already happening. It’s not science fiction. We are not only already using these things in animals, some of them we’re already beginning to use on our own bodies.
We are now taking control of our own evolution. We are directly designing the future of the species of this planet. It confers upon us an enormous responsibility that is not just the responsibility of the scientists and the ethicists who are thinking about it and writing about it now. It is the responsibility of everybody because it will determine what kind of planet and what kind of bodies we will have in the future.