Is the Universe a Simulation?

2016 Isaac Asimov Memorial Debate: Is the Universe a Simulation?
American Museum of Natural History

What may have started as a science fiction speculation—that perhaps the universe as we know it is a computer simulation—has become a serious line of theoretical and experimental investigation among physicists, astrophysicists, and philosophers.

Neil deGrasse Tyson, Frederick P. Rose Director of the Hayden Planetarium, hosts and moderates a panel of experts in a lively discussion about the merits and shortcomings of this provocative and revolutionary idea. The 17th annual Isaac Asimov Memorial Debate took place at the American Museum of Natural History on April 5, 2016.

2016 Asimov Panelists:

David Chalmers
Professor of philosophy, New York University

Zohreh Davoudi
Theoretical physicist, Massachusetts Institute of Technology

James Gates
Theoretical physicist, University of Maryland

Lisa Randall
Theoretical physicist, Harvard University

Max Tegmark
Cosmologist, Massachusetts Institute of Technology

The late Dr. Isaac Asimov, one of the most prolific and influential authors of our time, was a dear friend and supporter of the American Museum of Natural History. In his memory, the Hayden Planetarium is honored to host the annual Isaac Asimov Memorial Debate — generously endowed by relatives, friends, and admirers of Isaac Asimov and his work — bringing the finest minds in the world to the Museum each year to debate pressing questions on the frontier of scientific discovery. Proceeds from ticket sales of the Isaac Asimov Memorial Debates benefit the scientific and educational programs of the Hayden Planetarium.

Waking Up With Sam Harris #60 – An Evening with Richard Dawkins (Part 2)

https://youtu.be/0namBRjdKng

In this episode of the Waking Up podcast, Sam Harris speaks with Richard Dawkins at a live event in Los Angeles (second of two). They discuss Richard’s experience of having a stroke, the genetic future of humanity, the analogy between genes and memes, the “extended phenotype,” Islam and bigotry, the biology of race, how to find meaning without religion, and other topics.

Want to support the Waking Up podcast?

Please visit: http://www.samharris.org/support

Subscribe to the podcast: https://www.youtube.com/channel/UCNAxrHudMfdzNi6NxruKPLw?sub_confirmation=1

Get the email newsletter: https://www.samharris.org/email_signup

Follow Sam Harris on Twitter: https://twitter.com/samharrisorg

Follow Sam Harris on Facebook: https://www.facebook.com/Sam-Harris-22457171014/?fref=ts

For more information about Sam Harris: https://www.samharris.org



A black hole in your pocket?


Transcript


1
00:00:00,000 –> 00:00:04,264
What would happen to you if a black hole the size of a coin suddenly appeared near you?

3
00:00:04,836 –> 00:00:05,887
Short answer: you’d die.

4
00:00:06,351 –> 00:00:07,424
Long answer: it depends.

5
00:00:08,081 –> 00:00:09,169
Is it a black hole with
the mass of a coin,

6
00:00:09,963 –> 00:00:11,017
or is it as wide as a coin?

7
00:00:11,503 –> 00:00:14,549
Suppose a US nickel with
the mass of about 5 grams

8
00:00:14,963 –> 00:00:16,975
magically collapsed into a black hole.

9
00:00:17,083 –> 00:00:20,139
This black hole would have a radius
of about 10 to the power of −30 meters.

10
00:00:20,645 –> 00:00:24,727
By comparison, a hydrogen atom is about
10 to the power of −11 meters.

11
00:00:25,465 –> 00:00:27,564
So the black hole compared
to an atom is as small as

12
00:00:28,455 –> 00:00:29,531
an atom compared to the Sun.

13
00:00:30,215 –> 00:00:31,282
Unimaginably small!

14
00:00:31,885 –> 00:00:34,964
And a small black hole would also have
an unimaginably short lifetime

15
00:00:35,675 –> 00:00:36,736
to decay by Hawking radiation.

16
00:00:37,285 –> 00:00:42,302
It would radiate away what little mass it
has in 10 to the power of −23 seconds.

17
00:00:42,455 –> 00:00:46,532
Its 5 grams of mass will be converted
to 450 terajoules of energy,

18
00:00:47,225 –> 00:00:50,246
which will lead to an explosion
roughly 3 times bigger than

19
00:00:50,246 –> 00:00:53,283
the atomic bombs dropped on
Hiroshima and Nagasaki combined.

20
00:00:53,627 –> 00:00:54,674
In this case, you die.

21
00:00:55,097 –> 00:00:56,195
You also lose the coin.

22
00:00:57,092 –> 00:00:59,170
If the black hole had the
diameter of a common coin,

23
00:00:59,088 –> 00:01:01,341
then it would be considerably
more massive.

24
00:01:02,133 –> 00:01:05,265
In fact, a black hole with
the diameter of a nickel

25
00:01:05,265 –> 00:01:07,286
would be slightly more
massive than the Earth.

26
00:01:07,475 –> 00:01:10,547
It would have a surface gravity
a billion billion times greater

27
00:01:11,195 –> 00:01:12,202
than our planet currently does.

28
00:01:12,895 –> 00:01:14,953
Its tidal forces on you would be so strong

29
00:01:15,483 –> 00:01:17,567
that they’d rip your
individual cells apart.

30
00:01:18,323 –> 00:01:22,330
The black hole would consume you before
you even realized what’s happening.

31
00:01:22,393 –> 00:01:24,419
Although the laws of gravity
are still the same,

32
00:01:24,657 –> 00:01:27,692
the phenomenon of gravity that you’d
experience would be very different

33
00:01:28,007 –> 00:01:29,091
around such dense objects.

34
00:01:29,087 –> 00:01:30,169
The range of the gravitational attraction

35
00:01:31,702 –> 00:01:33,729
extends over the entire
observable universe,

36
00:01:33,972 –> 00:01:36,974
with gravity getting weaker the farther
away you are from something.

37
00:01:37,172 –> 00:01:41,221
On Earth right now, your head and your
toes are approximately the same distance

38
00:01:41,662 –> 00:01:42,708
from the center of our planet.

39
00:01:43,122 –> 00:01:45,179
But if you stood on
a nickel-sized black hole,

40
00:01:45,692 –> 00:01:47,771
your feet would be hundreds
of times closer to the center,

41
00:01:48,482 –> 00:01:51,556
and the gravitational force would be
tens of thousands of times as large

42
00:01:52,231 –> 00:01:55,276
as the force on your head and
rip you into a billion pieces.

43
00:01:55,681 –> 00:01:57,730
But the black hole wouldn’t
stop with just you.

44
00:01:58,171 –> 00:02:01,200
The black hole is now a
dominant gravitational piece

45
00:02:01,461 –> 00:02:03,520
of the
Earth–Moon–Black-Hole-of-Death system.

46
00:02:04,051 –> 00:02:07,122
You might think that the black hole would
sink towards the center of the planet

47
00:02:07,761 –> 00:02:09,795
and consume it from the inside out.

48
00:02:10,101 –> 00:02:14,154
In fact, the Earth also moves up onto the black hole and begins to bob around,

49
00:02:14,631 –> 00:02:15,638
as if it were orbiting the black hole,

50
00:02:16,331 –> 00:02:18,405
all while having swathes of mass eaten with each pass,

51
00:02:19,071 –> 00:02:20,133
which is much more creepy.

52
00:02:21,049 –> 00:02:23,100
As the Earth is eaten up from the inside,

53
00:02:23,559 –> 00:02:25,613
it collapses into a
scattered disk of hot rock,

54
00:02:26,099 –> 00:02:28,125
surrounding the black hole
in a tight orbit.

55
00:02:28,359 –> 00:02:31,364
The black hole slowly doubles its mass
by the time it’s done feeding.

56
00:02:31,859 –> 00:02:33,954
The Moon’s orbit is now highly elliptical.

57
00:02:34,809 –> 00:02:36,857
The effects on the Solar system
are awesome—

58
00:02:37,289 –> 00:02:41,289
in the Biblical sense of awesome,
which means terrifying.

59
00:02:41,298 –> 00:02:44,374
Tidal forces from the black hole would
probably disrupt the near-Earth asteroids,

60
00:02:45,058 –> 00:02:47,092
maybe even parts of the asteroid belt,

61
00:02:47,042 –> 00:02:49,084
sending rocks careening
through the Solar system.

62
00:02:49,847 –> 00:02:51,930
Bombardment and impacts
may become commonplace

63
00:02:52,677 –> 00:02:53,725
for the next few million years.

64
00:02:54,157 –> 00:02:58,157
The planets are slightly perturbed, but
stay approximately in the same orbit.

65
00:02:58,157 –> 00:03:00,164
The black hole we used
to call Earth will now

66
00:03:00,857 –> 00:03:02,862
continue on orbiting
the Sun in the Earth’s place.

67
00:03:03,363 –> 00:03:05,429
In this case, you also die.

68
00:03:07,934 –> 00:03:10,953
This bonus video was made possible
by your contributions on Patreon.

69
00:03:11,536 –> 00:03:13,541
Thank you so much for your support!

70
00:03:14,098 –> 00:03:17,116
The topic is based on a question on
the AskScience subreddit

71
00:03:17,278 –> 00:03:21,307
and the glorious answer by Matt Caplan,
who also worked with us on this video.
who also worked with us on this video.

72
00:03:22,023 –> 00:03:25,081
Check out his blog, Quarks and Coffee,
for more awesome stuff like this!

73
00:03:25,757 –> 00:03:28,789
If you want to discuss the video,
we have our own subreddit now.

74
00:03:29,643 –> 00:03:33,643
To learn more about black holes or equally
interesting neutron stars, click here.

75
00:03:35,000 –> 00:03:38,000
Subtitles by the Amara.org community
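
The numbers quoted in the transcript above follow from two standard formulas: the Schwarzschild radius r = 2GM/c² and the Hawking evaporation time t ≈ 5120πG²M³/(ħc⁴), plus E = mc² for the energy released. Here is a minimal Python sketch that reproduces them; the physical constants and the nickel's mass (5 g) and diameter (about 21 mm) are reference-value assumptions, not figures taken from the video itself.

```python
import math

# Physical constants (reference values; assumptions, not from the video).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s
M_EARTH = 5.97e24  # mass of the Earth, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a black hole of the given mass: r = 2GM/c^2."""
    return 2 * G * mass_kg / c**2

def mass_for_radius(radius_m):
    """Inverse relation: the mass whose event horizon has the given radius."""
    return radius_m * c**2 / (2 * G)

def hawking_lifetime(mass_kg):
    """Evaporation time by Hawking radiation: t = 5120*pi*G^2*M^3 / (hbar*c^4)."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

# Scenario 1: a black hole with the mass of a US nickel (about 5 g).
m_nickel = 0.005  # kg
print(schwarzschild_radius(m_nickel))   # ~7e-30 m, i.e. "about 10^-30 meters"
print(hawking_lifetime(m_nickel))       # ~1e-23 s
print(m_nickel * c**2 / 1e12, "TJ")     # ~450 TJ released as it evaporates

# Scenario 2: a black hole with the diameter of a nickel (about 21 mm across).
m_coin_sized = mass_for_radius(0.021 / 2)
print(m_coin_sized / M_EARTH, "Earth masses")  # ~1.2, "slightly more massive than the Earth"
```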

Christopher Hitchens destroys religion

If anyone thinks there is a question — having once heard me — that I answered poorly, or inadequately, or badly, or failed to answer at all, I would like them to challenge me. I'd happily give them five minutes, but I have, so to say, shot my bolt otherwise. Is there anyone who would like to challenge me? Yes — Peter.

(An audience member asks: if there is no God, why spend your whole career trying to refute it? Why not just leave it alone and stay home?)

Fair enough. Well, it isn't my whole career, for one thing. It has become a major preoccupation of my life, though, in the last eight or nine years — especially since September 11, 2001 — to try and help generate an opposition to theocracy and its depredations internationally. That is now probably my main political preoccupation: to help people in Afghanistan, in Somalia, in Iraq, in Lebanon, in Israel, to resist those who sincerely want to encompass the destruction of civilization, and who sincerely believe they have God on their side in wanting to do so.

But maybe I will take the few minutes just to say something that I find repulsive about monotheistic, messianic religion in particular: with a large part of itself, it quite clearly wants us all to die. It wants this world to come to an end. You can tell the yearning for things to be over whenever you read any of its real texts, or listen to any of its real, authentic spokesmen — not the sort of pathetic apologists who sometimes masquerade for it. There was a famous spokesman for this in Virginia until recently, who would say of the Rapture that those of us who have chosen rightly will be gathered to the arms of Jesus, leaving all the rest of you behind. If we're in a car, it's your lookout — that car won't have a driver anymore. If we're a pilot, that's your lookout — that plane will crash. We will be with Jesus, and the rest of you can go straight to hell.

The eschatological element is inseparable from Christianity. If you don't believe that there is to be an apocalypse — that there is going to be an end, a separation of the sheep from the goats, a final condemnation — then you're not really a believer. And the contempt for the things of this world shows through all of them. It's well put in an old rhyme from an English Exclusive Brethren sect: "We are the pure and chosen few, and all the rest are damned; there's room enough in hell for you — we don't want heaven crammed." You can tell it when you see the extreme Muslims talk: they cannot wait for death and destruction to overtake and overwhelm the world. They can't wait for what I would call, without ambiguity, a final solution. Look at the Israeli settlers — paid for, often, by American tax dollars — who have decided that if they can steal enough land from other people, and get all the Jews into the Promised Land and all the non-Jews out of it, then finally the Jewish people will be worthy of the return of the Messiah. And there are Christians in this country who consider it their job to help this happen, so that Armageddon can occur — so that the painful business of living as humans, of studying civilization, of trying to acquire learning and knowledge and health and medicine, can all be scrapped and the cult of death can take over.

That, to me, is a hideous thing — in eschatological terms, in end-times terms, a hateful idea on its own, a hateful practice and a hateful theory — but very much to be opposed in our daily lives, where there are people who sincerely mean it: people who want to ruin the good relations that could exist between different peoples, nations, races, countries, tribes, ethnicities; who openly say they love death more than we love life, and who are betting that, with God on their side, they're right about that.

So when I say, as in the subtitle of my book, that I think religion poisons everything, I'm not just doing what publishers like and coming up with a provocative subtitle. I mean to say that it affects us in our most basic integrity. It says we can't be moral without Big Brother, without totalitarian permission. It means we can't be good to one another without this; we must be afraid, and we must also be forced to love someone whom we fear — the essence of sadomasochism, the essence of abjection, the essence of the master-slave relationship — a relationship that knows death is coming and can't wait to bring it on. I say this is evil. And though I do, some nights, stay home, I enjoy more the nights when I go out and fight against this ultimate wickedness and ultimate stupidity.

Thank you.

What happens when our computers get smarter than we are?

[ted id=2243 lang=en]

Nick Bostrom: What happens when our computers get smarter than we are?

0:11

I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other things. Some people think that some of these things are sort of science fiction-y, far out there, crazy. But I like to say, okay, let’s look at the modern human condition. (Laughter) This is the normal way for things to be.

0:40

But if we think about it, we are actually recently arrived guests on this planet, the human species. Think about if Earth was created one year ago, the human species, then, would be 10 minutes old. The industrial era started two seconds ago. Another way to look at this is to think of world GDP over the last 10,000 years, I’ve actually taken the trouble to plot this for you in a graph. It looks like this. (Laughter) It’s a curious shape for a normal condition. I sure wouldn’t want to sit on it. (Laughter)

1:18

Let’s ask ourselves, what is the cause of this current anomaly? Some people would say it’s technology. Now it’s true, technology has accumulated through human history, and right now, technology advances extremely rapidly — that is the proximate cause, that’s why we are currently so very productive. But I like to think back further to the ultimate cause.

1:44

Look at these two highly distinguished gentlemen: We have Kanzi — he’s mastered 200 lexical tokens, an incredible feat. And Ed Witten unleashed the second superstring revolution. If we look under the hood, this is what we find: basically the same thing. One is a little larger, it maybe also has a few tricks in the exact way it’s wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor. We know that complicated mechanisms take a long time to evolve. So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles.

2:31

So this then seems pretty obvious that everything we’ve achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind. And the corollary, of course, is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences.

2:55

Some of my colleagues think we’re on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence. Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You build up these expert systems, and they were kind of useful for some purposes, but they were very brittle, you couldn’t scale them. Basically, you got out only what you put in. But since then, a paradigm shift has taken place in the field of artificial intelligence.

3:29

Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data. Basically the same thing that the human infant does. The result is A.I. that is not limited to one domain — the same system can learn to translate between any pairs of languages, or learn to play any computer game on the Atari console. Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has. The cortex still has some algorithmic tricks that we don’t yet know how to match in machines.

4:18

So the question is, how far are we from being able to match those tricks? A couple of years ago, we did a survey of some of the world’s leading A.I. experts, to see what they think, and one of the questions we asked was, “By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?” We defined human-level here as the ability to perform almost any job at least as well as an adult human, so real human-level, not just within some limited domain. And the median answer was 2040 or 2050, depending on precisely which group of experts we asked. Now, it could happen much, much later, or sooner, the truth is nobody really knows.

5:04

What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits in biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates at the Gigahertz. Neurons propagate slowly in axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light. There are also size limitations, like a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger. So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.
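
The gap Bostrom sketches here reduces to two rough ratios. A minimal sketch, treating the round numbers from the talk (a 200 Hz neuron, a gigahertz transistor, 100 m/s axonal signals, and light-speed electronic signals) as order-of-magnitude assumptions:

```python
# Order-of-magnitude comparison of biological and electronic "hardware",
# using the round numbers quoted in the talk (assumptions, not measurements).

neuron_rate_hz = 200        # a biological neuron fires maybe 200 times per second
transistor_rate_hz = 1e9    # a present-day transistor switches in the gigahertz range

axon_speed_m_s = 100        # signals propagate along axons at roughly 100 m/s, tops
light_speed_m_s = 3e8       # electronic signals can approach the speed of light

print(transistor_rate_hz / neuron_rate_hz)  # ~5,000,000x gap in switching rate
print(light_speed_m_s / axon_speed_m_s)     # ~3,000,000x gap in signal speed
```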

6:09

Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: AI starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn’t stop at Humanville Station. It’s likely, rather, to swoosh right by.

7:13

Now this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong — pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they’ll be doing so on digital timescales. What this means is basically a telescoping of the future. Think of all the crazy technologies that you could have imagined maybe humans could have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers, all kinds of science fiction-y stuff that’s nevertheless consistent with the laws of physics. All of this superintelligence could develop, and possibly quite rapidly.

8:23

Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I. Now a good question is, what are those preferences? Here it gets trickier. To make any headway with this, we must first of all avoid anthropomorphizing. And this is ironic because every newspaper article about the future of A.I. has a picture of this: So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios.

9:08

We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It’s extremely good at using available means to achieve a state in which its goal is realized. This means that there is no necessary connection between being highly intelligent in this sense, and having an objective that we humans would find worthwhile or meaningful.

9:38

Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins. Another example, suppose we give A.I. the goal to solve a difficult mathematical problem. When the A.I. becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity. And notice that this gives the A.I.s an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats, we could prevent the mathematical problem from being solved.

10:28

Of course, conceivably things won’t go wrong in these particular ways; these are cartoon examples. But the general point here is important: if you create a really powerful optimization process to maximize for objective x, you better make sure that your definition of x incorporates everything you care about. This is a lesson that’s also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.

11:15

Now you might say, if a computer starts sticking electrodes into people’s faces, we’d just shut it off. A, this is not necessarily so easy to do if we’ve grown dependent on the system — like, where is the off switch to the Internet? B, why haven’t the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.

12:03

And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn’t find a bug? Given that merely human hackers find bugs all the time, I’d say, probably not very confident. So we disconnect the ethernet cable to create an air gap, but again, like merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I’m sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.

12:45

More creative scenarios are also possible, like if you’re the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code — Bam! — the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will get out.

13:26

I believe that the answer here is to figure out how to create superintelligent A.I. such that even if — when — it escapes, it is still safe because it is fundamentally on our side because it shares our values. I see no way around this difficult problem.

13:43

Now, I’m actually fairly optimistic that this problem can be solved. We wouldn’t have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python, that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.

14:23

This can happen, and the outcome could be very good for humanity. But it doesn’t happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar context, like where we can easily check how the A.I. behaves, but also in all novel contexts that the A.I. might encounter in the indefinite future.

14:53

And there are also some esoteric issues that would need to be solved, sorted out: the exact details of its decision theory, how to deal with logical uncertainty and so forth. So the technical problems that need to be solved to make this work look quite difficult — not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that if somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.

15:36

So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed. Now it might be that we cannot solve the entire control problem in advance because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well.

16:05

This to me looks like a thing that is well worth doing and I can imagine that if things turn out okay, that people a million years from now look back at this century and it might well be that they say that the one thing we did that really mattered was to get this thing right.

16:23

Thank you.

16:25

(Applause)

Waking Up With Sam Harris #59 – Friend & Foe (with Maajid Nawaz)

https://youtu.be/9EB908NRdCc

In this episode of the Waking Up podcast, Sam Harris speaks with Maajid Nawaz about the Southern Poverty Law Center, Robert Spencer, Keith Ellison, moderate Muslims, Shadi Hamid’s notion of “Islamic exceptionalism,” the migrant crisis in Europe, foreign interventions, Trump, Putin, Obama’s legacy, and other topics.

Maajid Nawaz is a counter-extremist, author, columnist, broadcaster and Founding Chairman of Quilliam – a globally active organization focusing on matters of integration, citizenship & identity, religious freedom, immigration, extremism, and terrorism. Maajid’s work is informed by years spent in his youth as a leadership member of a global Islamist group, and his gradual transformation towards liberal democratic values. Having served four years as an Amnesty International adopted “prisoner of conscience” in Egypt, Maajid is now a leading critic of Islamism, while remaining a secular liberal Muslim.

Maajid is an Honorary Associate of the UK’s National Secular Society, a weekly columnist for the Daily Beast, a monthly columnist for the liberal UK paper the ‘Jewish News’ and LBC radio’s weekend afternoon radio host. He also provides occasional columns for the London Times, the New York Times and Wall Street Journal, among others. Maajid was the Liberal Democrat Parliamentary candidate in London’s Hampstead & Kilburn for the May 2015 British General Election.

A British-Pakistani born in Essex, Maajid speaks English, Arabic, and Urdu, holds a BA (Hons) from SOAS in Arabic and Law and an MSc in Political Theory from the London School of Economics (LSE).

Maajid relates his life story in his first book, Radical. He co-authored his second book, Islam and the Future of Tolerance, with Sam Harris.

Sam Harris: Can we build AI without losing control over it?

[ted id=2592 lang=en]

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

Sam Harris: Can we build AI without losing control over it?

0:12

I’m going to talk about a failure of intuition that many of us suffer from. It’s really a failure to detect a certain kind of danger. I’m going to describe a scenario that I think is both terrifying and likely to occur, and that’s not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I’m talking about is kind of cool.

0:36

I’m going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it’s very difficult to see how they won’t destroy us or inspire us to destroy ourselves. And yet if you’re anything like me, you’ll find that it’s fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn’t think, “Interesting. I like this TED Talk.”

1:20

Famine isn’t fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I’m giving this talk.

1:41

It’s as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

2:19

(Laughter)

2:23

The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that’s ever happened in human history.

2:43

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an “intelligence explosion,” that the process could get away from us.

3:09

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario. It’s not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

3:34

Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.

4:04

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

4:22

Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called “general intelligence,” an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there’s just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

5:10

It’s crucial to realize that the rate of progress doesn’t matter, because any progress is enough to get us into the end zone. We don’t need Moore’s law to continue. We don’t need exponential progress. We just need to keep going.

5:24

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence — I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer’s and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there’s no brake to pull.

6:04

Finally, we don’t stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

6:22

Now, just consider the smartest person who has ever lived. On almost everyone’s shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there’s no question he’s one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.

6:56

(Laughter)

6:58

Sorry, a chicken.

6:59

(Laughter)

7:00

There’s no reason for me to make this talk more depressing than it needs to be.

7:04

(Laughter)

7:07

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can’t imagine, and exceed us in ways that we can’t imagine.

7:26

And it’s important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
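
The “20,000 years per week” figure is simple arithmetic on the assumed million-fold speedup; a quick sketch to check it:

```python
# If a machine thinks about a million times faster than the team that built it,
# one wall-clock week of its work equals this many years of human-level effort.
# The 1e6 speedup is the talk's assumption, not a measured figure.

speedup = 1e6
weeks_per_year = 365.25 / 7           # ~52.18 weeks in a year

human_equivalent_years = speedup / weeks_per_year
print(round(human_equivalent_years))  # ~19,165 — roughly the "20,000 years" in the talk
```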

8:07

The other thing that’s worrying, frankly, is that, imagine the best case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It’s as though we’ve been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we’re talking about the end of human drudgery. We’re also talking about the end of most intellectual work.

8:48

So what would apes like ourselves do in this circumstance? Well, we’d be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

9:01

(Laughter)

9:05

Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

9:33

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

10:05

Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we’re told not to worry is time. This is all a long way off, don’t you know. This is probably 50 or 100 years away. One researcher has said, “Worrying about AI safety is like worrying about overpopulation on Mars.” This is the Silicon Valley version of “don’t worry your pretty little head about it.”

10:37

(Laughter)

10:38

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

11:11

And if you haven’t noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we’ve had the iPhone. This is how long “The Simpsons” has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

11:37

The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: “People of Earth, we will arrive on your planet in 50 years. Get ready.” And now we’re just counting down the months until the mothership lands? We would feel a little more urgency than we do.

12:03

Another reason we’re told not to worry is that these machines can’t help but share our values because they will be literally extensions of ourselves. They’ll be grafted onto our brains, and we’ll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one’s safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

12:35

(Laughter)

12:37

The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don’t destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

13:09

Now, unfortunately, I don’t have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we’ll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you’re talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

13:44

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with.

14:19

Thank you very much.

14:20

(Applause)

It’s time to question bioengineering

[ted id=1103 lang=en]

Paul Root Wolpe: It’s time to question bio-engineering

0:11
Today I want to talk about design, but not design as we usually think about it. I want to talk about what is happening now in our scientific, biotechnological culture, where, for really the first time in history, we have the power to design bodies, to design animal bodies, to design human bodies. In the history of our planet, there have been three great waves of evolution.

0:38
The first wave of evolution is what we think of as Darwinian evolution. So, as you all know, species lived in particular ecological niches and particular environments, and the pressures of those environments selected which changes, through random mutation in species, were going to be preserved. Then human beings stepped out of the Darwinian flow of evolutionary history and created the second great wave of evolution, which was we changed the environment in which we evolved. We altered our ecological niche by creating civilization. And that has been the second great — couple 100,000 years, 150,000 years — flow of our evolution. By changing our environment, we put new pressures on our bodies to evolve. Whether it was through settling down in agricultural communities, all the way through modern medicine, we have changed our own evolution. Now we’re entering a third great wave of evolutionary history, which has been called many things: “intentional evolution,” “evolution by design” — very different than intelligent design — whereby we are actually now intentionally designing and altering the physiological forms that inhabit our planet.

2:02
So I want to take you through a kind of whirlwind tour of that and then at the end talk a little bit about what some of the implications are for us and for our species, as well as our cultures, because of this change. Now we actually have been doing it for a long time. We started selectively breeding animals many, many thousands of years ago. And if you think of dogs for example, dogs are now intentionally-designed creatures. There isn’t a dog on this earth that’s a natural creature. Dogs are the result of selectively breeding traits that we like. But we had to do it the hard way in the old days by choosing offspring that looked a particular way and then breeding them. We don’t have to do it that way anymore.

2:49
This is a beefalo. A beefalo is a buffalo-cattle hybrid. And they are now making them, and someday, perhaps pretty soon, you will have beefalo patties in your local supermarket. This is a geep, a goat-sheep hybrid. The scientists that made this cute little creature ended up slaughtering it and eating it afterwards. I think they said it tasted like chicken. This is a cama. A cama is a camel-llama hybrid, created to try to get the hardiness of a camel with some of the personality traits of a llama. And they are now using these in certain cultures. Then there’s the liger. This is the largest cat in the world — the lion-tiger hybrid. It’s bigger than a tiger. And in the case of the liger, there actually have been one or two that have been seen in the wild. But these were created by scientists using both selective breeding and genetic technology. And then finally, everybody’s favorite, the zorse. None of this is Photoshopped. These are real creatures. And so one of the things we’ve been doing is using genetic enhancement, or genetic manipulation, of normal selective breeding pushed a little bit through genetics. And if that were all this was about, then it would be an interesting thing. But something much, much more powerful is happening now.

4:27
These are normal mammalian cells genetically engineered with a bioluminescent gene taken out of deep-sea jellyfish. We all know that some deep-sea creatures glow. Well, they’ve now taken that gene, that bioluminescent gene, and put it into mammal cells. These are normal cells. And what you see here is these cells glowing in the dark under certain wavelengths of light. Once they could do that with cells, they could do it with organisms. So they did it with mouse pups, kittens. And by the way, the reason the kittens here are orange and these are green is because that’s a bioluminescent gene from coral, while this is from jellyfish. They did it with pigs. They did it with puppies. And, in fact, they did it with monkeys. And if you can do it with monkeys — though the great leap in trying to genetically manipulate is actually between monkeys and apes — if they can do it in monkeys, they can probably figure out how to do it in apes, which means they can do it in human beings. In other words, it is theoretically possible that before too long we will be biotechnologically capable of creating human beings that glow in the dark. Be easier to find us at night.

5:52
And in fact, right now in many states, you can go out and you can buy bioluminescent pets. These are zebra fish. They’re normally black and silver. These are zebra fish that have been genetically engineered to be yellow, green, red, and they are actually available now in certain states. Other states have banned them. Nobody knows what to do with these kinds of creatures. There is no area of the government — not the EPA or the FDA — that controls genetically-engineered pets. And so some states have decided to allow them, some states have decided to ban them.

6:28
Some of you may have read about the FDA’s consideration right now of genetically-engineered salmon. The salmon on top is a genetically engineered Chinook salmon, using a gene from these salmon and from one other fish that we eat, to make it grow much faster using a lot less feed. And right now the FDA is trying to make a final decision on whether, pretty soon, you could be eating this fish — it’ll be sold in the stores. And before you get too worried about it, here in the United States, the majority of food you buy in the supermarket already has genetically-modified components to it. So even as we worry about it, we have allowed it to go on in this country — much different in Europe — without any regulation, and even without any identification on the package.

7:16
These are all the first cloned animals of their type. So in the lower right here, you have Dolly, the first cloned sheep — now happily stuffed in a museum in Edinburgh; Ralph the rat, the first cloned rat; CC the cat, for cloned cat; Snuppy, the first cloned dog — Snuppy for Seoul National University puppy — created in South Korea by the very same man that some of you may remember had to end up resigning in disgrace because he claimed he had cloned a human embryo, which he had not. He actually was the first person to clone a dog, which is a very difficult thing to do, because dog genomes are very plastic. This is Prometea, the first cloned horse. It’s a Haflinger horse cloned in Italy, a real “gold ring” of cloning, because there are many horses that win important races who are geldings. In other words, the equipment to put them out to stud has been removed. But if you can clone that horse, you can have both the advantage of having a gelding run in the race and his identical genetic duplicate can then be put out to stud. These were the first cloned calves, the first cloned grey wolves, and then, finally, the first cloned piglets: Alexis, Chista, Carrel, Janie and Dotcom.

8:37
(Laughter)

8:41
In addition, we’ve started to use cloning technology to try to save endangered species. This is the use of animals now to create drugs and other things in their bodies that we want to create. So with antithrombin in that goat — that goat has been genetically modified so that the molecules of its milk actually include the molecule of antithrombin that GTC Genetics wants to create. And then in addition, transgenic pigs, knockout pigs, from the National Institute of Animal Science in South Korea, are pigs that they are going to use, in fact, to try to create all kinds of drugs and other industrial types of chemicals that they want the blood and the milk of these animals to produce for them, instead of producing them in an industrial way.

9:35
These are two creatures that were created in order to save endangered species. The gaur is an endangered Southeast Asian ungulate. A somatic cell, a body cell, was taken from its body, gestated in the ovum of a cow, and then that cow gave birth to a gaur. Same thing happened with the mouflon, where it’s an endangered species of sheep. It was gestated in a regular sheep body, which actually raises an interesting biological problem. We have two kinds of DNA in our bodies. We have our nucleic DNA that everybody thinks of as our DNA, but we also have DNA in our mitochondria, which are the energy packets of the cell. That DNA is passed down through our mothers. So really, what you end up having here is not a gaur and not a mouflon, but a gaur with cow mitochondria, and therefore cow mitochondrial DNA, and a mouflon with another species of sheep’s mitochondrial DNA. These are really hybrids, not pure animals. And it raises the question of how we’re going to define animal species in the age of biotechnology — a question that we’re not really sure yet how to solve.

10:55
This lovely creature is an Asian cockroach. And what they’ve done here is they’ve put electrodes in its ganglia and its brain and then a transmitter on top, and it’s on a big computer tracking ball. And now, using a joystick, they can send this creature around the lab and control whether it goes left or right, forwards or backwards. They’ve created a kind of insect bot, or bugbot. It gets worse than that — or perhaps better than that. This actually is one of DARPA’s very important — DARPA is the Defense Research Agency — one of their projects. These goliath beetles are wired in their wings. They have a computer chip strapped to their backs, and they can fly these creatures around the lab. They can make them go left, right. They can make them take off. They can’t actually make them land. They put them about one inch above the ground, and then they shut everything off and they go pfft. But it’s the closest they can get to a landing.

11:56
And in fact, this technology has gotten so developed that this creature — this is a moth — this is the moth in its pupa stage, and that’s when they put the wires in and they put in the computer technology, so that when the moth actually emerges as a moth, it is already prewired. The wires are already in its body, and they can just hook it up to their technology, and now they’ve got these bugbots that they can send out for surveillance. They can put little cameras on them and perhaps someday deliver other kinds of ordnance to war zones.

12:35
It’s not just insects. This is the ratbot, or the robo-rat by Sanjiv Talwar at SUNY Downstate. Again, it’s got technology — it’s got electrodes going into its left and right hemispheres; it’s got a camera on top of its head. The scientists can make this creature go left, right. They have it running through mazes, controlling where it’s going. They’ve now created an organic robot. The graduate students in Sanjiv Talwar’s lab said, “Is this ethical? We’ve taken away the autonomy of this animal.” I’ll get back to that in a minute.

13:12
There’s also been work done with monkeys. This is Miguel Nicolelis of Duke. He took owl monkeys, wired them up so that a computer watched their brains while they moved, especially looking at the movement of their right arm. The computer learned what the monkey brain did to move its arm in various ways. They then hooked it up to a prosthetic arm, which you see here in the picture, put the arm in another room. Pretty soon, the computer learned, by reading the monkey’s brainwaves, to make that arm in the other room do whatever the monkey’s arm did. Then he put a video monitor in the monkey’s cage that showed the monkey this prosthetic arm, and the monkey got fascinated. The monkey recognized that whatever she did with her arm, this prosthetic arm would do. And eventually she was moving it and moving it, and eventually stopped moving her right arm and, staring at the screen, could move the prosthetic arm in the other room only with her brainwaves — which means that monkey became the first primate in the history of the world to have three independent functional arms.

14:18
And it’s not just technology that we’re putting into animals. This is Thomas DeMarse at the University of Florida. He took 20,000 and then 60,000 disaggregated rat neurons — so these are just individual neurons from rats — put them on a chip. They self-aggregated into a network, became an integrated chip. And he used that as the IT piece of a mechanism which ran a flight simulator. So now we have organic computer chips made out of living, self-aggregating neurons. Finally, Mussa-Ivaldi of Northwestern took a completely intact, independent lamprey eel brain. This is a brain from a lamprey eel. It is living — fully-intact brain in a nutrient medium with these electrodes going off to the sides, attached photosensitive sensors to the brain, put it into a cart — here’s the cart, the brain is sitting there in the middle — and using this brain as the sole processor for this cart, when you turn on a light and shine it at the cart, the cart moves toward the light; when you turn it off, it moves away. It’s photophilic. So now we have a complete living lamprey eel brain. Is it thinking lamprey eel thoughts, sitting there in its nutrient medium? I don’t know, but in fact it is a fully living brain that we have managed to keep alive to do our bidding.

15:54
So, we are now at the stage where we are creating creatures for our own purposes. This is a mouse created by Charles Vacanti of the University of Massachusetts. He altered this mouse so that it was genetically engineered to have skin that was less immunoreactive to human skin, put a polymer scaffolding of an ear under it and created an ear that could then be taken off the mouse and transplanted onto a human being. Genetic engineering coupled with polymer physiotechnology coupled with xenotransplantation. This is where we are in this process.

16:33
Finally, not that long ago, Craig Venter created the first artificial cell, where he took a cell, took a DNA synthesizer, which is a machine, created an artificial genome, put it in a different cell — the genome was not of the cell he put it in — and that cell then reproduced as the other cell. In other words, that was the first creature in the history of the world that had a computer as its parent — it did not have an organic parent. And so, asks The Economist: “The first artificial organism and its consequences.”

17:10
So you may have thought that the creation of life was going to happen in something that looked like that. (Laughter) But in fact, that’s not what Frankenstein’s lab looks like. This is what Frankenstein’s lab looks like. This is a DNA synthesizer, and here at the bottom are just bottles of A, T, C and G — the four chemicals that make up our DNA chain.

17:34
And so, we need to ask ourselves some questions. For the first time in the history of this planet, we are able to directly design organisms. We can manipulate the plasmas of life with unprecedented power, and it confers on us a responsibility. Is everything okay? Is it okay to manipulate and create whatever creatures we want? Do we have free rein to design animals? Do we get to go someday to Pets ‘R’ Us and say, “Look, I want a dog. I’d like it to have the head of a Dachshund, the body of a retriever, maybe some pink fur, and let’s make it glow in the dark”? Does industry get to create creatures who, in their milk, in their blood, and in their saliva and other bodily fluids, create the drugs and industrial molecules we want and then warehouse them as organic manufacturing machines? Do we get to create organic robots, where we remove the autonomy from these animals and turn them just into our playthings?

18:38
And then the final step of this, once we perfect these technologies in animals and we start using them in human beings, what are the ethical guidelines that we will use then? It’s already happening. It’s not science fiction. We are not only already using these things in animals, some of them we’re already beginning to use on our own bodies.

19:01
We are now taking control of our own evolution. We are directly designing the future of the species of this planet. It confers upon us an enormous responsibility that is not just the responsibility of the scientists and the ethicists who are thinking about it and writing about it now. It is the responsibility of everybody because it will determine what kind of planet and what kind of bodies we will have in the future.

19:26
Thanks.

19:28
(Applause)