Sam Harris: Can we build AI without losing control over it?

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

0:12

I’m going to talk about a failure of intuition that many of us suffer from. It’s really a failure to detect a certain kind of danger. I’m going to describe a scenario that I think is both terrifying and likely to occur, and that’s not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I’m talking about is kind of cool.

0:36

I’m going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it’s very difficult to see how they won’t destroy us or inspire us to destroy ourselves. And yet if you’re anything like me, you’ll find that it’s fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn’t think, “Interesting. I like this TED Talk.”

1:20

Famine isn’t fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I’m giving this talk.

1:41

It’s as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

2:19

(Laughter)

2:23

The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that’s ever happened in human history.

2:43

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an "intelligence explosion," that the process could get away from us.

3:09

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario. It’s not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

3:34

Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.

4:04

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

4:22

The first assumption is that intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines.

5:10

It’s crucial to realize that the rate of progress doesn’t matter, because any progress is enough to get us into the end zone. We don’t need Moore’s law to continue. We don’t need exponential progress. We just need to keep going.

5:24

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence — I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer’s and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there’s no brake to pull.

6:04

The third assumption is that we very likely don't stand on a peak of intelligence, or anywhere near it. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

6:22

Now, just consider the smartest person who has ever lived. On almost everyone’s shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there’s no question he’s one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.

6:56

(Laughter)

6:58

Sorry, a chicken.

6:59

(Laughter)

7:00

There’s no reason for me to make this talk more depressing than it needs to be.

7:04

(Laughter)

7:07

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can’t imagine, and exceed us in ways that we can’t imagine.

7:26

And it’s important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
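The 20,000-year figure is straightforward arithmetic on the stated premise. A minimal sketch of that calculation, assuming the million-fold speedup Harris posits (the factor itself is his premise, not a measured value):

```python
# Back-of-the-envelope check of the "one week = 20,000 years" claim,
# assuming the ~1,000,000x electronic-vs-biochemical speedup stated above.
SPEEDUP = 1_000_000       # posited ratio of electronic to biochemical speed
WEEKS_PER_YEAR = 52

real_weeks = 1                                  # one week of wall-clock time
subjective_weeks = real_weeks * SPEEDUP         # weeks of thought performed
subjective_years = subjective_weeks / WEEKS_PER_YEAR

print(f"{subjective_years:,.0f} years of intellectual work per real week")
# Output: 19,231 years of intellectual work per real week -- roughly 20,000
```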

8:07

The other thing that's worrying, frankly, is this: imagine the best-case scenario. Imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

8:48

So what would apes like ourselves do in this circumstance? Well, we’d be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

9:01

(Laughter)

9:05

Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order. It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

9:33

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
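The 500,000-year claim is the same arithmetic applied to a six-month lead. Again, a sketch under the same assumed million-fold speedup:

```python
# Six months of real time, experienced at the same assumed ~1,000,000x speedup.
SPEEDUP = 1_000_000
lead_years = 0.5                        # a six-month head start
subjective_years = lead_years * SPEEDUP
print(f"{subjective_years:,.0f} subjective years of lead")
# Output: 500,000 subjective years of lead -- matching the "at a minimum" figure
```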

10:05

Now, one of the most frightening things, in my view, at this moment, is the kind of thing that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."

10:37

(Laughter)

10:38

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

11:11

And if you haven’t noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we’ve had the iPhone. This is how long “The Simpsons” has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

11:37

The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." Would we just count down the months until the mothership lands? We would feel a little more urgency than we do.

12:03

Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, we are told, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

12:35

(Laughter)

12:37

The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don’t destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

13:09

Now, unfortunately, I don’t have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we’ll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you’re talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

13:44

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

14:19

Thank you very much.

14:20

(Applause)

Neuroscience and free will

Neuroethics also encompasses the ethical issues raised by neuroscience as it affects our understanding of the world and of ourselves in the world. For example, if everything we do is physically caused by our brains, which are in turn a product of our genes and our life experiences, how can we be held responsible for our actions? A crime in the United States requires a “guilty act” and a “guilty mind”.

As neuropsychiatric evaluations have become more commonly used in the criminal justice system, and neuroimaging technologies have given us a more direct way of viewing brain injuries, scholars have cautioned that this could lead to the inability to hold anyone criminally responsible for their actions. In this way, neuroimaging evidence could suggest that there is no free will and that each action a person takes is simply the product of past actions and biological impulses that are out of our control. The question of whether, and how, personal autonomy is compatible with the findings of neuroscience, and of neuroscientists' responsibility to society and the state, is a central one for neuroethics.

Additionally, in late 2013, U.S. President Barack Obama made recommendations to the Presidential Commission for the Study of Bioethical Issues as part of his $100 million Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. This spring, the discussion resumed in an interview and article sponsored by Agence France-Presse (AFP): "It is absolutely critical ... to integrate ethics from the get-go into neuroscience research," and not "for the first time after something has gone wrong," said Amy Gutmann, chair of the Bioethics Commission. But no consensus has been reached.

Miguel Faria, a Professor of Neurosurgery and an Associate Editor in Chief of Surgical Neurology International, who was not involved in the Commission's work, said, "any ethics approach must be based upon respect for the individual, as doctors pledge according to the Hippocratic Oath, which includes vows to be humble, respect privacy and do no harm; and pursuing a path based on population-based ethics is just as dangerous as having no medical ethics at all." Why the danger of population-based bioethics? Faria asserts, "it is centered on utilitarianism, monetary considerations, and the fiscal and political interests of the state, rather than committed to placing the interest of the individual patient or experimental subject above all other considerations."

For her part, Gutmann believes the next step is "to examine more deeply the ethical implications of neuroscience research and its effects on society."

Scientists’ Open Letter on Cryonics

Signatories encompass all disciplines relevant to cryonics, including Biology, Cryobiology, Neuroscience, Physical Science, Nanotechnology and Computing, Ethics and Theology.

The signatories, speaking for themselves, include leading scientists from institutions such as MIT, Harvard, NASA, and Cambridge University.

To whom it may concern,

Cryonics is a legitimate science-based endeavor that seeks to preserve human beings, especially the human brain, by the best technology available. Future technologies for resuscitation can be envisioned that involve molecular repair by nanomedicine, highly advanced computation, detailed control of cell growth, and tissue regeneration.

With a view toward these developments, there is a credible possibility that cryonics performed under the best conditions achievable today can preserve sufficient neurological information to permit eventual restoration of a person to full health.

The rights of people who choose cryonics are important, and should be respected.

Sincerely (69 Signatories)

[Signature date in brackets]

Gregory Benford, Ph.D.
(Physics, UC San Diego) Professor of Physics; University of California; Irvine, CA [3/24/04]

Alex Bokov, Ph.D.
(Physiology, University of Texas Health Science Center, San Antonio) [6/02/2014]

Alexander Bolonkin, Ph.D.
(Leningrad Polytechnic University) Professor, Moscow Aviation Institute; Senior Research Associate, NASA Dryden Flight Research Center; Lecturer, New Jersey Institute of Technology, Newark, NJ [3/24/04]

Nick Bostrom, Ph.D.
Research Fellow; University of Oxford; Oxford, United Kingdom [3/25/04]

Kevin Q. Brown, Ph.D.
(Computer Science, Carnegie-Mellon) Member of Technical Staff; Lucent Bell Laboratories (retired); Stanhope, NJ [3/23/04]

Professor Manfred Clynes, Ph.D.
Lombardi Cancer Center; Department of Oncology and Department of Physiology and Biophysics, Georgetown University; Washington, DC [3/28/04]

L. Stephen Coles, M.D., Ph.D.
(RPI, Columbia, Carnegie Mellon University) Director, Supercentenarian Research Foundation, Inglewood, California [10/7/06]

Jose Luis Cordeiro, MBA, PhD
The Millennium Project, Venezuelan Director; Founding Faculty, Singularity University, NASA Research Park, California; and Adjunct Professor, Moscow Institute of Physics and Technology, Russia [02/07/06]

Daniel Crevier, Ph.D.
(MIT) President, Ophthalmos Systems Inc., Longueuil, Qc, Canada; Professor of Electrical Engineering (ret.), McGill University & École de Technologie Supérieure, Montreal, Canada. [4/7/05]

Antonei B. Csoka, Ph.D.
Assistant Professor of Obstetrics, Gynecology and Reproductive Sciences, University of Pittsburgh School of Medicine Pittsburgh Development Center, Magee-Womens Research Institute [9/14/05]

Paulo H. R. de Castro, M.D., Ph.D.
Adjunct Professor, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil [01/29/16]

Aubrey D.N.J. de Grey, Ph.D.
Research Associate; University of Cambridge; Cambridge, United Kingdom [3/19/04]

Wesley M. Du Charme, Ph.D.
(Experimental Psychology, University of Michigan) author of Becoming Immortal, Rathdrum, Idaho [11/23/05]

João Pedro de Magalhães, Ph.D.
University of Namur; Namur, Belgium [3/22/04]

Thomas Donaldson, Ph.D.
Editor, Periastron; Founder, Institute for Neural Cryobiology; Canberra, Australia [3/22/04]

Christopher J. Dougherty, Ph.D.
Chief Scientist; Suspended Animation Inc; Boca Raton, FL [3/19/04]

K. Eric Drexler, Ph.D.
Chairman of Foresight Institute; Palo Alto, CA [3/19/04]

Lluís Estrada, M.D., Ph.D.
Former Head of the Clinical Neurophysiology Section (retired), University Hospital Joan XXIII of Tarragona, Spain [11/21/2015]

Robert A. Freitas Jr., J.D.
Author, Nanomedicine Vols. I & II; Research Fellow, Institute for Molecular Manufacturing, Palo Alto, CA [3/27/04]

Mark Galecki, Ph.D.
(Mathematics, Univ of Tennessee), M.S. (Computer Science, Rutgers Univ), Senior System Software Engineer, SBS Technologies [11/23/05]

D. B. Ghare, Ph.D.
Principal Research Scientist, Indian Institute of Science, Bangalore, India [5/24/04]

Ben Goertzel, Ph.D.
(Mathematics, Temple) Chief Scientific Officer, Biomind LLC; Columbia, MD [3/19/04]

Peter Gouras, M.D.
Professor of Ophthalmology, Columbia University; New York City, NY [3/19/04]

Rodolfo G. Goya, PhD
Senior Scientist, Institute for Biochemical Research (INIBIOLP), School of Medicine, National University of La Plata, La Plata, Argentina [11/22/2015]

Amara L. Graps, Ph.D.
Researcher, Astrophysics; Adjunct Professor of Astronomy; Institute of Physics of the Interplanetary Space; American University of Rome (Italy) [3/22/04]

Raphael Haftka, Ph.D.
(UC San Diego) Distinguished Prof. U. of Florida; Dept. of Mechanical & Aerospace Engineering, Gainesville, FL [3/22/04]

David A. Hall, M.D.
Dean of Education, World Health Medical School [11/23/05]

J. Storrs Hall, Ph.D.
Research Fellow, Institute for Molecular Manufacturing, Los Altos, CA
Fellow, Molecular Engineering Research Institute, Laporte, PA [3/26/04]

Robin Hanson, Ph.D.
(Social Science, Caltech) Assistant Professor (of Economics); George Mason University; Fairfax, VA [3/19/04]

Steven B. Harris, M.D.
President and Director of Research; Critical Care Research, Inc; Rancho Cucamonga, CA [3/19/04]

Michael D. Hartl, Ph.D.
(Physics, Harvard & Caltech) Visitor in Theoretical Astrophysics; California Institute of Technology; Pasadena, CA [3/19/04]

Kenneth J. Hayworth, Ph.D.
(Neuroscience, University of Southern California) Research Fellow; Harvard University; Cambridge, MA [10/22/10]

Henry R. Hirsch, Ph.D.
(Massachusetts Institute of Technology, 1960) Professor Emeritus, University of Kentucky College of Medicine [11/29/05]

Tad Hogg, Ph.D.
(Physics, Caltech and Stanford) research staff, HP Labs, Palo Alto, CA [10/10/05]

James J. Hughes, Ph.D.
Public Policy Studies, Trinity College; Hartford, CT [3/25/04]

James R. Hughes, M.D., Ph.D.
ER Director, Meadows Regional Medical Center; Director of Medical Research & Development, Hilton Head Longevity Center, Savannah, GA [4/05/04]

Ravin Jain, M.D.
(Medicine, Baylor) Assistant Clinical Professor of Neurology, UCLA School of Medicine, Los Angeles, CA [3/31/04]

Subhash C. Kak, Ph.D.
Department of Electrical & Computer Engineering, Louisiana State University, Baton Rouge, LA [3/24/04]

Professor Bart Kosko, Ph.D.
Electrical Engineering Department; University of Southern California [3/19/04]

Jaime Lagúnez, PhD
NGS and Systems biologist for INSP (National Institutes of Health of Mexico) and CONACYT (National Science and Technology Council). [11/21/2015]

James B. Lewis, Ph.D.
(Chemistry, Harvard) Senior Research Investigator (retired); Bristol-Myers Squibb Pharmaceutical Research Institute; Seattle, WA [3/19/04]

Marc S. Lewis, Ph.D.
(Clinical Psychology, University of Cincinnati) Associate Professor of Clinical Psychology, University of Texas at Austin [6/12/05]

Brad F. Mellon, STM, Ph.D.
Chair of the Ethics Committee; Frederick Mennonite Community; Frederick, PA [3/25/04]

Ralph C. Merkle, Ph.D.
Distinguished Professor of Computing; Georgia Tech College of Computing; Director, GTISC (GA Tech Information Security Center); VP, Technology Assessment, Foresight Institute [3/19/04]

Marvin Minsky, Ph.D.
(Mathematics, Harvard & Princeton) MIT Media Lab and MIT AI Lab; Toshiba Professor of Media Arts and Sciences; Professor of E.E. and C.S., M.I.T [3/19/04]

John Warwick Montgomery, Ph.D.
(Chicago) D.Théol. (Strasbourg), LL.D. (Cardiff) Professor Emeritus of Law and Humanities, University of Luton, England [3/28/04]

Max More, Ph.D.
Chairman, Extropy Institute, Austin, TX [3/31/04]

Steve Omohundro, Ph.D.
(Physics, University of California at Berkeley) Computer science professor at the University of Illinois at Champaign/Urbana [6/08/04]

Mike O’Neal, Ph.D.
(Computer Science) Assoc. Professor and Computer Science Program Chair; Louisiana Tech Univ.; Ruston, LA [3/19/04]

R. Michael Perry, Ph.D.
(Computer Science) Patient care and technical services, Alcor Life Extension Foundation [9/30/09]

Yuri Pichugin, Ph.D.
Former Senior Researcher, Institute for Problems of Cryobiology and Cryomedicine; Kharkov, Ukraine [3/19/04]

Peter H. Proctor, M.D., Ph.D.
Independent Physician & Pharmacologist; Houston, Texas [5/02/04]

Martine Rothblatt, Ph.D., J.D., M.B.A.
Responsible for launching several satellite communications companies including Sirius and WorldSpace. Founder and CEO of United Therapeutics. [5/02/04]

Klaus H. Sames, M.D.
University Medical Center Hamburg-Eppendorf, Center of Experimental Medicine (CEM) Institute of Anatomy II: Experimental Morphology; Hamburg, Germany [3/25/04]

Anders Sandberg, Ph.D.
(Computational Neuroscience) Royal Institute of Technology, Stockholm University; Stockholm, Sweden [3/19/04]

Sergey V. Sheleg, M.D., Ph.D.
Senior Research Scientist, Alcor Life Extension Foundation; Scottsdale, AZ [8/11/05]

Stanley Shostak, Ph.D.
Associate Professor of Biological Sciences; University of Pittsburgh; Pittsburgh, PA [3/19/04]

Rafal Smigrodzki, M.D., Ph.D.
Chief Clinical Officer, Gencia Company; Charlottesville VA [3/19/04]

David S. Stodolsky, Ph.D.
(Univ. of Cal., Irvine) Senior Scientist, Institute for Social Informatics [11/24/05]

Gregory Stock, Ph.D.
Director, Program on Medicine, Technology, and Society UCLA School of Public Health; Los Angeles, CA [3/24/04]

Charles Tandy, Ph.D.
Associate Professor of Humanities and Director Center for Interdisciplinary Philosophic Studies Fooyin University (Kaohsiung, Taiwan) [5/25/05]

Peter Toma, Ph.D.
President, Cosmolingua, Inc. Sioux Falls, South Dakota. Inventor and Founder of SYSTRAN. Director of International Relations, Alcor Life Extension Foundation. Residences in Argentina, Germany, New Zealand, Switzerland and USA [5/24/05]

Natasha Vita-More, PhD
Professor, University of Advancing Technology, Tempe, Arizona, USA. [11/22/2015]

Mark A. Voelker, Ph.D.
(Optical Sciences, U. Arizona) Director of Bioengineering; BioTime, Inc.; Berkeley, CA [3/19/04]

Roy L. Walford, M.D.
Professor of Pathology, emeritus; UCLA School of Medicine; Los Angeles, CA [3/19/04]

Mark Walker, Ph.D.
Research Associate, Philosophy; Trinity College; University of Toronto (Canada) [3/19/04]

Michael D. West, Ph.D.
President, Chairman & Chief Executive Officer; Advanced Cell Technology, Inc.; Worcester, MA [3/19/04]

Ronald F. White, Ph.D.
Professor of Philosophy; College of Mount St. Joseph; Cincinnati, OH [3/19/04]

James Wilsdon, Ph.D.
(Oxford University) Head of Strategy for Demos, an independent think-tank; London, England [5/04/04]

Brian Wowk, Ph.D.
Senior Scientist, 21st Century Medicine, Inc.; Rancho Cucamonga, CA [3/19/04]

Selected Journal Articles Supporting Cryonics:

First paper showing recovery of brain electrical activity after freezing to -20°C: Suda I, Kito K, Adachi C, in: Nature (1966, vol. 212), "Viability of long term frozen cat brain in vitro", pg. 268-270.

First paper to propose cryonics by neuropreservation: Martin G, in: Perspectives in Biology and Medicine (1971, vol. 14), "Brief proposal on immortality: an interim solution", pg. 339.

First paper showing recovery of a mammalian organ after cooling to -196°C (liquid nitrogen temperature) and subsequent transplantation: Hamilton R, Holst HI, Lehr HB, in: Journal of Surgical Research (1973, vol. 14), "Successful preservation of canine small intestine by freezing", pg. 527-531.

First paper showing partial recovery of brain electrical activity after 7 years of frozen storage: Suda I, Kito K, Adachi C, in: Brain Research (1974, vol. 70), "Bioelectric discharges of isolated cat brain after revival from years of frozen storage", pg. 527-531.

First paper suggesting that nanotechnology could reverse freezing injury: Drexler KE, in: Proceedings of the National Academy of Sciences (1981, vol. 78), "Molecular engineering: An approach to the development of general capabilities for molecular manipulation", pg. 5275-5278.

First paper showing that large organs can be cryopreserved without structural damage from ice: Fahy GM, MacFarlane DR, Angell CA, Meryman HT, in: Cryobiology (1984, vol. 21), "Vitrification as an approach to cryopreservation", pg. 407-426.

First paper showing that large mammals can be recovered after three hours of total circulatory arrest ("clinical death") at +3°C (37°F), supporting the reversibility of the hypothermic phase of cryonics: Haneda K, Thomas R, Sands MP, Breazeale DG, Dillard DH, in: Cryobiology (1986, vol. 23), "Whole body protection during three hours of total circulatory arrest: an experimental study", pg. 483-494.

First detailed discussion of the application of nanotechnology to reverse human cryopreservation: Merkle RC, in: Medical Hypotheses (1992, vol. 39), "The technical feasibility of cryonics", pg. 6-16.

First successful application of vitrification to a relatively large tissue of medical interest: Song YC, Khirabadi BS, Lightfoot F, Brockbank KG, Taylor MJ, in: Nature Biotechnology (2000, vol. 18), "Vitreous cryopreservation maintains the function of vascular grafts", pg. 296-299.

First report of the consistent survival of transplanted kidneys after cooling to and rewarming from -45°C: Fahy GM, Wowk B, Wu J, Phan J, Rasch C, Chang A, Zendejas E, in: Cryobiology (2004, vol. 48), "Cryopreservation of organs by vitrification: perspectives and recent advances", pg. 157-178.

First paper showing ice-free vitrification of whole brains, the reversibility of prolonged warm ischemic injury without subsequent neurological deficits, and setting forth the present scientific evidence in support of cryonics: Lemler J, Harris SB, Platt C, Huffman T, in: Annals of the New York Academy of Sciences (2004, vol. 1019), "The Arrest of Biological Time as a Bridge to Engineered Negligible Senescence", pg. 559-563.

First discussion of cryonics in a major medical journal: Whetstine L, Streat S, Darwin M, Crippen D, in: Critical Care (2005, vol. 9), "Pro/con ethics debate: When is dead really dead?", pg. 538-542.

First demonstration that both the viability and structure of complex neural networks can be well preserved by vitrification: Pichugin Y, Fahy GM, Morin R, in: Cryobiology (2006, vol. 52), "Cryopreservation of rat hippocampal slices by vitrification", pg. 228-240.

Rigorous demonstration of memory retention after cooling to +10°C (50°F): Alam HB, Bowyer MW, Koustova E, Gushchin V, Anderson D, Stanton K, Kreishman P, Cryer CM, Hancock T, Rhee P, in: Surgery (2002, vol. 132), "Learning and memory is preserved after induced asanguineous hyperkalemic hypothermic arrest in a swine model of traumatic exsanguination", pg. 278-288.

Review of scientific justifications of cryonics: Best BP, in: Rejuvenation Research (2008, vol. 11), "Scientific justification of cryonics practice", pg. 493-503.

First successful vitrification, transplantation, and long-term survival of a vital mammalian organ: Fahy GM, Wowk B, Pagotan R, Chang A, Phan J, Thomson B, Phan L, in: Organogenesis (2009, vol. 5), "Physical and biological aspects of renal vitrification", pg. 167-175.

First demonstration of memory retention in a cryopreserved and revived animal: Vita-More N, Barranco D, in: Rejuvenation Research (2015, vol. 18), "Persistence of Long-Term Memory in Vitrified and Revived Caenorhabditis elegans", pg. 458-463.

First demonstration of whole brain vitrification with perfect preservation of neural connectivity ("connectome") throughout the entire brain: McIntyre RM, Fahy GM, in: Cryobiology (2015, vol. 71), "Aldehyde-stabilized cryopreservation", pg. 448-458.

Note: Signing of this letter does not imply endorsement of any particular cryonics organization or its practices. Opinions on how much cerebral ischemic injury (delay after clinical death) and preservation injury may be reversible in the future vary widely among signatories.

Contact: contact@evidencebasedcryonics.org