Waking Up with Sam Harris #73 – Forbidden Knowledge (with Charles Murray)

The Waking Up Podcast #73 – Forbidden Knowledge:

A Conversation with Charles Murray

Twitter: @charlesmurray

In this episode of the Waking Up podcast, Sam Harris speaks with Charles Murray about the controversy over his book The Bell Curve, the validity and significance of IQ as a measure of intelligence, the problem of social stratification, the rise of Trump, universal basic income, and other topics.

Charles Murray is a political scientist and author. His 1994 New York Times bestseller, The Bell Curve (coauthored with the late Richard J. Herrnstein), sparked heated controversy for its analysis of the role of IQ in shaping America’s class structure. Murray’s other books include What It Means to Be a Libertarian, Human Accomplishment, and In Our Hands. His 2012 book, Coming Apart: The State of White America, 1960-2010, describes an unprecedented divergence in American classes over the last half century.

https://youtu.be/Y1lEPQYQk8s

Want to support the Waking Up podcast?

Please visit: samharris.org/support



(Photo via Mukashi Mukashi Photography)

Nick Bostrom: What happens when our computers get smarter than we are?

0:11

I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other things. Some people think that some of these things are sort of science fiction-y, far out there, crazy. But I like to say, okay, let’s look at the modern human condition. (Laughter) This is the normal way for things to be.

0:40

But if we think about it, we are actually recently arrived guests on this planet, the human species. Think about if Earth was created one year ago, the human species, then, would be 10 minutes old. The industrial era started two seconds ago. Another way to look at this is to think of world GDP over the last 10,000 years, I’ve actually taken the trouble to plot this for you in a graph. It looks like this. (Laughter) It’s a curious shape for a normal condition. I sure wouldn’t want to sit on it. (Laughter)
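Bostrom’s scaling can be checked with a quick calculation. A sketch, assuming rough, commonly cited figures (an Earth age of ~4.5 billion years, Homo sapiens at ~100,000 years, and an industrial era of ~250 years; these values are not from the talk):

```python
# Sanity-check the "Earth as one year" scaling from the talk.
# All input figures are rough, assumed values.
EARTH_AGE_YEARS = 4.5e9       # approximate age of the Earth
HUMAN_SPECIES_YEARS = 1e5     # rough age of Homo sapiens
INDUSTRIAL_ERA_YEARS = 250    # since roughly 1770

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def scaled_seconds(duration_years):
    """Map a real duration onto a single 'Earth year'."""
    return duration_years / EARTH_AGE_YEARS * SECONDS_PER_YEAR

human_minutes = scaled_seconds(HUMAN_SPECIES_YEARS) / 60
industrial_seconds = scaled_seconds(INDUSTRIAL_ERA_YEARS)

print(f"Human species: ~{human_minutes:.0f} minutes old")        # ~12 minutes
print(f"Industrial era: ~{industrial_seconds:.1f} seconds old")  # ~1.8 seconds
```

With these assumptions, the human species comes out at roughly ten minutes and the industrial era at roughly two seconds, matching the figures in the talk.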

1:18

Let’s ask ourselves, what is the cause of this current anomaly? Some people would say it’s technology. Now it’s true, technology has accumulated through human history, and right now, technology advances extremely rapidly — that is the proximate cause, that’s why we are currently so very productive. But I like to think back further to the ultimate cause.

1:44

Look at these two highly distinguished gentlemen: We have Kanzi — he’s mastered 200 lexical tokens, an incredible feat. And Ed Witten unleashed the second superstring revolution. If we look under the hood, this is what we find: basically the same thing. One is a little larger, it maybe also has a few tricks in the exact way it’s wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor. We know that complicated mechanisms take a long time to evolve. So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles.

2:31

So this then seems pretty obvious that everything we’ve achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind. And the corollary, of course, is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences.

2:55

Some of my colleagues think we’re on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence. Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You build up these expert systems, and they were kind of useful for some purposes, but they were very brittle, you couldn’t scale them. Basically, you got out only what you put in. But since then, a paradigm shift has taken place in the field of artificial intelligence.

3:29

Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data. Basically the same thing that the human infant does. The result is A.I. that is not limited to one domain — the same system can learn to translate between any pair of languages, or learn to play any computer game on the Atari console. Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has. The cortex still has some algorithmic tricks that we don’t yet know how to match in machines.

4:18

So the question is, how far are we from being able to match those tricks? A couple of years ago, we did a survey of some of the world’s leading A.I. experts, to see what they think, and one of the questions we asked was, “By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?” We defined human-level here as the ability to perform almost any job at least as well as an adult human, so real human-level, not just within some limited domain. And the median answer was 2040 or 2050, depending on precisely which group of experts we asked. Now, it could happen much, much later, or sooner, the truth is nobody really knows.

5:04

What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits in biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates in the gigahertz range. Neurons propagate slowly in axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light. There are also size limitations, like a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger. So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.
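The gaps Bostrom cites are easy to quantify. A back-of-the-envelope sketch (the 2 GHz clock rate is an assumed representative value, not from the talk):

```python
# Order-of-magnitude comparison of biological vs. machine substrates,
# using the figures from the talk plus one assumed clock rate.
neuron_hz = 200          # typical peak firing rate of a biological neuron
transistor_hz = 2e9      # an assumed present-day ~2 GHz clock

axon_speed = 100         # m/s, fast myelinated axon, upper bound
light_speed = 3e8        # m/s, signal-speed limit in a machine

clock_ratio = transistor_hz / neuron_hz   # ten million times faster
signal_ratio = light_speed / axon_speed   # three million times faster

print(f"Clock-rate gap:   ~{clock_ratio:.0e}x")
print(f"Signal-speed gap: ~{signal_ratio:.0e}x")
```

Both ratios land in the millions, which is why "far outside the limits in biological tissue" is not hyperbole.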

6:09

Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: AI starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn’t stop at Humanville Station. It’s likely, rather, to swoosh right by.

7:13

Now this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong — pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they’ll be doing so on digital timescales. What this means is basically a telescoping of the future. Think of all the crazy technologies that you could have imagined maybe humans could have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers, all kinds of science fiction-y stuff that’s nevertheless consistent with the laws of physics. All of this superintelligence could develop, and possibly quite rapidly.

8:23

Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I. Now a good question is, what are those preferences? Here it gets trickier. To make any headway with this, we must first of all avoid anthropomorphizing. And this is ironic because every newspaper article about the future of A.I. has a picture of this: So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios.

9:08

We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It’s extremely good at using available means to achieve a state in which its goal is realized. This means that there is no necessary connection between being highly intelligent in this sense, and having an objective that we humans would find worthwhile or meaningful.

9:38

Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins. Another example, suppose we give A.I. the goal to solve a difficult mathematical problem. When the A.I. becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity. And notice that this gives the A.I.s an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats, we could prevent the mathematical problem from being solved.

10:28

Of course, conceivably things won’t go wrong in these particular ways; these are cartoon examples. But the general point here is important: if you create a really powerful optimization process to maximize for objective x, you better make sure that your definition of x incorporates everything you care about. This is a lesson that’s also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.

11:15

Now you might say, if a computer starts sticking electrodes into people’s faces, we’d just shut it off. A, this is not necessarily so easy to do if we’ve grown dependent on the system — like, where is the off switch to the Internet? B, why haven’t the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.

12:03

And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn’t find a bug? Given that merely human hackers find bugs all the time, I’d say, probably not very confident. So we disconnect the ethernet cable to create an air gap, but again, merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I’m sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.

12:45

More creative scenarios are also possible, like if you’re the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code — Bam! — the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will get out.

13:26

I believe that the answer here is to figure out how to create superintelligent A.I. such that even if — when — it escapes, it is still safe because it is fundamentally on our side because it shares our values. I see no way around this difficult problem.

13:43

Now, I’m actually fairly optimistic that this problem can be solved. We wouldn’t have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python, that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.

14:23

This can happen, and the outcome could be very good for humanity. But it doesn’t happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar context, like where we can easily check how the A.I. behaves, but also in all novel contexts that the A.I. might encounter in the indefinite future.

14:53

And there are also some esoteric issues that would need to be sorted out: the exact details of its decision theory, how to deal with logical uncertainty and so forth. So the technical problems that need to be solved to make this work look quite difficult — not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.

15:36

So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed. Now it might be that we cannot solve the entire control problem in advance because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well.

16:05

This to me looks like a thing that is well worth doing and I can imagine that if things turn out okay, that people a million years from now look back at this century and it might well be that they say that the one thing we did that really mattered was to get this thing right.

16:23

Thank you.

16:25

(Applause)

Sam Harris: Can we build AI without losing control over it?

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.


0:12

I’m going to talk about a failure of intuition that many of us suffer from. It’s really a failure to detect a certain kind of danger. I’m going to describe a scenario that I think is both terrifying and likely to occur, and that’s not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I’m talking about is kind of cool.

0:36

I’m going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it’s very difficult to see how they won’t destroy us or inspire us to destroy ourselves. And yet if you’re anything like me, you’ll find that it’s fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn’t think, “Interesting. I like this TED Talk.”

1:20

Famine isn’t fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I’m giving this talk.

1:41

It’s as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

2:19

(Laughter)

2:23

The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that’s ever happened in human history.

2:43

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an “intelligence explosion,” that the process could get away from us.

3:09

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario. It’s not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

3:34

Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.

4:04

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

4:22

Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called “general intelligence,” an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there’s just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

5:10

It’s crucial to realize that the rate of progress doesn’t matter, because any progress is enough to get us into the end zone. We don’t need Moore’s law to continue. We don’t need exponential progress. We just need to keep going.

5:24

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence — I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer’s and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there’s no brake to pull.

6:04

Finally, we don’t stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

6:22

Now, just consider the smartest person who has ever lived. On almost everyone’s shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there’s no question he’s one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.

6:56

(Laughter)

6:58

Sorry, a chicken.

6:59

(Laughter)

7:00

There’s no reason for me to make this talk more depressing than it needs to be.

7:04

(Laughter)

7:07

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can’t imagine, and exceed us in ways that we can’t imagine.

7:26

And it’s important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
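Harris’s "20,000 years per week" figure follows directly from the speedup. A minimal check of the arithmetic:

```python
# Check the talk's arithmetic: a mind running a million times faster
# than its builders does how many subjective years of work per week?
speedup = 1_000_000
weeks_of_wall_clock_time = 1
WEEKS_PER_YEAR = 365.25 / 7   # about 52.2

subjective_years = weeks_of_wall_clock_time * speedup / WEEKS_PER_YEAR
print(f"~{subjective_years:,.0f} years of human-level work per week")
```

One million weeks is a little over 19,000 years, so "20,000 years of human-level intellectual work, week after week" is the right order of magnitude.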

8:07

The other thing that’s worrying, frankly, is that, imagine the best case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It’s as though we’ve been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we’re talking about the end of human drudgery. We’re also talking about the end of most intellectual work.

8:48

So what would apes like ourselves do in this circumstance? Well, we’d be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

9:01

(Laughter)

9:05

Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

9:33

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

10:05

Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we’re told not to worry is time. This is all a long way off, don’t you know. This is probably 50 or 100 years away. One researcher has said, “Worrying about AI safety is like worrying about overpopulation on Mars.” This is the Silicon Valley version of “don’t worry your pretty little head about it.”

10:37

(Laughter)

10:38

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

11:11

And if you haven’t noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we’ve had the iPhone. This is how long “The Simpsons” has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

11:37

The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: “People of Earth, we will arrive on your planet in 50 years. Get ready.” And now we’re just counting down the months until the mothership lands? We would feel a little more urgency than we do.

12:03

Another reason we’re told not to worry is that these machines can’t help but share our values because they will be literally extensions of ourselves. They’ll be grafted onto our brains, and we’ll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one’s safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

12:35

(Laughter)

12:37

The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don’t destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

13:09

Now, unfortunately, I don’t have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we’ll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you’re talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

13:44

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with.

14:19

Thank you very much.

14:20

(Applause)

Dyson Spheres

XIX: The Dyson Sun
By Anders Sandberg

Kepler struggled for many years with his model of the solar system. Since there were six planets orbiting the sun, and five Platonic polyhedra, should their distances not be related? He attempted inscribing the polyhedra in spheres marking the circular orbits, placing the solar system in perfect mathematical harmony. It made wonderful hermetic sense – and it never worked.

When Kepler finally discarded his cherished model and looked at what the data had been truly saying, he discovered something else entirely. Three simple laws:

The orbits of the planets are ellipses, with the Sun at one focus of the ellipse.

The line joining the planet to the Sun sweeps out equal areas in equal times as the planet travels around the ellipse.

The ratio of the squares of the orbital periods for two planets is equal to the ratio of the cubes of their semimajor axes.
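The third law can be checked against rounded textbook values (periods in years, semimajor axes in astronomical units) — a minimal sketch, with T²/a³ coming out essentially constant:

```python
# Kepler's third law: T^2 / a^3 is the same for every planet
# (T in years, a in AU, so the constant is ~1 in these units).
planets = {
    "Mercury": (0.241, 0.387),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.862, 5.203),
}

ratios = {name: T**2 / a**3 for name, (T, a) in planets.items()}
for name, r in ratios.items():
    print(f"{name:8s} T^2/a^3 = {r:.3f}")  # all ~1.000
```

In Earth-based units the constant is 1 by construction, which is exactly the "new harmony" the data had been saying all along.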

Suddenly the data made sense. The solar system did contain harmonies, but new harmonies never expected by any Greek philosophers. The way towards gravitation, central forces and space was open.

Will our descendants make Kepler’s dream true? Most of the solar system is a waste of matter, just lying there and dissipating the sacred rays of sunlight into the void. But what if that matter was rearranged to collect the light, to make it available for life, work, thought and growth?

Freeman Dyson’s idea (based on a similar concept in Stapledon’s Star Maker) was to englobe the sun in a shell of orbiting habitats and solar collectors: a Dyson sphere. It would have 600 million times the surface area of the Earth. He suggested it as the logical result of current exponential growth in energy and resource use.
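The 600-million figure is easy to sanity-check. A sketch, assuming the shell sits at roughly 1 AU (the exact radius Dyson assumed may differ, which would account for the gap between this estimate and 600 million):

```python
# Ratio of the surface area of a sphere at 1 AU to Earth's surface area.
# Radii in metres; the 4*pi factors cancel, so only radii matter.
AU = 1.496e11        # Earth-Sun distance
R_EARTH = 6.371e6    # mean radius of the Earth

area_ratio = (AU / R_EARTH) ** 2
print(f"~{area_ratio:.2e} Earth surface areas")  # ~5.5e8
```

A 1 AU shell gives roughly 550 million Earth surfaces — the same order of magnitude as the essay’s 600 million.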

Robert Bradbury went further in suggesting the Matrioshka Brain: making use of all available matter to process information – thinking, feeling, being – in a nested structure where each kind of material available would be used for energy collection and dissipation, information processing and storage. Planets would be disassembled by self-replicating machines in a matter of years, the stellar atmosphere tamed and the entire system turned into something not unlike the Kepler vision of interlocking spheres and polyhedra. Optimization and forethought would shape this structure, by necessity introducing mathematical and physical harmonies.

We recoil at the idea of disassembling planets (especially the Earth). But maybe we should burn our cradle to light the library of mind.

Abandoning outmoded ways of thinking. Central forces. Energy. The birth of something new through creative destruction. Maximum productiveness. Perfect unity of aesthetics and efficiency.


About Author

Anders Sandberg (born 11 July 1972) is a researcher, science debater, futurist, transhumanist and author. He holds a Ph.D. in computational neuroscience from Stockholm University, and is currently a James Martin Research Fellow at the Future of Humanity Institute at Oxford University.

http://www.aleph.se/andart/

Define: Cybernetics

Cybernetics is a transdisciplinary approach for exploring regulatory systems – their structures, constraints, and possibilities. Norbert Wiener defined cybernetics in 1948 as “the scientific study of control and communication in the animal and the machine.” In the 21st century, the term is often used in a rather loose way to imply “control of any system using technology.”

Cybernetics is applicable when a system being analyzed incorporates a closed signaling loop – originally referred to as a “circular causal” relationship – that is, where action by the system generates some change in its environment and that change is reflected in the system in some manner (feedback) that triggers a system change. Cybernetics is relevant to, for example, mechanical, physical, biological, cognitive, and social systems. The essential goal of the broad field of cybernetics is to understand and define the functions and processes of systems that have goals and that participate in circular, causal chains that move from action to sensing to comparison with desired goal, and again to action. Its focus is how anything (digital, mechanical or biological) processes information, reacts to information, and changes or can be changed to better accomplish the first two tasks. Cybernetics includes the study of feedback, black boxes and derived concepts such as communication and control in living organisms, machines, and organizations including self-organization.

Concepts studied by cyberneticists include, but are not limited to learning, cognition, adaptation, social control, emergence, convergence, communication, efficiency, efficacy, and connectivity. In cybernetics, these concepts (otherwise already objects of study in other disciplines such as biology and engineering) are abstracted from the context of the specific organism or device.

The new bionics


Hugh Herr is building the next generation of bionic limbs, robotic prosthetics inspired by nature’s own designs. Herr lost both legs in a climbing accident 30 years ago; now, as the head of the MIT Media Lab’s Biomechatronics group, he shows his incredible technology in a talk that’s both technical and deeply personal — with the help of ballroom dancer Adrianne Haslet-Davis, who lost her left leg in the 2013 Boston Marathon bombing, and performs again for the first time on the TED stage.


Visit the website at: bionxmed.com


The new bionics that let us run, climb and dance

Hugh Herr

Transcript

0:12
Looking deeply inside nature, through the magnifying glass of science, designers extract principles, processes and materials that are forming the very basis of design methodology. From synthetic constructs that resemble biological materials, to computational methods that emulate neural processes, nature is driving design. Design is also driving nature. In realms of genetics, regenerative medicine and synthetic biology, designers are growing novel technologies, not foreseen or anticipated by nature.
0:52
Bionics explores the interplay between biology and design. As you can see, my legs are bionic. Today, I will tell human stories of bionic integration; how electromechanics attached to the body, and implanted inside the body are beginning to bridge the gap between disability and ability, between human limitation and human potential.
1:23
Bionics has defined my physicality. In 1982, both of my legs were amputated due to tissue damage from frostbite, incurred during a mountain-climbing accident. At that time, I didn’t view my body as broken. I reasoned that a human being can never be “broken.” Technology is broken. Technology is inadequate. This simple but powerful idea was a call to arms, to advance technology for the elimination of my own disability, and ultimately, the disability of others. I began by developing specialized limbs that allowed me to return to the vertical world of rock and ice climbing. I quickly realized that the artificial part of my body is malleable; able to take on any form, any function — a blank slate for which to create, perhaps, structures that could extend beyond biological capability. I made my height adjustable. I could be as short as five feet or as tall as I’d like.
2:32
(Laughter)
2:34
So when I was feeling bad about myself, insecure, I would jack my height up.
2:40
(Laughter)
2:42
But when I was feeling confident and suave, I would knock my height down a notch, just to give the competition a chance.
2:48
(Laughter)
2:50
(Applause)
2:52
Narrow-edged feet allowed me to climb steep rock fissures, where the human foot cannot penetrate, and spiked feet enabled me to climb vertical ice walls, without ever experiencing muscle leg fatigue. Through technological innovation, I returned to my sport, stronger and better. Technology had eliminated my disability, and allowed me a new climbing prowess. As a young man, I imagined a future world where technology so advanced could rid the world of disability, a world in which neural implants would allow the visually impaired to see. A world in which the paralyzed could walk, via body exoskeletons.
3:31
Sadly, because of deficiencies in technology, disability is rampant in the world. This gentleman is missing three limbs. As a testimony to current technology, he is out of the wheelchair, but we need to do a better job in bionics, to allow, one day, full rehabilitation for a person with this level of injury. At the MIT Media Lab, we’ve established the Center for Extreme Bionics. The mission of the center is to put forth fundamental science and technological capability that will allow the biomechatronic and regenerative repair of humans, across a broad range of brain and body disabilities.
4:12
Today, I’m going to tell you how my legs function, how they work, as a case in point for this center. Now, I made sure to shave my legs last night, because I knew I’d be showing them off.
4:23
(Laughter)
4:25
Bionics entails the engineering of extreme interfaces. There’s three extreme interfaces in my bionic limbs: mechanical, how my limbs are attached to my biological body; dynamic, how they move like flesh and bone; and electrical, how they communicate with my nervous system.
4:41
I’ll begin with mechanical interface. In the area of design, we still do not understand how to attach devices to the body mechanically. It’s extraordinary to me that in this day and age, one of the most mature, oldest technologies in the human timeline, the shoe, still gives us blisters. How can this be? We have no idea how to attach things to our bodies. This is the beautifully lyrical design work of Professor Neri Oxman at the MIT Media Lab, showing spatially varying exoskeletal impedances, shown here by color variation in this 3D-printed model. Imagine a future where clothing is stiff and soft where you need it, when you need it, for optimal support and flexibility, without ever causing discomfort.
5:31
My bionic limbs are attached to my biological body via synthetic skins with stiffness variations, that mirror my underlying tissue biomechanics. To achieve that mirroring, we first developed a mathematical model of my biological limb. To that end, we used imaging tools such as MRI, to look inside my body, to figure out the geometries and locations of various tissues. We also took robotic tools — here’s a 14-actuator circle that goes around the biological limb. The actuators come in, find the surface of the limb, measure its unloaded shape, and then they push on the tissues to measure tissue compliances at each anatomical point.
6:14
We combine these imaging and robotic data to build a mathematical description of my biological limb, shown on the left. You see a bunch of points, or nodes? At each node, there’s a color that represents tissue compliance. We then do a mathematical transformation to the design of the synthetic skin, shown on the right. And we’ve discovered optimality is: where the body is stiff, the synthetic skin should be soft, where the body is soft, the synthetic skin is stiff, and this mirroring occurs across all tissue compliances. With this framework, we’ve produced bionic limbs that are the most comfortable limbs I’ve ever worn. Clearly, in the future, our clothing, our shoes, our braces, our prostheses, will no longer be designed and manufactured using artisan strategies, but rather, data-driven quantitative frameworks. In that future, our shoes will no longer give us blisters.
7:08
We’re also embedding sensing and smart materials into the synthetic skins. This is a material developed by SRI International, California. Under electrostatic effect, it changes stiffness. So under zero voltage, the material is compliant, it’s floppy like paper. Then the button’s pushed, a voltage is applied, and it becomes stiff as a board.
7:29
(Tapping sounds)
7:32
We embed this material into the synthetic skin that attaches my bionic limb to my biological body. When I walk here, it’s no voltage. My interface is soft and compliant. The button’s pushed, voltage is applied, and it stiffens, offering me a greater maneuverability over the bionic limb.
7:49
We’re also building exoskeletons. This exoskeleton becomes stiff and soft in just the right areas of the running cycle, to protect the biological joints from high impacts and degradation. In the future, we’ll all be wearing exoskeletons in common activities, such as running.
8:07
Next, dynamic interface. How do my bionic limbs move like flesh and bone? At my MIT lab, we study how humans with normal physiologies stand, walk and run. What are the muscles doing, and how are they controlled by the spinal cord? This basic science motivates what we build. We’re building bionic ankles, knees and hips. We’re building body parts from the ground up. The bionic limbs that I’m wearing are called BiOMs. They’ve been fitted to nearly 1,000 patients, 400 of which have been wounded U.S. soldiers.
8:41
How does it work?
8:42
At heel strike, under computer control, the system controls stiffness, to attenuate the shock of the limb hitting the ground. Then at mid-stance, the bionic limb outputs high torques and powers to lift the person into the walking stride, comparable to how muscles work in the calf region. This bionic propulsion is very important clinically to patients. So on the left, you see the bionic device worn by a lady, on the right, a passive device worn by the same lady, that fails to emulate normal muscle function, enabling her to do something everyone should be able to do: go up and down their steps at home. Bionics also allows for extraordinary athletic feats. Here’s a gentleman running up a rocky pathway. This is Steve Martin — not the comedian — who lost his legs in a bomb blast in Afghanistan.
9:33
We’re also building exoskeletal structures using these same principles, that wrap around the biological limb. This gentleman does not have any leg condition, any disability. He has a normal physiology, so these exoskeletons are applying muscle-like torques and powers, so that his own muscles need not apply those torques and powers. This is the first exoskeleton in history that actually augments human walking. It significantly reduces metabolic cost. It’s so profound in its augmentation, that when a normal, healthy person wears the device for 40 minutes and then takes it off, their own biological legs feel ridiculously heavy and awkward. We’re beginning the age in which machines attached to our bodies will make us stronger and faster and more efficient.
10:26
Moving on to electrical interface: How do my bionic limbs communicate with my nervous system? Across my residual limb are electrodes that measure the electrical pulse of my muscles. That’s communicated to the bionic limb, so when I think about moving my phantom limb, the robot tracks those movement desires. This diagram shows fundamentally how the bionic limb is controlled. So we model the missing biological limb, and we’ve discovered what reflexes occurred, how the reflexes of the spinal cord are controlling the muscles. And that capability is embedded in the chips of the bionic limb. What we’ve done, then, is we modulate the sensitivity of the reflex, the modeled spinal reflex, with the neural signal, so when I relax my muscles in my residual limb, I get very little torque and power, but the more I fire my muscles, the more torque I get, and I can even run. And that was the first demonstration of a running gait under neural command. Feels great.
11:29
(Applause)
11:35
We want to go a step further. We want to actually close the loop between the human and the bionic external limb. We’re doing experiments where we’re growing nerves, transected nerves, through channels, or micro-channel arrays. On the other side of the channel, the nerve then attaches to cells, skin cells and muscle cells. In the motor channels, we can sense how the person wishes to move. That can be sent out wirelessly to the bionic limb, then [sensory information] on the bionic limb can be converted to stimulations in adjacent channels, sensory channels. So when this is fully developed and for human use, persons like myself will not only have synthetic limbs that move like flesh and bone, but actually feel like flesh and bone.
12:25
This video shows Lisa Mallette, shortly after being fitted with two bionic limbs. Indeed, bionics is making a profound difference in people’s lives.
12:33
(Video) Lisa Mallette: Oh my God. LM: Oh my God, I can’t believe it!
12:41
(Video) (Laughter)
12:43
LM: It’s just like I’ve got a real leg!
12:48
Woman: Now, don’t start running.
12:49
Man: Now turn around, and do the same thing walking up, but get on your heel to toe, like you would normally just walk on level ground. Try to walk right up the hill.
13:00
LM: Oh my God.
13:03
Man: Is it pushing you up?
13:04
LM: Yes! I’m not even — I can’t even describe it.
13:09
Man: It’s pushing you right up.
13:11
Hugh Herr: Next week, I’m visiting the Center —
13:14
Thank you. Thank you.
13:15
(Applause)
13:18
Thank you.
13:20
Next week I’m visiting the Center for Medicare and Medicaid Services, and I’m going to try to convince CMS to grant appropriate code language and pricing, so this technology can be made available to the patients that need it.
13:33
(Applause)
13:34
Thank you.
13:35
(Applause)
13:38
It’s not well appreciated, but over half of the world’s population suffers from some form of cognitive, emotional, sensory or motor condition, and because of poor technology, too often, conditions result in disability and a poorer quality of life. Basic levels of physiological function should be a part of our human rights. Every person should have the right to live life without disability if they so choose — the right to live life without severe depression; the right to see a loved one, in the case of seeing-impaired; or the right to walk or to dance, in the case of limb paralysis or limb amputation. As a society, we can achieve these human rights, if we accept the proposition that humans are not disabled. A person can never be broken. Our built environment, our technologies, are broken and disabled. We the people need not accept our limitations, but can transcend disability through technological innovation. Indeed, through fundamental advances in bionics in this century, we will set the technological foundation for an enhanced human experience, and we will end disability.
14:52
I’d like to finish up with one more story, a beautiful story. The story of Adrianne Haslet-Davis. Adrianne lost her left leg in the Boston terrorist attack. I met Adrianne when this photo was taken, at Spaulding Rehabilitation Hospital. Adrianne is a dancer, a ballroom dancer.
15:10
Adrianne breathes and lives dance. It is her expression. It is her art form. Naturally, when she lost her limb in the Boston terrorist attack, she wanted to return to the dance floor.
15:22
After meeting her and driving home in my car, I thought, I’m an MIT professor. I have resources. Let’s build her a bionic limb, to enable her to go back to her life of dance. I brought in MIT scientists with expertise in prosthetics, robotics, machine learning and biomechanics, and over a 200-day research period, we studied dance. We brought in dancers with biological limbs, and we studied how they move, what forces they apply on the dance floor, and we took those data, and we put forth fundamental principles of dance, reflexive dance capability, and we embedded that intelligence into the bionic limb. Bionics is not only about making people stronger and faster. Our expression, our humanity can be embedded into electromechanics.
16:14
It was 3.5 seconds between the bomb blasts in the Boston terrorist attack. In 3.5 seconds, the criminals and cowards took Adrianne off the dance floor. In 200 days, we put her back. We will not be intimidated, brought down, diminished, conquered or stopped by acts of violence.
16:36
(Applause)
16:44
Ladies and gentlemen, please allow me to introduce Adrianne Haslet-Davis, her first performance since the attack. She’s dancing with Christian Lightner.
16:53
(Applause)
17:04
(Music: “Ring My Bell” performed by Enrique Iglesias)
17:50
(Applause)
18:21
Ladies and gentlemen, members of the research team: Elliott Rouse and Nathan Villagaray-Carski.
18:28
Elliott and Nathan.
18:31
(Applause)


What is a resistor?

A resistor limits the electrical current that flows through a circuit. Resistance is the restriction of current. In a resistor, the energy of the electrons passing through it is converted into heat and/or light. For example, in an incandescent light bulb the tungsten filament acts as a resistor, converting electrical energy into heat and light.

Series and parallel

Resistors can be linked in various combinations to help make a circuit:

  1. Series – Where the resistors are linked one after another.
  2. Parallel – Where the resistors are linked side by side, sharing the same pair of connection points.

There are many different types of resistors. Resistors have different ratings that tell electricians how much power they can handle before they break, and how closely their actual resistance matches the marked value (tolerance). Connecting two resistors in series results in a higher resistance than connecting the same two resistors in parallel. To keep a resistor from exceeding its power rating, resistors can be placed in parallel, which keeps the total resistance lower and spreads the current across them. Nowadays the electrical industry in many cases uses so-called surface-mount resistors, which can be very small.
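The series and parallel rules above can be sketched in a few lines of Python; the resistor values are arbitrary examples.

```python
# Combining two resistors (values in ohms).
def series(r1, r2):
    # In series, the same current passes through both: resistances add.
    return r1 + r2

def parallel(r1, r2):
    # In parallel, the conductances (1/R) add, so the total is always
    # lower than either individual resistor.
    return 1.0 / (1.0 / r1 + 1.0 / r2)

r1, r2 = 100.0, 220.0
print(series(r1, r2))    # 320.0 ohms -- higher than either resistor
print(parallel(r1, r2))  # 68.75 ohms -- lower than either resistor
```

This is the arithmetic behind the claim in the text: the same pair of resistors gives 320 Ω in series but only 68.75 Ω in parallel.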

What is an inductor?

An inductor is an electrical device used in circuits because it stores energy in a magnetic field.

An inductor is usually made from a coil of conducting material, like copper wire, wrapped around a core of either air or a magnetic metal. Using a more magnetic material as the core concentrates the magnetic field inside the coil, increasing its inductance. Small inductors can also be put onto integrated circuits using the same processes that are used to make transistors; aluminum is usually used as the conducting material in this case.
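The effect of the core material can be sketched with the standard long-solenoid approximation, L = μ₀μᵣN²A/l. The coil dimensions and the relative permeability of 1000 for the iron core are illustrative assumptions.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, in henries/metre

def solenoid_inductance(turns, area_m2, length_m, mu_r=1.0):
    """Ideal long-solenoid approximation: L = mu0 * mu_r * N**2 * A / l."""
    return MU_0 * mu_r * turns**2 * area_m2 / length_m

# Example coil: 100 turns, 1 cm^2 cross-section, 5 cm long.
air = solenoid_inductance(100, 1e-4, 0.05)               # air core
iron = solenoid_inductance(100, 1e-4, 0.05, mu_r=1000)   # magnetic core
print(f"air core:  {air * 1e6:.1f} uH")   # roughly 25.1 uH
print(f"iron core: {iron * 1e3:.1f} mH")  # roughly 25.1 mH
```

Swapping the air core for a magnetic one multiplies the inductance by the core's relative permeability — here a factor of a thousand from the same winding.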

How inductors are used

Inductors are used often in analog circuits. Two or more inductors that have coupled magnetic flux make a transformer. Transformers are used in every power grid around the world.
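For an ideal transformer, the coupled flux makes the voltage ratio equal the turns ratio, Vs/Vp = Ns/Np. A minimal sketch, with example turn counts chosen for illustration:

```python
# Ideal transformer: the secondary voltage scales with the turns ratio.
def secondary_voltage(v_primary, n_primary, n_secondary):
    return v_primary * n_secondary / n_primary

# Stepping a 240 V supply down to 12 V with a 20:1 turns ratio.
print(secondary_voltage(240.0, 2000, 100))  # 12.0
```

Grid transformers use exactly this relationship, stepping voltage up for long-distance transmission and back down for homes and equipment.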

Inductors are also used in electrical transmission systems, where they are used to step down voltage or to limit fault currents. Because inductors are heavier and bulkier than other electrical components, they are being used in electrical equipment less often.

Inductors with an iron core are used for audio equipment, power conditioning, inverter systems, rapid transit and industrial power supplies.

Transhumanism is an international and intellectual movement

Transhumanism (abbreviated as H+ or h+) is an international and intellectual movement that aims to transform the human condition by developing and creating widely available sophisticated technologies to greatly enhance human intellectual, physical, and psychological capacities. Transhumanist thinkers study the potential benefits and dangers of emerging technologies that could overcome fundamental human limitations, as well as the ethics of using such technologies. The most common thesis is that human beings may eventually be able to transform themselves into different beings with abilities so greatly expanded from the natural condition as to merit the label of posthuman beings.

The contemporary meaning of the term transhumanism was foreshadowed by one of the first professors of futurology, FM-2030, who taught “new concepts of the human” at The New School in the 1960s, when he began to identify people who adopt technologies, lifestyles and worldviews “transitional” to posthumanity as “transhuman”.

This hypothesis would lay the intellectual groundwork for the British philosopher Max More to begin articulating the principles of transhumanism as a futurist philosophy in 1990 and organizing in California an intelligentsia that has since grown into the worldwide transhumanist movement.

The year 1990 is seen as a “fundamental shift” in human existence by the transhuman community, as the first gene therapy trial, the first designer babies, as well as the mind-augmenting World Wide Web all emerged in that year. In many ways, one could argue the conditions that will eventually lead to the Singularity were set in place by these events in 1990.

Influenced by seminal works of science fiction, the transhumanist vision of a transformed future humanity has attracted many supporters and detractors from a wide range of perspectives, including philosophy and religion. Transhumanism has been characterized by one critic, Francis Fukuyama, as among the world’s most dangerous ideas, to which Ronald Bailey countered that it is rather the “movement that epitomizes the most daring, courageous, imaginative and idealistic aspirations of humanity”.

Understanding Cryonics: Part 1 – Good Science? Or Science Fiction?

In this, the first in a series of feature articles I will be publishing on the topic of cryonics, we will look at the very basics of the technology and dispel many of the common myths regarding the ‘fantasy’ of cryonic suspension and re-animation.

First, to dispel the most commonly perceived myth: no, Walt Disney was NOT cryonically suspended. In fact, his body (including his head – more on the significance of this later) was cremated and his ashes laid to rest at the famous Forest Lawn Memorial Park in Glendale, California; very close to the final resting place of pop icon Michael Jackson.

Alcor Life Extension Foundation
Alcor Life Extension Foundation public relations manager Paula Lemler looks over storage units which contain liquid nitrogen in Scottsdale, Ariz., Wednesday, July 30, 2003. (AP Photo/Tom Hood)

The most notable person to be cryonically preserved is baseball legend Ted Williams. After his death in 2002, his head was surgically removed and preserved using one of the most fascinating cryotechnologies, called neurosuspension, which we will begin to explore now.

There are two basic types of cryonic suspension: full-body suspension, and suspension of only the subject’s head, commonly referred to as neurosuspension. The goal of full-body suspension is typically to revive the subject at a future time, when the affliction that caused their cardiac arrest can be cured and it is reasonable to expect that they could regain a seemingly normal life.

The goal of neuro-suspension is to preserve only the brain with the hope that once human cloning technology is perfected and commonplace in our society, the subject’s DNA can be used to clone a new body and that the memories, emotions, and personality of the suspended brain can be placed into the healthy clone.

Sound far-fetched? Maybe. But before we jump to conclusions, we should at least take a much closer look at the science and technology behind cryonics so that we can form an informed and educated opinion on the subject, right? After all, the science is very real, and the technology to suspend people does exist and is, in fact, in practice all over the world.

Are those people signing up to be frozen (or worse, decapitated then frozen) all crazy? Are the doctors and scientists that spend and dedicate their lives to this science nuts too?

In order to properly examine the reality of cryonics and all of the elements that go into a successful suspension, we have to understand the legality of the field and the science that drives it. Foremost in this discussion, we must understand that it is against the law to cryonically suspend any human before they are legally dead – and yes, there are (at least from strict legal and medical viewpoints) several different types of death.

Legal death occurs anytime the heart stops. This is an important distinction because there are thousands of people who legally die and are brought back by medical science every day through the use of defibrillators, bypass machines, pacemakers, and even good old fashioned CPR.

Clinical death, or total death, as it is sometimes referred to, does not occur until all brain function stops. This is the point where most medical professionals agree any attempt at resuscitation is futile since irreparable brain damage is likely to have occurred due to a prolonged lack of oxygen and/or blood circulation.

These definitions lay the platform that allows hope for the science of cryonics. The science thrives because it is believed that by properly preserving a human body at or just after the time of legal death, successful reanimation can be achieved, provided no irreversible damage is done to the cells, organs, brain, or nervous system of the subject during suspension. The preservation process, called vitrification within the industry, is the key to having any hope of successful resuscitation.

Because it is so crucial that no physical damage be done to the subject’s body during vitrification, the subject is not simply dipped into a vat of liquid nitrogen at the time of death. While this would immediately halt all cellular degeneration and preserve the body without further decay, it would not prevent the water content of the body from forming ice crystals, which expand and cause catastrophic, irreparable damage to veins, cells, and organs. Therefore, as part of the vitrification process, shortly after a declaration of legal death, doctors immediately begin removing the water from the subject’s body and replacing it with a glycerol-based chemical called a cryoprotectant. This ‘human anti-freeze’ has proven far more efficient at preserving the intricacies of the human body during suspension than the earliest methods used. It is also, sadly, the reason why the people suspended earliest in the science’s history are far less likely ever to be successfully revived, and why most scientists and cryobiologists believe any attempts at future revivals will be done on a last-in, first-out basis. Not because a longer period of suspension would be any more detrimental to the revival effort, but because earlier subjects were not preserved using the methods now known to prevent crystallization during suspension, and therefore have much less chance of being revived without fatally catastrophic physical damage to the body.

Now that we know what cryonics is, what it hopes to accomplish, and how a subject is prepared, my next article will focus on the process of vitrification and the storage of the subjects. The third article in this series will detail the storage facilities themselves, as well as the future of nanotechnology and how it is expected to revolutionize the prospect of revivals. The fourth and possibly final article in this series will recap what we know, highlight any other potential future breakthroughs in the science or the technology that drives it, and discuss when the first human revivals might realistically be expected. I hope you’ll join me for each of them as we explore this fascinating science and what miraculous possibilities successful cryonics could unleash for mankind.

About the author..

Dorian Lassiter
Dorian Lassiter is the author of numerous articles, short stories and suspense novels. He is the divorced, 39-year-old father of 3, and…