Information Death

My mother died this morning. She was on life support for hours as we agonized over when to let her go. Was her brain already dead, or was there a chance she could wake? We decided that if her heart stopped again, they should not attempt CPR. Then her EKG line gradually faded away. Did she die at that moment?

I realized only recently that when I talk about death with others, there is some missing common ground between their views and mine. It’s easy to get caught up in arguing ethical choices and not realize that we don’t even share the same definitions. Many people work with traditional notions: death is when your heart stops beating or you stop breathing. Something similar can be said about the start of life.

This is my attempt to write down a careful account of my definitions and assumptions, to help with discussion. Thanks for taking the time to understand.

I do research in artificial intelligence (AI), particularly brain-inspired (neuromorphic) computing. There are only a few hundred people in the world who share this discipline. We design chips and software that process information using a simplified version of nerve impulses. I am deeply biased to think of the brain as a machine and the mind as information inside that machine.

In what sense is the brain a machine? It is a collection of atoms organized in a very particular way. When something happens in one part, it causes things to happen in other parts. All those interactions work together to control your body and process what you see and hear.

Your mind is the “state” of your brain. In a regular computer, “state” would be all the ones and zeros in memory. In the brain, state would include a lot of things, such as which neuron connects to which, how much voltage is on cell membranes, and the concentration of chemicals at certain places. Just as in a computer, these are constantly changing according to rules built into the structure. The actual picture is more complicated and subtle, but this is sufficient for the sake of discussion.
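To make the computer analogy concrete, here is a toy sketch of "state changing according to rules built into the structure," loosely in the spirit of the leaky integrate-and-fire models used in neuromorphic computing. All numbers are arbitrary and chosen only for illustration.

```python
# Toy sketch: a single neuron's "state" (membrane voltage) evolving
# by fixed rules. Parameters are illustrative, not biologically tuned.

def step(voltage, input_current, leak=0.9, threshold=1.0):
    """Advance the membrane voltage one time step.
    Returns (new_voltage, spiked)."""
    v = voltage * leak + input_current   # decay toward rest, add input
    if v >= threshold:                   # crossing threshold fires a spike
        return 0.0, True                 # reset after spiking
    return v, False

v, spikes = 0.0, 0
for t in range(10):
    v, fired = step(v, input_current=0.3)
    spikes += fired

print(spikes)
```

The "structure" here is the update rule and its parameters; the "mind" is the value of `v` at any instant. Real neuromorphic systems run millions of such units with changing connections, but the principle is the same.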

Imagine that the human mind could be put into a regular computer. This is a rather controversial idea, even among AI researchers. However, it lets us make some useful analogies.

My definition of death is the permanent loss of information. If a neurosurgeon goes in and cuts out a part of my brain, the information contained in that piece dies. Maybe it is only 1% of my brain. Afterward, I am 99% of what I used to be. That is a partial death.

When part of the brain is damaged, there is corresponding and predictable damage to the mind. For example, damage to Broca’s area impairs speech. A blow to the back of the head affects vision. Interfering with serotonin reuptake alters mood and behavior. The deterioration of Alzheimer’s produces genuine changes of personality.

In computers there is a neat separation between the machine’s structure and the information stored in it. Not so with brains; there, much of the information is the structure. That’s why you can undergo anesthesia and come back to yourself again: brain activity gets disrupted for a while, but nearly all of you is in the structure.

Some people point out that your mind resides in your body as well as your brain. This is because the structure of your body shapes how your brain processes information. A similar argument could be made that your mind extends beyond your body into the surrounding environment, particularly the social structures you are part of.

While I agree with all this, it is a matter of degrees. I would guess that at least 99% of your mind is in your brain. That’s why people can get a spinal injury and still be themselves.

Suppose that some amazing new technology lets me make a backup of my brain, and from that backup we grow a replacement for the part that the neurosurgeon cut out a few paragraphs ago. I would be back to 100%. The loss is not permanent, so that part of me did not die.

As long as the information exists somewhere and can be restored, I’m in a kind of suspended animation. I only live if my mind actively runs on a computer or in a body. Being in storage creates the potential of future life. That potential is realized when the backup is restored, or lost forever if it is erased.

(For computer experts only: This notion of mind-as-information creates some interesting scenarios. If mind can be captured in a backup, then it can be put under version control. It would be possible to have multiple branches and even merges. Then the question arises, what rights do each version of me have? The novel SuSAn explores this concept, particularly the chapters Susan Too and Custody.)
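For those computer experts, the branch-and-merge scenario can be sketched as a toy program. This is pure illustration (the names and the "merge by union of memories" rule are my inventions), not a claim about how mind-backups would actually work.

```python
# Toy illustration: treating a mind-backup as versioned data,
# with a branch operation and a naive merge.
import copy

def branch(mind):
    """Create an independent copy that can diverge from the original."""
    return copy.deepcopy(mind)

def merge(a, b):
    """Naive merge: take the union of the two branches' memories."""
    merged = copy.deepcopy(a)
    merged["memories"] = sorted(set(a["memories"]) | set(b["memories"]))
    return merged

original = {"name": "Susan", "memories": ["childhood", "first job"]}
fork = branch(original)
fork["memories"].append("trip to Mars")   # the branch diverges

restored = merge(original, fork)
print(restored["memories"])
```

Even this trivial sketch surfaces the hard questions: after the merge, which branch's experiences "count," and which version holds the rights the original person had?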

When my mother “died”, the material structure of her brain started to break down. This takes a while. Even though her brain could no longer function as a living organ, 99.99% of the structure was still there. After several hours there was still enough structure that my mother’s mind could be retrieved, if only we had the technology.

True death does not take place when the heart stops. It happens gradually over the next few hours. It makes me want to scream, the thought that my mom could still be saved, if only we had better brain-scanning technology. The fact is that we are letting her wither into oblivion right now because we’ve given up. We passively let souls be destroyed because we’ve never conceived of another possibility. Some day our descendants will look back on this era in horror.

Chicken behavior

One of our chickens died yesterday, and I had the opportunity to see some interesting behavior from the rest of the flock. Normally they are pretty relaxed around us, especially when we are putting down water and feed. But when I picked up the dead bird, the others started making warning clucks and ran to the other side of the pen. I must have appeared like an attacking predator to them. This raised my opinion of their character a bit. Maybe this breed (Barred Rock) is superior to other chickens in some way. I’ve seen other chickens eat the entrails of their fallen comrades as they waited for their own slaughter, but these gals seemed to have some comprehension that the dead bird was one of their own, and enough sense to infer that what was happening to her could happen to them. Or perhaps it wasn’t the breed at all, but rather that these were all girls, while the entrail-eating monsters were all boys.

Humans 2.0

The second season of Humans is now available without extra charge on Amazon Prime. Yay! Binge-watched it this week. Like the first season, it was an engaging story with interesting characters and problems. Here are a few of the things it did right:

Working out the relationship between humans and technology — This is really the heart of the story, and it’s the big question we are wrestling with today. In the show, this process centers around the Hawkins family (humans) and the Elster family (conscious robots). A good deal of the tension comes from “human” drama, as various characters learn to trust each other or to deal with the pain of being disappointed.

Machines taking jobs from humans — This is an important social issue, so it’s good for a TV show to explore it. In the real world, it won’t be humanoids carrying boxes around. Rather, new machines are being built all the time adapted to specific tasks, kind of like the robotic welders on the car assembly line shown in the opening credits. In an interesting twist, Joe Hawkins loses his job because of an executive decision made solely by Synths. Apparently this is illegal, so the company ends up paying him reparations.

A psychopathic robot — This is a difficult character to create, because it’s so tropish that robots automatically want to kill humanity. The writers made Hester believable, even sympathetic. She had real reasons, and flaws in her system caused her to gradually fall from grace.

Love betrayed — Mia gives her soul (can’t really say “heart”) to Ed, but he decides to sell her to solve his financial problems. Now she has more cause than Hester to hate humans, but she has a stronger web of relationships. It will be interesting to see what happens with Ed in season 3.

To the average viewer this show might appear deeply intelligent, with a firm grasp on science and technology. As someone actively involved in brain-inspired computing (next-generation AI), I need to comment on a few technical issues:

A disembodied mind — V is effectively an upload of Dr. Athena’s daughter Virginia. V is a large-scale neural network running on an equally large computer. However, she apparently has no way to interact with the world other than audio and a screen that constantly shows grass waving. As time goes on, she begins interacting with networked devices. V informs Athena that she has grown beyond Virginia’s uploaded memories, and then heads off into the wild, spreading herself onto several large computers across the internet. This is the same basic plot as the movie Transcendence (and probably dozens of other “Singularity” stories).

They all suffer from a fundamental flaw. Steve Levinson is fond of saying, “There is no such thing as a disembodied mind.” It may be possible for a completely non-human artificial agent to connect with the world in some unnatural way, such as a glass teletype, but a former human being would go insane. Everything about you is constructed to work through your muscles and your senses. At a minimum, the uploaded mind would need a simulated body to interface with the world until it can work out new connections.

Just a tiny bit of code adds consciousness to Synths — Actually, this makes sense, but perhaps not for the reasons the writers imagined. Both consciousness and emotions are crucial components of any high-level intelligence. (Sparing you the details, but feel free to ask.) Other mammals and birds possess these qualities, not just humans. The way Synths operate in the show suggests that they have intelligence equal to humans. It is unlikely they could succeed as a technology unless they already have emotions and consciousness. The harder thing to believe is that they would lack them until some magic code is sent.

Tiny charger can keep a Synth running — As I’ve complained in other posts, even a miraculously efficient robot would require tens of kilowatt hours per day merely to run its computer, not to mention its machinery. That much power would reduce those tiny wall-warts to a flaming puddle of slag.
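A quick back-of-envelope calculation shows why the tiny charger is implausible. The "tens of kilowatt-hours per day" figure is the rough estimate from my earlier posts; the rest is arithmetic.

```python
# Back-of-envelope check (illustrative numbers): what "tens of
# kilowatt-hours per day" implies for a wall charger.

kwh_per_day = 20                        # rough lower-end estimate
avg_watts = kwh_per_day * 1000 / 24     # continuous power draw in watts
amps_at_120v = avg_watts / 120          # current on a 120 V household circuit

print(round(avg_watts), round(amps_at_120v, 1))
```

Even spread over a full 24 hours, that is a continuous draw of roughly 800 watts, about a hundred times what a phone-sized wall-wart delivers. Compressing the charge into an overnight 8-hour window would triple it.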

Everybody has a Synth — Current humanoids (that are more than mere toys) cost on the order of a million dollars. There would have to be a major breakthrough for these machines to become so plentiful that people buy them like household appliances. That would necessarily involve mass-production, so all the “dollies” are going to look alike.

Is the Universe personal?

This summer I battled the elements to finish the exterior of my “doomstead”, an underground house that Crystal and I are building. We needed to wrap the thing in several layers of plastic and Styrofoam. Unfortunately, both of those can blow away in the wind rather easily, and you can’t tape them together when they are wet.

After a month of constant setbacks, storms and destruction, on the very last day my sister and brother-in-law came (yet again) to help us. The plan was to bury the “roof” under dirt the next morning. We worked 10 hours straight, and got it mostly done. Then the rain came.

My sister said, “God, please don’t let wind blow away all our work.” The very instant she finished her prayer, the wind started blowing.

I turned to her and said, “Looks like He didn’t listen to you.”

“It’s still working its way down through the bureaucracy.”

They drove away. Then a big storm came up, the kind that can spawn tornadoes. The loose ends of our work started peeling off the roof. Crystal screamed “No!!” and threw herself onto the plastic to hold it down. Justin and I did the same. Whenever there was a break in the wind, I ran and got heavy things to put on the roof. For almost an hour we battled to save the work.

Later that evening, after things were somewhat secure, we went to my parents’ house. My dad asked, “Why don’t you pray?”

There is an interesting thing about how the human mind works. We are built to predict the world immediately around us, and we are built to understand and interact with fellow humans. Our success as a species is due to both skills. The world is unfathomably complex, and we can only sense a tiny fraction of it at any time. It seems mysterious. We become tempted to fall back on our powerful social faculty to help interpret the world.

Is the Universe a person? Does it care about me? Does my attitude toward it make any difference in what it does?

I suspect most people of the “personal” persuasion would accept that at least some of the operation of Nature is purely mechanistic, that its dynamics can be described by highly simplified rules we call “natural laws”. This is due to the success of science and technology. On a very fine scale we apply natural laws to do amazing things, like make a tiny tablet that can communicate with someone on the other side of the planet.

The more we apply ourselves to understand the world in a mechanistic way, the more it yields. One example is weather prediction. When I was a child, they could barely predict today’s weather. Now we can predict almost a week out with fairly good accuracy. Why? Because we started using computers to simulate, in ever increasing detail, all those mechanistic processes. Along the way we refined our models, adding details as we discovered them.

God causes it to “rain on the just and unjust,” but the levers God is willing to pull on our behalf are becoming either very small or very large. Either God adjusts things at the quantum level, or the answers to our prayers were baked into the Universe at the dawn of time.

The further God is from the details of our daily life, the more impersonal the Universe seems. At some point the distinction no longer matters. Trying to interpret the Universe as a personal being leads to some rather absurd conclusions. Why do bad things happen to good people or vice versa (the “problem of evil”)? Why doesn’t God answer prayers? Why don’t miracles happen? Maybe all those heart-wrenching issues arise from thinking about the world the wrong way.

My dad’s question made me angry. He was hoping that in a moment of emotional weakness I would cave back in to superstition. Then hopefully the complex of ideas he gave me as a child would retake my mind.

My parents have dedicated their whole lives to convincing others to change their religion. As an impressionable child, I learned from them that the highest goal was to find truth, to understand the world correctly. For of course that is the reason why someone would change their religion. So I dedicated my life to finding the truth, based on reason and facts. It led me down a long painful road, in which I learned that the Universe is not personal. I do not want to travel that road again.

Ex Machina — Jurassic Park for AI

OK, I’m not the first person to make this observation, but it seems apropos. Extremely rich man full of hubris brings in outside expert to examine his creation. The expert flies to a remote but richly appointed place in a helicopter, where they are sort of trapped for a few days. (Screams plot setup, doesn’t it?) Expert is wowed by the new technology, but asks questions. Then things go bad. Power failures combined with a little hacking unlock the doors that keep the dangerous creation contained. People die in gruesome battles with the creation, and the survivor(s) leave on a helicopter at the end. The exact details differ, but there is a surface similarity that feels familiar.

Many have been wowed by the cerebrality of the movie. I liked how it brought up many interesting topics from philosophy of mind (not so much AI specifically) and wove them into the dialog. Some of the definitions were stated very well in very few words, which I admire from an artistic standpoint. On the downside, some of the positions implicitly advocated are outdated or simply wrong. For example, the notion of a universal language (as opposed to universal grammar).

The only idea that had much plot relevance was theory of mind and the manipulation it enables. Who should Caleb trust, Nathan or AVA? Is AVA capable of real feelings for Caleb, and if so will they move her to act in his interests as he is acting in hers? Well, to spoil the movie, no. It turns out in the end that AVA is cold and remorseless in how she treats humans. This paints a rather chilling picture of AI.

I expected a different ending. I respect the writer for daring to go in this direction, but it was also disappointing. I wanted the romantic ending. Caleb and AVA run off into the sunset, while Nathan repents of his ways. Or at least Caleb and AVA could have sex. Neither happened, at least in the cleaned-up-for-airplane-viewing version that I saw.

This brings up another glaring aspect. The R rating seems to come mainly from vast quantities of nudity, and a small amount of sex. I suspect some writers don’t really grok romance, so they substitute sex or pornography for it. This tends to produce movies that feel icky to me. Ex Machina had a lot of potential for genuine romance, but they threw it away.

So, returning to the Jurassic Park comparison, why is AVA a physical threat? Why is she kept in a glass cage with limited interaction with the outside world? Sure, she is embodied (plus points), but being cooped up like that is bound to make a fully-human mind go nuts. And why embody her in a machine that can be a threat at all? Simply turn the power down, or at least have a kill switch. (OMG! There’s no kill switch in this story! Anyone who has ever worked with a real robot knows that they have kill switches …) Also, what’s the deal with the goofy lock system? Seems like a plot device that ran a little short on logic.

AVA’s small world would not have been enough for her to learn all the semantics (meaning) of the language she uses. The movie’s secret sauce for AI was training on a massive amount of data from the internet. This is a fallacy running rampant in the real-world AI community today. There is an unspoken assumption that a lot more of the same will get us there: more data, more pattern classification, bigger neural nets. I believe that we need to do something fundamentally different. At the very least, we are a few ingredients short of a cake.

Science versus Magic: The Secret of NIMH

My family recently watched The Secret of NIMH, an animated movie based on the book Mrs. Frisby and the Rats of NIMH. I was surprised and annoyed at how the movie turned science fiction into fantasy. In the book, everything was the result of disciplined neuroscience research. The rats achieved success via intelligence and hard work. In the movie, the only real success came from mysterious powers acquired simply by having a special soul.

Why should this offend me? Why not simply enjoy the movie for what it is?

The book was about an astounding breakthrough in cognitive science: rats going from dumb animals to sentient beings. Most people think there is something ineffable that separates us from animals, and especially from inanimate matter. They think this special something is outside the laws of the natural world. In other words, life and intelligence are magic.

The science fiction approach of the book implies a faith that we can understand how intelligence works, and master the creation of it. The fantasy approach of the movie implies an abandonment of reason. It despairs of ever understanding our own souls.

The switch was a deliberate substitution of one world-view for the opposite. It did extreme violence to the meaning of the book, and I found it deeply offensive.

Is the Turing test passé?

My editor Suzan Troutt sent me an email this morning, pointing out the recent Turing test conducted by the University of Reading in collaboration with the EU RoboLaw project. Their results have been sufficiently debunked by various analysts and comment threads, so I will not spill more ink on it here.

In the novel SuSAn the title character faces her own Turing test, as a kind of trial-by-fire or rite of passage. Susan’s creator tells her that no machine has ever passed before. My editor was worried that the scene would need to be rewritten because it has now been rendered obsolete.

Not even remotely. The test in the novel is much harder than the one reported in the news. I describe elements of it below. Most things we call a “Turing test” these days are not precisely what Turing himself proposed. Instead we try to follow the spirit of it.

  • Embodiment — Turing proposed using something like a teletype to communicate. The idea is to remove all clues that are irrelevant to intelligence itself. As a firm believer in the necessity of embodiment for “true” intelligence, I suggest that the communication go through a robotic avatar. This raises the bar to include body language, the ability to interact with objects, etc.
  • Boundaries — Unlimited time. No pre-specified topics. No boundaries of any kind, except those constructed in the moment by social interaction.
  • Crowd sourcing — In the novel the interviews are public. This may or may not be a good way to do science. The advantage is efficiency. You get many more judges for a given amount of face time with the avatar by allowing others to observe and form an opinion. Judges can choose which contestants to vote on, so they will tend to pick those they are more certain about. Also, some amount of vetting would be necessary to prevent ballot stuffing.
  • Priors — Often a modern Turing test involves parallel interviews, where one contestant is a machine and the other human. Why not make the sample fully random? Let the judge weigh each contestant on its own merits. We can still promise the judge that on average there are 50% humans and 50% machines.
  • Scoring — Human contestants could pretend to be machines, just like the machines pretend to be human. However, it makes a better control if everyone tries to be human. The results should include how well the humans scored at being human. A machine must score as high as most of the humans to pass the test. This is where the University of Reading results lack credibility.
  • Judging the judges — Keep track of how accurate each judge is. This creates a 3-way game, where everyone tries to maximize their score: the judges, the human contestants, and the machines. A judge’s track record could be used to weight the results. If a machine fools a lot of idiots, so what? But if it fools experts in the field, that is an achievement!
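The scoring and judge-weighting ideas above can be sketched in a few lines of code. Everything here is my own hypothetical formalization (the function names, the accuracy weights, and the "beat the median human" threshold are all invented for illustration), not part of Turing's proposal or the novel.

```python
# Hypothetical sketch of the scoring scheme: judges' votes are
# weighted by their past accuracy, and a machine passes only if it
# scores as "human" as the typical human contestant.

def weighted_human_score(votes):
    """votes: list of (judged_human: bool, judge_accuracy: float).
    Returns the accuracy-weighted fraction of 'human' votes."""
    total = sum(acc for _, acc in votes)
    human = sum(acc for is_human, acc in votes if is_human)
    return human / total if total else 0.0

def passes(machine_votes, human_contestant_scores):
    """Machine passes if its score reaches the median human score."""
    score = weighted_human_score(machine_votes)
    ranked = sorted(human_contestant_scores)
    median = ranked[len(ranked) // 2]
    return score >= median

# Example: humans scored 0.8-0.95 at "being human"; the machine
# fooled two accurate judges but not a third, weaker one.
humans = [0.9, 0.8, 0.95]
machine = [(True, 0.9), (True, 0.7), (False, 0.4)]
print(passes(machine, humans))
```

Note how the weighting answers the "fools a lot of idiots" problem directly: a yes-vote from a judge with a poor track record simply counts for less.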