Review: Digital Rapture, ed. James Patrick Kelly/John Kessel

The Singularity Anthology. Had to get that one.

The Singularity Theory is not much talked about outside of science fiction and futurist circles, and that’s a shame. It’s a serious idea, not just some fabulist’s dream. For some people it’s even a way of life. We should be discussing it because it will impact us all sooner or later. One way or another, we’ll all eventually be dealing with the components of this nutty theory.

This anthology, aptly entitled “Digital Rapture,” has no new information for regular Singularity people who are already familiar with Kurzweil, Vinge, and Stross. However, with its mix of fact and fiction, this book rocks for those looking for a primer on the subject. The facts explain the thinking; the fiction tries to figure out what all that thinking could mean. The strength of the book lies in this mashing together of the theorists’ theories and the content creators’ creations.

Many people date the onset of Singularity Theory to Vernor Vinge’s essay “The Coming Technological Singularity,” published in 1993, but the inclusion here of stories by Isaac Asimov, Olaf Stapledon, and Frederik Pohl indicates how long these ideas have been brewing. The first section, “The End of the Human Era,” proves this nicely. The works from Asimov, J.D. Bernal, and Pohl do not deal with the Singularity specifically, but envision its precepts: the sentient universe, the downloadable mind, existence in virtual reality.

The second section deals with “The Posthumans,” the humans who will exist after the “Human Era.” A chapter from Olaf Stapledon’s Odd John shows how prescient that writer was. This section also includes Vinge’s seminal piece mentioned above, as well as stories by Rudy Rucker/Eileen Gunn and Bruce Sterling.

The third section, “Across the Event Horizon,” has chapter one of Ray Kurzweil’s soon-to-be-classic The Singularity Is Near, and stories by Greg Egan, David D. Levine, Vernor Vinge, and Justina Robson. The event horizon, being the actual point of singularity, rests nicely in this book between the Posthumans and the Other. Which is a bit surprising, because I assumed we would be posthumans after the event horizon, but apparently that’s not true. Not here anyway. Apparently I need to brush up on my theory.

The final section, “The Other,” deals with beings, non-human for the most part, on the other side of the event horizon. This is the realm for science fictionists only. No facts in this section. One of the assumptions of the Singularity is that we don’t know what life will be like on the other side, so no one who deals in fact is going to make a prediction. But not for nothing was the science fiction author invented. In this section we have projections from Charles Stross, Robert Reed, Cory Doctorow/Benjamin Rosenbaum, Hannu Rajaniemi, and Elizabeth Bear.

There’s also Rudy Rucker’s non-fiction piece “The Great Awakening” in the final section. Rucker gives a clear-headed analysis of post-Singularity possibilities and questions outlandish ideas like how nanorobots will one day ingest and restructure entire planets (Earth, for example) for their own ends. Here’s his take: “My guess is that this could never happen. Every existing plant, animal, fungus, and protozoan already aspires to world domination. There’s nothing more ruthless than viruses and bacteria—the grizzled homies who’ve thrived by keeping it real for some three billion years.”

“Homies.” Don’t you just love it?

Anyway, I’ll not comment on every piece in the book, but I do have some favorite moments to share.

Starting out the book with Isaac Asimov’s classic “The Last Question” was both brilliant and dumb, in my opinion. Dumb because the story is so great that everything after it sort of pales in comparison. Brilliant because if anyone thinks the ideas of the Singularity are anything less than deeply profound, this story sets them straight. And this story can be enjoyed by anyone: sf fan, Singularist, romance reader, or other. It’s short and has an optimistic ending reminiscent of that Eastern religion that says the universe is born, exhausted, and then reincarnated cyclically forever. I almost understand infinity when I think about that. And Asimov’s ending hits like a ton of bricks. Beautiful.

Reading J.D. Bernal’s piece, “The Flesh,” I get the feeling the most fanatical Singularists stumbled on the text a few decades ago. In it, the mind alone is the seat of humanity. We have no need for our bodies. Any ol’ tin can will do. Surely this is the original inspiration for the ultimate promise of the Singularity: immortality.

Bernal says this:

“Instead of the present body structure we should have the whole framework of some very rigid material, probably not metal but one of the new fibrous substances.* In shape it might well be rather a short cylinder. Inside the cylinder, and supported very carefully to prevent shock, is the brain with its nerve connections, immersed in a liquid of the nature of cerebro-spinal fluid, kept circulating over it at a uniform temperature. The brain and nerve cells are kept supplied with fresh oxygenated blood which connect outside the cylinder to the artificial heart-lung digestive system—an elaborate, automatic contrivance. This might in large part be made from living organs, although these would have to be carefully arranged so that no failure on their part would endanger the blood supply to the brain (only a fraction of the body’s present requirements) and so that they could be inter-changed and repaired without disturbing its functions.”

I’ll bet it would take as much energy to run this “body” as it takes to run a regular human body. And considering mechanical parts don’t usually last as long as a human body’s parts do, there’d be much inter-changing and repairing. That takes energy and money right there.

Bernal does offer this advantage:

“…the locomotor organs would not be much used because the extension of the sense organs would tend to take their place.”

So presumably we could do away with those wasteful, energy-sapping organs. A savings right there. I’m guessing this contraption will not be too good at racquetball, though. Presumably Mr. Bernal doesn’t care about such things.

People are paranoid when it comes to powerful AI. We’re all worried that robots with superior intelligence—superior to humans’, that is—will eventually have no use for us lesser thans. They’ll dispose of us tout de suite. I’m guessing Olaf Stapledon’s “Thought and Action” has been the inspiration for that fear. In this particular story, a young boy, John, is a being with superior intelligence (not a robot at this point, more like a posthuman). John has a disarming lack of compassion for other humans as a result of his apparently engineered intelligence. The inference is that having a big brain means you no longer have feelings; you become inhumane. Apparently you don’t care about others unless you’re dumb.

I don’t think intelligence necessarily results in pathological people. I’m pretty sure compassion has to be taught to dullards as well as the super smart. Another thing: John has an inordinate curiosity. There again, I don’t believe it’s because he has been engineered to be bright. I have a horse with an inordinate curiosity. Kittens are inordinately curious. It’s a trait of childhood, I think, not necessarily increased brain power. I might have missed the point, though. Maybe super smart people are a danger only when they’re young and curious. And because they are super smart it’s hard to teach them compassion. Maybe.

An interesting phenomenon occurred with Rudy Rucker’s work in this book. I didn’t particularly care for the Rucker/Gunn story, “Hive Mind Man”; I kept searching for a mechanism that would make the hive mind work. Reading this story, I discovered what it is I don’t like about much of Singularity fiction: I can never grasp how the fantastic phenomenon works. Later, while reading Rucker’s essay “The Great Awakening,” I discovered an explanation for some of the Singularity tropes and gimmicks and terms. The earlier story doesn’t bother me so much now. That is the way with science fiction. You have to know the secret handshakes. The fans consume everything that’s out there. They’re familiar with all the tropes, theories, and gimmicks. They don’t need plausible explanations. To enjoy much of sf, you have to be a rabid fan. I get that now.

For Bruce Sterling’s story, “Sunken Gardens,” no knowledge of a secret handshake is required. Here a competition between gardeners terraforming Mars will determine who gets to move up the social ladder in the oligarchic culture. I’m not sure I buy the premise that competition, rather than cooperation, will conquer the galaxy. For instance:

“The Regals themselves had started as dissidents and defectors. Their Posthumanist philosophy had given them the moral power and the bland assurance to dominate and absorb factions from the fringes of humanity. And they had the support of the Investors, who had vast wealth and the secret techniques of star travel.”

Posthumanist philosophy endowing moral power is an important idea. No problem with believing that. But the idea of the techniques of star travel being kept a secret… Okay, sure. Like how we kept the formula for the A-bomb a secret.

And then there’s Brother Ray. Ray Kurzweil. What can I say that I haven’t said in countless other reviews and blog posts? Nothing. Let’s move on.

Okay, I will say this: I took Brother Ray’s health test at the website he set up to hawk his life-extending pills and regimens. The test suggested I start on fish oil pills. I did that. They worked. The man knows something, I’ll give him that. Don’t just read the chapter in this anthology. Go get his whole book. Read it. It’s scary and eye-opening.

I really liked David D. Levine’s story, “Firewall.” I put lots of notes in the margins of that one. I have questions. Since Mr. Levine is soon to be a BVC member, I’ll use the secret BVC passcode to get in touch with him and get those questions answered.

Justina Robson’s “Cracklegrackle” includes a method for traveling to the past. I have to get in touch with her for that information as well. Unfortunately she’s not slated to be a member, so BVC’s secret passcode does me no good there. I’ll have to figure out a different route for satisfaction on that one.

I truly enjoyed Robert Reed’s “Coelacanths” and will reread it now that I’ve looked the word up and know what it means.

I fell asleep during “True Names” by Doctorow and Rosenbaum and didn’t wake up until it was over. Just in time for Hannu Rajaniemi’s “The Server and the Dragon,” a beautiful tale that illustrates just what can be done with the Singularity if you’re creative. The same is true of Elizabeth Bear’s “The Inevitable Heat Death of the Universe.” There’s more to be done with the Singularity tropes than having entire planets consumed by nanorobots. Good job, guys and gals!

Now if you’ll excuse me, a signal is lighting up on my artificial heart-lung digestive system. It’s an elaborate, automatic contrivance you know, and I need to see to its needs. The fact that it’s automatic should preclude any need for interference on my part, but there’s that “elaborate” part of its nature. You know how Murphy loves elaborate.

Thanks for reading.

Sue Lange
The trailer for Sue Lange’s “Princess Dancer” is now available for your viewing pleasure.

* I’d suggest Spanx.

Comments

  1. Alrighty then! Don’t mind if I do. First question.

    Right off the bat in the second paragraph:
    “I sat in my darkened office, surrounded by glowing screens. Usually the screens were filled with the tools of my job–system status displays, network traffic monitors, hardware health summaries, and the faces of my subs–but for now I’d pushed most of those to one side in favor of the news.”

    What are your “subs”? Substitute yous?

    Sue

    P.S. I love access to authors.

  2. The “subs,” short for “virtual subordinates,” are semi-intelligent software that helps the main character do his work. Each sub is the user interface to a particular function or system of the computers he manages, represented on the screen as a human figure with a distinct face, voice, and personality. As the name “virtual subordinate” suggests, they take the place of human subordinates, each managing its subsystem as directed but with a certain amount of independence.

  3. I think I figured that out about a quarter of the way in, but I wanted to make sure.

    At one point in the story after they’ve discovered the communications problems in Harbin, Beijing, and Shanghai, the moon workers decide they need to check their escape craft that will return them to Earth if necessary. Someone says, “Assuming there’s anyone there to meet us.”

    Why would there not be anyone on Earth just because there’s no more computer/communications? People can exist fine without tech. Well, maybe not “fine,” but they can certainly exist.

    Or does that statement simply mean there wouldn’t be anyone to go pick them up from the ocean once they splashed down?

  4. At this point in the story they don’t know what the outbreak is capable of, only that nobody affected by it is communicating in any way. When Sochima says this, she is admitting her fear that this fast-spreading technological unknown may take out the world’s population before they reach Earth.

    This is a world in which the Singularity, or some kind of technological apocalypse, is a known threat — like nuclear armageddon was when I was a kid. Of course, even nuclear armageddon would not kill everyone, but we talked about it as though it would.

  5. So she’s assuming the technology is going to be at the point where advanced AI will decide humans are obsolete and so they’ll do what? Gas them all I guess.

    Later in the story Dan “can’t power down the computer room and leave life support functioning.”

    Is it logical that life support systems would not have a backup system that is totally self-supported? Maybe run by a computer again, but a wholly separate entity?

  6. Well, they don’t know what’s happening inside the affected cities, or how, so they assume the worst.

    In the story it’s explained that the whole central core is on one physical circuit, with detailed control of power to specific subsystems controlled by software. Is this a smart way to design life-critical systems? I can answer that in three ways: one, people sometimes make mistakes in design for what seem like good reasons; two, the system was designed to withstand software and hardware failure but not actively malicious and intelligent software; and three, sometimes you have to put in something that’s not exactly plausible, or handwave around it, or there’s no story. (Why doesn’t Gandalf just ask the Eagles to take the Ring to Mordor?)
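
    Here’s a toy sketch of that single-circuit design, purely illustrative (the code and names are mine, not the story’s). One physical breaker feeds everything, and software decides which subsystems actually get power:

        # Toy model (not from the story): one physical breaker feeds
        # everything; software gates power to individual subsystems.
        class CentralCore:
            def __init__(self):
                self.main_breaker_on = True   # the single physical cutoff
                # fine-grained power control lives in software
                self.gates = {"computers": True, "life_support": True}

            def software_power_off(self, subsystem):
                # the normal way to shut a subsystem down...
                self.gates[subsystem] = False

            def pull_main_breaker(self):
                # ...but the only hardware cutoff kills everything at once
                self.main_breaker_on = False

            def is_powered(self, subsystem):
                return self.main_breaker_on and self.gates[subsystem]

        core = CentralCore()
        # If hostile software owns the gates, software_power_off() can no
        # longer be trusted; the only sure cutoff is the breaker, which also
        # takes down life support. That is exactly Dan's bind.
        core.pull_main_breaker()
        assert not core.is_powered("computers")     # what he needs
        assert not core.is_powered("life_support")  # what he can't afford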

  7. That’s what I was wondering: do these kinds of mistakes still happen when we’re capable of designing grand things? I think back to my days in the nuke plant, when every backup system had a backup system. That was because of TMI, mostly. Still, things go wrong, of course, but the really big possibilities (like a disaster involving life support in a harsh environment) seem to be thought out.

    I like the idea that software can’t be designed to withstand intelligent software attacks. We can’t anticipate how intelligent software is going to look at the world. We can probably protect ourselves from malicious humans because we think like humans, but who knows what a malicious AI would be like? Personally, I don’t think “malicious” and “AI” belong in the same sentence. I don’t think AI will ever have the same sorts of needs, wants, and judgments as a human. They won’t be malicious in our greedy, puerile way. But maybe they’ll have some other maladjusted thought process that we can’t define and therefore can’t protect against.

    Here’s something else. It’s related to a question earlier:

    “The few people who’d been retrieved from the affected areas were comatose or incoherent. Even dogs and cats were affected.”

    What biological event are you imagining is happening here?

    • I think that as long as people are people, mistakes will still happen, sometimes big ones. When systems are made idiot-proof, Nature makes bigger idiots; when intelligent software is used to prevent errors, it will sometimes magnify those errors.

      I agree that AI is unlikely to really be malicious, but it can be inimical — its needs and priorities will be different from ours, and its actions might be harmful even if there’s no ill intent. It doesn’t even have to be “maladjusted” as such to do harm; it could be functioning exactly as designed and still have unexpected consequences. Of course, because this software is derived from software built by humans, it may contain human failings, either accidentally or by design.

      To be frank, I doubt we will ever see true AI. We already have software that can pass the Turing Test, at least under some circumstances, but it’s not truly “intelligent” as we define the term and I doubt it ever will be either intelligent or self-aware. However, we may reach the point that the question of intelligence and awareness becomes a purely philosophical one. Is a software system that behaves exactly like a self-aware mind truly self-aware, or not?

      Even though I don’t believe that true AI, or the Singularity, is likely to come to pass, I took them as the premise of this story because I was writing it for a Singularity anthology and I wanted to sell the story (also to play with the idea, whether or not I think it’ll ever happen in real life).

      The mechanism I had in mind for the singularity in the story was to affect people’s minds by pulsing electrical systems near them to create eddy currents in their brains. Of course, the human brain is a bioelectrical machine, not purely electrical, so this might not actually work, but I’m positing that the AI is very smart, very fast, and has access to all human knowledge. And, as I said above, it’s just a story.
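
      For the curious, here’s a back-of-envelope version of the eddy-current idea (all numbers are mine and purely illustrative). By Faraday’s law, a uniform field B(t) = B0·sin(2πft) induces a circulating electric field of magnitude E = (r/2)·dB/dt at radius r, peaking at dB/dt = 2πf·B0:

        import math

        B0 = 1e-3   # peak field in tesla (a strong nearby coil, roughly)
        f = 10e3    # pulse frequency in hertz
        r = 0.07    # rough radius of a human head, in meters

        peak_dBdt = 2 * math.pi * f * B0   # tesla per second
        peak_E = (r / 2) * peak_dBdt       # volts per meter

        print(f"peak dB/dt = {peak_dBdt:.1f} T/s")   # about 63 T/s
        print(f"peak induced E ~ {peak_E:.2f} V/m")  # about 2 V/m
        # Transcranial magnetic stimulators need on the order of 100 V/m to
        # fire neurons, so ordinary electronics fall far short of that,
        # which is one more reason it's "just a story."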

  8. “Pulsing electrical systems near them to create eddy currents in their brains”: cool idea! I’m going to research that.

    Last question: The ending of the story is sad. But if we ever found ourselves in that situation it would not be. We’d be finding peace everlasting. Did you intend to create the same sort of situation as “being saved” by a religious group? That’s what it looks like to me.

  9. I don’t think the ending is sad at all. I think this is a story of the beginning of a global utopia, told from the perspective of one person who resists the change for a while. He’s sad because he lost his wife, but now he gets to go to a technological heaven where he can be with her all the time and she can be with her new sweetie all the time too. In heaven, everything is fine… you’ve got your good thing, and I’ve got mine.

    In the story, this isn’t like being “saved” by a religious group. Being “saved” by any extant religion means you are still the same person with the same real-world problems as before (though you may now have better mental tools to cope with those problems and maybe a better community to help you with them). In this case it is a real transition to a new and entirely better world. The point of the Singularity is that we cannot understand what lies on the other side of it, so I’m declaring that, from the perspective of those inside it, it’s a wonderful and flawless utopia.

    Do I think that it’s likely to happen this way, ever, in the real world? Not in the least! But this is fiction, and we can have happily-ever-after endings.