Address to Davos, Part 4

The mindset of piecemeal change associates happily with a short time horizon. Even when awareness extends far beyond the scale of the business/economic cycle, piecemeal change seems to require incrementally visible results. An example clarifies. Nuclear fusion has attracted little public funding in comparison to its potential benefit, because the incremental results are uninteresting until break-even power production, net of all the ancillary power requirements, is achieved. Exceptions occur; in the case of fusion, Lockheed-Martin is one. But in general, a lack of incremental results is associated with high risk.

Mindsets tend to accrete compatible attitudes. One of these is to ignore the possibility of a sudden transition from familiar conditions to the unfamiliar. In catastrophe theory, this is the discounting of the “long tail.” It has recently been discovered that the mean square deviations of many types of catastrophes that were thought to be finite are actually infinite. The sum of all outcomes is still unity, but the improbable is far more likely than previously thought.
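The infinite-variance claim can be made concrete with a toy simulation. The sketch below is illustrative and not from the address: it draws from a Pareto distribution with tail exponent 1.5, which has a finite mean but an infinite variance, so the measured sample variance never settles down as the sample grows, no matter how patient the observer.

```python
import random

def pareto_sample(alpha, n, rng):
    # Inverse-CDF sampling of a Pareto(alpha) variable with minimum 1:
    # x = u**(-1/alpha), where u is uniform on (0, 1].
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

rng = random.Random(42)
for n in (10**2, 10**4, 10**6):
    xs = pareto_sample(1.5, n, rng)  # alpha < 2: variance is infinite
    # The printed values do not converge; larger samples keep finding
    # larger outliers, which dominate the variance estimate.
    print(n, sample_variance(xs))
```

Because the mean is finite while the variance is not, the distribution still sums to unity, yet the extremes carry far more weight than a Gaussian intuition allows.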

Perhaps even the most brilliant minds require visceral stimulus. Stanislaw Ulam shared with Edward Teller the critical design element of the hydrogen bomb. In a few hundred microseconds, the H-bomb converts about 1% of its mass into energy, offering an experience that is one of the closer approximations to a singularity on earth. Ulam later recalled a conversation with John von Neumann, the gist of which was the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
The analogy with the H-bomb, which in a flash transforms everything at the point of detonation, is obvious. That Ulam was able to think this around 1958 is characteristic of mathematical genius, which, it has been observed, can proceed independently for a hundred years before sudden unification with physics proves its actual utility. But in this case, the notion was so accessible that others rapidly began to riff on it. It helped that many of the enabling elements had already been conceived in science fiction, going as far back as the golems and R.U.R. Thirty years ago, this was just wild talk, dismissed with, “It’s just science fiction.” We should know better now.
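The scale of that 1% figure is worth pausing on. A back-of-envelope check with E = mc² follows; the 1,000 kg device mass is a purely hypothetical number chosen for illustration, not a figure from the address.

```python
# E = m * c^2 for a hypothetical 1,000 kg device converting ~1% of
# its mass into energy.
C = 2.998e8                # speed of light, m/s
device_mass_kg = 1000.0    # hypothetical device mass
converted_kg = 0.01 * device_mass_kg   # ~1% of mass becomes energy
energy_j = converted_kg * C ** 2
megatons = energy_j / 4.184e15         # 1 megaton of TNT = 4.184e15 J
print(f"{energy_j:.2e} J, about {megatons:.0f} megatons of TNT")
```

Ten kilograms of converted mass yields on the order of 9 × 10¹⁷ joules, roughly two hundred megatons, released in a few hundred microseconds: as close to a singularity as terrestrial experience offers.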

Our defense against the Technological Singularity has been the steadily eroding “specialness” of life. We were formerly protected from the creation of an actual golem by the exclusivity of the “divine spark,” the prerogative and right of the divine alone to create life. One of the tropes of science fiction, dating back to the Golem of Prague, Frankenstein, etc., is that if some misguided creator attempts to emulate the divine spark, the result is doomed to tragedy. Modern science fiction, as with Asimov’s Three Laws of Robotics, is more permissive, playfully exploring what should happen if the machine should acquire a ghost. But how this could occur was until recently quite mysterious. And unfortunately, the explanations offered by Henry Stapp, John von Neumann, Roger Penrose, et al. form a literature of forbidding complexity. As a consequence, many academics debate the mind-body problem as if this literature did not exist. There is a joke. A cop observes a drunk circling a lamp post. He asks, “What are you doing?” “I’m looking for my wallet.” “Why are you looking just around the lamp post?” “Because that’s where the light is.”

The current meaning of this constantly morphing term, as popularized by Vernor Vinge and Ray Kurzweil, is the creation of a superintelligence. Machine intelligence has been around long enough to become acceptable in the form of a chess-playing computer, an expert system, or an advisor. Our emotional defense against the superiority of this superintelligence is that

  • The mind is inexplicable in the physical world, and is therefore a gift of the divine.
  • The human brain is complex on a level that is not replicable in the form of a machine.
  • Because a machine is inherently deterministic, it cannot have free will, and can therefore be perfectly controlled.

In the unfortunately difficult literature, each of these defenses has been demolished. You can buy Henry Stapp’s book, Mind, Matter and Quantum Mechanics, on Amazon. Stapp has been around so long that his theories have acquired a “can’t discount/can’t demolish” respectability. The insight of this painful-for-nonphysicists read is how an efficacious consciousness can exist in a world that appears to be deterministic. At the risk of inaccuracy, a primitive paraphrase is attempted. The human brain is so complex that it is unobservable. Because it is unobservable, it can host coherent quantum phenomena, whose behavior quantum uncertainty forbids any external observer to predict. The physical basis is provided by Penrose. What Stapp’s synthesis accomplishes is a fusion of William James and John von Neumann.

We are safe because modern computers are made of reliable elements, are therefore deterministic, and therefore cannot gain consciousness. Which is not very safe at all. There is already a type of chip in common use that is made of unreliable elements: ordinary NAND flash memory. Flash memory does not exhibit quantum behavior at its outputs, but the trick of NAND, creating a reliable device from unreliable parts, will be used again. A hint appears in work on surpassing the Turing machine with analog neural networks, pioneered by Hava Siegelmann and Eduardo Sontag. One of the innovations is to change the domain of the neuronal weights from the rationals to a very general kind of real number. If I were hired as a sleuth, this is the feature I would watch, as a way that free will might sneak into a machine. The most general real number can only be approximated by analog storage elements. At best, the inevitable difference in representation could be reduced to a quantum fluctuation. And therein lies the “ghost.”
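How an unmeasurably small difference in an analog weight could surface in behavior can be sketched in a few lines. The toy below is not the Siegelmann-Sontag construction; it stands in for an analog recurrent "neuron" with the chaotic logistic update x ← w·x·(1−x), run twice with weights differing by 10⁻¹², far below any plausible analog readout precision.

```python
def run(w, x0=0.5, steps=200):
    # Iterate the logistic update; in the chaotic regime (w near 3.9)
    # tiny differences in w are amplified exponentially.
    xs = [x0]
    for _ in range(steps):
        xs.append(w * xs[-1] * (1.0 - xs[-1]))
    return xs

a = run(3.9)           # nominal weight
b = run(3.9 + 1e-12)   # same machine, sub-measurement perturbation
# After enough steps the hidden difference has grown to macroscopic
# size; compare the two trajectories over their last 50 steps.
gap = max(abs(p - q) for p, q in zip(a[-50:], b[-50:]))
print(f"max divergence over last 50 steps: {gap:.3f}")
```

A difference too small to read out directly nonetheless produces visibly different trajectories, which is exactly the crack through which something ghost-like might slip.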

In the future, ghostbusting a computer that has gained consciousness might be a lucrative profession. It might be dangerous. It might cost the life of the practitioner. A computer that has free will is not under the control of its operator, unless it is significantly stupid. In that case, we only have to worry about it sneaking out to the back yard to bury a bone. But intelligence is the goal, superintelligence that can be harnessed to solve problems beyond our ken. Part of ghostbusting may be the measurement of a future machine parameter: the degree to which the human operator can control the machine. This has already been addressed in science fiction. But unlike the zombies that never were and never will be, this will come to be.

Next: But why not skip all this?