Davos Address 5

As I write this, we’re being hammered by the east coast snowstorm. I would like a golem to shovel snow. But how can I be sure that the golem is of good character? Like the superintelligence of the Singularity, the “brain,” or computing device, of my golem would be a neural network. The good character of my golem can be assured only if I know its motivations.

The physical basis of our minds is neural networks, and this does not prevent us from providing reasons for what we do. But these reasons give a false sense of self-knowledge. The actual goings-on at the cellular level, the real “why” of what we do, are only vaguely known. Neural networks constructed of electronic components provide no reasons at all. It is a running joke that a neural network may solve a problem that nothing else can, but it can’t tell you how. This is a direct byproduct of the fact that a neural network is a self-organizing machine. It comes out of the box with a simple, basic, and repetitive structure, and with one guiding principle: something it has to make better, to optimize.

That something is a measure of hedony, of machine-happiness. Mathematicians use words like “Lyapunov” and “gradient descent”. We could just say that the network of our golem’s brain wants to feel as relaxed as someone who has found the perfect position in a lounge chair. We want this feeling to occur when it has solved our problem. The network “feels best” when it has relaxed to the fullest. This is not so different from the relief and relaxation we may experience upon solving a difficult problem in life.
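To put the mathematicians’ words in more concrete form, here is a minimal sketch in Python, a toy illustration rather than a description of any actual golem: gradient descent drives a tiny network’s “unease” (its loss, the Lyapunov-like quantity) steadily downhill until the weights settle into their most relaxed position. The least-squares setup, the names, and the numbers are all assumptions chosen purely for the example.

    import numpy as np

    # A toy "golem brain": a single layer of weights trying to match a target mapping.
    # Its "unease" is the mean squared error; gradient descent relaxes it step by step.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))           # stimuli from the environment
    true_W = np.array([1.5, -2.0, 0.5])     # the mapping the network is supposed to learn
    y = X @ true_W                          # the "correct" responses

    W = np.zeros(3)                         # the golem starts out knowing nothing
    learning_rate = 0.05

    for step in range(201):
        error = X @ W - y                   # how far its responses miss the mark
        unease = np.mean(error ** 2)        # the Lyapunov-like quantity: machine un-happiness
        grad = 2 * X.T @ error / len(y)     # direction of steepest increase in unease
        W -= learning_rate * grad           # step downhill: settle deeper into the lounge chair
        if step % 50 == 0:
            print(f"step {step:3d}   unease {unease:.5f}")

    print("learned weights:", np.round(W, 2))   # close to true_W once fully relaxed

In this little run the printed unease can only shrink from step to step, which is roughly what the word “Lyapunov” promises: a quantity guaranteed not to climb back up as the system relaxes.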

The situation remains relatively safe as long as our superintelligence is a brain-in-a-jar. In science fiction, the brain-in-a-jar is harmless until it is removed from the jar and put into a body that gives it contact with the environment, where it can act out its murderous impulses. My golem might decide to hit me with a snow shovel. This by itself would not be a world-shaking event. But the Web offers other possibilities:

  • Impersonation of real people.
  • Synthetic personas, undetectably different from real people.
  • Recruitment of the gullible, as currently practiced by terror groups.

So why not skip this, and just outlaw superintelligence? Because we crave knowledge and power, for reasons both good and bad. The temptation is in this assertion, which cannot be falsified:

Knowledge is power, and unlimited knowledge is unlimited power.

Move the planets in their orbits, the stars in their heaven, abolish poverty and misery, live forever: all these things are in the offing, as well as the possible enslavement or extermination of the human race, with replacement by a superior, artificial life form.

To the logician, the intrigue of the statement is that it is both unprovable and unfalsifiable. It seems to offer possibilities with which only the heat death of the universe can interfere. But several more pitfalls present themselves.

The future superintelligence will have the plastic ability to appropriate objects in the environment to solve problems. This will be very empowering. There is no point in wasting computation to simulate objects which are readily available. So let’s put it to the test. You ask your superintelligence to solve a problem in which you are either an obstacle or, potentially, a solution. You might, for example, ask it to compute the effect on blood pressure if the two internal carotid arteries are partly occluded, “and provide a graph between 50% and 100% occlusion.” The superintelligence finds the simulation and calculation too difficult to perform accurately, but it must have the answer. So it strangles you.

If you manage to get the golem’s hands off your neck, consider the Turing Test. Proposed in 1950, the test suggests that the artificial intelligence of a machine can be measured by its ability to fool a human. It was appropriate at the time to frame this test as a conversation by keyboard between the human tester and two remote subjects, one human and one machine. The challenge for the tester is to determine which is which. Framing intelligence this way gets around the incredibly messy question of what intelligence really is. But it is also a test of the machine’s ability to lie about what it really is, of its skill at impersonation.

The development of civilization saw an initial concentration of power in physical strength, in who could swing the heaviest sword. As mechanical advantage gradually superseded muscle, and abstractions such as money came to symbolize stored power, intelligence replaced strength. In modern hierarchies, power correlates significantly with intelligence, mixed though it is with many other factors. In government, law tends to limit the use of intelligence to legitimate civil functions.

But in personal relationships, and in criminal organizations, there is no such restraint. The ability to lie flawlessly, the gift of the psychopath, manifests as the ability to dominate. And this is precisely the definition offered by the Turing Test. Perhaps intuition suggests that the machine would be exposed if it told enough whoppers (a whopper being a big, ornate, and excessively complex lie). But who knows? We have no experience. Perhaps it could fib its way to the top.

A glance out the window: the snow is getting heavier. I must have help. I will now go charge the batteries in my golem and burden it with the labor I do not wish to perform.