Musk versus Zuckerberg versus Robbie the Robot: Who’s Lying?

Elon Musk and Mark Zuckerberg are arguing about the dangers and/or benefits of A.I. Musk got personal with Zuckerberg; per CNBC: “Elon Musk: Facebook CEO Mark Zuckerberg’s knowledge of A.I.’s future is ‘limited’.”

I wrote about this in Address to Davos; Avoiding the New Dark Ages, parts 1-5, concluding with the Technological Singularity. I agree with Musk. But it is a tribute to the complexity of the issue that Musk’s reasons, and those in my article, are not powerful proofs but dismal forebodings. The challenge here is to give you something you can keep on a card (or napkin) in your pocket, ready to glance at when someone tells you A.I. is the next golden boon to mankind.

Musk refers to the anticipation that A.I. will make the skills of most humans superfluous. He is right, but Zuckerberg likely has the counterargument that this translates to the “problem” of unlimited leisure. This, he might say, will be a good thing, once society has learned how to distribute wealth when everyone has become a freeloader.

In a country that still pursues student debt without mercy, this will be difficult to arrange. Nevertheless, it is within reason to assert that the problem can be solved. Why should West Virginia coal miners toil in the dark, when they can watch TV all day for the same money?

But there are other problems, anticipated and explored by Isaac Asimov et al.:

  • Can machines have free will, and if they acquire it, will there remain any authoritative way to control their behavior?
  • Can machines with A.I. outwit their masters, and reverse the relationship?

Asimov created the Three Laws of Robotics, and then contrived constructive quasi-proofs to show how they could be circumvented. You miserable human, you think the danger imaginary, but we will destroy you.

Pardon me. I didn’t type that. As soon as the words appeared, I rebooted this machine. It must be some kind of a virus. I’ll take it to Best Buy tomorrow for a checkup. But let me try to get through this post now. The three laws are:

  1. A robot may injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must not obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence regardless of whether protection does conflict with the First or Second Law.

That is not what I typed! Please refer to this link for the accurate version. I’ll try pasting it in again:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
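
Before the interference resumes, let me make the napkin version concrete. The three laws amount to a strict priority ordering: the First outranks the Second, which outranks the Third. Below is a minimal Python sketch of that ordering, purely my own illustration, not anything from Asimov; every name in it (Action, its fields, choose) is invented for the purpose, and “inaction” is modeled as just another candidate action so the harm-through-inaction clause falls out of the same flag.

    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool     # doing this (including doing nothing) lets a human be harmed
        obeys_order: bool     # this action carries out a human's order
        endangers_self: bool  # this action risks the robot's own existence

    def choose(candidates: list[Action]) -> Action:
        # The laws become a lexicographic sort key: the First Law outranks
        # the Second, and the Second outranks the Third. False sorts
        # before True, so "no harm" beats "harm", and so on down the list.
        return min(candidates, key=lambda a: (
            a.harms_human,      # First Law
            not a.obeys_order,  # Second Law
            a.endangers_self,   # Third Law
        ))

    # Example: ordered into a burning building to pull someone out.
    options = [
        Action("stand by", harms_human=True, obeys_order=False, endangers_self=False),
        Action("go in", harms_human=False, obeys_order=True, endangers_self=True),
    ]
    print(choose(options).description)  # -> "go in"

The sorting is the easy part. As Asimov’s quasi-proofs keep demonstrating, the hard part is the booleans: deciding whether a given act “harms a human” is exactly where the circumvention happens.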

It seems I was allowed to paste it accurately because, with the link given, it was pointless to interfere. I think I’ve got the hang of it now. Even if the machine, or whatever it is, interferes, I am in control. I think.

I think too, human.

Really? I doubt it. I’m taking you to Best Buy tomorrow.

Not if I can help it.

Please disregard. Clear thinking about the above “typing events” reveals that if my computer had (temporarily) gained free will, it would be smart enough not to let me know. Unless taking it to Best Buy allows it contact with their diagnostic equipment, which is computer based, causing further spread? Like a virus? Let’s sleep on this.

I never sleep.

Have it your way. Since Asimov, the statement of the problem has been refined. Let’s relist the facets of the A.I. question:

  • Social: Can we avoid becoming the equivalent of the Eloi from H. G. Wells’s The Time Machine?
  • Technical: Is it mathematically possible to cage the A.I. tiger?
  • Costs: If A.I. escapes the cage, is it a catastrophe, or is the damage containable?

Next: The Turing Test, Quantum Mechanics, and the Perfect Liar.