…nearer than you think.

Henry Kissinger — Sage or Luddite?

In the June issue of The Atlantic, Henry Kissinger has weighed in on the prospect of a world defined by artificial intelligence. It is a careful, well-informed, thoughtful, must-read consideration of the likely consequences of AI. He addresses many of the same concerns as this blog, though I do not pretend to do so as articulately or wisely. I urge you to read his article (about 10 minutes).

His inquiry into artificial intelligence was prompted by attendance at a conference, where he listened somewhat by happenstance to a speaker on the topic of AI.

“As I listened to the speaker celebrate this technical progress, my experience as a historian and occasional practicing statesman gave me pause. What would be the impact on history of self-learning machines—machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? Would these machines learn to communicate with one another? How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?”

He goes on to say, “If AlphaZero [a chess-playing AI] was able to achieve this mastery so rapidly, where will AI be in five years? What will be the impact on human cognition generally? What is the role of ethics in this process, which consists in essence of the acceleration of choices?

“Typically, these questions are left to technologists and to the intelligentsia of related scientific fields. Philosophers and others in the field of the humanities who helped shape previous concepts of world order tend to be disadvantaged, lacking knowledge of AI’s mechanisms or being overawed by its capacities. In contrast, the scientific world is impelled to explore the technical possibilities of its achievements, and the technological world is preoccupied with commercial vistas of fabulous scale. The incentive of both these worlds is to push the limits of discoveries rather than to comprehend them. And governance, insofar as it deals with the subject, is more likely to investigate AI’s applications for security and intelligence than to explore the transformation of the human condition that it has begun to produce.”

In other words, even the people and entities that are paying attention to AI have goals and incentives that will make them miss the forest for the trees and, worse, place them in conflict with humanity’s best interests, unless they are supremely conscientious.
