Conscience and Conflict [of Interest] — #2

The opinion of TroubleisNear is that the parties developing technologies with serious to catastrophic downside risks — most especially the major players, but also those developers whose applications seem only remotely related to the risks — bear the responsibility for protecting against those downsides, or what economists call externalities. This view is consistent with the recommendations of the 2017 Asilomar Beneficial AI Conference, which produced 23 principles to guide beneficial AI development. (These principles, though exemplary in so many ways, in my opinion do not go far enough. Post to come. Video and presentation documents from the top people in the field at the 2017 Asilomar BAI conference — an excellent resource trove — are available at the FLI website.)

The U.S. government was largely responsible for developing nuclear technology and has, more or less, been responsible for managing its externalities (though the private-sector nuclear energy industry has taken on a fair portion of that responsibility). I will not defend the track record, and to the extent it is flawed, it bears witness to the concerns expressed here.

Every single party to the development of these technologies has conflicts of interest relative to the best interests of Humanity. And no, I will not pretend to fully know or articulate what those best interests are. But a good place to start is the previously cited list of Asilomar AI Principles (https://futureoflife.org/ai-principles/). Holding developers accountable to these principles, and more, is a major objective of TroubleisNear.com.

So, let’s start with the major players.

