Michael Anissimov on Mitigating the Risks of Artificial Superintelligence

Michael Anissimov ranks among the most prominent and effective voices on existential risk, along with other issues related to the Singularity and the future of humanity and technology. Currently the Media Director for the Singularity Institute and a board member of Humanity+, Michael is a co-organizer of the Singularity Summit and a member of the Center for Responsible Nanotechnology's Global Task Force. His blog Accelerating Future is deservedly popular, featuring in-depth discussion of many important issues related to transhumanism.

The following quote summarizes some of Michael’s high-level views on existential risk:

I cannot emphasize this enough. If an existential disaster occurs, not only will the possibilities of extreme life extension, sophisticated nanotechnology, intelligence enhancement, and space expansion never bear fruit, but everyone will be dead, never to come back. This would be awful. Because we have so much to lose, existential risk is worth worrying about even if our estimated probability of occurrence is extremely low.

Existential risk creates a 'loafer problem': we always expect someone else to handle it. I assert that this is a dangerous strategy and should be discarded in favor of making the prevention of such risks a central focus.

"Coherent Aggregated Volition" (CAV)

– CAV wants to get at the core of real, current human values, as manifested in real human life.

"Coherent Extrapolated Volition" (CEV)

– CEV wants to get at the core of what humans would like their values to be, as manifested in what we would like our lives to be if we were all better people who were smarter and knew more.
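
Neither CAV nor CEV comes with a formal specification, but the contrast can be caricatured in code. In the toy Python sketch below, everything concrete (the value dimensions, the numbers, and the `idealize()` rule) is invented purely for illustration: CAV averages the values people actually hold today, while CEV first applies an idealization step, a stand-in for "if we were smarter and knew more", before aggregating.

```python
from statistics import mean

# Toy model: each person's values are scores over named dimensions.
# The dimensions and the idealize() rule are invented for illustration;
# neither CAV nor CEV specifies anything this concrete.

people = [
    {"honesty": 0.6, "compassion": 0.5, "long_term_thinking": 0.2},
    {"honesty": 0.8, "compassion": 0.7, "long_term_thinking": 0.3},
    {"honesty": 0.4, "compassion": 0.9, "long_term_thinking": 0.1},
]

def aggregate(value_sets):
    """Average each value dimension across people to form a shared core."""
    dims = value_sets[0].keys()
    return {d: round(mean(v[d] for v in value_sets), 2) for d in dims}

def idealize(values):
    """Stand-in for extrapolation: nudge values toward what a wiser,
    better-informed version of the person might endorse."""
    nudged = dict(values)
    nudged["long_term_thinking"] = min(1.0, nudged["long_term_thinking"] + 0.4)
    return nudged

cav = aggregate(people)                         # values as actually lived now
cev = aggregate([idealize(p) for p in people])  # values as we'd want them to be

print("CAV:", cav)
print("CEV:", cev)
```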

Goal –

Create a human-friendly superintelligence. The arguments for why this is a good idea have been laid out numerous times, and they are the focus of Nick Bostrom's essay "Ethical Issues in Advanced Artificial Intelligence". A growing number of transhumanists are adopting this view.

The current thinking on Friendly AI is not to create an AI that sticks around forever, but one that serves merely as a stepping stone to a process that embodies humanity's wishes. The AI is just an "initial dynamic" that sticks around long enough to determine the coherent core of humanity's goals and to implement it.

The idea is to create an AI that you actually trust. Giving control over the world to a Nanny AI would be a mistake, because you might never be able to get rid of it. I’d rather have an AI that is designed to get rid of itself once its job is done. Creating superintelligence is extremely dangerous, something you only want to do once. Get it right the first time.
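
As a structural sketch only (the class and method names below are hypothetical, and no real Friendly AI design is remotely this simple), the "initial dynamic" idea amounts to a bootstrap process with a built-in terminal state, in contrast to a Nanny AI, whose control loop has no exit condition:

```python
class InitialDynamic:
    """Toy lifecycle of an 'initial dynamic': run once, hand off, shut down.
    A Nanny AI, by contrast, would loop indefinitely with no exit."""

    def __init__(self, humanity_goals):
        self.humanity_goals = humanity_goals  # one goal-set per constituency
        self.finished = False

    def extract_coherent_core(self):
        # Placeholder notion of coherence: keep only the goals everyone
        # shares. A real system would need something far subtler.
        return set.intersection(*map(set, self.humanity_goals))

    def implement(self, core):
        print("Implementing coherent goals:", core)

    def run(self):
        core = self.extract_coherent_core()
        self.implement(core)
        self.finished = True  # job done: the AI removes itself


dynamic = InitialDynamic([{"survive", "flourish"}, {"survive", "explore"}])
dynamic.run()
assert dynamic.finished  # unlike a Nanny AI, it does not stick around
```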

If we had a really benevolent human and an uploading machine, would we ask that person to just kickstart the Singularity, or to be a Nanny first? I would presume the former, so why would we ask an AI to be a Nanny? If we trust the AI as we would trust a human, it can do everything a human can do, and it is the best available entity for the job, so why not let it go ahead and enhance its own intelligence in an open-ended fashion? If we can trust a human, then we can trust an intelligently built Friendly AGI even more.

I suspect that by the time we have an AI smart enough to be a Nanny, it would be able to build itself molecular nanotechnology (MNT) computers the size of the Hoover Dam and solve the problem of post-Nanny AI.

Steve Omohundro feels differently. He thinks that a trustable community might be easier to create than a trustable "singleton" mind.

Nick Bostrom covered this in “What is a Singleton?”:

In set theory, a singleton is a set with only one member, but as I introduced the notion, the term refers to a world order in which there is a single decision-making agency at the highest level. Among its powers would be (1) the ability to prevent any threats (internal or external) to its own existence and supremacy, and (2) the ability to exert effective control over major features of its domain (including taxation and territorial allocation). …

A democratic world republic could be a kind of singleton, as could a world dictatorship. A friendly superintelligent machine could be another kind of singleton, assuming it was powerful enough that no other entity could threaten its existence or thwart its plans. A “transcending upload” that achieves world domination would be another example.

The key idea is a single decision-making agency at the highest level. That agency could be made up of trillions of sub-agents, as long as they acted in harmony on the highest-level decisions and prevented tragedies of the commons. This is how even a democratic world republic could be a singleton.
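
A toy illustration of that last point (all names and numbers below are invented): many sub-agents would each benefit from overusing a shared resource, but a single top-level decision rule, the "singleton", caps total use so the commons survives.

```python
# Toy commons: a shared resource that collapses if total harvest exceeds it.
RESOURCE = 100.0

class SubAgent:
    def __init__(self, name, desired_harvest):
        self.name = name
        self.desired_harvest = desired_harvest  # what it would take unilaterally

def without_singleton(agents):
    """Each agent harvests what it wants; the commons can be destroyed."""
    total = sum(a.desired_harvest for a in agents)
    return "collapse" if total > RESOURCE else "sustained"

def with_singleton(agents, cap=RESOURCE):
    """One highest-level rule scales every agent back so total use stays
    within the cap, preventing the tragedy of the commons."""
    total = sum(a.desired_harvest for a in agents)
    scale = min(1.0, cap / total) if total > 0 else 1.0
    return {a.name: round(a.desired_harvest * scale, 2) for a in agents}

agents = [SubAgent(f"agent{i}", 30.0) for i in range(5)]  # wants 150 in total
print(without_singleton(agents))  # -> collapse
print(with_singleton(agents))     # each scaled back to 20.0, commons sustained
```

Note that the singleton here is a decision rule, not a separate mind: the sub-agents remain distinct, which is why a harmonious community (or a democratic world republic) can qualify just as well as a single machine.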
