Mainstreaming of Artificial Intelligence and Intelligence Explosion Risks

The New York Times reported on the Association for the Advancement of Artificial Intelligence 2008-2009 Presidential Panel on Long-Term AI Futures conference.

Co-chairs: Eric Horvitz and Bart Selman

Panel: Margaret Boden, Craig Boutilier, Greg Cooper, Tom Dean, Tom Dietterich, Oren Etzioni, Barbara Grosz, Eric Horvitz, Toru Ishida, Sarit Kraus, Alan Mackworth, David McAllester, Sheila McIlraith, Tom Mitchell, Andrew Ng, David Parkes, Edwina Rissland, Bart Selman, Diana Spears, Peter Stone, Milind Tambe, Sebastian Thrun, Manuela Veloso, David Waltz, Michael Wellman

The AAAI President has commissioned a study to explore and address potential long-term societal influences of AI research and development. The panel will consider the nature and timing of potential AI successes, and will define and address societal challenges and opportunities in light of these potential successes. On reflecting about the long term, panelists will review expectations and uncertainties about the development of increasingly competent machine intelligences, including the prospect that computational systems will achieve “human-level” abilities along a variety of dimensions, or surpass human intelligence in a variety of ways. The panel will appraise societal and technical issues that would likely come to the fore with the rise of competent machine intelligence. For example, how might AI successes in multiple realms and venues lead to significant or perhaps even disruptive societal changes?

The committee’s deliberation will include a review and response to concerns about the potential for loss of human control of computer-based intelligences and, more generally, the possibility for foundational changes in the world stemming from developments in AI. Beyond concerns about control, the committee will reflect about potential socioeconomic, legal, and ethical issues that may come with the rise of competent intelligent computation, the changes in perceptions about machine intelligence, and likely changes in human-computer relationships.

* The panelists generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet.

* But they agreed that robots capable of killing autonomously either already exist or soon will. [This tracks old and not especially significant news: are robotic weapons really more significant than cluster bombs or fuel-air explosives?]

* They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones? [advanced spam]

* The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home.

It is my view that self-driving cars would be a good thing. There are about 1.2 million deaths worldwide each year from traffic accidents, and robotic driving should be introduced in a way that eliminates or greatly reduces them. Also, freeing the driver from having to concentrate on the road during a commute could reclaim 1-2 hours a day for productive work, which would increase economic growth.

Robotic automation has eliminated jobs in the past, but it has also increased overall wealth and created more jobs than it destroyed; the same pattern can be expected here.

Electric-powered, robotic-car-only urban transportation zones could accelerate the deployment of robotic cars.

There is already considerable progress toward robotic cars at the city-wide scale.