Singularity Summit – how good are we at predicting AGI #SS12

Stuart Armstrong talked about how well we are predicting AGI (he worked on this with Kaj Sotala).

Bottom line – we are bad at predicting AGI (artificial general intelligence).
He says we should widen our error bars: instead of 2040, say 2017 to 2113.

I think we need to do a better job of defining what we mean by AGI: decompose it into more sub-goals and parts of the problem, and decompose the benefits and downsides that we can and should expect. We should do the same for other large-scale technological possibilities.

What prediction performance should we expect?
What do we actually get?

Fields arranged by how they make predictions, from less pure to more pure:
AGI predictors, historians, sociologists, economists, psychologists, biologists, chemists, physicists, mathematicians.

Their predictions rest, in roughly the same order, on expert opinion, past examples, the scientific method, and deduction.

The purer the field, the more it has real, objective criteria to predict against.

James Shanteau – research on competence in experts.

Good expert predictors: the experts agree on the stimuli, feedback is available, and the problem is decomposable.
Bad predictors: the experts disagree, there is no feedback, and the problem is not decomposable.

Grind is easy to predict; insight is hard.

You can estimate how long something will take when it is just a matter of grinding along.

"Moore's law, hence AGI" is a common argument, but Moore's law is grind; the insight needed for AGI is the part that is hard to predict.

257 AGI-related predictions were collected (Singularity Institute database); 95 of them are timeline predictions.
Each timeline prediction was reduced to a single median predicted date.

The predictions come from both experts and non-experts. Only 7 predictions put AGI after 2100.
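As a rough illustration of that reduction (not Armstrong's actual code, and the sample predictions below are made up), a list of timeline predictions can be boiled down to a median date and simple counts like this:

    from statistics import median

    # Hypothetical timeline predictions: (predictor, year the prediction was made,
    # predicted year of AGI). Illustrative only, not the Singularity Institute data.
    predictions = [
        ("expert A", 1995, 2030),
        ("expert B", 2005, 2045),
        ("non-expert C", 2008, 2025),
        ("expert D", 2010, 2150),
        ("non-expert E", 2011, 2040),
    ]

    predicted_years = [agi_year for _, _, agi_year in predictions]
    print("median predicted AGI year:", median(predicted_years))
    print("predictions past 2100:", sum(1 for y in predicted_years if y > 2100))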

The Maes-Garreau law: AGI is predicted to arrive just before the predictor dies. In this data set, though, the predictions are not based on the predictor's expected lifespan at the time of prediction.

About one third of the predictions put AGI 15-25 years out: not too soon, not too far.
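A minimal sketch of the kind of checks behind those two observations, assuming each record holds the year the prediction was made, the predicted AGI year, and the predictor's birth year (hypothetical numbers, and a crude fixed life expectancy):

    # Hypothetical records: (year prediction made, predicted AGI year, predictor's birth year)
    records = [
        (1995, 2030, 1950),
        (2005, 2045, 1970),
        (2008, 2025, 1985),
        (2010, 2150, 1960),
        (2011, 2040, 1975),
    ]

    LIFE_EXPECTANCY = 80  # crude assumption for the Maes-Garreau check

    # How many predictions fall in the 15-25 year "not soon, not too far" window?
    horizons = [agi - made for made, agi, _ in records]
    in_window = sum(1 for h in horizons if 15 <= h <= 25)
    print("share predicting 15-25 years out:", in_window / len(records))

    # Maes-Garreau check: does the predicted date land just before the predictor's
    # expected death? Small positive gaps would support the law.
    gaps = [(birth + LIFE_EXPECTANCY) - agi for made, agi, birth in records]
    print("years between predicted AGI and expected end of life:", gaps)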

There is no evidence that the experts have any predictive advantage over the non-experts.

We should spread our uncertainty.
The current best timeline prediction is for whole brain emulation: it is very decomposed, with justified grind, clear assumptions and scenarios, integration of new data, and multiple paths to get there.

Its scenarios cover no overhang, one overhang, and two overhangs, with the uncertainty spread over the century.
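One way to picture "uncertainty spread over the century" is as a weighted combination of those scenarios. The sketch below uses made-up scenario date ranges and weights purely for illustration:

    # Made-up scenario estimates: (scenario, weight, earliest year, latest year)
    scenarios = [
        ("no overhang", 0.40, 2040, 2080),
        ("one overhang", 0.35, 2055, 2100),
        ("two overhangs", 0.25, 2070, 2120),
    ]

    # The combined range spans most of the century once the scenarios are pooled.
    earliest = min(early for _, _, early, _ in scenarios)
    latest = max(late for _, _, _, late in scenarios)
    weighted_mid = sum(w * (early + late) / 2 for _, w, early, late in scenarios)
    print(f"combined range: {earliest}-{latest}, weighted midpoint ~{weighted_mid:.0f}")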

Simplified Omohundro-Yudkowsky thesis
Behaving dangerously …

Many AGI designs have the potential for unexpected dangerous behavior

AGI programmers should demonstrate to moderate skeptics that their design is safe.

Is the thesis wrong, in your opinion?

Our own opinions are not strong evidence.
Philosophy has some useful things to say.
AGI timeline predictions are problematic.
