What did you learn about the singularity today?

by Anders Sandberg
http://www.aleph.se/andart/archives/2010/10/what_did_you_learn_about_the_singularity_today.html

Anna Salamon gave a talk on “How intelligible is intelligence?” The key question is whether there exist simple principles that could produce powerful optimizers working across many domains and targets, and whether those principles are easy to figure out. This has obvious implications for the above questions, such as how likely sudden breakthroughs are compared to slow trial-and-error or complete failure (cf. my and Carl’s talk). While she did not have a precise formal definition of intelligibility, the idea is pretty intuitive: intelligible systems can be extended from principles or simpler versions, they are not domain-specific, they can be implemented in many substrates, and aliens would likely be able to come up with the concept. Unintelligible systems are just arbitrary accumulations of parts or everything-connected-to-everything jumbles.

Looking through evidence from theoretical computer science, practical computer science, biology and human communities, her conclusion was that intelligence is at least somewhat intelligible – it does not seem to rely just on accumulating domain-specific tricks; it seems to have a few general and likely relatively simple modules that are extendible. Overall, a good start. As she said, we now need to look at whether we can formally quantify the question and gather more evidence. It actually looks possible.
She made the point that many growth curves (in technology) look continuous rather than stair-step-like, which suggests they are due to progress on unintelligible systems (an accumulation of many small hacks). It might also be that systems have an intelligibility spectrum: modules on different levels are differently difficult, and while there might be smooth progress on one level, other levels might be resistant (for example, neurons are easier to figure out than cortical microcircuits). This again has bearing on the WBE problem: at what level do intelligibility, the ability to get the necessary data, and sufficient computing power first intersect? Depending on where, the result might be very different (Joscha Bach and I had a big discussion on whether ‘generic brains’/brain-based AI (Joscha’s view) or individual brains (my view) would be the first outcome of WBE, with Professor Günther Palm arguing for brain-inspired AI).
Joscha Bach argued that there were four preconditions for reaching an AI singularity:

  1. Perceptual/cognitive access. The AIs need to be able to sense and represent the environment; they need universal representations and sufficiently general intelligence.
  2. Operational access. They need to be able to act upon the outside environment: they need write access, feedback, the ability to reach the critical parts of the environment, and (in the case of self-improvement) access to their own substrate.
  3. Directed behaviour. They need to autonomously pursue behaviours that include reaching the singularity. This requires a motivational system or some functional equivalent, agency (directed behaviour), autonomy (the ability to set their own goals), and a tendency to set goals that increase abilities and survivability.
  4. Resource sufficiency. There have to be enough resources for them to do all this.

His key claim was that these functional requirements are orthogonal to the architecture of actual implementations, and hence an AI singularity is not automatically a consequence of having AI.
I think this claim is problematic: precondition 1 (and maybe parts of 3) is essentially implied by any real progress in AI. But I think he clarified a set of important assumptions, and if all these preconditions are necessary, then the failure of any one of them is enough to prevent an AI singularity. Refining this a bit further might be really useful.
He also made a very important point that is often overlooked: the threat/promise lies not in the implementation but is a functional one. We should worry about self-improving, self-extending intelligent agents pursuing a non-human agenda. Many organisations come close. Just because they are composed of humans does not mean they work in the interest of those humans. I think he is right on the money that we should watch for the possibility of an organisational singularity, especially since AI or other technology might provide further enhancement of the preconditions above even when the AI itself is not enough to go singular.
Kaj Sotala talked about factors that give a system high “optimization power”/intelligence. Calling it optimization power has the benefit of discouraging anthropomorphizing, but it might miss some of the creative aspects of intelligence. He categorised the factors as:

  1. Hardware advantages: faster serial processing, faster parallel processing, superior working memory equivalent.
  2. Self-improvement and architectural advantages: the ability to modify itself, overcome biased reasoning, algorithms for formally correct reasoning, and adding new modules (such as fully integrating complex models).
  3. Software advantages: copyability, improved communication bandwidth, speed, etc.

Meanwhile humans have various handicaps, ranging from our clunky hardware to our tendency to model others by modelling them on ourselves. So there are good reasons to think an artificial intelligence, if it came into existence, could achieve various optimization/intelligence advantages over humans relatively easily. Given the previous talk, his list is also interestingly functional rather than substrate-based. He concluded: “If you are building an AI, please be careful, please try to know what you are doing”.

Intelligence explosion dynamics

Stephen Kaas presented an endogenous growth model of the economic impact of AI. It was extremely minimal: just Romer’s model with the added assumption that, beyond a certain technology level, new physical capital also produces human capital (an AI or upload is, after all, physical capital that has ‘human’ capital). The result is a finite-time singularity, of course; this is generic behaviour even with diminishing returns (a minimal sketch of the blow-up follows below). The virtue of the model is that it is a minuscule extension of the standard model: no complex assumptions, just AI doing what it is supposed to do. Very neat. Carl Shulman and I gave a talk about hardware vs. software as the bottleneck for intelligence explosions; see the previous post for details. Basically, we argued that if hardware is the limiting factor we should see earlier but softer intelligence explosions, while if software is hard to do we should see later, less expected and harder takeoffs.
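To see why this kind of capital feedback produces a finite-time singularity, here is a minimal sketch (my illustration of the generic mechanism, using an assumed reduced form rather than Kaas’s actual equations): once capital feeds back into its own production super-linearly, the growth equation dK/dt = c·K^α with α > 1 has a solution that diverges at a finite time.

```python
# Minimal sketch, not Kaas's actual model: if capital K feeds back into
# its own production super-linearly, dK/dt = c * K**alpha with alpha > 1,
# then the closed-form solution
#   K(t) = (K0**(1 - alpha) - c * (alpha - 1) * t) ** (1 / (1 - alpha))
# diverges when the bracketed term hits zero, i.e. at a finite time t*.

def blowup_time(K0: float, c: float, alpha: float) -> float:
    """Finite time t* at which K(t) becomes infinite (requires alpha > 1)."""
    assert alpha > 1, "alpha <= 1 gives ordinary (non-singular) growth"
    return K0 ** (1 - alpha) / (c * (alpha - 1))

print(blowup_time(K0=1.0, c=0.05, alpha=1.5))  # -> 40.0 (time units)
```

The super-linearity is the whole story here: with α = 1 the same equation gives ordinary exponential growth, and only the feedback of capital producing the ‘human’ capital that produces more capital pushes α above 1 and creates the finite-time blow-up.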

What did I learn?

The big thing was that it actually looks like one could create a field of “acceleration studies”, dealing with Amnon’s questions in an intellectually responsible manner. Previously we have seen plenty of handwaving, but some of that handwaving has now been distilled, through internal debate and some helpful outside action, to the stage where real hypotheses can be stated, evidence collected and models constructed. It is still a pre-paradigmatic field, which might be a great opportunity – we do not have any hardened consensus on How Things Are, and plenty of potentially useful disagreements.
Intelligibility seems to be a great target for study, together with a more general theory of how technological fields can progress. I got some ideas that are so good I will not write about them until I have tried to turn them into a paper (or blog post).
One minor realization was to recall Amdahl’s law, which had really slipped my mind but seems quite relevant for updates of our paper. Overall, the taxonomies of preconditions and optimization power increases, as well as Carl’s analysis of ‘serial’ versus ‘parallel’ parts of technology development, suggest that this kind of analysis could be extended to look for bottlenecks in singularities: they will at the very least be dominated by the least ‘parallelisable’ element of the system – which might very well be humans, in the case of society-wide changes.
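For reference, Amdahl’s law in its standard form (the analogy to singularity bottlenecks is mine): if only a fraction p of a process can be accelerated, the un-accelerated remainder 1 − p caps the total speedup at 1/(1 − p), no matter how large the acceleration.

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Amdahl's law: overall speedup when a fraction p of the work is
    accelerated by a factor s; the serial fraction (1 - p) bounds the gain."""
    return 1.0 / ((1.0 - p) + p / s)

# Even infinitely fast progress on 95% of the process leaves the
# un-accelerated 5% as the bottleneck: total speedup caps out at 20x.
print(amdahl_speedup(p=0.95, s=float("inf")))  # -> 20.0
```

By analogy, if humans remain the un-parallelisable ‘serial fraction’ of a society-wide transition, they set the ceiling on how fast the whole thing can go.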
