“Super-Turing” AI gets development funding

University of Massachusetts Amherst computer scientist Hava Siegelmann has received funding to develop the first ever “Super-Turing” computer. Siegelmann says the device, which is based on analog recurrent neural networks, will usher in a level of intelligence not seen before in artificial computation.

Alan Turing set out the basis for the digital computers we use today back in the 1930s. In 1948, he proposed a concept for a different type of computing machine that would use what he called “adaptive inference” in its computations.

Siegelmann, an expert in neural networks, wants to evolve Turing’s ideas into reality. She and research colleague Jeremie Cabessa are working to turn what she has dubbed Super-Turing computation into an adaptable computational system that learns and evolves. She says the system will use input from the environment in a way quite different from classic Turing-type digital computers.

Neural Computation – The Computational Power of Interactive Recurrent Neural Networks, 2012

In classical computation, rational- and real-weighted recurrent neural networks were shown to be respectively equivalent to and strictly more powerful than the standard Turing machine model. Here, we study the computational power of recurrent neural networks in a more biologically oriented computational framework, capturing the aspects of sequential interactivity and persistence of memory. In this context, we prove that so-called interactive rational- and real-weighted neural networks show the same computational powers as interactive Turing machines and interactive Turing machines with advice, respectively. A mathematical characterization of each of these computational powers is also provided. It follows from these results that interactive real-weighted neural networks can perform uncountably many more translations of information than interactive Turing machines, endowing them with super-Turing capabilities.
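
To make the rational-versus-real distinction concrete, here is a minimal sketch (in Python, not taken from the paper) of the kind of state update these recurrent networks perform, using the saturated-linear activation familiar from Siegelmann’s analog model; the weights, dimensions, and input stream are invented for illustration, and floating-point numbers can only approximate genuinely real weights.

from fractions import Fraction
import math

def sat_lin(x):
    # Saturated-linear activation: clamp each neuron's value to [0, 1].
    return max(0, min(1, x))

def step(state, weights, bias, u):
    # One synchronous update x(t+1) = sigma(W x(t) + b u(t)) of a recurrent net.
    return [sat_lin(sum(w * s for w, s in zip(row, state)) + b * u)
            for row, b in zip(weights, bias)]

# Rational weights: in this framework such networks are Turing-equivalent.
W_rat = [[Fraction(1, 2), Fraction(1, 4)],
         [Fraction(1, 3), Fraction(1, 2)]]
b_rat = [Fraction(1, 2), Fraction(1, 4)]

# Real (irrational) weights are the source of the extra power; floats can only
# approximate them, so this branch is purely illustrative.
W_real = [[math.sqrt(2) / 2, 0.25],
          [1 / math.pi, 0.5]]
b_real = [0.5, 0.25]

state_q, state_r = [Fraction(0), Fraction(0)], [0.0, 0.0]
for bit in [1, 0, 1, 1]:          # a short input stream
    state_q = step(state_q, W_rat, b_rat, bit)
    state_r = step(state_r, W_real, b_real, bit)
print(state_q, state_r)

The formal results above concern what such update rules can compute in the limit over interactive input streams, not any particular finite simulation like this one.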

Super-Turing Computation: A Case Study Analysis (61 pages by Keith Douglas, 2003)

Wikipedia on Hypercomputation or super-Turing computation

Hypercomputation or super-Turing computation refers to models of computation that go beyond, or are incomparable to, Turing computability. This includes various hypothetical methods for the computation of non-Turing-computable functions, following super-recursive algorithms (see also supertask). The term “super-Turing computation” appeared in a 1995 Science paper by Hava Siegelmann. The term “hypercomputation” was introduced in 1999 by Jack Copeland and Diane Proudfoot.

The terms are not quite synonymous: “super-Turing computation” usually implies that the proposed model is supposed to be physically realizable, while “hypercomputation” does not.

Technical arguments against the physical realizability of hypercomputation have been presented.

Each step in Siegelmann’s Super-Turing model starts with a new Turing machine that computes once and then adapts. The cardinality of the set of natural numbers is written aleph-zero (ℵ0); it also represents the number of different infinite calculations possible for classical Turing machines operating on continuously arriving real-world inputs. Siegelmann says her most recent analysis shows that Super-Turing computation has 2^ℵ0 possible behaviors. “If the Turing machine had 300 behaviors, the Super-Turing would have 2^300, more than the number of atoms in the observable universe,” she enthused.
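
A quick check of that arithmetic (the figure of roughly 10^80 atoms in the observable universe is the usual estimate, not something computed in the article):

# 2^300 versus the commonly cited ~10^80 atoms in the observable universe
behaviors = 2 ** 300
atoms = 10 ** 80
print(len(str(behaviors)))   # 91 digits, i.e. roughly 2.0e90
print(behaviors // atoms)    # about 2 * 10^10: 2^300 exceeds 10^80 by ~10 orders of magnitude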

Siegelmann claims the Super-Turing machine will not only be flexible and adaptable, but also economical. When presented with a visual problem, for example, it will act more like our human brains and choose salient features in the environment on which to focus, rather than using its power to visually sample the entire scene as a camera does. This economy of effort, she contends, is another hallmark of high artificial intelligence.
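
The contrast she draws, between exhaustively sampling a frame and attending only to its most salient parts, can be sketched roughly as follows; the saliency score (a patch’s deviation from the frame’s mean intensity) and the patch budget are illustrative stand-ins, not her model.

def salient_patches(frame, k=1):
    # Rank 2x2 patches by a crude saliency score (deviation from the frame's
    # mean intensity) and keep only the top k, instead of processing every
    # patch the way an exhaustive, camera-like scan would.
    mean = sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))
    patches = []
    for i in range(0, len(frame), 2):
        for j in range(0, len(frame[0]), 2):
            block = [frame[i][j], frame[i][j + 1], frame[i + 1][j], frame[i + 1][j + 1]]
            patches.append((abs(sum(block) / 4 - mean), (i, j)))
    return sorted(patches, reverse=True)[:k]

frame = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
print(salient_patches(frame))   # only the bright lower-right patch is kept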

Taxonomy of “super-recursive” computation methodologies

Burgin has collected a list of what he calls “super-recursive algorithms” (from Burgin 2005):

limiting recursive functions and limiting partial recursive functions (E. M. Gold)
trial and error predicates (Hilary Putnam)
inductive inference machines (Carl Herbert Smith)
inductive Turing machines (one of Burgin’s own models)
limit Turing machines (another of Burgin’s models)
trial-and-error machines (Ja. Hintikka and A. Mutanen)
general Turing machines (J. Schmidhuber)
Internet machines (van Leeuwen, J. and Wiedermann, J.)
evolutionary computers, which use DNA to produce the value of a function (Darko Roglic)
fuzzy computation (Jiří Wiedermann)
evolutionary Turing machines (Eugene Eberbach)

In the same book, he also presents a list of “algorithmic schemes”:

Turing machines with arbitrary oracles (Alan Turing)
Transrecursive operators (Borodyanskii and Burgin)
machines that compute with real numbers (L. Blum, F. Cucker, M. Shub, and S. Smale)
neural networks based on real numbers (Hava Siegelmann)

The expressive power of analog recurrent neural networks on infinite input streams (2012)

We consider analog recurrent neural networks working on infinite input streams, provide a complete topological characterization of their expressive power, and compare it to the expressive power of classical infinite word reading abstract machines. More precisely, we consider analog recurrent neural networks as language recognizers over the Cantor space, and prove that the classes of ω-languages recognized by deterministic and non-deterministic analog networks correspond precisely to the respective classes of -sets and -sets of the Cantor space. Furthermore, we show that the result can be generalized to more expressive analog networks equipped with any kind of Borel accepting condition. Therefore, in the deterministic case, the expressive power of analog neural nets turns out to be comparable to the expressive power of any kind of Büchi abstract machine, whereas in the non-deterministic case, analog recurrent networks turn out to be strictly more expressive than any other kind of Büchi or Muller abstract machine, including the main cases of classical automata, 1-counter automata, k-counter automata, pushdown automata, and Turing machines.
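
As a toy illustration of that setting, the sketch below drives a small analog network with an infinite binary input stream and records when a designated output neuron fires; a Büchi-style acceptance condition would ask whether this happens infinitely often, which a finite simulation can only hint at. The two-neuron network, weights, and threshold are invented for illustration and are not the constructions used in the paper.

import math

def sat_lin(x):
    # Saturated-linear activation clamping to [0, 1].
    return max(0.0, min(1.0, x))

def firing_times(stream, steps=200, threshold=0.5):
    # Drive a tiny two-neuron analog network with a 0/1 input stream and record
    # the times at which the designated output neuron exceeds the threshold.
    # A Buchi-style acceptance condition asks whether this happens infinitely
    # often; a finite prefix can only report what was actually observed.
    x1, x2 = 0.0, 0.0
    fired = []
    for t in range(steps):
        u = stream(t)
        x1, x2 = sat_lin(0.5 * x1 + u / math.pi), sat_lin(0.9 * x2 + 0.2 * x1)
        if x2 > threshold:
            fired.append(t)
    return fired

# An eventually periodic input word: 1 on even positions, 0 on odd positions.
print(firing_times(lambda t: 1 - t % 2)[:10])

Nothing about this toy exhibits super-Turing behavior, of course; it only illustrates the infinite-stream, acceptance-condition framework the paper formalizes.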

Discontinuities in Recurrent Neural Networks (1998, co-authored by Hava Siegelmann)
