
August 27, 2007

Are quantum computers needed for AGI?

D-Wave Systems CTO Geordie Rose theorizes about the usefulness, and possible necessity, of quantum computers for human-level and higher artificial intelligence.

Summarizing: Are humans somehow getting good approximate solutions to problems that are NP-hard? If so, would quantum computers be useful for developing alternative ways to do as well as or better than that innate human capability?

What got me thinking about intelligence in the first place was the observation that many of the tasks that seem to be difficult for computers, but relatively easy for biological brains, are most naturally thought of as NP-hard optimization problems: basically anything that involves complex pattern matching, such as recognizing speech, inference, relational database search, vision, and learning.

Another thing that seems interesting is this: take any algorithm that scales linearly with input size. For the problem this algorithm solves, can you think of a single example where a human could beat a computer? I can’t think of one.
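To make the linear-scaling point concrete, here is a minimal Python sketch (my own illustration, not from the post): a single linear pass to find the largest number in a list. A computer handles millions of items in a fraction of a second, which no human could approach on a problem that scales linearly.

```python
import random
import time

def find_max(values):
    """Return the largest value with a single O(n) pass over the input."""
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

# Ten million random numbers: a linear-time task a computer finishes
# almost instantly, while a human could not even read the input in a lifetime.
data = [random.random() for _ in range(10_000_000)]
start = time.time()
print(find_max(data), "found in", round(time.time() - start, 2), "seconds")
```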

Finally: biological brains operate on small amounts of input data (from the five senses). For example, if we look at a photograph, the total data we receive is quite small.

Is it possible that the notion of complexity classes is important for asking the right questions about intelligence? Here’s a rough outline of an idea.

1. Categorize all of the problems that biological brains have to solve as well-posed computational problems.

2. The subset of these problems that can be solved with algorithms that scale polynomially can always be done better with silicon than with bio-brains. Note that this isn't true in general; it relies on the observation that the input problem instance size is small for bio-brains (the number of pixels in a photograph, e.g.).

3. There are problems that have survival value to solve that are harder than P.

4. Brains evolved excellent heuristics to provide quick approximate solutions to the problems harder than P, and currently it is on that subset that bio-brains beat silicon (a rough example of such a heuristic is sketched after this list).

5. In a hierarchy of difficulty, some problems will be too hard for bio-brains to evolve heuristics for. This means that the primary differentiator in this picture will be the "easiest hard problems"; the hardest of the hard problems are too hard for evolution to create hardware heuristics for.

6. The easiest hard problems (the group most likely to have good hardware heuristics and bad software heuristics) are NP-hard optimization problems.
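To illustrate point 4 concretely, here is a minimal Python sketch (my own example, not from the post) of a cheap heuristic for an NP-hard optimization problem: the nearest-neighbour rule for the travelling salesman problem. It runs in polynomial time and returns a decent but generally non-optimal tour, which is roughly the role the post assigns to the brain's evolved heuristics.

```python
import math
import random

def tour_length(points, order):
    """Total length of the closed tour that visits the points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def nearest_neighbor_tour(points):
    """Greedy heuristic for the NP-hard travelling salesman problem.

    Runs in O(n^2) time and usually gives a reasonable, but not optimal, tour.
    """
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nearest = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

# 50 random cities: exhaustive search over all orderings is hopeless,
# but the greedy heuristic produces an answer instantly.
cities = [(random.random(), random.random()) for _ in range(50)]
tour = nearest_neighbor_tour(cities)
print("heuristic tour length:", round(tour_length(cities, tour), 3))
```

Exactly solving even a 50-city instance means searching an astronomically large space of orderings; the heuristic sidesteps that at the cost of optimality, trading a guarantee for speed.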

I tend to think now that breakthroughs in machine intelligence are going to come from algorithms, either new classical algorithms or the capability to run quantum algorithms on quantum hardware, and not from Moore's Law advances.

4 comments:

Dezakin said...

This is ridiculously obviously not required for human-level AI. The brain doesn't do any managing of decoherence or anything similar.

bw said...

I think the theory is that humans have evolved some intuitive approximation capabilities that are used to solve certain NP-hard things. Because we have not determined algorithms we could implement to match those good approximations, a better and possibly easier-to-discover alternative is to implement quantum computer algorithms.

Are humans somehow getting good approximate solutions to problems that are NP-hard?
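To give a rough flavour of the kind of quantum algorithm being alluded to, here is a small classical simulation of Grover's search (my own illustrative sketch, not anything D-Wave-specific). Grover's algorithm finds a marked item among N candidates in roughly sqrt(N) oracle calls rather than the ~N/2 a blind classical search needs on average; for NP-hard search problems that is a quadratic, not exponential, speedup.

```python
import math
import numpy as np

def grover_search(num_items, marked_index):
    """Classical state-vector simulation of Grover's quantum search.

    Amplifies the amplitude of the marked item over ~sqrt(N) iterations,
    versus the ~N/2 guesses an unstructured classical search needs on average.
    """
    # Start in the uniform superposition over all items.
    amps = np.full(num_items, 1.0 / math.sqrt(num_items))
    iterations = int(round(math.pi / 4 * math.sqrt(num_items)))
    for _ in range(iterations):
        amps[marked_index] *= -1          # oracle: flip the sign of the marked amplitude
        amps = 2 * amps.mean() - amps     # diffusion: inversion about the mean
    probabilities = amps ** 2
    return int(np.argmax(probabilities)), float(probabilities[marked_index])

found, prob = grover_search(num_items=1024, marked_index=373)
print("found item", found, "with probability", round(prob, 3))
```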

AI guy said...

Dezakin: You must have missed the "reading comprehension" class last year in grade five... we forgive you... What he said was that every major field in AI contains problems that are NP-hard (which is true). It's quite likely that the pattern matching of complex sequences that underlies how the brain probably works is performed using an evolved heuristic. These types of problems (average-case NP-hard optimization problems) might be solved (possibly exponentially) faster with quantum computers than the way the brain does it. Next time, read the post before you spout off, fool.

bw said...

There is no need to get worked up about this or to start a flame war. We are just discussing an early proposal of a scientific theory.

I know that Dezakin often has very insightful comments, especially around engineering and energy.

I appreciate your input as well, AI guy.