Interview of Noah Goodman of the Computational Cognitive Science Group at MIT by Sander Olson


Here is the Noah Goodman interview. Dr. Goodman is a research scientist at the Computational Cognitive Science Group at MIT. The Computational Cognitive Science Group examines the computational abilities of the human brain. Dr. Goodman believes that artificial general intelligence may be only twenty years away.




Question: MIT appears to have a number of AI programs operating simultaneously. How are these programs coordinated?


Answer: MIT has an interlocking system of departments and labs. The lab I work in, called the Computational Cognitive Science Group (CCSG), is part of both the Computer Science and Artificial Intelligence Lab and the Brain and Cognitive Sciences Department at MIT. The main focus of the CCSG is building computational models of human thinking and reasoning. We do research by combining psychological experiments on the cognitive science side with machine learning on the AI side.


Question: What is the ultimate aim of the CCSG?


Answer: The ultimate goal is to derive a computational understanding of the nature of intelligence. Unlike a pure AI group or a pure psychology group, we seek both to understand how natural intelligence works computationally and to uncover the principles needed for engineering intelligence.


Question: The Computational Cognitive Science Group at MIT seeks to understand the “computational basis of learning and inference”. To what extent is brain activity analogous to computation?


Answer: The short answer is that we really don’t know. The guiding principle of cognitive science is that the mind derives from information processing in the brain. The brain is clearly an information processor, but is this processing system similar to that of a modern computer? Probably not. But we are making some pretty exciting progress in understanding what kind of information processing is needed to explain human intelligence.

Question: Tell us about your research into Bayesian statistics and probability theory. What role is this playing in AI research?

Answer: Bayesian statistics and probability theory are all about reasoning in the presence of uncertainty, based on inference and complex knowledge. For instance, a botanist looking for certain types of leaves can use several tactics. One way is simply to measure the features of many leaves and extract simple statistics. Another approach is to build a model that describes how a leaf is constructed, based not on a deterministic program but on a probabilistic program. Each time it runs, this probabilistic program makes random choices and so generates a new leaf. I believe the latter is how a human goes about it, and how an AI must go about it as well.
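To make the idea concrete, here is a minimal sketch of that kind of generative, probabilistic program, written in plain Python rather than a dedicated probabilistic language. The leaf features and probabilities are purely illustrative assumptions, not anything from Dr. Goodman's research.

```python
import random

def sample_leaf():
    """A toy probabilistic program: each call makes fresh random choices,
    so every run generates a different leaf. All parameters are invented
    for illustration only."""
    length_cm = random.gauss(8.0, 2.0)                # overall leaf length
    width_cm = length_cm * random.uniform(0.3, 0.7)   # width relative to length
    n_lobes = random.choice([1, 3, 5])                # simple vs. lobed leaf
    serrated = random.random() < 0.4                  # toothed edge or not
    return {
        "length_cm": round(length_cm, 1),
        "width_cm": round(width_cm, 1),
        "lobes": n_lobes,
        "serrated": serrated,
    }

if __name__ == "__main__":
    for _ in range(3):
        print(sample_leaf())
```

The point of the contrast is that a deterministic program would print the same leaf every time, whereas this one defines a whole distribution over leaves that can then be compared against observed data.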

Question: What is your assessment of Henry Markram’s Blue Brain project?

Answer: The Blue Brain Project is an attempt to reverse-engineer the brain based on understanding the functional principles of its components. The project is intriguing, and we will unquestionably learn about the human brain because of it. But I don’t know whether what we learn will be positive, confirming that these theories about the brain are largely correct, or negative, showing that they are fundamentally flawed.

Question: How important will reverse-engineering be to understanding the brain’s behavior?


Answer: We believe that the best way to understand intelligence is to reverse-engineer the mind. We are trying to reverse-engineer the mind by experimenting on people: we give people different kinds of input, ask them to draw conclusions, and examine the output. A parallel project at MIT is to reverse-engineer the brain, but if we only try to reverse-engineer the brain we will end up with a schematic of the human brain that is essentially a jumble of wires. Both approaches have merit and are complementary, and we will probably need to reverse-engineer at both of those levels.

Question: How long will it take to do this reverse-engineering?

Answer: I hope that within the next decade we will have a broad outline of the operating principles of the brain and the mind. Within twenty years we will begin to fill in the details. The pace of AI research appears to be accelerating, and the field of AI could experience rapid growth in a manner similar to the biogenetics field. But we still don’t know whether, in the end, our current theories will actually work.


Question: What is the Church probabilistic programming language?


Answer: Church is a formal programming language that is designed to operate with uncertain knowledge. It represents structured knowledge of how the world might work. When you run the program, you can engage in uncertain or probabilistic reasoning over this representation. So if you observe that the grass is wet, you can engage in backward probabilistic reasoning to conclude that either the sprinkler was turned on or it rained. Church provides a formal language for representing and reasoning with probabilistic knowledge.
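To illustrate the wet-grass example, here is a minimal sketch of that kind of backward probabilistic reasoning in plain Python rather than Church itself, using rejection sampling over a toy generative model. All of the probabilities are made-up assumptions for the sake of the example.

```python
import random

def model():
    """Toy generative model: rain and sprinkler each happen with some
    prior probability; either one makes the grass wet (almost always)."""
    rain = random.random() < 0.3
    sprinkler = random.random() < 0.4
    grass_wet = (rain or sprinkler) and random.random() < 0.95
    return rain, sprinkler, grass_wet

# "Backward" reasoning by rejection sampling: run the model forward many
# times, keep only the runs consistent with the observation (wet grass),
# and see how often each possible cause occurred among those runs.
kept = [(r, s) for r, s, wet in (model() for _ in range(100_000)) if wet]
p_rain = sum(r for r, _ in kept) / len(kept)
p_sprinkler = sum(s for _, s in kept) / len(kept)
print(f"P(rain | grass wet)      ~ {p_rain:.2f}")
print(f"P(sprinkler | grass wet) ~ {p_sprinkler:.2f}")
```

Church expresses the same idea more directly: you write the forward model once, condition on what you observed, and the language's inference machinery produces the posterior over the unobserved causes.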


Question: Is it possible to program sentience into a computer?


Answer: That question is difficult to answer given that there is widespread disagreement on the nature of sentience. The key question pertains to whether sentience is in the algorithm or in the implementation. If it is a property of the matter running the program, then I am not optimistic. If it is a property of the algorithm running, then sentience should emerge.

Question: Is the field of AI held back more by hardware limitations or by software limitations?


Answer: At this point software is more of a limiter than hardware. Hardware performance is increasing exponentially, yet we can’t simply wait for Moore’s law to give us the hardware capacity of the human brain, because we still don’t understand the underlying principles of intelligence. But once we do understand those principles, we are going to want faster computers to implement them.


Question: What will be the biggest challenge in creating artificial general intelligence?


Answer: It is always the “unknown unknowns” that present the biggest challenges. The “known unknowns” are formidable, but with enough time and resources we can solve them. It is the problems that you aren’t even aware of, that can’t be anticipated, that are the most disruptive.



Question: Many AI researchers are claiming that the pace of AI research is quickening. Do you agree?



Answer: It does seem that way, but I do wonder if AI researchers back in 1960 or 1980 wouldn’t have thought the same thing. It definitely seems there was an “AI winter” that has ended. On the other hand, it is said that once something works, it is no longer considered AI….

Question: What is your best guess as to when artificial general intelligence in computers will be achieved?



Answer: I would like to think that human-level AI could be achieved within the next two decades. If we don’t run into any “unknown unknowns”, then it is feasible. But I do anticipate that unforeseeable stumbling blocks will emerge, and that it could take decades to solve them. Still, I’m optimistic that with proper funding and a sufficiently large number of smart people working on this, we could achieve human-level AI within 30 or 40 years.




