Baidu hires Google's Andrew Ng to compete in deep-learning artificial intelligence; his Stanford team has built a $20,000 simulated brain with one billion connections

[Technology Review] Andrew Ng is the newly appointed chief scientist at Baidu, China’s dominant search company. He has plans to advance deep learning, a powerful new approach to artificial intelligence loosely modeled on the way the brain works. It has already made computers vastly better at recognizing speech, translating languages, and identifying images, and Ng’s work at Google and Stanford University, where he was a professor of computer science, is behind some of the biggest breakthroughs.

Often called China’s Google, Baidu plans to invest $300 million over the next five years in a new artificial-intelligence lab in Silicon Valley and a development office on the same floor. Ng (it is pronounced “Eng”) aims to hire 70 artificial-intelligence researchers and computer systems engineers to work in the new lab by the end of 2015. “It will really target fundamental technology,” says Kai Yu, the director of Baidu’s Beijing deep-learning lab, a friend of Ng’s who urged him to join the company.

Ng’s work on artificial intelligence has shaken up a major search company before. He is best known for a project referred to as the Google Brain, which he helped set up inside the secretive Google X research lab in 2011. The project was designed to test the potential of deep learning, which involves feeding data through networks of simulated brain cells to mimic the electrical activity of real neurons in the neocortex, the seat of thought and perception. Such software can learn to identify patterns in images, sounds, and other sensory data. In one now-famous experiment, the researchers built a “brain” with one billion connections among its virtual neurons; it ran on 1,000 computers with 16 processors apiece. By processing 10 million images taken from YouTube videos, it learned to recognize cats, human faces, and other objects without any human help. The result validated deep learning as a practical way to make software that was smarter than anything possible with established approaches to machine learning. It led Google to invest heavily in the technology—quickly moving the Google Brain software into some of its products, hiring experts in the technique, and acquiring startups.
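For readers who want a concrete picture of what “feeding data through networks of simulated brain cells” means, the sketch below is a minimal single-layer autoencoder in plain Python/NumPy. It is only a didactic stand-in, not the Google Brain code: the data are random patches rather than YouTube frames, and the network has a few thousand connections rather than a billion, but it shows how such a network learns features from unlabeled data by trying to reconstruct its own input.

```python
# Toy illustration of unsupervised feature learning: a one-hidden-layer
# autoencoder that learns to reconstruct unlabeled "image patches".
# Didactic sketch only; not the distributed Google Brain system.
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 64      # e.g. flattened 8x8 grayscale patches
n_hidden = 16      # learned feature detectors ("simulated neurons")
n_samples = 1000
lr = 0.1

# Stand-in data: random patches in place of real video frames.
X = rng.random((n_samples, n_inputs))

W1 = rng.normal(0, 0.1, (n_inputs, n_hidden))   # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_inputs))   # decoder weights
b2 = np.zeros(n_inputs)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(50):
    # Forward pass: encode each patch into hidden features, then reconstruct it.
    H = sigmoid(X @ W1 + b1)
    X_hat = sigmoid(H @ W2 + b2)

    # Reconstruction error is the only training signal -- no labels involved.
    loss = np.mean((X_hat - X) ** 2)

    # Backpropagate the squared-error loss through both layers.
    d_out = 2 * (X_hat - X) / X.size * X_hat * (1 - X_hat)
    d_hid = (d_out @ W2.T) * H * (1 - H)

    W2 -= lr * H.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}  reconstruction error {loss:.4f}")
```

Scaled up by many orders of magnitude, with real images and many stacked layers, this kind of reconstruction-driven learning is what allowed the Google Brain experiment to develop detectors for cats and faces without any labeled examples.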

Ng, who calls deep learning a “superpower,” will build a new generation of such systems at Baidu. The services that may result are still at the brainstorming stage, but he hints at what some of them could be. He dreams of a truly intelligent personal digital assistant that puts Apple’s Siri to shame, for example. Looking further ahead, the technology could transform robotics, a pet subject for Ng (his engagement photos were taken in a robotics lab), and make autonomous cars and unmanned aerial vehicles much more capable. “We’re going to do some cool things here,” he says with a grin.

First, however, the Baidu lab in Silicon Valley will try to make it easier to test out deep-learning software, which requires enormous computing power. Training a new speech recognition model can take a week or more, a period Ng would like to cut in half. Last year Adam Coates, a former student of Ng’s, led a Stanford team to a breakthrough that makes that goal realistic. They built a neural network that roughly matched the Google Brain system for about a fiftieth of the cost, roughly $20,000, using off-the-shelf graphics chips from Nvidia. That approach could help Baidu get powerful deep-learning infrastructure running at relatively low cost. And it fits well with the company’s existing work in Beijing, where simpler clusters of graphics chips have already been used to train deep-learning systems for image and speech recognition.
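To illustrate why graphics chips matter here, the following sketch shows the now-standard pattern of moving a neural network’s parameters and training data onto a GPU so that the forward and backward passes run on the chip’s many parallel cores. It assumes the modern PyTorch library and an Nvidia CUDA device, which is not what the 2013 Stanford system used (that work relied on custom GPU and cluster code), but the division of labor between CPU and graphics chip is the same.

```python
# Minimal sketch of GPU-accelerated training with an off-the-shelf graphics chip.
# Assumes the PyTorch library; falls back to CPU if no CUDA device is present.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small multilayer network standing in for a speech or image model.
model = nn.Sequential(
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),
).to(device)                      # parameters now live in GPU memory

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch in place of real audio features or image pixels.
x = torch.randn(128, 256, device=device)
y = torch.randint(0, 10, (128,), device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # forward pass runs on the GPU
    loss.backward()               # so does backpropagation
    optimizer.step()

print(f"final loss: {loss.item():.4f}")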
