Neurala has highly parallel GPU-based neural networks for better AI and self-driving robotics

In February 2014, the U.S. Patent Office issued Neurala, Inc. a patent covering brain-based computational models, often called artificial neural networks (ANNs), running on graphics processing units (GPUs). The invention is seen as an important foundation for real-time artificial intelligence and robotics applications.

Humans outperform computers in many natural tasks, including vision and language processing, because the brain processes many inputs in parallel, learns, and recognizes patterns efficiently. A conventional CPU core, by contrast, processes one input at a time through a sequence of calculations, so even very fast CPUs cannot match the brain on these tasks.

Neurala’s breakthrough, which dates back to 2006, was to see that GPUs, originally designed for computer games and 3D graphics, could be used to process multiple inputs simultaneously and to simulate neural networks. Cutting-edge artificial intelligence and ANNs are dramatically accelerated on GPUs, which can execute many more instructions per clock cycle than a computer’s central processing unit (CPU). As a result, ANNs that perform interesting tasks can be written to run in real time on the low-cost graphics cards found in many consumer products.
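To make the contrast concrete, here is a minimal illustrative sketch (not Neurala’s code; the layer sizes and names are invented) of the same neural-network layer computed two ways: neuron by neuron, the way a single CPU core grinds through sequential calculations, and as one matrix operation, the form that maps naturally onto a GPU’s many parallel units:

```python
import numpy as np

# Hypothetical layer sizes, chosen only for illustration.
n_inputs, n_neurons = 1024, 4096
rng = np.random.default_rng(0)
weights = rng.standard_normal((n_neurons, n_inputs))
x = rng.standard_normal(n_inputs)

# CPU-style: evaluate one neuron at a time, a long sequential chain.
out_sequential = np.empty(n_neurons)
for i in range(n_neurons):
    out_sequential[i] = np.tanh(weights[i] @ x)

# GPU-style: express the whole layer as a single matrix-vector product,
# so every neuron can be evaluated at once by parallel hardware.
out_parallel = np.tanh(weights @ x)

assert np.allclose(out_sequential, out_parallel)
```

On a real GPU the second form would be dispatched to thousands of cores in one kernel launch (for example via CUDA, or a drop-in array library such as CuPy); the arithmetic is identical, only the degree of parallelism changes.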

“Our invention makes it possible for robots and other devices to use artificial intelligence in situations in which execution time is critical. It will be fundamental for our effort to build brains for robots that interact with the world and with humans in real-time,” said Massimiliano Versace, CEO and co-founder of Neurala.

The robot’s brain processes visual information in real time, enabling it to do more than simply navigate from one spot to another. This means robots could one day be trusted to make their own decisions when navigating changing terrain on Mars. Neurala’s GPU networks are already ten times faster than comparable CPU-based networks, and the approach applies to:

* self-driving flying drones
* self-driving cars
* mostly self-guided ground robots

Applied to navigating robots on Mars

Surface exploration of planetary environments with current robotic technologies relies heavily on human control and power-hungry active sensors to perform even the most elementary low-level functions. Ideally, a robot should be capable of autonomously exploring and interacting with an unknown environment without relying on human input or suboptimal sensors. Exploring unknown environments, memorizing the locations of obstacles or objects, building and updating a representation of the environment, and returning to a safe location are all tasks that animals perform efficiently on a daily basis.

Phase I of this NASA STTR focused on the design of an adaptive robotic multi-component neural system that captures the behavior of several brain areas responsible for perceptual, cognitive, emotional, and motor behaviors. The system uses passive, potentially unreliable sensors (analogous to animal visual and vestibular systems) to learn while navigating unknown environments and to build usable, correctable representations of those environments without requiring a Global Navigation Satellite System (GNSS).

In Phase I, Neurala and the Boston University Neuromorphics Lab constructed a virtual robot, or animat, developed and tested in an extraterrestrial virtual environment. The animat used passive sensors to perform a spatial exploration task: it started exploring from a recharging base, autonomously planned where to go based on past exploration and its current motivation, developed and corrected an internal map of the environment with the locations of obstacles, selected the shortest path back to its recharging base before battery depletion, and finally extracted the resulting explored map into a human-readable format.
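The actual Phase I system models multiple brain areas and is far richer than anything that fits here, but the high-level behaviors listed above (explore, memorize obstacles, update an internal map, return to base before the battery dies) can be sketched as a toy grid-world loop. Everything below is hypothetical scaffolding for illustration, not Neurala’s neural architecture:

```python
from collections import deque
import random

FREE, OBSTACLE, UNKNOWN = 0, 1, 2

def explore_and_return(world, base=(0, 0), battery=60):
    """Toy animat: wander an unknown grid, memorize obstacles, head home.

    `world` is the ground-truth grid (0 = free, 1 = obstacle). The animat
    only learns cells it visits or bumps into, mimicking passive, local
    sensing with no GNSS.
    """
    rows, cols = len(world), len(world[0])
    internal_map = [[UNKNOWN] * cols for _ in range(rows)]  # learned map
    internal_map[base[0]][base[1]] = FREE
    pos = base

    def neighbors(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < rows and 0 <= c + dc < cols:
                yield (r + dr, c + dc)

    def shortest_path(start, goal):
        """Breadth-first search over cells the animat believes are free."""
        frontier, parent = deque([start]), {start: None}
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            for nxt in neighbors(cell):
                if nxt not in parent and internal_map[nxt[0]][nxt[1]] == FREE:
                    parent[nxt] = cell
                    frontier.append(nxt)
        return None  # no known route yet

    for step in range(battery):
        # Reserve just enough battery to get home on the best known route.
        home = shortest_path(pos, base)
        if home is not None and battery - step <= len(home):
            return internal_map, home  # return before depletion

        # "Motivation": prefer unexplored neighbors, otherwise wander.
        options = list(neighbors(pos))
        unexplored = [c for c in options if internal_map[c[0]][c[1]] == UNKNOWN]
        target = random.choice(unexplored or options)
        if world[target[0]][target[1]] == OBSTACLE:
            internal_map[target[0]][target[1]] = OBSTACLE  # memorize it
        else:
            internal_map[target[0]][target[1]] = FREE
            pos = target

    return internal_map, shortest_path(pos, base)
```

Extracting the explored map into a human-readable format then amounts to printing `internal_map` row by row.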

Cheaper robots and cheaper space flight will mean we cannot have a team constantly monitoring every rover to ensure it is safe

New Scientist – Mark Motter of NASA’s Langley Research Center in Hampton, Virginia, says Neurala’s approach highlights the difference between automation – in which a detailed, prescribed plan is executed by a machine – and autonomy, where a machine is free to make its own decisions on how to reach its goal. Neurala aims to mimic how human brains recognise objects, accumulate experience and make judgements, Motter says. “This is an interesting approach to autonomy.”

Neurala’s robot vision system also mimics a trick the human eye uses, called foveation. The fovea is a region of the retina that is thickly populated with visual receptors, and gives you very clear central vision. In a similar way, the robot’s vision system focuses on specific points of the scene to build up a picture of its environment instead of trying to process everything it sees through the camera all at once. This still results in a large amount of data but reduces the overall load on the robot’s brain.
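A crude way to see the data savings of foveation (this is a generic image-processing sketch, not Neurala’s vision system; `foveate` and its parameters are invented) is to keep full resolution only around a fixation point and coarsely subsample everything else:

```python
import numpy as np

def foveate(image, cx, cy, fovea=64, stride=8):
    """Keep full detail near the fixation point (cx, cy); keep only a
    coarse subsampling of the periphery, as the retina's fovea does."""
    h, w = image.shape[:2]
    y0, y1 = max(0, cy - fovea // 2), min(h, cy + fovea // 2)
    x0, x1 = max(0, cx - fovea // 2), min(w, cx + fovea // 2)
    fovea_patch = image[y0:y1, x0:x1]          # sharp central vision
    periphery = image[::stride, ::stride]      # blurry surround
    return fovea_patch, periphery

frame = np.zeros((480, 640), dtype=np.uint8)   # stand-in camera frame
patch, surround = foveate(frame, cx=320, cy=240)
kept = patch.size + surround.size
print(f"pixels processed: {kept} of {frame.size} "
      f"({100 * kept / frame.size:.1f}%)")     # roughly 3% of the raw frame
```

Shifting the fixation point over successive frames builds up the fuller picture the article describes while keeping the per-frame load low.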

That ability to process visual information in real time means Neurala’s brain can do more than just guide a robot from one spot to another. On its way, it could be carrying out basic science, classifying the rocks it sees and flagging unusual ones for further investigation, for example, or searching for signs of water and minerals.

There is a reason why the most advanced rover currently in existence, Curiosity, is rigidly controlled by NASA operators: space equipment in general is fiendishly expensive, so the agency has to be sure the chances of anything going wrong are tiny.

“If I’m going to send a rocket that costs billions of dollars, I want to be sure that every millimetre I travel is gonna be safe,” says Max Versace, Neurala’s CEO and head of Boston University’s Neuromorphics Lab. But as commercial space flight heats up and costs go down, he thinks the future lies in swarms of low-cost robots (see “Termite robots build castles with no human help”). “We need similar machinery to the biological brain to do that,” Versace says.

The patent, which is Neurala’s first, is number 8,648,867 and is entitled “Graphic Processor Based Accelerator System and Method”. It covers hardware and software “controllers” that handle most of the primitive operations needed to set up and control ANNs on a GPU.
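The patent claims themselves are not reproduced in the article, but the idea of a controller that hides the primitive setup-and-control operations can be sketched abstractly. Every name below is hypothetical, and plain NumPy stands in for the GPU device:

```python
import numpy as np

class NetworkController:
    """Hypothetical controller in the spirit described above: it owns the
    primitive operations (allocating weights, stepping the network) so
    application code never touches the device directly. Here the 'device'
    is plain NumPy; a real controller would allocate GPU buffers and
    launch kernels instead."""

    def __init__(self, layer_sizes):
        rng = np.random.default_rng(42)
        # "Upload" one weight matrix per pair of adjacent layers.
        self.weights = [0.1 * rng.standard_normal((m, n))
                        for n, m in zip(layer_sizes, layer_sizes[1:])]

    def step(self, inputs):
        """One forward pass; on a GPU each layer would be a single
        massively parallel matrix-multiply kernel launch."""
        a = np.asarray(inputs, dtype=float)
        for w in self.weights:
            a = np.tanh(w @ a)
        return a

controller = NetworkController([8, 16, 4])
print(controller.step(np.ones(8)))   # four output activations
```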

“One way to think about the significance of GPU processing for robots is to consider how quickly you may like to move 600 people from Boston to London. If all you had was a single fast fighter jet, you could get a single person to London very quickly. But it would take 600 sequential trips to move all of them and process them through customs. On the other hand, if you had a Boeing 747-8, the trip would be slower, but all 600 people would arrive and be processed faster,” said Roger Matus, vice-president of products and markets for Neurala. “Computers operate much in the same way as the fighter jet in this example: fast, but unable to do many things at once. Biological brains, however, operate more like GPUs. They use slower processors called neurons, but are massively parallel, doing more at once.”
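Back-of-the-envelope numbers make the analogy concrete (the flight times are invented; only the 600-passenger figure comes from the quote):

```python
people = 600
jet_trip_hours = 3.5    # a fast fighter jet, one passenger per crossing
jumbo_trip_hours = 7.0  # a Boeing 747-8 carrying all 600 at once

print(f"jet: first person lands after {jet_trip_hours} h (low latency)")
print(f"jet: all {people} moved after at least "
      f"{people * jet_trip_hours:.0f} h (ignoring return legs)")
print(f"747: all {people} moved after {jumbo_trip_hours} h (high throughput)")
```

The CPU is the jet: the first result arrives quickly, but total work is slow. The GPU, like the brain, is the 747: each operation is slower, but overall throughput is vastly higher.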

Earlier this year, the Federal Aviation Administration (FAA) announced the creation of six UAV testing areas around the country, where researchers will develop the kinds of systems that will allow autonomous craft to join the national airspace, safely sensing and avoiding other aircraft. Under FAA regulations, civilian drones in the US are currently permitted to fly only below 400 feet (122 metres).

“You have to be able to tell the FAA that you can put this thing up in the air at 15,000 feet and guarantee, even if you put a million of them in the air, that you will not hit a passenger aircraft,” says Jeremy Wurbs, a graduate student in Versace’s lab, who is working on adapting Neurala’s system to this task.

In a few years, smartphone and tablet GPUs will be able to run Neurala software, enabling robots with tablet or smartphone heads to navigate our environment on their own.

Future of robotics
Massimiliano Versace, director of the Boston University Neuromorphics Lab and CEO of Neurala, Inc., talks about the future of robotics, focusing on three components: the robot body, brain, and mind, all of which are advancing rapidly and together will shape the robots of the future.
