Predicting the motions a robot must make to reach its goal without injury, damage, or failure involves dozens of variables that often require complex computation to resolve.
The ability to perform these calculations, called motion planning, will be critical to creating a new generation of robots that, unlike the industrial robots of today, can act in a world that has not been meticulously prepared for them.
Despite the natural ease with which humans perform motion planning, robots have no such innate talent, and the computational resources required are large. In most cases, state-of-the-art motion planners can take as long as several seconds to plan a single movement.
The core challenge is collision detection: As the robot generates possible paths, it must check whether it would collide with objects in the world. Modern motion planners generate thousands or even millions of short motions — a structure often called a roadmap — that together form a complete movement, and test them for collision one at a time.
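To make the one-at-a-time approach concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that each short motion is a 2-D line segment and each obstacle is an axis-aligned box; the function names are hypothetical, not part of any real planner's API.

```python
# Illustrative sequential collision detection: every motion is tested
# against every obstacle in a plain loop, mirroring how a conventional
# planner checks one candidate motion at a time.

def segment_hits_box(p, q, box):
    """Check a segment from p to q against one axis-aligned box
    (xmin, ymin, xmax, ymax) using the standard slab method."""
    xmin, ymin, xmax, ymax = box
    (px, py), (qx, qy) = p, q
    dx, dy = qx - px, qy - py
    t0, t1 = 0.0, 1.0  # parameter interval of the segment still inside all slabs
    for d, lo, hi, o in ((dx, xmin, xmax, px), (dy, ymin, ymax, py)):
        if abs(d) < 1e-12:
            if o < lo or o > hi:      # segment parallel to this slab and outside it
                return False
        else:
            ta, tb = (lo - o) / d, (hi - o) / d
            if ta > tb:
                ta, tb = tb, ta
            t0, t1 = max(t0, ta), min(t1, tb)
            if t0 > t1:               # slab intervals no longer overlap
                return False
    return True

def check_roadmap_sequentially(motions, obstacles):
    """Return, for each motion, whether it collides with any obstacle."""
    return [any(segment_hits_box(p, q, box) for box in obstacles)
            for (p, q) in motions]

motions = [((0, 0), (1, 1)), ((2, 2), (3, 2))]
obstacles = [(0.4, 0.4, 0.6, 0.6)]   # one box straddling the first motion
print(check_roadmap_sequentially(motions, obstacles))  # [True, False]
```

Even this tiny example hints at the cost: the work grows with the number of motions times the number of obstacles, and a realistic roadmap multiplies that by many orders of magnitude.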
The human brain, however, rarely does things one by one. Rather, it can perform massively parallel processing – that is, it can use its vast number of neurons to do many, many things simultaneously (in parallel) – to solve hard computational problems.
The approach most likely to solve this dilemma borrows that strategy: build a processor with a vast number of simple circuits, standing in for neurons, that can operate in parallel. Those circuits then perform massively parallel collision detection.
General-purpose computer processors, such as the ones in your laptop or smartphone, achieve remarkable performance on a wide range of tasks, but they are poorly suited to motion planning.
These processors consist of circuits that carry out the calculations software programs instruct them to perform. They can execute instructions quickly, but only a few at a time.
This limitation is sensible and economical because typical software doesn’t have many instructions that can be done without waiting for previous instructions to finish. (This is just like in the real world, where you can’t start drying your laundry until after it has finished in the washing machine.)
Unlike typical software, the work of motion planning offers many opportunities to perform calculations independently of one another. In collision detection, every motion in the roadmap can be checked against every obstacle simultaneously, in parallel.
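The same checks from before can be recast as one bulk array operation, which is a rough software analogy for the parallel hardware the article describes: dedicated circuits would test every motion-obstacle pair at once, whereas NumPy merely expresses all the pairs as a single broadcasted computation. The sampling-based test and all names here are illustrative assumptions, not the real chip's method.

```python
# A sketch of the parallel idea using NumPy broadcasting: each motion is
# sampled at a few points along its length, and ALL point-vs-obstacle
# tests are expressed as one array operation instead of nested loops.
import numpy as np

def check_roadmap_in_parallel(starts, ends, boxes, samples=16):
    """starts, ends: (M, 2) arrays of motion endpoints.
    boxes: (B, 4) array of obstacles as (xmin, ymin, xmax, ymax).
    Returns a boolean (M,) array: True where a motion touches any obstacle."""
    t = np.linspace(0.0, 1.0, samples)                                        # (S,)
    pts = starts[:, None, :] + t[None, :, None] * (ends - starts)[:, None, :] # (M, S, 2)
    lo, hi = boxes[:, :2], boxes[:, 2:]                                       # (B, 2) each
    # Broadcast to (M, S, B, 2): a sample inside a box on both axes is a hit.
    inside = (pts[:, :, None, :] >= lo) & (pts[:, :, None, :] <= hi)
    # Collapse the sample and box axes: any hit anywhere marks the motion.
    return inside.all(axis=-1).any(axis=(1, 2))

starts = np.array([[0.0, 0.0], [2.0, 2.0]])
ends   = np.array([[1.0, 1.0], [3.0, 2.0]])
boxes  = np.array([[0.4, 0.4, 0.6, 0.6]])
print(check_roadmap_in_parallel(starts, ends, boxes))
```

Note that sampling a motion at discrete points is a conservative simplification chosen to keep the broadcasting readable; it can miss grazing collisions that the exact segment test above would catch. The structural point stands either way: every motion-obstacle pair is independent, so nothing forces them to be checked in sequence.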
A robot in your home could one day be able to make you breakfast, even if the milk isn’t always in exactly the same place, and even if it is in a refrigerator the robot’s designers have never seen before.
And autonomous cars could avoid suddenly appearing obstacles – like a box falling off the back of a truck – while taking into consideration all the possible future movements of the other cars on the road. Robot factories, which are now extremely expensive because they have to be very carefully built to ensure precise predictability, could, in the future, be designed to produce a wider variety of much cheaper goods.
It may turn out that the robots of the future won't be machines with a single very powerful computer in their heads, but machines with several special-purpose circuits in their heads, each optimized for the hard computational work of sensing and acting. Just like the brain.