SHREWBOT: Inspired by the tiny, nocturnal Etruscan shrew, this robot has a whisker array that comes close to the real thing. BRISTOL ROBOTICS LAB

While a graduate student at the University of Sheffield in 1992, cognitive neuroscientist Tony Prescott attended a robotics conference in Hawaii. There he saw innumerable animal-mimicking robots. Hexapod robots showed off their insect-like mobility by roaming the halls during meeting breaks. One robot even sported a compound eye composed of about a dozen individual lenses and sensors, mimicking the far more numerous ommatidia of an insect eye.

Up until that point, Prescott had focused purely on simulations, using computers to model how brains process information. But after interacting with all the robots at that conference, “I really got interested in building physical robots instead,” he says. “That, to me, was very exciting—this idea that you could actually build a physical robot that could copy aspects...”

So he returned to the United Kingdom and started doing just that. Now a professor at Sheffield, Prescott develops robots in collaboration with a University of Bristol robotics lab to understand how rodents explore their environment using sensory feedback from their whiskers. First off the assembly line was Whiskerbot, a robot with only a handful of whiskers that were controlled by a type of artificial muscle. Then there was SCRATCHbot (Spatial Cognition and Representation through Active TouCH bot), which boasted 18 whiskers and a more agile head and neck. And finally, the researchers designed Shrewbot, whose whisker array is the most biologically realistic yet.

Most recently, Prescott and research associate Nathan Lepora have been developing a more realistic algorithm by which the rodent-inspired robots can process the information they receive from their whiskers. Existing robots typically collect sensory data for a set amount of time, then make a “decision” based on the accumulated evidence. But this type of calculation is “not motivated by the biology,” Lepora says. An animal, he explains, is much more likely to weigh the evidence as it comes in and make a decision as soon as it feels confident enough to do so.
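The difference is easy to see in code. Below is a minimal sketch of sequential evidence accumulation, in the form of Wald’s sequential probability ratio test, which this decide-when-confident approach resembles in spirit; the binary whisker readings, underlying probabilities, and error rates here are invented for illustration, not taken from the team’s implementation.

```python
import math
import random

def sprt(observations, p0=0.4, p1=0.6, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test.

    Weighs binary observations (say, whether a whisker deflection
    exceeds some threshold) one at a time, and stops as soon as the
    accumulated log-likelihood ratio crosses a decision boundary,
    rather than waiting for a fixed number of samples.
    """
    upper = math.log((1 - beta) / alpha)   # decide H1 above this line
    lower = math.log(beta / (1 - alpha))   # decide H0 below this line
    llr = 0.0
    n = 0
    for x in observations:
        n += 1
        # Evidence contributed by a single observation.
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

# Simulated stream of whisker readings generated under H1 (p = 0.6).
readings = (random.random() < 0.6 for _ in range(1000))
print(sprt(readings))  # e.g., ('H1', 23): decided after 23 observations
```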

This idea—known as sequential analysis—dates back to the Second World War, when English mathematician Alan Turing and his colleagues used it to break the German Enigma code, says neuroscientist and neurologist Michael Shadlen of the University of Washington School of Medicine in Seattle. “That was really the inspiration for the idea that the brain might be doing something like that—accumulating evidence in units and stopping when a critical level had been reached.”

Shadlen and his colleagues have shown that sequential analysis algorithms accurately describe the neural processing of rhesus monkeys making perceptual decisions. Shadlen’s monkeys watch dots drifting across a screen, heading in one of four directions—up, down, left, or right. Not all the dots move in the same direction, but the monkeys are trained to choose the direction in which the majority is moving. Recording from the parietal cortex, the researchers found neurons whose activity corresponds to each of the four directions. As activity ramps up in the “left” neurons, for example, and crosses a critical threshold of evidence, the monkey answers “left.”
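Conceptually, this is a “race” among evidence accumulators, one per direction, with the first to reach threshold determining the answer. A toy version, with drift and noise values invented for illustration rather than fitted to Shadlen’s recordings, might look like this:

```python
import random

def race_to_threshold(drift, threshold=30.0, noise=1.0, max_steps=10000):
    """Toy race model of perceptual decision-making.

    One accumulator per direction integrates noisy evidence at each
    time step; the decision is the first accumulator to cross the
    threshold. `drift` maps each direction to its mean evidence per
    step (the true motion direction gets the largest mean).
    """
    totals = {d: 0.0 for d in drift}
    for step in range(1, max_steps + 1):
        for d, mean in drift.items():
            totals[d] += random.gauss(mean, noise)
        leader = max(totals, key=totals.get)
        if totals[leader] >= threshold:
            return leader, step
    return None, max_steps

# Most dots drift left, so the "left" accumulator ramps up fastest.
choice, steps = race_to_threshold({"left": 0.6, "right": 0.2,
                                   "up": 0.2, "down": 0.2})
print(choice, "after", steps, "time steps")
```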

Thinking this evidence-based principle might help their whiskered robots make decisions more accurately and efficiently, Prescott, Lepora, and their colleagues rigged up a Roomba self-navigating vacuum with artificial whiskers that detect deflections as the robot moves over different surfaces—rough carpet, smooth carpet, tarmac, and vinyl. Sure enough, using a computer model to analyze the data collected from the robot’s whiskers, the researchers found that sequential analysis reached a decision more quickly and more accurately than the traditional method of simply collecting data for a set period before choosing (J Roy Soc Interface, doi:10.1098/rsif.2011.0750, 2012).
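In spirit (the published study used probabilistic classifiers over whisker-vibration features, so the sketch below is simplified and its per-texture statistics are invented), the sequential texture decision works something like this:

```python
import random

# Hypothetical mean and standard deviation of a single whisker-deflection
# summary feature for each surface; invented for illustration only.
TEXTURES = {"rough carpet": (3.0, 1.0), "smooth carpet": (2.0, 1.0),
            "tarmac": (4.0, 1.0), "vinyl": (1.0, 1.0)}

def classify_sequentially(whisks, margin=5.0):
    """Accumulate a log-likelihood score per texture, whisk by whisk,
    stopping once the leading texture outscores the runner-up by
    `margin` (the confidence criterion), instead of always using a
    fixed number of whisks."""
    scores = {t: 0.0 for t in TEXTURES}
    n = 0
    for x in whisks:
        n += 1
        for t, (mu, sigma) in TEXTURES.items():
            # Gaussian log-likelihood; constants cancel across textures.
            scores[t] += -((x - mu) ** 2) / (2 * sigma ** 2)
        best, second = sorted(scores.values(), reverse=True)[:2]
        if best - second >= margin:
            break
    return max(scores, key=scores.get), n

# Simulate whisking against tarmac, then classify.
mu, sigma = TEXTURES["tarmac"]
whisks = (random.gauss(mu, sigma) for _ in range(200))
print(classify_sequentially(whisks))  # e.g., ('tarmac', 12)
```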

The Roomba robot was only collecting information, however, not responding to it. But Prescott’s team now plans to apply the new algorithm to Shrewbot and SCRATCHbot to see if it improves their navigation abilities. The ultimate goal is to build a bot that can negotiate areas where visual information is not readily available, such as a smoke-filled warehouse during a fire or a collapsed building following an earthquake. “Rats are very good at dark and closed environments,” Lepora says. “To give those capabilities to a robot could really change how [it] interacts with the world around it.”

Building more-realistic robots may also provide researchers with a framework in which to test hypotheses about rodent cognition, Lepora adds. In addition to “developing better ways for robots to perceive the world,” he says, “the research is partly to test these theories of biology by embodying them in the robot.”
