OSB archive

Robots get sense of 'déjà vu'

Pete Wilton

Software that gives robots a sense of ‘déjà vu’ is the key to them operating effectively in unfamiliar environments, as New Scientist reports in an article on the work of Oxford engineers.

For decades engineers have wanted robots to do jobs that are too dangerous for humans – such as entering disaster zones or exploring other planets. Yet, all too often, once these robots leave the confines of the lab or factory floor they run into one big problem: they get lost.

‘To start with it’s just a small error, say turning 89 degrees instead of 90 degrees, but pretty soon this small error is compounded until the robot is nowhere near where it believes itself to be,’ said Mark Cummins of the Oxford Mobile Robotics Group, who has been researching this navigation problem with Paul Newman, who leads the group.
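To see how quickly such drift builds up, consider a robot dead-reckoning around the perimeter of a 10-metre square. The short Python sketch below is an illustration for this article, not the group’s software: a single degree of error on each turn already leaves the robot roughly half a metre adrift after one lap, and the displacement keeps compounding over repeated laps.

```python
import math

# Illustrative sketch (not the Oxford group's code): a robot dead-reckons
# around a 10 m square. Each intended 90-degree turn is actually off by
# one degree, as in Cummins' example.
def dead_reckon(turn_deg, legs=4, leg_length=10.0):
    x = y = heading = 0.0
    for _ in range(legs):
        x += leg_length * math.cos(math.radians(heading))
        y += leg_length * math.sin(math.radians(heading))
        heading += turn_deg
    return math.hypot(x, y)  # distance from the starting point

print(f"perfect 90-degree turns: {dead_reckon(90):.2f} m from start")   # ~0.00
print(f"89-degree turns, 1 lap:  {dead_reckon(89):.2f} m from start")   # ~0.50
print(f"89-degree turns, 5 laps: {dead_reckon(89, legs=20):.2f} m")     # ~2.48
```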

‘Humans too can make these kinds of errors but, unlike robots, we can spot when we have been somewhere before and readjust our mental map accordingly. We are giving robots this same sense of “déjà vu” so that, just by taking cues from their environment, they can readjust their sense of where they are and correct their own “mental” maps.’

It may sound like a simple task but in fact this ‘where am I?’ question has proved one of the most intractable problems in robotics. At present many autonomous robots rely on pre-produced maps or GPS to find their way around, but GPS isn’t available indoors, near tall buildings, under foliage, underwater, underground, or on other planets such as Mars – all places we might want a robot to operate.

Oxford engineers have spent years addressing a key part of the ‘where am I?’ question – figuring out when a vehicle has returned to a previously visited place (known as the ‘loop closing problem’). To tackle it they have created the FAB-MAP algorithm which, through a combination of machine learning and probabilistic inference, compares the current view of a scene with impressions of all the places the robot has been before.
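FAB-MAP itself learns which visual words tend to appear together and weighs a ‘somewhere new’ hypothesis against every place in the map; the sketch below is a much-simplified toy version of that idea, written for this article rather than taken from the published algorithm. It treats each remembered place as a binary bag of visual words and scores, under a naive independent-word detector model, how likely the current view is to be a revisit of each stored place. All names, probabilities, and the threshold here are illustrative assumptions.

```python
import numpy as np

def log_likelihood(observation, place, p_match=0.9, p_miss=0.1):
    """Toy detector model: a visual word recorded at `place` is seen again
    with probability p_match; a word absent from `place` still fires with
    probability p_miss. Words are (unrealistically) treated as independent;
    FAB-MAP instead captures their co-occurrence with a learned model."""
    p = np.where(place == 1, p_match, p_miss)  # P(word observed | place)
    return float(np.sum(np.where(observation == 1, np.log(p), np.log(1 - p))))

def best_match(observation, visited_places, new_place_logp=-10.0):
    """Index of the most likely previously visited place, or None when the
    'somewhere new' hypothesis beats every stored place."""
    scores = [log_likelihood(observation, p) for p in visited_places]
    best = int(np.argmax(scores))
    return best if scores[best] > new_place_logp else None

# Three remembered places over a 12-word visual vocabulary.
places = np.array([
    [1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0],
    [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
])
revisit = places[1].copy()
revisit[2] = 0  # one word missed this time round, e.g. occluded
print(best_match(revisit, places))  # -> 1: loop closure detected
```

The important engineering point survives the simplification: because the comparison is probabilistic, a view that matches nothing well is treated as a new place rather than being forced onto the nearest existing one, which is what keeps false loop closures rare.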

Crucially, it does this both precisely and rapidly – fast enough for a robot to realise it is retracing its steps and adjust its route [see images above and below: the green and red circles show parts that were matched or unmatched between the two images].

‘At the moment it can recognise and label different elements of its surroundings – making distinctions between, for instance, gravel paths and roads, stone walls and doorways, even different building types,’ Paul tells us. ‘This sort of “semantic exploration” is the first step towards not just mapping its surroundings but starting to understand them as a human would.’

Mark adds: ‘Another motivation is that this kind of vision research is building towards robots that have some richer understanding of their environment, rather than the bare position information you get from GPS.’

‘In the future we want people just to be thinking about the task they want a robot to perform, and how it can help, rather than worrying about how it finds its way around or gathers useful information about its surroundings. “Where am I?” is an important question, and being able to answer it accurately is central to the future of robot technology.’

Dr Paul Newman and Mark Cummins are based at Oxford's Department of Engineering Science.