Urban search and rescue is something I'm really passionate about. And it's an area in which robots can make a real difference – reducing the risks to the lives of human rescuers. Incidents such as the Fukushima nuclear disaster and Hurricane Katrina drive home the fact that we need to find new ways to explore post-disaster areas without putting rescuers in harm's way.
In the aftermath of a disaster, such as an earthquake or mudslide, rescuers need to explore the post-disaster area, assess damage and find and rescue survivors. This task can be very dangerous when they're venturing into buildings that have partially collapsed or been contaminated.
So what's currently done to ensure rescuers avoid exploring dangerous areas?
For many years, the robotics community has proposed using robots to explore and map these areas – producing maps that human rescuers can use to assess damage and find survivors. But these environments are very challenging for robot systems to map and explore, due to uneven terrain, smoke, poor lighting and environmental variability.
Currently, robots use actuators, such as wheels, to move through the environment, and sensors, such as cameras and distance sensors, to perceive and map it. This gives us two sources of information: an estimate of the robot's motion (for example, from counting wheel rotations) and the sensors' observations of the surroundings.
But both of these sources of information have a tendency to be inaccurate, so we use the estimate of robot motion to improve our map estimate and vice versa. This concept is the basis of a very popular family of algorithms known as simultaneous localisation and mapping (SLAM). Mathematically combining those two pieces of information can yield good results, but better accuracy often means higher complexity – making these algorithms more computationally demanding.
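To give a flavour of how combining two imprecise sources can beat either one alone, here's a toy sketch (not the actual algorithms used in this work): if we model the robot's position estimate from wheel odometry and from a distance sensor as two noisy Gaussian readings, fusing them produces an estimate with lower uncertainty than either input. The numbers below are made up for illustration.

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two noisy Gaussian estimates of the same quantity.

    The fused variance is smaller than either input variance,
    which is the core idea behind SLAM-style estimation:
    two imperfect sources combine into a better one.
    """
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    fused_mean = fused_var * (mean_a / var_a + mean_b / var_b)
    return fused_mean, fused_var

# Toy example: the robot's position along a corridor (metres).
odometry_estimate = (5.2, 0.8)  # from wheel rotations; drifts over time
sensor_estimate = (4.9, 0.4)    # from a distance sensor to a known wall

mean, var = fuse(*odometry_estimate, *sensor_estimate)
print(f"fused position: {mean:.2f} m, variance: {var:.3f}")
```

Note that the fused variance (about 0.27) is smaller than either input's, even though the two estimates disagree – that shrinking uncertainty is what makes the motion-plus-sensing combination worthwhile.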
How can we improve?
I developed a novel way to improve the accuracy of maps produced by robots – using a method that increases accuracy but doesn't increase computational requirements.
My team and I used architectural drawings, or floorplans, of the building we wanted to explore to automatically extract information about wall locations. We then converted this information into a robot-friendly, optimised format and used it as the starting point for our mapping. The result? A more accurate estimate of the robot's starting position, and a more accurate map of the area, thanks to this 'head start'. Because a rescuer can place the robot at a starting location marked on the floorplan, and because we already know something about what the robot should expect to see inside the building, we have a much better chance of creating an accurate map.
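One simple robot-friendly format for wall information is an occupancy grid, where each cell is marked as wall or free space. The sketch below is a minimal illustration of that idea, not our actual conversion pipeline: it rasterises a hypothetical list of wall segments (in metres) into a grid that a mapping algorithm could use as a prior. The room layout and resolution are made up.

```python
def rasterise_walls(walls, width, height, resolution):
    """Mark every grid cell that a wall segment passes through.

    walls: list of ((x0, y0), (x1, y1)) segments in metres.
    resolution: metres per grid cell.
    Returns a row-major 2D list: 1 = wall, 0 = assumed free.
    """
    cols = int(width / resolution)
    rows = int(height / resolution)
    grid = [[0] * cols for _ in range(rows)]
    for (x0, y0), (x1, y1) in walls:
        # Sample points densely along the segment so no cell is skipped.
        steps = max(int(max(abs(x1 - x0), abs(y1 - y0)) / resolution) * 2, 1)
        for i in range(steps + 1):
            t = i / steps
            x = x0 + t * (x1 - x0)
            y = y0 + t * (y1 - y0)
            col = min(int(x / resolution), cols - 1)
            row = min(int(y / resolution), rows - 1)
            grid[row][col] = 1
    return grid

# Hypothetical 10 m x 5 m room with one interior wall at x = 4 m.
walls = [((0, 0), (10, 0)), ((0, 0), (0, 5)),
         ((10, 0), (10, 5)), ((0, 5), (10, 5)),
         ((4, 0), (4, 3))]
grid = rasterise_walls(walls, width=10, height=5, resolution=0.5)
print(sum(cell for row in grid for cell in row), "wall cells")
```

Starting the mapping from a grid like this, rather than from a blank map, is what gives the robot its head start.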
There's great potential to save lives with search and rescue robots, and we're reaching the point at which they can be helpful members of a rescue team – rather than data-gathering bystanders. We can help them map their environment more accurately by giving them a head start and shape a future in which robots and humans work together to save lives.
Want to find out more about this research? You can download the paper published in the International Journal of Robotics Research, which outlines how we can convert architectural drawings to a robot-friendly format and use them to improve performance. And if you're interested in learning more about next-generation robotics, check out the report we produced for the United Nations on disruptive technologies.