CVSSP stars at International Conference on Robotics and Automation
Researchers from the University’s acclaimed Centre for Vision, Speech and Signal Processing (CVSSP) have been invited to present four papers at the prestigious International Conference on Robotics and Automation (ICRA).
Held in Xi’an, China, this annual showcase is the Institute of Electrical and Electronics Engineers’ (IEEE) flagship event for research in robotics and automation. This year, it features a strong virtual component to allow participants from across the globe to take part. And that includes academics from Surrey.
“What’s particularly exciting about the work we’ll be sharing at ICRA is how these pieces of research fit together and share a strong common theme in Robotic Perception,” says Dr Mendez.
Showcasing CVSSP research
The four pieces of research our academics will be presenting all have applications for autonomous vehicles, but autonomous driving is far from their only use.
The presentations are:
- Multi-spectral mapping for autonomous vehicles: Image alignment is a hugely important area that enables everything from panoramic image creation to motion estimation and virtual reality. Research normally focuses on the visible spectrum of red, green and blue (RGB). Our technique can align images from spectra not visible to the human eye, including infrared and ultraviolet. This allows us to use sensors such as infrared cameras and align their outputs with RGB images in autonomous driving.
- Bird's-Eye-View image prediction for autonomous vehicles: This involves the creation of a Bird's-Eye-View (BEV) model that can look at a head-on image and, using machine learning, predict an aerial view of that scene in real time. Incredibly useful for autonomous vehicles, the BEV can be used for tasks such as navigation, collision avoidance and future prediction.
- An AI-powered positioning system for GPS-denied areas: By combining AI with sound robotics principles, we have created a hybrid neural network that can estimate the position of a robot using nothing but localised images of its environment. This approach is a drop-in replacement for GPS indoors and in other GPS-denied environments, particularly areas such as car parks that are traditionally difficult to navigate.
- MERLIN: A navigation system that thinks like a human: Robots need to work in a variety of environments, and even when performing similar tasks, different behaviours suit different surroundings. MERLIN, a Multi-environment Reinforcement-Learning for Navigation approach, treats navigation as a multi-task learning problem so the robot adopts different strategies for visual navigation in different environments. It operates in a way that is closer to how humans think about navigation.
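The multi-spectral alignment work relies on image structure, such as edges, that survives across spectra even when raw intensities do not. As a minimal illustration of that general idea only (not the CVSSP method), the sketch below brute-force searches for the translation that best correlates the gradient magnitudes of two images; the function names and `max_shift` parameter are hypothetical:

```python
import numpy as np

def grad_mag(img):
    """Gradient magnitude: edge structure shared by RGB and infrared views."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def align_translation(rgb, ir, max_shift=5):
    """Brute-force search for the (dy, dx) shift best aligning edge maps."""
    a, b = grad_mag(rgb), grad_mag(ir)
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
            score = np.corrcoef(a.ravel(), shifted.ravel())[0, 1]
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift
```

Real multi-spectral pipelines handle rotation, scale and appearance change; this toy version shows only why comparing edges, rather than pixel values, makes cross-spectral matching possible.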
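For intuition on Bird's-Eye-View prediction: classical geometry can already map pixels onto a perfectly flat road via a homography (inverse perspective mapping), which is roughly the baseline a learned BEV model improves upon. A minimal sketch under assumed conditions (made-up pinhole intrinsics `K`, camera height and pitch, flat ground; not the learned model):

```python
import numpy as np

# Hypothetical camera parameters (illustrative values, not from the research)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])   # pinhole intrinsics
h = 1.5                                  # camera height above the road (m)
pitch = np.deg2rad(10.0)                 # downward tilt

# Rotation of the ground plane into the camera frame (pitch only)
R = np.array([[1.0, 0.0,           0.0],
              [0.0, np.cos(pitch), -np.sin(pitch)],
              [0.0, np.sin(pitch),  np.cos(pitch)]])
t = np.array([0.0, h, 0.0])              # camera sits h metres above the plane

# A road point (X, 0, Z) maps to pixels via K (X*r1 + Z*r3 + t),
# so the plane-to-image homography uses columns r1, r3 and t.
H = K @ np.column_stack((R[:, 0], R[:, 2], t))

def pixel_to_ground(u, v):
    """Map an image pixel back to road-plane coordinates (X, Z)."""
    g = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return g[:2] / g[2]
```

This mapping breaks down for anything above the ground plane (vehicles, pedestrians), which is exactly where a learned BEV predictor earns its keep.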
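A simple baseline for image-only positioning, far cruder than the hybrid neural network described above, is place recognition by descriptor matching: store descriptors of images taken at known positions, then report the position of the closest match. A toy sketch in which random vectors stand in for learned image embeddings:

```python
import numpy as np

# Toy "map": descriptors of images captured at known (x, y) positions.
# Random vectors stand in for neural-network embeddings of real images.
rng = np.random.default_rng(0)
db_descriptors = rng.normal(size=(100, 128))
db_positions = rng.uniform(0, 50, size=(100, 2))

def localise(query_descriptor):
    """Estimate position as the pose of the most similar database image."""
    sims = db_descriptors @ query_descriptor / (
        np.linalg.norm(db_descriptors, axis=1)
        * np.linalg.norm(query_descriptor))
    return db_positions[np.argmax(sims)]
```

Retrieval alone only snaps to the nearest mapped viewpoint; combining it with geometric reasoning, as hybrid approaches do, is what yields a continuous GPS-like position estimate.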
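One hypothetical way to realise "different strategies for different environments" in a single navigation policy is a shared feature trunk with a separate action head per environment. The sketch below shows only that multi-task structure (random, untrained weights; invented environment names), not MERLIN itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared trunk + per-environment heads (illustrative multi-task structure)
W_shared = rng.normal(size=(16, 8))        # features reused everywhere
W_heads = {env: rng.normal(size=(8, 4))    # one action head per environment
           for env in ("corridor", "warehouse", "outdoor")}

def act(observation, env):
    """Pick an action: shared features, then the head for this environment."""
    features = np.tanh(observation @ W_shared)
    logits = features @ W_heads[env]
    return int(np.argmax(logits))
```

Training the trunk on all environments while specialising the heads is the standard multi-task trade-off: shared skills where experience transfers, distinct behaviour where it does not.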
“Publishing at ICRA gives the Robot Lab, CVSSP and the University of Surrey a stronger reputation in the field of robotics,” adds Dr Mendez. “We hope this will position us as a key player in Robotic Perception and, in turn, help generate funding.
“Our latest research outperforms the best alternatives in the marketplace and, in several cases, establishes new benchmarks in specific areas.
“This type of cutting-edge work brings us one step closer to creating a hub of Robotics and Artificial Intelligence activity at CVSSP and the University.”
Learn more about our Centre for Vision, Speech and Signal Processing.