In 2018 I obtained a PhD in Robot Vision and have continued to work in robotics research since. I am currently a Research Fellow at the University of Surrey. I am interested in the fields of Robotics, Computer Vision and Deep Learning. I have spent many years developing and building real-time robotic systems that leverage advances in Deep Learning to perform difficult computer vision tasks such as SLAM, 3D Reconstruction and Multi-View Geometry. I am also interested in collaboration and automation between robotic agents, specifically in emergent behaviours that are not hard-coded into systems.
Areas of specialism
Robotics; Computer Vision; Deep Learning
University roles and responsibilities
- Lecturer for EEE1035 (Programming in C)
- Lecturer for EEE3043 (Robotics)
Affiliations and memberships
As autonomous cars start to become a reality, one of the unanswered questions remains – where and how will those cars park?
This consortium’s “Autonomous Valet Parking” project seeks to develop Highly Autonomous Driving maps to support indoor navigation and localisation.
Autonomous Valet Parking (AVP) is functionality that allows a driver to be dropped off at a multi-storey car park or their final destination, after which the vehicle parks itself autonomously.
Estimating the vehicle’s current position is more difficult in multi-storey car parks, where GPS signals cannot be received; the vehicle must instead rely on other sensors and on localisation against visual objects and features present in maps. This is an open problem in the automotive industry that must be solved to enable SAE Level 4 AVP deployment.
This consortium’s key objective is to identify obstacles to full deployment of AVP through the development of a technology demonstrator. It aims to achieve this goal by:
- Developing automotive-grade indoor parking maps required for autonomous vehicles to localise and navigate within a multi-storey car park.
- Developing the associated localisation algorithms – targeting a minimal sensor set of cameras, ultrasonic sensors and inertial measurement units – that make best use of these maps.
- Demonstrating this self-parking technology in a variety of car parks.
- Developing the safety case and preparing for in-car-park trials.
- Engaging with stakeholders to evaluate perceptions around AVP technology.
Indicators of esteem
Sullivan Thesis Prize Winner (2018)
Publications
Accurate extrinsic sensor calibration is essential for both autonomous vehicles and robots. Traditionally, this is an involved process that requires calibration targets and known fiducial markers, and it is generally performed in a lab. Moreover, even a small change in the sensor layout requires recalibration. With the anticipated arrival of consumer autonomous vehicles, there is demand for a system that can do this automatically, after deployment and without specialist human expertise.
To address these limitations, we propose a flexible framework which can estimate extrinsic parameters without an explicit calibration stage, even for sensors with unknown scale. Our first contribution builds upon standard hand-eye calibration by jointly recovering scale. Our second contribution makes the system robust to imperfect and degenerate sensor data by collecting independent sets of poses and automatically selecting those that are most suitable.
We show that our approach’s robustness is essential for the target scenario. Unlike previous approaches, ours runs in real time and constantly estimates the extrinsic transform. For both an ideal experimental setup and a real use case, comparison against these approaches shows that we outperform the state of the art. Furthermore, we demonstrate that the recovered scale may be applied to the full trajectory, circumventing the need for scale estimation via sensor fusion.
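To make the hand-eye formulation concrete, the sketch below solves AX = XB for an extrinsic transform X between two sensors while jointly recovering an unknown translation scale s for one of them. It is a minimal illustration of the idea, not the paper’s system: the function name, the axis-angle rotation solve and the single linear least-squares step for translation and scale are all assumptions, and the robust pose-set selection described above is omitted.

```python
"""Minimal sketch of hand-eye calibration with joint scale recovery.

Assumes synchronised relative-motion pairs (A_i, B_i) as 4x4 homogeneous
matrices: A_i from a metric sensor (e.g. wheel odometry), B_i from a
monocular camera whose translations are only known up to a global scale s.
Solves A X = X B for the extrinsic X and for s.
"""
import numpy as np
from scipy.spatial.transform import Rotation


def hand_eye_with_scale(As, Bs):
    # Rotation: R_A R_X = R_X R_B means the rotation axes of each motion
    # pair are related by R_X. Solve for R_X in closed form via SVD
    # (needs at least two pairs with non-parallel rotation axes).
    alpha = np.stack([Rotation.from_matrix(A[:3, :3]).as_rotvec() for A in As])
    beta = np.stack([Rotation.from_matrix(B[:3, :3]).as_rotvec() for B in Bs])
    U, _, Vt = np.linalg.svd(alpha.T @ beta)
    R_X = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt

    # Translation and scale: equating the translations of A X and X B gives
    # (R_A - I) t_X - s (R_X t_B) = -t_A, which is linear in [t_X, s].
    M = [np.hstack([A[:3, :3] - np.eye(3), -(R_X @ B[:3, 3])[:, None]])
         for A, B in zip(As, Bs)]
    rhs = [-A[:3, 3] for A in As]
    x, *_ = np.linalg.lstsq(np.vstack(M), np.hstack(rhs), rcond=None)

    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, x[:3]
    return X, x[3]  # extrinsic transform and recovered scale
```

Solving translation and scale in one least-squares problem is what lets a monocular trajectory be calibrated against a metric one; the recovered s can then be applied to the full trajectory, as the abstract notes.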
The use of human-level semantic information to aid robotic tasks has recently become an important area for both Computer Vision and Robotics. This has been enabled by advances in Deep Learning that allow consistent and robust semantic understanding. Leveraging this semantic vision of the world has allowed human-level understanding to naturally emerge from many different approaches. Particularly, the use of semantic information to aid in localisation and reconstruction has been at the forefront of both fields.
Like robots, humans also require the ability to localise within a structure. To aid this, humans have designed high-level semantic maps of our structures called floorplans. We are extremely good at localising in them, even with limited access to the depth information used by robots. This is because we focus on the distribution of semantic elements, rather than geometric ones. Evidence of this is that humans are normally able to localise in a floorplan that has not been scaled properly. In order to grant this ability to robots, it is necessary to use localisation approaches that leverage the same semantic information humans use.
In this paper, we present a novel method for semantically enabled global localisation. Our approach relies on the semantic labels present in the floorplan. Deep Learning is leveraged to extract semantic labels from RGB images, which are compared to the floorplan for localisation. While our approach is able to use range measurements if available, we demonstrate that they are unnecessary, as we can achieve results comparable to the state of the art without them.
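As a rough illustration of how semantic labels alone can drive localisation, the toy sketch below scores particles in a labelled grid floorplan by checking whether the label of the first non-free cell along each viewing bearing matches the label a segmentation network predicted for that direction; ranges are deliberately ignored. Everything here (the grid map, the label set and the function names) is an illustrative assumption, not the paper’s implementation.

```python
"""Toy sketch of semantic particle scoring against a floorplan.

A grid floorplan stores one semantic class per cell. A particle is
scored by comparing, per viewing bearing, the label of the first
non-free cell its ray hits against the label predicted by a semantic
segmentation network for that direction.
"""
import numpy as np

FREE, WALL, DOOR, WINDOW = 0, 1, 2, 3  # illustrative label set


def raycast_label(plan, x, y, theta, step=0.1, max_range=50.0):
    """Label of the first non-free cell along bearing theta, or None."""
    for r in np.arange(step, max_range, step):
        i, j = int(y + r * np.sin(theta)), int(x + r * np.cos(theta))
        if not (0 <= i < plan.shape[0] and 0 <= j < plan.shape[1]):
            return None
        if plan[i, j] != FREE:
            return plan[i, j]
    return None


def particle_weight(plan, particle, observed_labels, bearings):
    """Fraction of bearings whose floorplan label matches the observation."""
    x, y, heading = particle
    hits = [raycast_label(plan, x, y, heading + b) for b in bearings]
    matches = [h == o for h, o in zip(hits, observed_labels) if h is not None]
    return sum(matches) / len(matches) if matches else 0.0


# Toy usage: a walled room with a door, 100 random particles, one observation.
rng = np.random.default_rng(0)
plan = np.full((40, 40), FREE)
plan[0, :] = plan[-1, :] = plan[:, 0] = plan[:, -1] = WALL
plan[18:23, 0] = DOOR
bearings = np.linspace(-np.pi / 4, np.pi / 4, 5)
observed = [WALL, WALL, DOOR, WALL, WALL]  # pretend network output
particles = rng.uniform([1, 1, -np.pi], [39, 39, np.pi], size=(100, 3))
weights = np.array([particle_weight(plan, p, observed, bearings)
                    for p in particles])
if weights.sum() > 0:
    weights /= weights.sum()  # normalise before resampling, as in standard MCL
```

Matching only labels, not ranges, mirrors the abstract’s point that depth is unnecessary: a particle is plausible wherever the arrangement of semantic elements around it agrees with what the camera sees.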
Mendez Maldonado, Oscar (2018). Collaborative strategies for autonomous localisation, 3D reconstruction and path-planning. Doctoral thesis, University of Surrey. Sullivan Thesis Prize Winner (2018).