11am - 12 noon

Wednesday 29 September 2021

Methods for autonomous calibration, mapping and navigation of vehicles

PhD presentation by Celyn Walters. This is an internal CVSSP event.

Free

Online

Speakers

Celyn Walters

Abstract

The promise of autonomous vehicles is a popular and exciting one that has seen considerable investment over the last decade. However, most of this investment has gone to the automotive sector, with far less directed towards the maritime domain. This thesis explores the application of computer vision and robotics to autonomous navigation and docking in a maritime setting.

Sensor fusion is a core robotics concept allowing the outputs from multiple complementary sensors to be combined for robust localisation and perception. An essential first step is extrinsic sensor calibration, which determines the relative position and orientation of those sensors; however, once the sensors are mounted on a vehicle, calibration is usually difficult and unwieldy to carry out. As the first main contribution, this thesis proposes a method to automatically perform calibration of arbitrary sensor types during normal operation. In addition to enabling a non-specialist to perform extrinsic calibration, it can also provide automatic updates in the case of sensor failure. Furthermore, it allows the relative scale factor to be inferred, facilitating the inclusion of non-metric sensors into the fusion framework without an explicit metric scaling step.
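As a rough illustration of how relative pose and scale can be inferred from two sensors' motion during normal operation, the sketch below aligns two time-synchronised position tracks with a least-squares similarity transform (Umeyama alignment). This is a generic, well-known technique shown for illustration only, not the calibration method proposed in the thesis; the function name and inputs are hypothetical.

```python
import numpy as np

def align_trajectories(src, dst):
    """Least-squares similarity transform (Umeyama, 1991) mapping the
    position track `src` onto `dst`, i.e. dst_i ~ s * R @ src_i + t.

    src, dst: (N, 3) arrays of time-synchronised positions from two sensors.
    Returns (s, R, t): scale factor, 3x3 rotation, 3-vector translation.
    """
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    # Cross-covariance of the two centred point sets.
    cov = dst_c.T @ src_c / src.shape[0]
    U, D, Vt = np.linalg.svd(cov)

    # Keep R a proper rotation (det = +1) even for reflected fits.
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / src.shape[0]
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * (R @ mu_src)
    return s, R, t
```

Fed with, say, a metric GNSS track as `dst` and a monocular visual-odometry track as `src`, the recovered `s` is the missing metric scale and `(R, t)` is the rigid transform between the two trajectory frames, which is the kind of information an extrinsic calibration needs.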

Autonomous vehicles can only navigate unseen environments if they are able to build a map from which they can perform localisation. This is a common robotics problem traditionally solved with depth sensors and/or 3D reconstruction. However, the operational range of many depth sensors is insufficient for maritime scenarios. The second contribution of this thesis proposes a bird's-eye-view mapping approach using only monocular cameras. By projecting and accumulating semantic information, a probabilistic map is constructed that represents navigable and non-navigable areas. The advantages of this approach versus depth sensors are demonstrated in the marine, robotic, and automotive domains.
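As a sketch of the general idea of accumulating projected semantics into a probabilistic ground-plane map (not the pipeline presented in the thesis), the snippet below warps a per-frame "navigable" segmentation mask into a bird's-eye-view grid using an assumed image-to-grid homography and fuses it with a standard log-odds occupancy update. The homography, the mask source, and the increment values are illustrative assumptions.

```python
import numpy as np
import cv2

def update_bev_map(log_odds, navigable_mask, H_img_to_grid, grid_hw,
                   l_hit=0.4, l_miss=-0.4):
    """Fuse one frame of semantic evidence into a bird's-eye-view map.

    log_odds:       (H, W) grid of log-odds that each ground cell is navigable.
    navigable_mask: (h, w) binary mask from a semantic segmentation network
                    (1 = pixel classed as navigable water/road surface).
    H_img_to_grid:  3x3 homography from image pixels to grid cells, assumed
                    known from camera intrinsics/extrinsics and a flat ground.
    """
    grid_h, grid_w = grid_hw
    # Warp the per-pixel semantics onto the ground-plane grid
    # (only meaningful for pixels that actually lie on the ground plane).
    warped = cv2.warpPerspective(navigable_mask.astype(np.float32),
                                 H_img_to_grid, (grid_w, grid_h))
    observed = cv2.warpPerspective(np.ones_like(navigable_mask, dtype=np.float32),
                                   H_img_to_grid, (grid_w, grid_h)) > 0.5

    # Bayesian log-odds update: navigable observations push cells up,
    # non-navigable observations push them down; unseen cells are untouched.
    log_odds[observed] += np.where(warped[observed] > 0.5, l_hit, l_miss)
    return log_odds

def navigable_probability(log_odds):
    """Convert accumulated log-odds back to per-cell probabilities."""
    return 1.0 / (1.0 + np.exp(-log_odds))
```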

Any perception system with a vision-based backbone is vulnerable to adverse lighting conditions, whether that is low light, reflections, or fog. These conditions are especially prevalent in the maritime domain. Other spectra, such as infra-red, are less affected but rarely used in the automotive domain. The third contribution solves the stereo correspondence problem between visible, near-infrared, and thermal images. By learning a common representation between unaligned stereo pairs in a self-supervised manner, the proposed approach provides stereo cues between arbitrary spectra. This also allows annotations, which are often easier to obtain for the visible spectrum, to be transferred to other sensors, easing the collection of multispectral datasets.
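The sketch below shows one generic way to learn a shared representation for matching across spectra: per-spectrum encoders trained with an InfoNCE-style contrastive loss so that corresponding visible and thermal patches land close together in feature space. The architecture, loss, and training setup are assumptions for illustration, not the self-supervised approach proposed in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectrumEncoder(nn.Module):
    """Small CNN mapping an image patch from one spectrum into a shared
    feature space. One encoder per spectrum, since input statistics differ
    between, e.g., visible and thermal imagery."""
    def __init__(self, in_channels, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        # L2-normalised embeddings so similarity reduces to a dot product.
        return F.normalize(self.net(x).flatten(1), dim=1)

def cross_spectral_loss(feat_a, feat_b, temperature=0.07):
    """InfoNCE-style objective: patch i from spectrum A should match
    patch i from spectrum B and none of the other patches in the batch."""
    logits = feat_a @ feat_b.t() / temperature
    targets = torch.arange(feat_a.size(0), device=feat_a.device)
    return F.cross_entropy(logits, targets)
```

A training step might then look like `loss = cross_spectral_loss(rgb_encoder(rgb_patches), thermal_encoder(thermal_patches))`, after which matching costs along an epipolar line can be computed as dot products between the two sets of features.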

The final chapter brings together the contributions in automated sensor calibration and semantic ground-plane mapping with path planning to demonstrate autonomous functionality. The presented experiments show significant performance improvements over what is currently available in a real-world practical application.
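To indicate how such a navigability map can feed a planner, the sketch below runs a textbook A* search over a thresholded bird's-eye-view grid. It is a generic illustration; the thesis does not necessarily use A* or this grid representation.

```python
import heapq
import numpy as np

def plan_path(navigable, start, goal):
    """A* over a binary grid (True = navigable), 4-connected, unit step cost.
    Returns a list of (row, col) cells from start to goal, or None."""
    def h(a, b):                       # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = navigable.shape
    open_set = [(h(start, goal), 0, start, None)]   # (f, g, cell, parent)
    parent, best_g = {}, {start: 0}
    while open_set:
        _, g, cell, prev = heapq.heappop(open_set)
        if cell in parent:             # already expanded with a better cost
            continue
        parent[cell] = prev
        if cell == goal:               # walk back through the parent chain
            path = [cell]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols and navigable[nr, nc]
                    and g + 1 < best_g.get((nr, nc), np.inf)):
                best_g[(nr, nc)] = g + 1
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc), goal), g + 1, (nr, nc), cell))
    return None
```

Combined with the earlier sketches, a call such as `plan_path(navigable_probability(log_odds) > 0.5, start_cell, goal_cell)` would return a cell-by-cell route through the mapped navigable area.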

This thesis advances the field of robotics and autonomous vehicles by exploring approaches which cater to both the maritime and automotive domains. It also shows the viability of cost-effective monocular cameras as sensors.