
George Alcolado Nuthall
Academic and research departments
Centre for Vision, Speech and Signal Processing (CVSSP), Faculty of Engineering and Physical Sciences
About
My research project
Robotic Guide Dog for the Visually Impaired
People who are visually impaired navigate complex and dynamic environments on a regular basis to carry out tasks such as collecting groceries, going to work and taking their children to and from school. Guide dogs have long been recommended to visually impaired people, as they have been shown to successfully support individuals in these kinds of environments. For some, a working dog is not an appropriate solution, as an individual may be unable or unwilling to take care of a dog. Even where a guide dog is the desired solution, obtaining one can be a lengthy process because of the time required to produce a working dog and the overall demand. This project proposes to develop robust localisation and mapping technologies that would enable a mobile robot to safely guide its visually impaired user through dynamic environments such as crowds. It also aims to create AI-based path-planning strategies that can navigate a robot dog and its user efficiently and without collisions across complex urban environments. The research will focus on methods that can run on heterogeneous mobile robots; however, a live demonstrator will be built using the Boston Dynamics quadruped, SPOT.
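As a rough illustration of what collision-free path planning involves (the project's actual planners are not specified here), a minimal sketch is a standard A* search over a 2D occupancy grid. The grid, start and goal below are hypothetical placeholders, not part of the project.

```python
# Illustrative sketch only: grid-based A* path planning around static obstacles.
import heapq
import numpy as np

def astar(occupancy, start, goal):
    """Return a collision-free path of (row, col) cells, or None if blocked."""
    rows, cols = occupancy.shape

    def h(cell):  # admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, g, cell = heapq.heappop(open_set)
        if g > g_cost[cell]:
            continue  # stale heap entry; a cheaper route was already found
        if cell == goal:
            path = [cell]  # reconstruct the path by walking parent links
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and occupancy[nr, nc] == 0:
                nxt = (nr, nc)
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    came_from[nxt] = cell
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
    return None

# Hypothetical 5x5 map: 0 = free space, 1 = obstacle.
grid = np.zeros((5, 5), dtype=int)
grid[1:4, 2] = 1
print(astar(grid, (0, 0), (4, 4)))
```

A real guide-robot planner would also have to replan as people move, which is one reason the project targets dynamic environments rather than static maps like this one.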
Supervisors
Publications
As robots increasingly coexist with humans, they must navigate complex, dynamic environments rich in visual information and implicit social dynamics, like when to yield or move through crowds. Addressing these challenges requires significant advances in vision-based sensing and a deeper understanding of socio-dynamic factors, particularly in tasks like navigation. To facilitate this, robotics researchers need advanced simulation platforms offering dynamic, photorealistic environments with realistic actors. Unfortunately, most existing simulators fall short, prioritizing geometric accuracy over visual fidelity, and employing unrealistic agents with fixed trajectories and low-quality visuals. To overcome these limitations, we developed a simulator that incorporates three essential elements: (1) photorealistic neural rendering of environments, (2) neurally animated human entities with behaviour management, and (3) an ego-centric robotic agent providing multi-sensor output. By utilizing advanced neural rendering techniques in a dual-NeRF simulator, our system produces high-fidelity, photorealistic renderings of both environments and human entities. Additionally, it integrates a state-of-the-art Social Force Model (SoFM) to model dynamic human-human and human-robot interactions, creating the first photorealistic and accessible human-robot simulation system powered by neural rendering.
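To give a feel for the kind of pedestrian dynamics a Social Force Model captures, the sketch below implements the classic Helbing-Molnár formulation (not the specific state-of-the-art variant used in the simulator): each agent is pulled towards its goal by a desired-velocity term and pushed away from neighbours by exponentially decaying repulsive forces. The parameter values and data layout are illustrative assumptions.

```python
# Minimal Social Force Model step (Helbing-Molnar style), for illustration only.
# Parameter values (tau, A, B, radius) and the array layout are assumptions; the
# simulator described above uses its own, more advanced SFM variant.
import numpy as np

def social_force_step(pos, vel, goals, dt=0.1, v_desired=1.3,
                      tau=0.5, A=2.0, B=0.3, radius=0.4):
    """Advance N pedestrians one time step; pos, vel, goals are (N, 2) arrays."""
    n = len(pos)
    # Driving force: relax each agent's velocity towards the desired speed
    # along the direction of its goal.
    to_goal = goals - pos
    dist_goal = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    f_drive = (v_desired * to_goal / dist_goal - vel) / tau

    # Pairwise repulsion: exponentially decaying push away from other agents.
    f_rep = np.zeros_like(pos)
    for i in range(n):
        diff = pos[i] - pos                       # vectors from others to agent i
        dist = np.linalg.norm(diff, axis=1) + 1e-9
        dist[i] = np.inf                          # ignore self-interaction
        magnitude = A * np.exp((2 * radius - dist) / B)
        f_rep[i] = np.sum((magnitude / dist)[:, None] * diff, axis=0)

    vel_new = vel + (f_drive + f_rep) * dt
    pos_new = pos + vel_new * dt
    return pos_new, vel_new

# Two hypothetical pedestrians walking towards each other.
pos = np.array([[0.0, 0.0], [5.0, 0.1]])
vel = np.zeros_like(pos)
goals = np.array([[5.0, 0.0], [0.0, 0.0]])
for _ in range(10):
    pos, vel = social_force_step(pos, vel, goals)
print(pos)
```

In a human-robot setting, the robot can be treated as one more agent in the same force field, which is how models of this family typically couple human-human and human-robot interactions.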