11am - 12 noon

Monday 22 March 2021

Machine learning for robotic grasping

PhD Open Viva Presentation by Rebecca Allday.

All are welcome!

Free

Online

Speakers

Rebecca Allday

Abstract

Grasping is a fundamental element of robotics that has seen great advances in hardware and engineering over the last few decades. Despite this, most current approaches struggle to generalise to the diverse environments and challenges seen in robotic grasping. This thesis looks at how data-driven deep learning methods can provide this generalisation across various aspects of the pick-and-place pipeline. Specifically, it explores the static process of detecting repeated objects to be grasped and the dynamic process of grasping a located object.

Engineered solutions offer great accuracy and confidence in grasping technology, but they are often brittle and fail as soon as the environment or task changes. Deep learning approaches have been shown to learn representations that provide superior performance across a wide variety of tasks, without the need for hand-engineering. Reinforcement Learning (RL) has the potential to transfer deep learning methods to dynamic problems by learning policies that map an environment's state to an action to be taken. However, training these methods end-to-end requires large volumes of data, which is often impractical and sometimes impossible to collect in robotics applications.
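To make the state-to-action mapping concrete, here is a minimal Python sketch of a policy in the RL sense. Everything in it (the dimensions, the linear-plus-tanh form, the variable names) is an illustrative assumption, not a detail taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "policy": a single linear layer mapping a state vector
    # (e.g. joint angles plus a target position) to continuous actions.
    STATE_DIM, ACTION_DIM = 8, 4
    weights = rng.normal(scale=0.1, size=(ACTION_DIM, STATE_DIM))

    def policy(state):
        # Map an observed state to an action; RL adjusts `weights`
        # so that the resulting actions maximise reward.
        return np.tanh(weights @ state)

    state = rng.normal(size=STATE_DIM)   # stand-in for an observed robot state
    action = policy(state)               # e.g. joint velocity commands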

This thesis offers three main contributions. The first presents a method for learning vision-based policies using separate but connected perception and control feedback during training. The proposed Auto-Perceptive Reinforcement Learning (APRiL) method allows the perception system to learn the features needed to interpret the state of the environment, whilst allowing the control policy to focus on how to complete the task at hand using all available state information during training.
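A hedged sketch of that training split is below: a perception module learns to estimate the environment state from images, while the control policy trains on the full (privileged) state and only consumes the perception estimate at deployment. The linear models, update rule, and names are assumptions for illustration, not the APRiL implementation.

    import numpy as np

    rng = np.random.default_rng(1)
    IMG_DIM, STATE_DIM, ACTION_DIM = 64, 8, 4

    W_percep = rng.normal(scale=0.01, size=(STATE_DIM, IMG_DIM))     # image -> state
    W_policy = rng.normal(scale=0.01, size=(ACTION_DIM, STATE_DIM))  # state -> action

    def train_step(image, true_state, lr=1e-2):
        # Perception regresses towards the true state (one MSE gradient step)...
        global W_percep
        err = W_percep @ image - true_state
        W_percep -= lr * np.outer(err, image)
        # ...while the policy trains using the privileged true state
        # (the policy's own update is omitted here).
        return np.tanh(W_policy @ true_state)

    def deploy(image):
        # At test time the policy consumes the perception estimate instead.
        return np.tanh(W_policy @ (W_percep @ image))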

In goal-driven tasks with multiple potential targets, a method is required to determine which target to select. In warehouse picking, the type of target object is available at deployment, but it is not always known during training, and there are often multiple instances available to be selected. To address this, the second contribution proposes a zero-shot repeated object detection method, which can locate instances of similar objects in an image given a conditioning object. This method can also be used with different definitions of similarity, including the ability to distinguish between pickable and unpickable instances for a given policy.
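The following minimal sketch shows the exemplar-conditioned idea: embed the conditioning object and each candidate region, then keep the candidates whose embeddings are similar enough. The embedding function and threshold here are placeholders (a real system would use a trained network); this is not the thesis's detector.

    import numpy as np

    def embed(patch):
        # Stand-in feature extractor: a unit-normalised flattening of the patch.
        v = patch.ravel().astype(float)
        return v / (np.linalg.norm(v) + 1e-8)

    def detect_repeats(exemplar, candidates, threshold=0.9):
        # Keep candidate regions whose cosine similarity to the
        # conditioning object's embedding exceeds the threshold.
        q = embed(exemplar)
        return [i for i, c in enumerate(candidates) if float(embed(c) @ q) >= threshold]

    rng = np.random.default_rng(2)
    exemplar = rng.random((16, 16))
    candidates = [exemplar + 0.01 * rng.random((16, 16)),  # near-duplicate instance
                  rng.random((16, 16))]                    # unrelated patch
    matches = detect_repeats(exemplar, candidates)         # likely -> [0]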

In the final contribution, this thesis shows how these methods can be brought together in a single framework within the Robot Operating System (ROS) to complete a warehouse-style picking task, where a given type of object can be picked using a learnt target selection method and grasping policy.
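As a rough sketch of how such a framework could sit inside ROS, the skeleton node below subscribes to camera images, runs target selection, and publishes a grasp pose. The topic names, message types, and the two placeholder functions are assumptions for illustration; they are not the thesis's actual ROS interfaces.

    import rospy
    from sensor_msgs.msg import Image
    from geometry_msgs.msg import PoseStamped

    def select_target(image_msg):
        # Placeholder for the learnt repeated-object detector.
        raise NotImplementedError

    def grasp_pose_for(target):
        # Placeholder for the pose produced by the learnt grasping policy.
        raise NotImplementedError

    def on_image(msg):
        target = select_target(msg)      # choose which instance to pick
        grasp_pub.publish(grasp_pose_for(target))

    rospy.init_node('pick_and_place_demo')
    grasp_pub = rospy.Publisher('/grasp/target_pose', PoseStamped, queue_size=1)
    rospy.Subscriber('/camera/image_raw', Image, on_image)
    rospy.spin()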

Attend the seminar

You can join the seminar via Zoom.