10am - 11am GMT

Friday 10 December 2021

Deep learning with noisy samples

PhD viva open presentation by Tianyuan Yu. All welcome!

Free

Online

This event has passed

Abstract

The remarkable success of deep learning is largely attributed to the collection of large datasets with human-annotated labels. However, it is extremely expensive and time-consuming to label extensive data with high-quality annotations. In other words, noisy samples are inevitable in large datasets. In this thesis, we consider two types of noisy samples, outliers and label-noise samples, in two specific problems: person re-identification (ReID) and few-shot learning (FSL). Since noisy samples are often far from the clean samples of the same class in the input space, the trained model has to sacrifice inter-class separability, leading to performance degradation. To address the problem, we use stochastic neural networks to represent each image as a Gaussian distribution rather than a single vector, and the variance is encouraged to be larger for noisy samples. This extra dimension provided by the stochastic neural network allows the model to focus more on the clean inliers rather than overfitting to noisy samples, resulting in better class separability and better generalisation to test data. The stochastic neural networks used to identify the noisy samples are further extended to more practical settings, including network pruning, adversarial defence, and model calibration. For FSL, the stochastic neural network still helps, though the improvement is limited since only a few training samples are provided for each class. We therefore use graph neural networks to pass messages across the training samples, minimising the negative effects of the noisy samples and re-adjusting the class distributions so that each class distribution is compact and well separated from the others.

This thesis is organised into three chapters. Each chapter focuses on one problem and is detailed as follows:

Chapter 3 A robust architecture modelling feature uncertainty is designed for person re-identification (ReID) with noisy samples. We aim to learn deep ReID models that are robust against noisy training data. Two types of noise are prevalent in practice: (1) label noise caused by human annotator errors and (2) data outliers caused by person detector errors or occlusion. Both types of noise pose serious problems for training ReID models, yet have been largely ignored so far. In this chapter, we propose a novel deep network termed DistributionNet for robust ReID. Instead of representing each person image as a feature vector, DistributionNet models it as a Gaussian distribution, with its variance representing the uncertainty of the extracted features. Two carefully designed losses are formulated in DistributionNet to unevenly allocate uncertainty across training samples. Consequently, noisy samples are assigned large variance/uncertainty, which effectively alleviates their negative impact on model fitting. Extensive experiments demonstrate that our model is more effective than alternative noise-robust deep models.
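The exact architecture and the two losses are specified in the thesis itself. Purely as a hedged illustration of the mechanism, a PyTorch-style sketch of a Gaussian embedding head with a variance-rewarding loss might look as follows; all names (GaussianEmbedding, uncertainty_loss, beta) and the specific loss terms are illustrative assumptions, not DistributionNet's actual formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEmbedding(nn.Module):
    # Illustrative head: maps a backbone feature to a Gaussian over
    # embeddings. The mean carries the identity signal; the per-dimension
    # variance expresses how uncertain the extracted feature is.
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.mu_head = nn.Linear(in_dim, emb_dim)      # distribution mean
        self.logvar_head = nn.Linear(in_dim, emb_dim)  # log-variance (stable to learn)

    def forward(self, x):
        mu = self.mu_head(x)
        logvar = self.logvar_head(x)
        std = torch.exp(0.5 * logvar)
        # Reparameterisation trick: a differentiable sample, so gradients
        # reach both the mean and the variance heads.
        z = mu + std * torch.randn_like(std)
        return z, mu, logvar

def uncertainty_loss(logits_mu, logits_z, logvar, labels, beta=1e-3):
    # Classify both the mean and a sampled embedding, and reward overall
    # variance. Minimising the sampled-feature loss pushes variance down
    # only where it costs accuracy, so noisy samples end up absorbing
    # large variance and their gradients are softened.
    ce_mu = F.cross_entropy(logits_mu, labels)
    ce_z = F.cross_entropy(logits_z, labels)
    variance_reward = -beta * logvar.mean()
    return ce_mu + ce_z + variance_reward

In a setup of this shape, an outlier image that cannot be classified confidently is cheapest to "explain" with a large variance, which is exactly the uneven allocation of uncertainty described above.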

Chapter 4 Inspired by its ability to model uncertainty, we further extend DistributionNet as a stochastic neural network to more practical applications. Stochastic neural networks (SNNs) are currently topical, with several paradigms being actively investigated, including dropout, Bayesian neural networks, the variational information bottleneck (VIB) and noise-regularised learning. These neural network variants impact several major considerations, including generalisation, network compression, robustness against adversarial attack, and model calibration. However, many existing networks are complicated and expensive to train, and/or address only one or two of these practical considerations. In this chapter, we propose a simple and effective stochastic neural network (SE-SNN) architecture for discriminative learning by directly modelling activation uncertainty and encouraging high activation variability. Compared to existing SNNs, our SE-SNN is simpler to implement, faster to train, and produces state-of-the-art results on network compression by pruning, adversarial defence, learning with label noise, and model calibration.
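As a minimal sketch of the general idea, assuming a per-channel learned noise scale: the layer name, the signal-to-noise pruning rule, its threshold, and the act_scale statistic (e.g. a running mean absolute activation per channel) are all illustrative assumptions, not the published SE-SNN design.

import torch
import torch.nn as nn

class StochasticActivation(nn.Module):
    # Hypothetical layer: each channel gets a learned noise scale.
    # Training under this noise regularises the network and exposes
    # which channels carry signal versus noise.
    def __init__(self, num_channels):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.zeros(num_channels))

    def forward(self, h):
        if self.training:
            return h + torch.randn_like(h) * torch.exp(self.log_sigma)
        return h  # deterministic at test time

    def prune_mask(self, act_scale, threshold=1.0):
        # Channels whose learned noise swamps their typical activation
        # magnitude contribute little and are candidates for pruning,
        # giving network compression; the injected noise itself can
        # also help with adversarial robustness and calibration.
        snr = act_scale / torch.exp(self.log_sigma)
        return snr > threshold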

Chapter 5 Graph neural networks (GNNs) are utilised to reduce intra-class variance and increase inter-class distance in few-shot learning (FSL). GNNs are intrinsically suitable for FSL due to their ability to aggregate knowledge by message passing on graphs, especially under a transductive setting. However, under the more practical and popular inductive setting, existing GNN-based methods are less competitive. This is because they use an instance GNN as a label propagation/classification module, which is jointly meta-learned with a feature embedding network. This design is problematic because the classifier needs to adapt quickly to new tasks while the embedding does not. To overcome this problem, in this chapter we propose a novel hybrid GNN (HGNN) model consisting of two GNNs, an instance GNN and a prototype GNN. Instead of label propagation, they act as feature embedding adaptation modules for quickly adapting the meta-learned feature embedding to new tasks. Importantly, they are designed to deal with a fundamental challenge in FSL: with only a handful of shots per class, any few-shot classifier is sensitive to badly sampled shots that are either outliers or cause inter-class distribution overlap. Extensive experiments show that our HGNN obtains new state-of-the-art results on three FSL benchmarks.
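A toy message-passing step over class prototypes conveys the flavour. This is a single attention-style layer sketched under assumptions; the HGNN itself pairs an instance GNN with a prototype GNN, and none of the names below come from the thesis.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeRefineLayer(nn.Module):
    # Toy prototype-side message passing: each class prototype attends to
    # all support instances, so a badly sampled shot (an outlier) is
    # down-weighted rather than dominating its class mean, and overlapping
    # class distributions can be pushed apart.
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, protos, supports):
        # protos: (C, d) class prototypes, initialised as support-set
        # class means; supports: (N, d) support-instance features.
        attn = F.softmax(protos @ supports.t() / protos.size(-1) ** 0.5, dim=-1)
        messages = attn @ supports                       # (C, d) aggregated
        delta = self.update(torch.cat([protos, messages], dim=-1))
        return protos + delta  # residual refinement of the embedding

Stacking a few such layers (alongside an instance-side counterpart) would adapt the meta-learned embedding to each new task before nearest-prototype classification.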

Attend the event

This is a free online event open to everyone. You can attend via Zoom.