
Jinghao Zhang
About
My research project
Towards high performance image recognition
This project aims to improve image classification accuracy by using different machine-learning methods.
Research
Research interests
My research interests focus on machine learning, adversarial learning, and image classification.
Publications
Long-tailed data distribution is a common issue in many practical learning-based approaches, causing Deep Neural Networks (DNNs) to under-fit minority classes. Although this imbalance problem has been extensively studied by the research community, existing approaches mainly focus on the class-wise (inter-class) imbalance problem. In contrast, this paper considers both inter-class and intra-class data imbalance for network training. To this end, we present Adversarial Feature Re-calibration (AFR), a method that improves the standard accuracy of a trained deep network by adding adversarial perturbations to the majority samples of each class. Specifically, an adversarial attack model is fine-tuned to perturb the majority samples by injecting features from their corresponding intra-class minority samples. This procedure makes the dataset more evenly distributed from both the inter- and intra-class perspectives, thus encouraging DNNs to learn better representations. The experimental results obtained on CIFAR-100-LT demonstrate the effectiveness and superiority of the proposed AFR method over state-of-the-art long-tailed learning methods.
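As a rough illustration of the idea above, the PyTorch sketch below perturbs a majority sample within a small L-inf ball so that its features move towards those of a same-class minority sample. The function name, the frozen-feature-extractor setup, and the PGD-style hyper-parameters are assumptions made for illustration; they are not taken from the paper.

```python
# Minimal sketch of feature re-calibration via adversarial perturbation:
# push a majority sample's features towards those of an intra-class
# minority sample. All names and hyper-parameters here are illustrative.
import torch
import torch.nn.functional as F

def recalibrate_majority(feature_extractor, x_majority, x_minority,
                         epsilon=8 / 255, step_size=2 / 255, n_steps=10):
    """Perturb x_majority within an L-inf ball of radius epsilon so that
    its features approach those of x_minority (a same-class sample)."""
    feature_extractor.eval()
    for p in feature_extractor.parameters():
        p.requires_grad_(False)                      # attack the input only
    with torch.no_grad():
        target_feat = feature_extractor(x_minority)  # features to inject

    delta = torch.zeros_like(x_majority, requires_grad=True)
    for _ in range(n_steps):
        feat = feature_extractor(x_majority + delta)
        loss = F.mse_loss(feat, target_feat)         # feature distance
        loss.backward()
        with torch.no_grad():
            # Signed gradient *descent* on the distance, kept inside the ball.
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()
    return (x_majority + delta).detach()
```

The perturbed majority samples can then replace their clean counterparts during training, which is how the abstract describes evening out the intra-class distribution.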
Recently, Adversarial Propagation (AdvProp) has been shown to improve the standard accuracy of a trained model on clean samples. However, AdvProp training is much slower than vanilla training. Moreover, we argue that the adversarial samples used in AdvProp are too drastic for learning robust features from clean samples. This paper presents Mixup Propagation (MixProp), which further increases the standard accuracy on clean samples while reducing the training cost of AdvProp. The key idea of MixProp is to use mixup to generate the samples fed to the auxiliary batch normalisation layer. Mixup yields a milder auxiliary dataset than adversarial samples and eliminates the time spent on adversarial sample generation. The experimental results obtained on several datasets demonstrate the merits and superiority of the proposed method.
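The following PyTorch sketch shows one plausible form of the training step described above, under the common AdvProp-style assumption that the network exposes two batch-normalisation paths (here via a hypothetical `bn='main'` / `bn='aux'` argument). The dual-BN interface, the Beta mixing coefficient, and the loss weighting are illustrative assumptions rather than the paper's exact implementation.

```python
# Minimal sketch of a MixProp-style training step: clean samples pass
# through the main batch-norm path, mixup samples through the auxiliary
# path (in place of AdvProp's costly adversarial samples). The dual-BN
# model interface below is an assumption for illustration.
import torch
import torch.nn.functional as F

def mixprop_step(model, optimizer, x, y, alpha=1.0):
    # Mixup: convex combination of shuffled sample pairs within the batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1 - lam) * x[perm]

    logits_clean = model(x, bn='main')      # clean batch -> main BN
    logits_mix = model(x_mix, bn='aux')     # mixup batch -> auxiliary BN

    # Standard mixup loss on the auxiliary branch plus the clean loss.
    loss = (F.cross_entropy(logits_clean, y)
            + lam * F.cross_entropy(logits_mix, y)
            + (1 - lam) * F.cross_entropy(logits_mix, y[perm]))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because `x_mix` is produced by a single interpolation rather than an iterative attack, each step avoids the inner adversarial-generation loop that dominates AdvProp's training cost.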