Jinghao Zhang

Postgraduate Research Student

Computer Science Research Centre

Publications

Jinghao Zhang, Zhenhua Feng, Yaochu Jin (2024) Robust Long-Tailed Image Classification via Adversarial Feature Re-Calibration. In: Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Volume 2, pp. 213-220

Long-tailed data distribution is a common issue in many practical learning-based applications, causing Deep Neural Networks (DNNs) to under-fit minority classes. Although this imbalance problem has been studied extensively by the research community, existing approaches focus mainly on class-wise (inter-class) imbalance. In contrast, this paper considers both inter-class and intra-class data imbalance for network training. To this end, we present Adversarial Feature Re-calibration (AFR), a method that improves the standard accuracy of a trained deep network by adding adversarial perturbations to the majority samples of each class. Specifically, an adversarial attack model is fine-tuned to perturb the majority samples by injecting features from their corresponding intra-class minority samples. This procedure makes the dataset more evenly distributed from both the inter- and intra-class perspectives, encouraging DNNs to learn better representations. Experimental results on CIFAR-100-LT demonstrate the effectiveness and superiority of the proposed AFR method over state-of-the-art long-tailed learning methods.
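Reading the abstract operationally, one way to realise the feature-injection step is a small signed-gradient descent that pulls a majority sample's deep features towards those of an intra-class minority sample. The sketch below is a hypothetical PyTorch illustration under that assumption, not the paper's released implementation: the name afr_perturb, the assumed model.features hook, and the eps/alpha/steps values are all illustrative.

```python
import torch
import torch.nn.functional as F

def afr_perturb(model, x_major, x_minor, eps=8/255, alpha=2/255, steps=10):
    """Perturb majority-class images so their deep features move towards those
    of intra-class minority samples. `model.features` is an assumed hook that
    returns penultimate-layer features; x_minor may repeat minority samples to
    match the batch size of x_major."""
    model.eval()
    with torch.no_grad():
        target_feat = model.features(x_minor)      # fixed minority-feature targets
    delta = torch.zeros_like(x_major, requires_grad=True)
    for _ in range(steps):
        feat = model.features(x_major + delta)     # features of perturbed majority samples
        loss = F.mse_loss(feat, target_feat)       # gap to the minority features
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()     # descend: shrink the feature gap
            delta.clamp_(-eps, eps)                # keep the perturbation imperceptible
            delta.grad.zero_()
    return (x_major + delta).detach()              # re-calibrated majority samples
```

Training then proceeds on the re-calibrated majority samples together with the untouched minority samples, so both the inter- and intra-class distributions become more even.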

Jinghao Zhang, Zhenhua Feng, Guosheng Hu, Changbin Shao, Yaochu Jin (2022) MixProp: Towards High-Performance Image Recognition via Dual Batch Normalisation

Adversarial Propagation (AdvProp) has recently been shown to improve the standard accuracy of a trained model on clean samples. However, AdvProp trains much more slowly than vanilla training, and we argue that its adversarial samples are too drastic for robust feature learning on clean samples. This paper presents Mixup Propagation (MixProp), which further increases the standard accuracy on clean samples while reducing the training cost of AdvProp. The key idea of MixProp is to use mixup to generate the samples fed to the auxiliary batch normalisation layers. Mixup provides a more moderate distribution shift than adversarial samples and saves the time spent on adversarial sample generation. Experimental results on several datasets demonstrate the merits and superiority of the proposed method.
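A minimal sketch of the dual-BN idea follows, assuming a PyTorch-style model whose BN layers are replaced by a block that routes activations through either a main or an auxiliary BatchNorm. The names DualBNBlock, mixup, soft_ce and the use_aux flag are illustrative assumptions, not identifiers from the paper's code.

```python
import torch
import torch.nn as nn

class DualBNBlock(nn.Module):
    """Dual batch normalisation: clean samples update the main BN statistics,
    mixup samples update a separate auxiliary BN (MixProp's replacement for
    AdvProp's adversarial branch)."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_main = nn.BatchNorm2d(num_features)  # statistics of clean samples
        self.bn_aux = nn.BatchNorm2d(num_features)   # statistics of mixup samples

    def forward(self, x, use_aux=False):
        return self.bn_aux(x) if use_aux else self.bn_main(x)

def mixup(x, y_onehot, alpha=1.0):
    """Standard mixup: convex combinations of shuffled sample/label pairs."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], lam * y_onehot + (1 - lam) * y_onehot[idx]

def soft_ce(logits, target):
    """Cross-entropy against soft (mixed) labels."""
    return -(target * logits.log_softmax(dim=1)).sum(dim=1).mean()

def mixprop_step(model, x, y_onehot, optimizer):
    """One training step: the clean batch goes through the main BNs, the mixup
    batch through the auxiliary BNs, and the two losses are summed. `model` is
    assumed to forward the `use_aux` flag down to its DualBNBlock layers."""
    x_mix, y_mix = mixup(x, y_onehot)
    loss = soft_ce(model(x, use_aux=False), y_onehot) + \
           soft_ce(model(x_mix, use_aux=True), y_mix)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time only the main BN path is used, so the auxiliary branch adds no inference cost; and generating x_mix costs a single tensor interpolation rather than the multi-step attack AdvProp requires, which is where the training-time saving comes from.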