My research project


Xubo Liu, Haohe Liu, Qiuqiang Kong, Xinhao Mei, Jinzheng Zhao, Qiushi Huang, Mark D. Plumbley, Wenwu Wang (2022) Separate What You Describe: Language-Queried Audio Source Separation, In: Interspeech 2022, pp. 1801-1805

In this paper, we introduce the task of language-queried audio source separation (LASS), which aims to separate a target source from an audio mixture based on a natural language query of the target source (e.g., “a man tells a joke followed by people laughing”). A unique challenge in LASS is the complexity of natural language descriptions and their relation to the audio sources. To address this issue, we propose LASS-Net, an end-to-end neural network trained to jointly process acoustic and linguistic information and to separate from an audio mixture the target source that is consistent with the language query. We evaluate the performance of our proposed system on a dataset created from the AudioCaps dataset. Experimental results show that LASS-Net achieves considerable improvements over baseline methods. Furthermore, we observe that LASS-Net achieves promising generalization results when using diverse human-annotated descriptions as queries, indicating its potential use in real-world scenarios. The separated audio samples and source code are available at https://liuxubo717.github.io/LASS-demopage.
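The end-to-end LASS-Net architecture is not reproduced here, but the core idea of conditioning separation on a language query can be sketched as follows. The FiLM-style modulation, all weights, and shapes below are hypothetical stand-ins for illustration only, not the paper's actual network:

```python
import numpy as np

def text_conditioned_mask(mixture_spec, text_emb, w_scale, w_shift):
    """Toy language-conditioned separation: a text-query embedding
    modulates the mixture spectrogram (FiLM-style scale and shift),
    and a sigmoid turns the result into a time-frequency mask that is
    applied back to the mixture magnitude."""
    gamma = w_scale @ text_emb                 # per-frequency scale from the query
    beta = w_shift @ text_emb                  # per-frequency shift from the query
    logits = mixture_spec * gamma[:, None] + beta[:, None]
    mask = 1.0 / (1.0 + np.exp(-logits))       # values in (0, 1)
    return mask * mixture_spec                 # estimated target spectrogram

rng = np.random.default_rng(0)
spec = np.abs(rng.standard_normal((64, 10)))   # |STFT|: 64 freq bins x 10 frames
query = rng.standard_normal(16)                # embedding of the language query
sep = text_conditioned_mask(spec, query,
                            rng.standard_normal((64, 16)),
                            rng.standard_normal((64, 16)))
print(sep.shape == spec.shape)
```

Because the mask lies in (0, 1), the estimated source magnitude never exceeds the mixture magnitude in any time-frequency bin.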

Y Cao, Zhili Sun, Ning Wang, Maryam Riaz, Haitham Cruickshank, X Liu (2015) Geographic-Based Spray-and-Relay (GSaR): An efficient routing scheme for DTNs, In: IEEE Transactions on Vehicular Technology 64(4), pp. 1548-1564, IEEE

In this paper, we design and evaluate the proposed geographic-based spray-and-relay (GSaR) routing scheme for delay/disruption-tolerant networks. To the best of our knowledge, GSaR is the first spray-based geographic routing scheme that uses historical geographic information to make routing decisions. Here, the term spray means that only a limited number of message copies are allowed to be replicated in the network. By estimating the movement range of the destination from historical geographic information, GSaR expedites the spraying of messages toward this range, while preventing them from being sprayed away from it and postponing those sprayed out of it. Together, these mechanisms spray the limited number of message copies toward the destination's range quickly and efficiently, and distribute them effectively within it, reducing the delivery delay and increasing the delivery ratio. Furthermore, GSaR exploits delegation forwarding to enhance the reliability of routing decisions and to handle the local maximum problem, both of which are considered challenges for applying geographic routing in sparse networks. We evaluate GSaR under three city scenarios abstracted from the real world, with other routing schemes for comparison. Results show that GSaR delivers messages reliably before their expiration deadlines while achieving a low routing overhead ratio. Further observation indicates that GSaR also achieves low and fair energy consumption over the nodes in the network.

Xiaoran Liu, Xiaoying Zhang, Lei Zhang, Pei Xiao, Jibo Wei, Haijun Zhang, Victor C. M. Leung (2020) PAPR Reduction Using Iterative Clipping/Filtering and ADMM Approaches for OFDM-Based Mixed-Numerology Systems, In: IEEE Transactions on Wireless Communications

Mixed-numerology transmission has been proposed to support a variety of communication scenarios with diverse requirements. However, as orthogonal frequency division multiplexing (OFDM) remains the basic waveform, the peak-to-average power ratio (PAPR) problem is still cumbersome. In this paper, based on iterative clipping and filtering (ICF) and optimization methods, we investigate PAPR reduction in mixed-numerology systems. We first show that a direct extension of classical ICF causes inter-numerology interference (INI) to accumulate over its repeated iterations. By exploiting the clipping noise rather than the clipped signal, we then propose the noise-shaped ICF (NS-ICF) method, which does not increase the INI. Next, we address the in-band distortion minimization problem subject to a PAPR constraint. After reformulation, the resulting model is separable in both the objective function and the constraints, and well suited to the alternating direction method of multipliers (ADMM). We then develop ADMM-based algorithms that split the original problem into several subproblems, each of which can be solved easily in closed form. Furthermore, the proposed PAPR reduction methods are also shown to be effective when combined with filtering and windowing techniques.
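The classical ICF loop that the paper takes as its starting point can be sketched for a single-numerology OFDM symbol. The clipping threshold, band layout and iteration count below are arbitrary illustrative choices, and the NS-ICF and ADMM refinements are not shown:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def icf(x, band, clip_db=4.0, n_iter=8):
    """Classical iterative clipping and filtering (ICF) on one OFDM symbol.

    Each iteration clips the time-domain envelope at clip_db above the
    RMS level, then filters in the frequency domain by zeroing the
    out-of-band bins, which removes the clipping noise that fell there.
    """
    n = len(x)
    thr = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (clip_db / 20)
    y = x.astype(complex)
    for _ in range(n_iter):
        env = np.abs(y)
        over = env > thr
        y[over] *= thr / env[over]          # clip the envelope, keep the phase
        spec = np.fft.fft(y)
        spec[band // 2: n - band // 2] = 0  # keep only the in-band bins
        y = np.fft.ifft(spec)
    return y

# One OFDM symbol with 128 in-band subcarriers out of a 256-point IFFT.
rng = np.random.default_rng(0)
n, band = 256, 128
data = np.zeros(n, dtype=complex)
data[:band // 2] = rng.standard_normal(band // 2) + 1j * rng.standard_normal(band // 2)
data[-(band // 2):] = rng.standard_normal(band // 2) + 1j * rng.standard_normal(band // 2)
sym = np.fft.ifft(data)
print(papr_db(icf(sym, band)) < papr_db(sym))   # ICF lowers the PAPR
```

The filtering step causes some peak regrowth, which is why the clip-then-filter cycle is iterated; the repeated execution is exactly where INI accumulates in the mixed-numerology case the paper addresses.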

X Liu, BG Evans, K Moessner (2013) Comparison of reliability, delay and complexity for standalone cognitive radio spectrum sensing schemes, In: IET Communications 7(9), pp. 799-807, Institution of Engineering and Technology (IET)
X Liu, P Barnaghi, B Cheng, L Wan, Y Yang (2015) OMI-DL: An Ontology Matching Framework, In: IEEE Transactions on Services Computing PP(99), pp. 1-1
X Liu, BG Evans, K Moessner (2015) Energy-Efficient Sensor Scheduling Algorithm in Cognitive Radio Networks Employing Heterogeneous Sensors, In: IEEE Transactions on Vehicular Technology 64(3), pp. 1243-1249, Institute of Electrical and Electronics Engineers (IEEE)

In this paper, we consider maximizing throughput in a dense network of collaborative cognitive radio (CR) sensors with a limited energy supply. In our case, the sensors are of mixed varieties (heterogeneous) and are battery-powered. We propose an ant-colony-based energy-efficient sensor scheduling algorithm (ACO-ESSP) to optimally schedule the activities of the sensors, providing the required sensing performance while increasing the overall secondary system throughput. The proposed algorithm is an improved version of the conventional ant colony optimization (ACO) algorithm, specifically tailored to the formulated sensor scheduling problem. We also use a more realistic sensor energy consumption model and consider CR networks employing heterogeneous sensors (CRNHSs). Simulations demonstrate that our approach improves system throughput efficiently and effectively compared with other algorithms.
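The ACO machinery underlying ACO-ESSP can be illustrated with the standard transition and pheromone-update rules. The per-sensor "quality" values, evaporation rate and scoring below are invented for the example and are not the paper's formulation:

```python
import numpy as np

def aco_select(pheromone, quality, alpha=1.0, beta=2.0, rng=None):
    """Standard ACO transition rule: pick a sensor with probability
    proportional to pheromone^alpha * quality^beta."""
    rng = rng or np.random.default_rng()
    w = pheromone ** alpha * quality ** beta
    return rng.choice(len(w), p=w / w.sum())

def aco_deposit(pheromone, chosen, score, rho=0.1):
    """Evaporate all trails by rho, then reinforce the chosen sensor's
    trail in proportion to the resulting schedule's score."""
    pheromone = (1.0 - rho) * pheromone
    pheromone[chosen] += score
    return pheromone

rng = np.random.default_rng(0)
quality = np.array([0.2, 0.9, 0.5])   # hypothetical per-sensor utility
pheromone = np.ones(3)
for _ in range(200):
    s = aco_select(pheromone, quality, rng=rng)
    pheromone = aco_deposit(pheromone, s, score=quality[s])
print(pheromone.argmax() == 1)        # trail converges on the best sensor
```

The positive feedback between selection probability and deposited pheromone is what steers the colony toward high-scoring schedules; the paper's tailoring lies in the problem-specific construction and scoring, not shown here.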

C Han, M Dianati, R Tafazolli, X Liu, X Shen (2012) A Novel Distributed Asynchronous Multichannel MAC Scheme for Large-Scale Vehicular Ad Hoc Networks, In: IEEE Transactions on Vehicular Technology 61(7), pp. 3125-3138
Y Wu, Y Jin, X Liu (2015) A directed search strategy for evolutionary dynamic multiobjective optimization, In: Soft Computing 19(11), pp. 3221-3235, Springer
X. Liu, Y. L. Guan, S. N. Koh, D. Teo, Zilong Liu (2018) On the Performance of Single-Channel Source Separation of Two Co-Frequency PSK Signals with Carrier Frequency Offsets, In: Proceedings of IEEE MILCOM 2018, Institute of Electrical and Electronics Engineers (IEEE)

Continuously learning new classes without catastrophic forgetting is a challenging problem for on-device environmental sound classification, given the restrictions on computation resources (e.g., model size, running memory). To address this issue, we propose a simple and efficient continual learning method. Our method selects the historical data for training by measuring the per-sample classification uncertainty. Specifically, we measure the uncertainty by observing how the classification probability of data fluctuates against parallel perturbations added to the classifier embedding. In this way, the computation cost can be significantly reduced compared with adding perturbations to the raw data. Experimental results on the DCASE 2019 Task 1 and ESC-50 datasets show that our proposed method outperforms baseline continual learning methods in classification accuracy and computational efficiency, indicating that our method can efficiently and incrementally learn new classes without catastrophic forgetting for on-device environmental sound classification.
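The per-sample uncertainty measure can be sketched as follows: perturb the classifier embedding several times and record how much the predicted class probability fluctuates. The linear classifier, noise scale and perturbation count here are hypothetical, not the paper's exact configuration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sample_uncertainty(emb, w, n_perturb=16, sigma=0.1, seed=0):
    """Classification uncertainty of one sample, measured as the variance
    of its predicted-class probability under Gaussian perturbations of
    the classifier embedding. Perturbing the compact embedding is much
    cheaper than perturbing the raw audio, which is the efficiency
    argument made in the abstract."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_perturb, emb.size))
    probs = softmax((emb + noise) @ w)           # (n_perturb, n_classes)
    pred = probs.mean(axis=0).argmax()           # predicted class
    return probs[:, pred].var()

w = np.eye(4)                                # toy 4-class linear classifier
confident = np.array([8.0, 0.0, 0.0, 0.0])   # one logit dominates
ambiguous = np.array([0.1, 0.05, 0.0, 0.0])  # logits nearly tied
print(sample_uncertainty(confident, w) < sample_uncertainty(ambiguous, w))
```

Samples whose predictions fluctuate most under perturbation are the uncertain ones, and these are the candidates worth keeping in the replay buffer.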


Automated audio captioning aims to use natural language to describe the content of audio data. This paper presents an audio captioning system with an encoder-decoder architecture, where the decoder predicts words based on audio features extracted by the encoder. To improve the proposed system, transfer learning from either an upstream audio-related task or a large in-domain dataset is introduced to mitigate the problem induced by data scarcity. Moreover, evaluation metrics are incorporated into the optimization of the model with reinforcement learning, which helps address the problem of "exposure bias" induced by the "teacher forcing" training strategy and the mismatch between the evaluation metrics and the loss function. The resulting system was ranked 3rd in DCASE 2021 Task 6. Ablation studies are carried out to investigate how much each component of the proposed system contributes to the final performance. The results show that the proposed techniques significantly improve the scores of the evaluation metrics; however, reinforcement learning may adversely affect the quality of the generated captions.


Audio captioning aims to automatically generate a natural language description of an audio clip. Most captioning models follow an encoder-decoder architecture, where the decoder predicts words based on the audio features extracted by the encoder. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are often used as the audio encoder. However, CNNs can be limited in modelling temporal relationships among the time frames in an audio signal, while RNNs can be limited in modelling the long-range dependencies among the time frames. In this paper, we propose an Audio Captioning Transformer (ACT), which is a full Transformer network based on an encoder-decoder architecture and is totally convolution-free. The proposed method has a better ability to model the global information within an audio signal as well as capture temporal relationships between audio events. We evaluate our model on AudioCaps, which is the largest audio captioning dataset publicly available. Our model shows competitive performance compared to other state-of-the-art approaches.


Audio captioning aims at using language to describe the content of an audio clip. Existing audio captioning systems are generally based on an encoder-decoder architecture, in which acoustic information is extracted by an audio encoder and a language decoder is then used to generate the captions. Training an audio captioning system often encounters the problem of data scarcity. Transferring knowledge from pre-trained audio models such as Pre-trained Audio Neural Networks (PANNs) has recently emerged as a useful method to mitigate this issue. However, less attention has been paid to exploiting pre-trained language models for the decoder, compared with the encoder. BERT is a pre-trained language model that has been extensively used in natural language processing tasks. Nevertheless, the potential of using BERT as the language decoder for audio captioning has not been investigated. In this study, we demonstrate the efficacy of the pre-trained BERT model for audio captioning. Specifically, we apply PANNs as the encoder and initialize the decoder from publicly available pre-trained BERT models. We conduct an empirical study on the use of these BERT models for the decoder in the audio captioning model. Our models achieve competitive results with existing audio captioning methods on the AudioCaps dataset.

Jianyuan Sun, Xubo Liu, Xinhao Mei, Jinzheng Zhao, Mark D. Plumbley, Volkan Kılıc, Wenwu Wang (2022) Deep Neural Decision Forest for Acoustic Scene Classification

Acoustic scene classification (ASC) aims to classify an audio clip based on the characteristics of the recording environment. In this regard, deep-learning-based approaches have emerged as a useful tool for ASC problems. Conventional approaches to improving classification accuracy include integrating auxiliary methods such as attention mechanisms, pre-trained models and ensembles of multiple sub-networks. However, due to the complexity of audio clips captured in different environments, it is difficult for existing deep learning models using only a single classifier to distinguish their categories without any auxiliary methods. In this paper, we propose a novel approach for ASC using a deep neural decision forest (DNDF). DNDF combines a fixed number of convolutional layers and a decision forest as the final classifier. The decision forest consists of a fixed number of decision tree classifiers, which have been shown to offer better classification performance than a single classifier on some datasets. In particular, the decision forest differs substantially from traditional random forests, as it is stochastic, differentiable, and capable of using back-propagation to update and learn feature representations in a neural network. Experimental results on the DCASE2019 and ESC-50 datasets demonstrate that our proposed DNDF method improves ASC performance in terms of classification accuracy and shows competitive performance compared with state-of-the-art baselines.
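The stochastic, differentiable tree at the heart of a DNDF can be sketched at depth 2: sigmoid decision nodes route each sample softly, and the prediction mixes the leaf class distributions by routing probability. All shapes and parameters below are toy values for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_tree_predict(x, node_w, leaf_pi):
    """One stochastic, differentiable decision tree of depth 2.

    Three decision nodes (node_w: shape (3, d)) route the sample softly:
    sigmoid(w . x) is the probability of going left. Each of the four
    leaves holds a class distribution (leaf_pi: shape (4, n_classes)),
    and the prediction mixes the leaves by their routing probabilities,
    so the whole tree can be trained by back-propagation.
    """
    d = sigmoid(node_w @ x)                # d[0]: root; d[1], d[2]: children
    mu = np.array([
        d[0] * d[1],                       # leaf 0: left, then left
        d[0] * (1 - d[1]),                 # leaf 1: left, then right
        (1 - d[0]) * d[2],                 # leaf 2: right, then left
        (1 - d[0]) * (1 - d[2]),           # leaf 3: right, then right
    ])
    return mu @ leaf_pi                    # mixture of leaf distributions

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
node_w = rng.standard_normal((3, 5))
leaf_pi = rng.dirichlet(np.ones(3), size=4)    # 4 leaves over 3 classes
p = soft_tree_predict(x, node_w, leaf_pi)
print(np.isclose(p.sum(), 1.0))    # routing probs mix valid distributions
```

Because the routing probabilities sum to one over the leaves, the output is itself a valid class distribution, and every operation is differentiable, unlike the hard splits of a traditional random forest.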

Jinzheng Zhao, Peipei Wu, Shidrokh Goudarzi, Xubo Liu, Jianyuan Sun, Yong Xu, Wenwu Wang (2022) Visually Assisted Self-supervised Audio Speaker Localization and Tracking

Training a robust tracker of objects (such as vehicles and people) using audio and visual information often needs a large amount of labelled data, which is difficult to obtain, as manual annotation is expensive and time-consuming. The natural synchronization of the audio and visual modalities enables the object tracker to be trained in a self-supervised manner. In this work, we propose to localize an audio source (i.e., a speaker) using a teacher-student paradigm, where the visual network teaches the audio network, by knowledge distillation, to localize speakers. The introduction of multi-task learning, by training the audio network to perform source localization and semantic segmentation jointly, further improves the model performance. Experimental results show that the audio localization network can learn from visual information and achieve competitive tracking performance compared with baseline methods based on audio-only measurements. The proposed method can provide more reliable measurements for tracking than traditional sound source localization methods, and the generated audio features aid in visual tracking.
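The visual-teaches-audio step can be illustrated with the standard knowledge-distillation objective: a temperature-softened KL divergence between teacher and student outputs. The paper's actual loss and localization targets may differ, and the logits below are made up:

```python
import numpy as np

def softened(z, t):
    """Temperature-softened softmax over a logit vector."""
    e = np.exp(z / t - (z / t).max())
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, t=2.0):
    """Knowledge-distillation objective: KL divergence between the
    softened teacher and student predictions. Here the teacher stands
    for the visual localization network and the student for the audio
    network, matching the teacher-student paradigm in the abstract."""
    p = softened(teacher_logits, t)
    q = softened(student_logits, t)
    return float(np.sum(p * np.log(p / q)))

teacher = np.array([2.0, 0.5, -1.0])        # made-up teacher logits
perfect = distill_loss(teacher, teacher)    # student matches the teacher
naive = distill_loss(np.zeros(3), teacher)  # uninformed student
print(perfect < naive)
```

Minimizing this loss pulls the audio network's output distribution toward the visual network's, without requiring any manual position labels.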

Xinhao Mei, Xubo Liu, Mark D. Plumbley, Wenwu Wang (2022) Automated Audio Captioning: An Overview of Recent Progress and New Challenges, In: EURASIP Journal on Audio, Speech, and Music Processing 2022, article 26 (Recent advances in computational sound scene analysis), Springer Open

Automated audio captioning is a cross-modal translation task that aims to generate natural language descriptions for given audio clips. This task has received increasing attention with the release of freely available datasets in recent years. The problem has been addressed predominantly with deep learning techniques. Numerous approaches have been proposed, such as investigating different neural network architectures, exploiting auxiliary information such as keywords or sentence information to guide caption generation, and employing different training strategies, which have greatly facilitated the development of this field. In this paper, we present a comprehensive review of the published contributions in automated audio captioning, from a variety of existing approaches to evaluation metrics and datasets. We also discuss open challenges and envisage possible future research directions.

Ke Wang, Xubo Liu, Chien-Ming Chen, Saru Kumari, Mohammad Shojafar, Mohammed Alamgir Hossain (2020) Voice-Transfer Attacking on Industrial Voice Control Systems in 5G-Aided IIoT Domain, In: IEEE Transactions on Industrial Informatics, pp. 1-1, IEEE

At present, specific voice control has gradually become an important means of control for 5G-IoT-aided industrial control systems. However, the security of specific voice control systems needs to be improved, because voice cloning technology may lead to industrial accidents and other potential security risks. In this paper, we propose a transductive voice transfer learning method that learns the predictive function in the source domain and fine-tunes it adaptively in the target domain. The target and source learning tasks both synthesize speech signals from the given audio, while the data sets of the two domains differ. By adding a different penalty value to each instance and minimizing the expected risk, an optimal, precise model can be learned. Experimental results show that our method can effectively synthesize the speech of the target speaker from small samples.

Xiaolan Liu, Jiadong Yu, Jian Wang, Yue Gao (2020) Resource Allocation With Edge Computing in IoT Networks via Machine Learning, In: IEEE Internet of Things Journal 7(4), pp. 3415-3426, IEEE

In this article, we investigate resource allocation with edge computing in Internet-of-Things (IoT) networks via machine learning approaches. Edge computing is playing a promising role in IoT networks by providing computing capabilities close to users. However, the massive number of users in IoT networks requires sufficient spectrum resources to transmit their computation tasks to an edge server, while IoT users have recently gained more powerful computation capabilities, making it possible for them to execute some tasks locally. The design of computation task offloading policies for such IoT edge computing systems therefore remains challenging. In this article, centralized user clustering is explored to group the IoT users into different clusters according to their priorities. The cluster with the highest priority offloads its computation tasks for execution at the edge server, while the lowest-priority cluster executes its computation tasks locally. For the other clusters, the design of distributed task offloading policies for the IoT users is modeled as a Markov decision process, where each IoT user is considered an agent that makes a series of offloading decisions to minimize the system cost based on the environment dynamics. To deal with the curse of high dimensionality, we use a deep Q-network to learn the optimal policy, in which a deep neural network approximates the Q-function in Q-learning. Simulations show that users are grouped into an optimal number of clusters. Moreover, our proposed computation offloading algorithm outperforms the other baseline schemes under the same system costs.
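The learning rule behind the deep Q-network can be illustrated in tabular form, where the table plays the role of the neural approximator. The toy offloading states, actions and rewards below are invented for the example and do not reflect the paper's cost model:

```python
import numpy as np

def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning update on a table; a deep Q-network replaces the
    table with a neural network trained towards the same target."""
    target = r + gamma * q[s_next].max()
    q[s, a] += alpha * (target - q[s, a])
    return q

# Toy offloading problem: state = queue level (0-2), action 0 = execute
# locally, action 1 = offload; reward offloading only when the queue is full.
q = np.zeros((3, 2))
rng = np.random.default_rng(0)
for _ in range(500):
    s, a = rng.integers(3), rng.integers(2)
    r = 1.0 if (s == 2 and a == 1) else 0.0
    q = q_update(q, s, a, r, rng.integers(3))
print(q[2, 1] > q[2, 0])    # learned to offload when the queue is full
```

Once the state contains channel conditions, task sizes and queue lengths, the table becomes intractably large, which is the "curse of high dimensionality" the DQN's neural approximation addresses.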

Jiadong Yu, Xiaolan Liu, Yue Gao, Xuemin Shen (2020) 3D Channel Tracking for UAV-Satellite Communications in Space-Air-Ground Integrated Networks, In: IEEE Journal on Selected Areas in Communications 38(12), pp. 2810-2823, IEEE

The space-air-ground integrated network (SAGIN) aims to provide seamless wide-area connections, high throughput and strong resilience for 5G and beyond communications. As a crucial link segment of the SAGIN, unmanned aerial vehicle (UAV)-satellite communication has drawn much attention. However, it is a key challenge to track dynamic channel information due to low earth orbit (LEO) satellite orbiting and three-dimensional (3D) UAV trajectories. In this paper, we explore 3D channel tracking for a Ka-band UAV-satellite communication system. We first propose a statistical dynamic channel model, called the 3D two-dimensional Markov model (3D-2D-MM), for the UAV-satellite communication system, by exploiting the probabilistic relationship between the hidden value vector and the joint hidden support vector. Specifically, for the joint hidden support vector, we consider a more realistic 3D support vector in both the azimuth and elevation directions. Moreover, the spatial sparsity structure and the time-varying probabilistic relationship between degree patterns, named the spatial and temporal correlation respectively, are studied for each direction. Furthermore, we derive a novel 3D dynamic turbo approximate message passing (3D-DTAMP) algorithm to recursively track the dynamic channel with the 3D-2D-MM priors. Numerical results show that our proposed algorithm achieves superior channel tracking performance to state-of-the-art algorithms, with lower pilot overhead and comparable complexity.

Xinhao Mei, Xubo Liu, Jianyuan Sun, Mark D. Plumbley, Wenwu Wang (2022)On Metric Learning for Audio-Text Cross-Modal Retrieval

Audio-text retrieval aims at retrieving a target audio clip or caption from a pool of candidates given a query in the other modality. Solving such a cross-modal retrieval task is challenging because it requires not only learning robust feature representations for both modalities, but also capturing the fine-grained alignment between them. Existing cross-modal retrieval models are mostly optimized by metric learning objectives, which attempt to map data to an embedding space where similar data are close together and dissimilar data are far apart. Unlike other cross-modal retrieval tasks such as image-text and video-text retrieval, audio-text retrieval is still a largely unexplored task. In this work, we aim to study the impact of different metric learning objectives on the audio-text retrieval task. We present an extensive evaluation of popular metric learning objectives on the AudioCaps and Clotho datasets. We demonstrate that the NT-Xent loss, adapted from self-supervised learning, shows stable performance across different datasets and training settings, and outperforms the popular triplet-based losses. Our code is available at https://github.com/XinhaoMei/audio-text_retrieval.
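A minimal NumPy version of the NT-Xent objective evaluated in the paper might look as follows; the batch construction and temperature are illustrative, and the released code linked above is the authoritative implementation:

```python
import numpy as np

def nt_xent(audio_emb, text_emb, tau=0.07):
    """Symmetric NT-Xent loss over a batch of paired embeddings: row i
    of each matrix is one audio-caption pair; matched pairs are
    positives, all other in-batch pairs are negatives. Embeddings are
    L2-normalized, so similarity is cosine, scaled by temperature tau."""
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sim = a @ t.T / tau                              # (n, n) similarities
    pos = sim.diagonal()
    loss_a2t = -(pos - np.log(np.exp(sim).sum(axis=1)))   # audio -> text
    loss_t2a = -(pos - np.log(np.exp(sim).sum(axis=0)))   # text -> audio
    return float((loss_a2t + loss_t2a).mean() / 2)

rng = np.random.default_rng(0)
text = rng.standard_normal((8, 16))
aligned = text + 0.01 * rng.standard_normal((8, 16))  # near-perfect pairing
unrelated = rng.standard_normal((8, 16))              # random pairing
print(nt_xent(aligned, text) < nt_xent(unrelated, text))
```

A well-aligned embedding space drives the loss toward zero, while randomly paired embeddings score near log of the batch size, which is what makes the loss a usable retrieval training signal.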

L Lever, Y Hu, M Myronov, X Liu, N Owens, FY Gardes, IP Marko, SJ Sweeney, Z Ikonić, DR Leadley, GT Reed, RW Kelsall (2011) Strain engineering of the electroabsorption response in Ge/SiGe multiple quantum well heterostructures, In: 8th IEEE International Conference on Group IV Photonics, pp. 107-108

Many fibre-optic telecommunications systems exploit the spectral `window' at 1310 nm, which corresponds to zero dispersion in standard single-mode fibres (SMFs). In particular, several passive optical network (PON) architectures use 1310 nm for upstream signals, and so compact, low-cost and low-power modulators operating at 1310 nm that can be integrated into Si electronic-photonic integrated circuits would be extremely desirable for future fibre-to-the-home (FTTH) applications.

P Barnaghi, X Liu, K Moessner, J Liao (2012) Using Concept and Structure Similarities for Ontology Integration, In: P Shvaiko, J Euzenat, F Giunchiglia, H Stuckenschmidt, M Mao, I Cruz (eds.), CEUR Workshop Proceedings: Proceedings of the 5th International Workshop on Ontology Matching, 689

We propose a method to align different ontologies in similar domains and then define correspondence between concepts in two different ontologies using the SKOS model.
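A toy version of the lexical side of such concept alignment can be sketched with plain string similarity; the actual method also exploits structural similarity and expresses correspondences in SKOS, and the labels and threshold below are invented:

```python
import difflib

def match_concepts(src_labels, tgt_labels, threshold=0.8):
    """Toy lexical concept alignment: pair each source-ontology label
    with its most similar target label by difflib's SequenceMatcher
    ratio, keeping only pairs above a similarity threshold. A kept pair
    would then be expressed as, e.g., a skos:closeMatch correspondence."""
    def sim(a, b):
        return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()
    pairs = {}
    for s in src_labels:
        best = max(tgt_labels, key=lambda t: sim(s, t))
        if sim(s, best) >= threshold:
            pairs[s] = best
    return pairs

print(match_concepts(["Author", "Paper"], ["author", "article", "person"]))
```

Lexical similarity alone misses synonymous concepts with unrelated labels ("Paper" vs "article" above), which is precisely why structural similarity is combined with it.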


Audio captioning aims at generating natural language descriptions for audio clips automatically. Existing audio captioning models have shown promising improvement in recent years. However, these models are mostly trained via maximum likelihood estimation (MLE), which tends to make captions generic, simple and deterministic. As different people may describe an audio clip from different aspects using distinct words and grammars, we argue that an audio captioning system should have the ability to generate diverse captions for a fixed audio clip and across similar audio clips. To address this problem, we propose an adversarial training framework for audio captioning based on a conditional generative adversarial network (C-GAN), which aims at improving the naturalness and diversity of generated captions. Unlike processing data of continuous values in a classical GAN, a sentence is composed of discrete tokens and the discrete sampling process is non-differentiable. To address this issue, policy gradient, a reinforcement learning technique, is used to back-propagate the reward to the generator. The results show that our proposed model can generate more diverse captions, as compared to state-of-the-art methods.
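The policy-gradient workaround for the non-differentiable sampling step can be sketched for a single token. In the paper the reward would come from the C-GAN discriminator; the learning rate, vocabulary size and constant reward here are arbitrary:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def policy_gradient_step(logits, token, reward, lr=0.5):
    """One REINFORCE update for a single generated token. Sampling a
    discrete token is non-differentiable, so instead of back-propagating
    through the sample, the gradient of -reward * log p(token) is used:
    d/d logits = reward * (p - onehot(token))."""
    p = softmax(logits)
    grad = p.copy()
    grad[token] -= 1.0                   # p - onehot(token)
    return logits - lr * reward * grad

logits = np.zeros(4)                     # toy 4-word vocabulary
for _ in range(20):                      # keep rewarding token 2
    logits = policy_gradient_step(logits, token=2, reward=1.0)
print(softmax(logits).argmax() == 2)
```

Repeatedly rewarding a token raises its probability even though no gradient ever flows through the sampling operation itself, which is exactly how the discriminator's reward reaches the caption generator.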

Xubo Liu, Turab Iqbal, Jinzheng Zhao, Qiushi Huang, Mark D. Plumbley, Wenwu Wang (2021) Conditional Sound Generation Using Neural Discrete Time-Frequency Representation Learning, pp. 25-28

Deep generative models have recently achieved impressive performance in speech and music synthesis. However, compared to the generation of these domain-specific sounds, generating general sounds (such as sirens or gunshots) has received less attention, despite their wide applications. In previous work, the SampleRNN method was considered for sound generation in the time domain. However, SampleRNN is potentially limited in capturing long-range dependencies within sounds, as it only back-propagates through a limited number of samples. In this work, we propose a method for generating sounds via neural discrete time-frequency representation learning, conditioned on sound classes. This offers an advantage in efficiently modelling long-range dependencies and retaining local fine-grained structures within sound clips. We evaluate our approach on the UrbanSound8K dataset, compared to SampleRNN, with performance metrics measuring the quality and diversity of the generated sounds. Experimental results show that our method offers comparable performance in quality and significantly better performance in diversity.
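The discrete representation idea can be sketched as vector quantization of spectrogram frames against a learned codebook. The codebook size, dimensionality and frames below are invented; learning the codebook and the autoregressive prior over tokens are not shown:

```python
import numpy as np

def quantize(frames, codebook):
    """Vector quantization: replace each spectrogram frame by the index
    of its nearest codebook vector, turning a sound clip into a short
    sequence of discrete tokens over which long-range structure can be
    modelled far more efficiently than over raw samples."""
    # squared Euclidean distance between every frame and every code
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]            # token sequence and reconstruction

rng = np.random.default_rng(0)
codebook = rng.standard_normal((32, 8))  # 32 learned codes of dimension 8
# three frames lying near codes 3, 3 and 17, plus a little noise
frames = codebook[[3, 3, 17]] + 0.01 * rng.standard_normal((3, 8))
idx, recon = quantize(frames, codebook)
print(list(idx) == [3, 3, 17])
```

Because a whole clip collapses to a short token sequence, a class-conditioned autoregressive model over these tokens can attend across the entire clip, which is the long-range-dependency advantage over sample-level models such as SampleRNN.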


Automated audio captioning (AAC) is a cross-modal translation task that aims to use natural language to describe the content of an audio clip. As shown in the submissions received for Task 6 of the DCASE 2021 Challenge, this problem has received increasing interest in the community. Existing AAC systems are usually based on an encoder-decoder architecture, where the audio signal is encoded into a latent representation and aligned with its corresponding text description, then a decoder is used to generate the captions. However, training an AAC system often encounters the problem of data scarcity, which may lead to inaccurate representations and audio-text alignment. To address this problem, we propose a novel encoder-decoder framework called Contrastive Loss for Audio Captioning (CL4AC). In CL4AC, self-supervision signals derived from the original audio-text paired data are used to exploit the correspondences between audio and text by contrasting samples, which can improve the quality of the latent representation and the alignment between audio and text, even when trained with limited data. Experiments are performed on the Clotho dataset to show the effectiveness of our proposed approach.