In this paper, a novel unsupervised deep learning approach is proposed to tackle the multiuser frequency synchronization problem inherent in orthogonal frequency-division multiple-access (OFDMA) uplink communications. The key idea lies in the use of a feed-forward deep neural network (FF-DNN) for multiuser interference (MUI) cancellation, taking advantage of its strong classification capability. Basically, the proposed FF-DNN consists of two essential functional layers. One is the carrier-frequency-offset (CFO) classification layer, which is responsible for identifying each user's CFO range; the other is the MUI-cancellation layer, which is responsible for joint multiuser detection (MUD) and frequency synchronization. By such means, the proposed FF-DNN approach showcases remarkable MUI-cancellation performance without the need for multiuser CFO estimation. In addition, we exhibit an interesting phenomenon that occurs at the CFO-classification stage, where the CFO-classification performance improves exponentially as the number of users increases. This is called the multiuser diversity gain in the CFO-classification stage, and it is carefully studied in this paper.
In this paper, a novel spatially non-stationary channel model is proposed for link-level computer simulations of massive multiple-input multiple-output (mMIMO) with an extremely large aperture array (ELAA). The proposed channel model allows a mix of non-line-of-sight (NLoS) and LoS links between a user and the service antennas. The NLoS/LoS state of each link is characterized by a binary random variable, which obeys a correlated Bernoulli distribution. The correlation is described in the form of an exponentially decaying window. In addition, the proposed model incorporates shadowing effects, which are non-identical for the NLoS and LoS states. It is demonstrated, through computer emulation, that the proposed model can capture almost all spatially non-stationary fading behaviors of the ELAA-mMIMO channel. Moreover, it has a low implementational complexity. With the proposed channel model, Monte-Carlo simulations are carried out to evaluate the channel capacity of ELAA-mMIMO. It is shown that the ELAA-mMIMO channel capacity has considerably different stochastic characteristics from those of conventional mMIMO due to the presence of channel spatial non-stationarity.
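The correlated-Bernoulli NLoS/LoS states with an exponentially decaying correlation window can be sketched as follows. This is a minimal numpy sketch using a Gaussian-copula construction (a latent Gaussian field with exponentially decaying covariance is thresholded so each entry is marginally Bernoulli); this realisation, the function name, and all parameters are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np
from statistics import NormalDist

def los_state_vector(num_antennas, p_los, decay_window, seed=None):
    """Draw one NLoS/LoS state vector with correlated Bernoulli entries.

    A latent Gaussian field with covariance exp(-|i - j| / decay_window)
    is thresholded so that each entry is marginally Bernoulli(p_los).
    1 denotes LoS, 0 denotes NLoS (hypothetical convention).
    """
    rng = np.random.default_rng(seed)
    idx = np.arange(num_antennas)
    # Exponentially decaying correlation window between antennas i and j.
    cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / decay_window)
    chol = np.linalg.cholesky(cov + 1e-9 * np.eye(num_antennas))
    z = chol @ rng.standard_normal(num_antennas)
    # Thresholding a standard-normal marginal at inv_cdf(p_los)
    # yields a Bernoulli(p_los) marginal for every antenna.
    threshold = NormalDist().inv_cdf(p_los)
    return (z < threshold).astype(int)
```

Nearby antennas then tend to share the same NLoS/LoS state, while far-apart antennas become nearly independent, which is the spatial non-stationarity the model targets.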
The aim of this paper is to handle the multiuser frequency synchronization problem inherent in orthogonal frequency-division multiple-access (OFDMA) uplink communications, where the carrier frequency offset (CFO) for each user may be different and can hardly be compensated at the receiver side. Our major contribution lies in the development of a novel OFDM receiver that is resilient to unknown random CFOs thanks to the use of a CFO-compensator bank. Specifically, the whole CFO range is evenly divided into a set of sub-ranges, with each being supported by a dedicated CFO compensator. Given that the optimization of the CFO compensator is an NP-hard problem, a deep-learning approach is proposed to yield a good sub-optimal solution. It is shown that the proposed receiver is able to offer inter-carrier-interference-free performance for OFDMA systems operating over a wide range of SNRs.
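The CFO-compensator-bank idea can be sketched as follows: the CFO range is evenly partitioned, and each branch de-rotates the received block by the centre CFO of its sub-range. This is a minimal numpy sketch under assumed conventions (CFO measured in subcarrier spacings, one block of N samples); the branch selection and fusion performed by the learned receiver are not modelled.

```python
import numpy as np

def cfo_compensator_bank(rx, eps_max, num_branches):
    """Apply a bank of CFO compensators to one received block.

    The CFO range [-eps_max, eps_max] is evenly divided into
    num_branches sub-ranges; each branch de-rotates the signal by the
    centre CFO of its sub-range via exp(-j*2*pi*eps*n/N).
    Returns a (num_branches, N) array of candidate compensated blocks.
    """
    n = np.arange(rx.size)
    edges = np.linspace(-eps_max, eps_max, num_branches + 1)
    centres = (edges[:-1] + edges[1:]) / 2
    return np.stack([rx * np.exp(-2j * np.pi * e * n / rx.size)
                     for e in centres])
```

When the true CFO falls near a branch centre, that branch's output is (almost) free of the CFO-induced rotation, which is what keeps the residual inter-carrier interference small.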
In this paper, an orthogonal stochastic gradient descent (O-SGD) based learning approach is proposed to tackle the wireless channel over-training problem inherent in artificial neural network (ANN)-assisted MIMO signal detection. Our basic idea lies in the discovery and exploitation of the training-sample orthogonality between the current training epoch and past training epochs. Unlike conventional SGD, which updates the neural network based simply upon the current training samples, O-SGD discovers the correlation between the current training samples and historical training data, and then updates the neural network with only the uncorrelated components. The network update occurs only in the identified null subspaces. By such means, the neural network can understand and memorize the uncorrelated components between different wireless channels, and is thus more robust to wireless channel variations. This hypothesis is confirmed through our extensive computer simulations as well as a performance comparison with the conventional SGD approach.
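The null-subspace update at the heart of O-SGD can be sketched as a gradient projection: the components of the current gradient lying in the span of past training directions are removed before the step is taken. This is a simplified numpy sketch; the function name and the bookkeeping of historical samples are assumptions, not the paper's exact procedure.

```python
import numpy as np

def osgd_step(w, grad, past_dirs, lr=0.01):
    """One O-SGD-style update.

    past_dirs: (k, d) matrix whose rows span the subspace already
    covered by historical training data. The gradient is projected
    onto the orthogonal complement (null subspace) of that span,
    so the update cannot overwrite what was learned before.
    """
    if past_dirs.size:
        q, _ = np.linalg.qr(past_dirs.T)   # orthonormal basis of the span
        grad = grad - q @ (q.T @ grad)     # keep only the null-space part
    return w - lr * grad
```

Directions correlated with past data thus leave the weights untouched, which is the mechanism claimed to mitigate over-training on any single channel realization.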
Deep learning is driving a radical paradigm shift in wireless communications, all the way from the application layer down to the physical layer. Despite this, there is an ongoing debate as to what additional value artificial intelligence (or machine learning) could bring, particularly to physical-layer design, and what penalties there may be. These questions motivate a fundamental rethinking of wireless modem design in the artificial intelligence era. Through several physical-layer case studies, we argue for a significant role that machine learning could play, for instance in parallel error-control coding and decoding, channel equalization, interference cancellation, as well as multiuser and multiantenna detection. In addition, we discuss the fundamental bottlenecks of machine learning as well as their potential solutions in this paper.
In this paper, a novel end-to-end learning approach, namely JTRD-Net, is proposed for uplink multiuser single-input multiple-output (MU-SIMO) joint transmitter and non-coherent receiver design (JTRD) in fading channels. The basic idea lies in the use of artificial neural networks (ANNs) to replace traditional communication modules at both the transmitter and receiver sides. More specifically, the transmitter side is modeled as a group of parallel linear layers, which are responsible for multiuser waveform design, and the non-coherent receiver is formed by a deep feed-forward neural network (DFNN) so as to provide multiuser detection (MUD) capabilities. The entire JTRD-Net can be trained end to end to adapt to channel statistics through deep learning. After training, JTRD-Net can work efficiently in a non-coherent manner without requiring any level of channel state information (CSI). In addition to the network architecture, a novel weight-initialization method, namely symmetrical-interval initialization, is proposed for JTRD-Net. It is shown that the symmetrical-interval initialization outperforms the conventional method (e.g., Xavier initialization) in terms of well-balanced convergence rates among users. Simulation results show that the proposed JTRD-Net approach offers significant advantages in terms of reliability and scalability over baseline schemes on both i.i.d. complex Gaussian channels and spatially-correlated channels.
In this paper, unsupervised deep learning solutions for multiuser single-input multiple-output (MU-SIMO) coherent detection are extensively investigated. According to the way the channel state information at the receiver side (CSIR) is utilized, deep learning solutions are divided into two groups. One group is called equalization-and-learning, which utilizes the CSIR for channel equalization and then employs deep learning for multiuser detection (MUD). The other is called direct learning, which directly feeds the CSIR, together with the received signal, into deep neural networks (DNNs) to conduct the MUD. It is found that the direct learning solutions outperform the equalization-and-learning solutions due to their better exploitation of the sequence detection gain. On the other hand, the direct learning solutions are not scalable to the size of SIMO networks, as current DNN architectures cannot efficiently handle strong co-channel interference. Motivated by this observation, we propose a novel direct learning approach, which combines the merits of feed-forward DNNs and parallel interference cancellation. It is shown that the proposed approach trades complexity for learning scalability, and the complexity can be managed thanks to the parallel network architecture.
Quantization is the defining characteristic of analogue-to-digital converters (ADCs) in massive MIMO systems. The design of the quantization function, i.e., of the quantization thresholds, is found to hinge on the quantization step, a factor that adapts to changes in the transmit power and the noise variance. Since the objective of utilizing low-resolution ADCs is to reduce the cost of massive MIMO, we investigate whether an adaptive-threshold quantization function is actually necessary. It is found that when maximum-likelihood (ML) detection is employed, fixing the quantization thresholds of low-resolution ADCs does not cause significant performance loss. Moreover, such a fixed-threshold quantization function does not require any information about the signal power, which can reduce the hardware cost of ADCs. Simulations are carried out in this paper to compare fixed-threshold and adaptive-threshold quantization with respect to various factors.
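The role of the quantization step can be illustrated with a uniform mid-rise quantizer. This is a generic numpy sketch, not the paper's specific ADC model: with a fixed step the thresholds do not track the input power (the fixed-threshold scheme), whereas an adaptive scheme would recompute the step from the measured signal power before quantizing.

```python
import numpy as np

def uniform_quantize(x, num_bits, step):
    """b-bit mid-rise uniform quantizer with a given step size.

    The decision thresholds are integer multiples of `step`; keeping
    `step` constant gives fixed-threshold quantization, while setting
    it from an estimate of the input power gives the adaptive scheme.
    Outputs are the mid-points of the 2**num_bits quantization cells.
    """
    levels = 2 ** num_bits
    idx = np.clip(np.floor(x / step) + levels // 2, 0, levels - 1)
    return (idx - levels // 2 + 0.5) * step
```

With 1-bit resolution the two schemes coincide up to output scaling (only the sign survives), which hints at why fixed thresholds cost little when a strong detector such as ML absorbs the scaling mismatch.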
This paper presents a parallel computing approach to reconstructing the original information bits from a non-recursive convolutional codeword in noise, with the goal of reducing the decoding latency without compromising the performance. This goal is achieved by cutting a received codeword into a number of sub-codewords (SCWs) and feeding them into a two-stage decoder. At the first stage, the SCWs are decoded in parallel using the Viterbi algorithm or, equivalently, the brute-force algorithm. A major challenge arises when determining the initial state of the trellis diagram for each SCW, which is uncertain for all but the first one; this results in multiple decoding outcomes for every SCW. To eliminate, or more precisely exploit, the uncertainty, a Euclidean-distance-minimization algorithm is employed to merge neighboring SCWs; this is called the merging stage, which can also run in parallel. Our work reveals that the proposed two-stage decoder is optimal and has a latency that grows logarithmically, instead of linearly as for the Viterbi algorithm, with respect to the codeword length. Moreover, it is shown that the decoding latency can be further reduced by employing artificial neural networks for the SCW decoding. Computer simulations are conducted for two typical convolutional codes, and the results confirm our theoretical analysis.
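The merging stage can be sketched as follows. Suppose the first stage leaves each SCW with a candidate table keyed by its (entry state, exit state) pair, each entry holding a Euclidean distance and decoded bits; merging two neighbors chains candidates whose boundary states agree and keeps the minimum total distance. This is a simplified Python sketch of that idea under assumed data structures (the trellis/Viterbi front end is omitted), not the paper's exact implementation.

```python
def merge_scws(left, right):
    """Merge two neighbouring SCW candidate tables.

    left, right: dict mapping (entry_state, exit_state) to
    (euclidean_distance, decoded_bits). A left candidate chains with a
    right candidate when the left exit state equals the right entry
    state; among all chains sharing the same merged (entry, exit) pair,
    only the one with the minimum total distance is kept.
    """
    merged = {}
    for (a, s_left), (d1, b1) in left.items():
        for (s_right, c), (d2, b2) in right.items():
            if s_left != s_right:
                continue  # boundary states must agree
            key, cand = (a, c), (d1 + d2, b1 + b2)
            if key not in merged or cand[0] < merged[key][0]:
                merged[key] = cand
    return merged
```

Because merging pairs of neighbors halves the number of tables per round, the number of rounds, and hence the latency of this stage, grows logarithmically with the number of SCWs, matching the claim above.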
This work aims to handle the joint transmitter and noncoherent receiver optimization for multiuser single-input multiple-output (MU-SIMO) communications through unsupervised deep learning. It is shown that MU-SIMO can be modeled as a deep neural network with three essential layers, which include a partially-connected linear layer for joint multiuser waveform design at the transmitter side, and two nonlinear layers for the noncoherent signal detection. The proposed approach demonstrates remarkable MU-SIMO noncoherent communication performance in Rayleigh fading channels.
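The three-layer structure described above can be sketched as a forward pass: a partially-connected (block-structured) linear transmit layer, followed by two nonlinear receiver layers. This is a bare numpy sketch under stated assumptions; the layer widths, the ReLU nonlinearity, and the omission of the fading channel and noise between transmitter and receiver are all simplifications for illustration.

```python
import numpy as np

def mu_simo_forward(bits, tx_blocks, w1, w2):
    """Forward pass through the three essential layers.

    bits      : list of per-user bit vectors, each of length k
    tx_blocks : list of per-user (k, n) matrices -- the partially-
                connected linear layer (each user touches only its block)
    w1, w2    : weight matrices of the two nonlinear receiver layers
    Returns the detection logits produced by the receiver side.
    """
    # Transmitter: each user maps its own bits through its own block,
    # and the per-user waveforms are concatenated (partial connectivity).
    x = np.concatenate([b @ t for b, t in zip(bits, tx_blocks)])
    relu = lambda v: np.maximum(v, 0.0)
    h = relu(w1 @ x)      # first nonlinear layer of the noncoherent receiver
    return w2 @ h         # second layer yields the detection logits
```

Training all three layers jointly (here, the channel would sit between `x` and `w1`) is what lets the receiver detect noncoherently, i.e., without explicit CSI.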