Dr. Atta ul Quddus received the MSc degree in Satellite Communications and the PhD degree in Mobile Cellular Communications, both from the University of Surrey, UK, in 2000 and 2005, respectively. He is currently a Lecturer in Wireless Communications at the Institute for Communication Systems (ICS), Department of Electrical and Electronic Engineering, University of Surrey, UK. During his research career, he has led several successful UK national and international research projects (EU FP7 BeFEMTO, iJOIN) and is currently an active participant in the 5GIC research programme of ICS. Dr. Quddus is also the principal developer of a professional link-level PHY simulator that has been used in industry by both cellular network operators and chip manufacturers over the years. In 2004, he won the Centre for Communication Systems Research (CCSR) Research Excellence Prize, sponsored by Vodafone, for his research on adaptive filtering algorithms. His current research interests include Machine Type Communication, Full Duplex systems, Cloud Radio Access Networks, and Device-to-Device Communications.
Network coverage is an increasing concern for the Quality of Service (QoS) targets of new mobile technologies. Solutions designed to fulfill the requirements of existing fifth-generation (5G) and emerging sixth-generation (6G) scenarios rely on deploying a large number of network access points (APs), which tends to considerably degrade coverage and cell-edge performance due to added interference, and to increase the energy consumption of cellular systems. In this paper, we present new results on our recently proposed concept of cell-sweeping, which aims to minimize coverage dead-spots and improve cell-edge user performance. More specifically, the concept is explored further by analyzing the impact of different cell-sweeping configurations and evaluating its potential benefits for energy efficiency. By means of system-level computer simulations, it is shown that cell-sweeping provides energy savings of 11% and 26.5% for a similar average and cell-edge user throughput performance, respectively, when compared to a conventional static cell deployment in a typical urban macro cell scenario.
In this paper we present a novel distributed Inter-Cell Interference Coordination (ICIC) scheme for interference-limited heterogeneous cellular networks (HetNets). We reformulate our problem in such a way that it can be decomposed into a number of small sub-problems, which can be solved independently through an iterative subgradient method. The proposed dual decomposition method can also address problems with binary-valued variables. The proposed algorithm is compared with several reference schemes in terms of cell-edge and total cell throughput.
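As an illustration of the iterative subgradient step behind dual decomposition, the toy sketch below (the quadratic per-cell costs, shared budget `C`, and step size are all hypothetical, not the paper's ICIC formulation) shows independent sub-problems coordinated through a single dual price:

```python
# Hypothetical toy problem: minimise sum_i (x_i - a_i)^2 subject to sum_i x_i <= C.
# Dual decomposition: for a price lam, each sub-problem is solved independently,
# then lam is updated with a subgradient step along the constraint violation.

a = [0.9, 0.7, 0.8]   # per-cell targets (illustrative)
C = 1.5               # shared resource budget (illustrative)
lam, step = 0.0, 0.1

for _ in range(200):
    # Each sub-problem min_x (x - a_i)^2 + lam*x has closed form x = clip(a_i - lam/2, 0, 1)
    x = [min(max(ai - lam / 2, 0.0), 1.0) for ai in a]
    # Subgradient of the dual at lam is the constraint violation sum(x) - C
    lam = max(0.0, lam + step * (sum(x) - C))

print(round(sum(x), 2), round(lam, 2))  # coupled constraint met by independent solvers
```

The key property mirrored here is that each cell solves only its own small sub-problem; coordination happens solely through the shared dual variable, which is what makes a distributed implementation possible.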
The first generation of femtocells is evolving to the next generation, with many more capabilities in terms of better utilisation of radio resources and support of high data rates. It is thus logical to conjecture that, with these abilities and their inherent suitability for the home environment, femtocells stand out as an ideal enabler for the delivery of high-efficiency multimedia services. This paper presents a comprehensive vision towards this objective, extends the concept of femtocells from indoor to outdoor environments, and strongly couples femtocells to emergency and safety services. It also identifies the relevant issues and challenges that have to be overcome in realizing this vision.
The multiuser selection scheduling concept has recently been proposed in the literature to increase the multiuser diversity gain and overcome the significant feedback requirements of opportunistic scheduling schemes. The main idea is that reducing the feedback overhead saves per-user power that can instead be allocated to data transmission. In this work, we propose to integrate the principle of multiuser selection with the proportional fair scheduling scheme, aimed especially at power-limited, multi-device systems in non-identically distributed fading channels. For the performance analysis, we derive closed-form expressions for the outage probability and the average system rate of delay-sensitive and delay-tolerant systems, respectively, and compare them with full-feedback multiuser diversity schemes. The discrete rate region is presented analytically, where the maximum average system rate can be obtained by properly choosing the number of partial devices. We jointly optimize the number of partial devices and the per-device power saving in order to maximize the average system rate under the power requirement. Through our results, we finally demonstrate that the proposed scheme, which leverages the saved feedback power for data transmission, can outperform full-feedback multiuser diversity under non-identical Rayleigh fading of the devices' channels.
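A minimal sketch of combining multiuser selection with proportional fair (PF) scheduling (the user count, feedback subset size `K`, averaging factor `beta`, and exponential rate model are illustrative assumptions, not the paper's system model):

```python
import random
random.seed(1)

N, K, beta = 8, 3, 0.05     # users, feedback subset size, PF averaging factor (illustrative)
R = [1e-3] * N              # exponentially averaged throughputs

for _ in range(1000):
    r = [random.expovariate(1.0) for _ in range(N)]          # instantaneous rates (toy fading)
    # Multiuser selection: only the K users with the largest PF metric r_i/R_i feed back
    fed_back = sorted(range(N), key=lambda i: r[i] / R[i], reverse=True)[:K]
    served = max(fed_back, key=lambda i: r[i] / R[i])        # PF choice among the reduced set
    for i in range(N):
        R[i] = (1 - beta) * R[i] + beta * (r[i] if i == served else 0.0)

print(R)  # long-run throughput shares remain comparable across symmetric users
```

Restricting feedback to `K` of the `N` users is the power saving the abstract refers to; since the scheduler would almost always have picked one of the top-`K` users anyway, the PF choice is rarely affected.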
The exponential growth of network elements and data traffic in recent years has elevated the need of network providers for optimized and cost-efficient solutions for network management and monitoring. Solutions such as drive tests (DTs) are becoming extremely expensive given the vast extension and complexity of today's mobile networks. Therefore, this paper provides a solution for optimized network-context knowledge acquisition, towards the self-organizing networks (SON) concept. The presented framework incorporates an entire scheme for network trace processing and positioning, based on network measurements and fingerprinting techniques. This framework enables a series of different use cases for network management and optimization, with real-time data processing capabilities within the network trace collection interval (15 minutes), achieving a median positioning error of 90 m.
The fifth-generation (5G) new radio (NR) cellular system promises a significant increase in capacity with reduced latency. However, the 5G NR system will be deployed alongside legacy cellular systems such as Long-Term Evolution (LTE). The scarcity of spectrum resources in low frequency bands motivates adjacent-/co-carrier deployments. This approach comes with a wide range of practical benefits and improves spectrum utilization by re-using the LTE bands. However, such deployments restrict the flexibility of 5G NR frame allocations needed to avoid the most critical mutual adjacent-channel interference. This in turn prevents achieving the promised 5G NR latency figures. In this paper, we tackle this issue by proposing to use the mini-slot uplink feature of 5G NR to perform uplink acknowledgement and feedback, reducing the frame latency, together with selective blind retransmission to overcome the effect of interference. Extensive system-level simulations under realistic scenarios show that the proposed solution can reduce the peak frame latency for feedback and acknowledgement by up to 33% and for retransmission by up to 25%, at a marginal cost of up to a 3% reduction in throughput.
This paper proposes a novel graph-based multicell scheduling framework to efficiently mitigate downlink intercell interference in OFDMA-based small cell networks. We define a graph-based optimization framework built on the interference conditions between any two users in the network, assuming they are served on the same resources. Furthermore, we prove that the proposed framework obtains a tight lower bound for the conventional weighted sum-rate maximization problem in practical scenarios. Thereafter, we decompose the optimization problem into dynamic graph-partitioning-based subproblems across different subchannels and provide an optimal solution using a branch-and-cut approach. Subsequently, due to the high complexity of this solution, we propose heuristic algorithms that display near-optimal performance. At the final stage, we apply cluster-based resource allocation per subchannel to find the candidate users with the maximum total weighted sum-rate. A case study on networked small cells is also presented, with simulation results showing a significant improvement over state-of-the-art multicell scheduling benchmarks in terms of outage probability as well as average cell throughput.
This letter describes the impact of unknown channel access delay on the timeline of the Hybrid Automatic Repeat Request (HARQ) process in the 3rd Generation Partnership Project Long Term Evolution (3GPP LTE) system when a Relay Node (RN) is used for coverage extension of Machine Type Communication (MTC) devices. A solution is also proposed for determining the unknown channel access delay when the RN operates in an unlicensed spectrum band. The proposed mechanism is expected to help MTC operation in typical coverage-hole areas, such as smart meters located in building basements.
One major advantage of the cloud/centralized radio access network (C-RAN) is the ease of implementing multicell coordination mechanisms to improve the system spectrum efficiency (SE). Theoretically, a larger number of cooperative cells leads to a higher SE; however, it may also cause significant delay due to the extra channel state information (CSI) feedback and the joint processing computational needs at the cloud data center, which is likely to result in performance degradation. In order to investigate the impact of delay on the throughput gains, we divide the network into multiple clusters of cooperative small cells and formulate a throughput optimization problem. We model various delay factors and the sum-rate of the network as functions of the cluster size, treating it as the main optimization variable. For our analysis, we consider both the base stations' and the users' geometric locations as random variables for both linear and planar network deployments. The output SINR (signal-to-interference-plus-noise ratio) and ergodic sum-rate are derived based on the homogeneous Poisson point process (PPP) model. The sum-rate optimization problem in terms of the cluster size is formulated and solved. Simulation results show that the proposed analytical framework can be utilized to accurately evaluate the performance of practical cloud-based small cell networks employing clustered cooperation.
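The homogeneous PPP deployment model underlying such analyses can be sketched as follows (the density, disc radius, path-loss exponent and noise power are placeholder values, and the simple nearest-BS association with all other BSs interfering is an illustrative assumption, not the paper's clustered model):

```python
import math, random
random.seed(0)

lam_bs = 1e-5              # base-station density per m^2 (illustrative)
radius = 2000.0            # simulation disc radius in metres (illustrative)
alpha, noise = 3.5, 1e-13  # path-loss exponent and noise power (illustrative)

# Homogeneous PPP on a disc: Poisson-distributed count, uniform locations
mean = lam_bs * math.pi * radius ** 2
n, p, L = 0, random.random(), math.exp(-mean)
while p > L:                                  # Knuth's method for a Poisson sample
    n += 1
    p *= random.random()

bs = []
for _ in range(n):
    r = radius * math.sqrt(random.random())   # sqrt gives uniform density over the area
    t = 2 * math.pi * random.random()
    bs.append((r * math.cos(t), r * math.sin(t)))

# A user at the origin attaches to the nearest BS; all other BSs interfere
d = sorted(math.hypot(xx, yy) for xx, yy in bs)
signal = d[0] ** (-alpha)
interference = sum(di ** (-alpha) for di in d[1:])
sinr = signal / (interference + noise)
print(n, 10 * math.log10(sinr))               # BS count and SINR in dB for this drop
```

Averaging such drops over many seeds is what yields the ergodic quantities that the closed-form PPP expressions are validated against.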
In 5G and beyond networks, accurate localization services and nanosecond time synchronization are crucial to enabling mission-critical wireless communications technologies and techniques such as autonomous vehicles and distributed multiple-input multiple-output (MIMO) antenna systems. This paper investigates how to improve wireless time synchronization by studying time correction based on the Real-Time Kinematic (RTK) positioning algorithm. Multiple Global Navigation Satellite System (GNSS) receiver references and the proposed binary GNSS satellite formation are used to reduce the effect of ionospheric and tropospheric delays and to reduce the phase-range and pseudorange measurement errors. As a result, the approach improves user equipment (UE) localization and measures the time difference between the Base Station (BS) and UE local clocks. The results show that the positioning accuracy is increased, with millimetre-level accuracy achieved while attaining a sub-nanosecond time error (TE) between the UE and BS local clocks.
Flexibly supporting multiple services, each with different communication requirements and frame structures, has been identified as one of the most significant and promising characteristics of next-generation and beyond wireless communication systems. However, integrating multiple frame structures with different subcarrier spacings in one radio carrier may result in significant inter-service-band interference (ISBI). In this paper, a framework for multi-service (MS) systems is established based on a subband filtered multi-carrier system. The subband filtering implementations and both asynchronous and generalized synchronous (GS) MS subband filtered multi-carrier (SFMC) systems are proposed. Based on the GS-MS-SFMC system, the system model with ISBI is derived and a number of properties of ISBI are given. In addition, low-complexity ISBI cancellation algorithms are proposed by precoding the information symbols at the transmitter. For the asynchronous MS-SFMC system in the presence of transceiver imperfections, including carrier frequency offset, timing offset and phase noise, a complete analytical system model is established in terms of desired signal, inter-symbol interference, inter-carrier interference, ISBI and noise. Thereafter, new channel equalization algorithms are proposed by taking these errors and imperfections into account. Numerical analysis shows that the analytical results match the simulation results, and that the proposed ISBI cancellation and equalization algorithms can significantly improve the system performance in comparison with existing algorithms.
This letter proposes a novel graph-based multi-cell scheduling framework to efficiently mitigate downlink inter-cell interference in small cell OFDMA networks. The framework incorporates dynamic clustering combined with channel-aware resource allocation to provide tunable quality of service measures at different levels. Our extensive evaluation study shows that a significant improvement in user spectral efficiency is achievable while maintaining relatively high cell spectral efficiency, via empirical tuning of the re-use factor across the cells according to the required QoS constraints.
This article presents a comprehensive survey of the literature on the self-interference management schemes required to achieve single-frequency full duplex communication in wireless networks. A single-frequency full duplex system, often referred to as an in-band full duplex (FD) system, has emerged as an attractive solution for next-generation mobile networks, where the scarcity of available radio spectrum is an important issue. Although studies on the mitigation of self-interference have been documented in the literature, this is the first holistic attempt to survey not only the various techniques available for handling the self-interference that arises when a full duplex device is enabled, but also the other system impairments that significantly affect self-interference management, in both terrestrial and satellite communication systems. The survey provides a taxonomy of self-interference management schemes and shows, by means of comparisons, the strengths and limitations of the various schemes. It also quantifies the amount of self-interference cancellation required for different access schemes from the 1st generation to the candidate 5th generation of mobile cellular systems. Importantly, the survey summarises the lessons learnt and identifies open research questions and key research areas for the future. This paper is intended as a guide and take-off point for further work on self-interference management towards full duplex transmission in mobile networks, including heterogeneous cellular networks, which are undeniably the network of future wireless systems.
Seamless and ubiquitous coverage are key factors for future cellular networks. Despite capacity and data rates being the main topics under discussion when envisioning the Fifth Generation (5G) and beyond of mobile communications, network coverage remains one of the major issues, since coverage quality highly impacts the system performance and end-user experience. The increasing number of base stations and user terminals is anticipated to negatively impact network coverage due to increasing interference. Furthermore, the "ubiquitous coverage" use cases, including rural and isolated areas, present a significant challenge for mobile communication technologies. This survey presents an overview of the concept of coverage, highlighting the ways it is studied and measured and how it impacts network performance. Additionally, an overview of the most important key performance indicators influenced by coverage, which may affect the envisioned use cases with respect to throughput, latency, and massive connectivity, is provided. Moreover, the main existing developments and deployments which are expected to augment network coverage, in order to meet the requirements of the emerging systems, are presented, together with their implementation challenges.
Femtocells are becoming a promising solution to the explosive growth of mobile broadband usage in cellular networks. While each femtocell covers only a small area, a massive deployment is expected in the near future, forming networked femtocells. An immediate challenge is to provide seamless mobility support for networked femtocells with minimal support from mobile core networks. In this paper, we propose efficient local mobility management schemes for networked femtocells based on X2 traffic forwarding under the 3GPP Long Term Evolution Advanced (LTE-A) framework. Instead of implementing the path switch operation at a core network entity for each handover, a local traffic forwarding chain is constructed to use the existing Internet backhaul and the local path between the local anchor femtocell and the target femtocell for ongoing session communications. Both analytical studies and simulation experiments are conducted to evaluate the proposed schemes and compare them with the original 3GPP scheme. The results indicate that the proposed schemes can significantly reduce the signaling cost and relieve the processing burden of mobile core networks at a reasonable distributed cost for local traffic forwarding. In addition, the proposed schemes enable fast session recovery to adapt to the self-deployment nature of femtocells.
The widely accepted OFDMA air interface technology has recently been adopted in most mobile standards by the wireless industry. However, as in other frequency-time multiplexed systems, its performance is limited by inter-cell interference. To address this performance degradation, interference mitigation can be employed to maximize the potential capacity of such interference-limited systems. This paper surveys key issues in mitigating interference and gives an overview of recent developments in a promising mitigation technique, namely interference avoidance through inter-cell interference coordination (ICIC). Using optimization theory, an ICIC problem is formulated in a multi-cell OFDMA-based system, and some research directions for simplifying the problem, along with the associated challenges, are given. Furthermore, we present the main trends in interference avoidance techniques that can be incorporated into the main ICIC formulation. Although this paper focuses on 3GPP LTE/LTE-A mobile networks in the downlink, a similar framework can be applied to any typical multi-cellular environment based on OFDMA technology. Some promising future directions are identified and, finally, state-of-the-art interference avoidance techniques are compared under LTE system parameters.
Being able to accommodate multiple simultaneous transmissions on a single channel, non-orthogonal multiple access (NOMA) appears as an attractive solution to support massive machine-type communication (mMTC), which faces a massive number of devices competing to access a limited number of shared radio resources. In this paper, we first analytically study the throughput performance of NOMA-based random access (RA), namely NOMA-RA. We show that while increasing the number of power levels in NOMA-RA leads to a further gain in maximum throughput, the growth of the throughput gain is slower than linear. This is due to the higher-power dominance characteristic of power-domain NOMA known in the literature. We explicitly quantify this throughput gain for the very first time in this paper. With our analytical model, we verify the performance advantage of the NOMA-RA scheme by comparison with the baseline multi-channel slotted ALOHA (MS-ALOHA), with and without the capture effect. Despite the higher-power dominance effect, the maximum throughput of NOMA-RA with four power levels is over three times that of MS-ALOHA. However, our analytical results also reveal the sensitivity of NOMA-RA throughput to the offered load. To cope with the potentially bursty traffic in mMTC scenarios, we propose adaptive load regulation through a practical user barring algorithm. By estimating the current load from observable channel feedback, the algorithm adaptively controls user access to maintain the optimal loading of the channels and thus maximum throughput. When the proposed user barring algorithm is applied, simulations demonstrate that the instantaneous throughput of NOMA-RA always remains close to the maximum throughput, confirming the effectiveness of our load regulation.
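A rough Monte-Carlo sketch of the MS-ALOHA baseline against a power-level-augmented random access channel (this idealized model lets any user that is alone on its chosen channel and power level succeed, so it ignores the higher-power dominance effect analyzed in the paper; all parameters are illustrative):

```python
import random
random.seed(7)

def throughput(n_users, n_channels, p_tx, slots=20000, levels=1):
    """Monte-Carlo throughput (successes per slot) of multi-channel slotted ALOHA.
    With levels > 1, a crude NOMA-like model lets one user per (channel, power level)
    pair succeed when alone on that pair - an idealisation, not the paper's model."""
    ok = 0
    for _ in range(slots):
        choices = {}
        for u in range(n_users):
            if random.random() < p_tx:                 # user transmits this slot
                key = (random.randrange(n_channels), random.randrange(levels))
                choices.setdefault(key, []).append(u)
        ok += sum(1 for users in choices.values() if len(users) == 1)
    return ok / slots

base = throughput(50, 4, 0.1, levels=1)    # plain MS-ALOHA on 4 channels
noma = throughput(50, 4, 0.1, levels=4)    # 4 power levels per channel
print(base, noma)                          # extra levels resolve many collisions
```

Even this idealised model shows the gain from extra power levels falling short of a factor of four; the paper's analysis shows the real gain is further reduced by higher-power dominance.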
We investigate the physical layer performance of HSDPA via GEO satellites for use in S-UMTS and SDMB. The impact of large round trip delay on link adaptation is discussed and link-level results are presented on the performance of HARQ for a variable number of retransmissions and different categories of UE in a rich multipath urban environment with three IMRs. It is shown that the N-channel SAW HARQ protocol can significantly increase the average throughput particularly for 16-QAM but the large round trip delay also requires an increase in the number of parallel HARQ channels resulting in high memory requirements at the UE. Receive antenna diversity with varying degrees of antenna correlation is also investigated as a possible performance enhancing method. The results presented here will help in specifying the physical layer of satellite HSDPA. Copyright © 2006 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
High Speed Downlink Packet Access (HSDPA) is the front-line technology within the 3rd Generation Partnership Project (3GPP) and represents the mid-term evolution of the standard. This paper presents simple equalizer structures based on the Minimum Mean Square Error criterion that are suitable for Adaptive Modulation and Coding (AMC), one of the key features of HSDPA. The equalizer structures have been shown to provide significant gain over the Rake receiver in terms of HSDPA throughput under AMC, by enabling the use of higher CQI (Channel Quality Indicator) indices while remaining stable against the changing input signal statistics caused by AMC. The LMMSE equalizer has been found to roughly double the HSDPA throughput in a variety of radio channels with a relatively small increase in complexity. © 2009 IEEE.
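The gain of an LMMSE equalizer over a Rake-style matched filter can be illustrated on a toy two-tap channel (the channel taps, noise variance, BPSK signalling and delay-1 detection are all assumptions for illustration, not the HSDPA configuration studied in the paper):

```python
import random, math
random.seed(3)

h = [1.0, 0.5]      # illustrative 2-tap channel
sigma2 = 0.1        # noise variance (illustrative)

# Closed-form 2-tap LMMSE combiner detecting the symbol at delay 1:
# w = R^{-1} p, with R = E[r r^T] for r = [r_n, r_{n-1}] and p = E[r s_{n-1}]
e0 = h[0] ** 2 + h[1] ** 2 + sigma2
R = [[e0, h[0] * h[1]], [h[0] * h[1], e0]]
p = [h[1], h[0]]
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
w = [(R[1][1] * p[0] - R[0][1] * p[1]) / det,
     (-R[1][0] * p[0] + R[0][0] * p[1]) / det]

def ber(weights, n=50000):
    """Monte-Carlo bit error rate of a 2-tap linear combiner on BPSK."""
    s = [random.choice([-1.0, 1.0]) for _ in range(n)]
    r = [h[0] * s[i] + (h[1] * s[i - 1] if i else 0.0)
         + random.gauss(0.0, math.sqrt(sigma2)) for i in range(n)]
    errs = sum(1 for i in range(1, n)
               if ((weights[0] * r[i] + weights[1] * r[i - 1]) >= 0) != (s[i - 1] >= 0))
    return errs / (n - 1)

mf = [h[1], h[0]]        # Rake-style matched filter to the delayed symbol
print(ber(w), ber(mf))   # LMMSE trades some signal energy for ISI suppression
```

The matched filter maximises collected signal energy but leaves the inter-symbol interference untouched, whereas the MMSE weights balance noise against ISI, which is the effect behind the throughput gain reported in the abstract.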
In this paper, a novel differential space-time block coded spatial modulation (differential STBC-SM) scheme is proposed for uplink multi-user massive multiple-input multiple-output (MIMO) communications, which combines the concepts of differential coding and STBC-SM to enhance the diversity benefits in the absence of channel state information (CSI). The transmission structure of the proposed system is on a block basis, where each block contains two sub-blocks. More specifically, the first sub-block conveys only amplitude and phase modulation (APM) symbol bits, since its transmit antennas (TAs) obey a pre-designed activation pattern that does not carry any information bits. For the second sub-block, the input bits are modulated to STBC-SM matrices, which are then differentially coded between two adjacent sub-blocks. Moreover, a novel block-by-block non-coherent detector is presented. Finally, we derive an upper bound on the average bit error probability (ABEP) using the moment generating function (MGF). Our simulation results show that the proposed differential STBC-SM transmission structure achieves considerable bit error rate (BER) performance improvements compared to both the conventional differential spatial modulation (DSM) and differential Alamouti schemes.
The parameters of Physical (PHY) layer radio frame for 5th Generation (5G) mobile cellular systems are expected to be flexibly configured to cope with diverse requirements of different scenarios and services. This paper presents a frame structure and design which is specifically targeting Internet of Things (IoT) provision in 5G wireless communication systems. We design a suitable radio numerology to support the typical characteristics, that is, massive connection density and small and bursty packet transmissions with the constraint of low cost and low complexity operation of IoT devices. We also elaborate on the design of parameters for Random Access Channel (RACH) enabling massive connection requests by IoT devices to support the required connection density. The proposed design is validated by link level simulation results to show that the proposed numerology can cope with transceiver imperfections and channel impairments. Furthermore, results are also presented to show the impact of different values of guard band on system performance using different subcarrier spacing sizes for data and random access channels, which show the effectiveness of the selected waveform and guard bandwidth. Finally, we present system level simulation results that validate the proposed design under realistic cell deployments and inter-cell interference conditions.
Good network coverage is an important element of the Quality of Service (QoS) provision that mobile cellular operators aim to deliver. The established requirements for the existing Fifth Generation (5G) and the emerging scenarios for the upcoming Sixth Generation (6G) cellular communication technologies depend heavily on the coverage quality that the network is able to provide. In addition, some proposed 5G solutions, such as densification, are complex and costly, and tend to degrade network coverage due to increased interference, which is critical for cell-edge performance. In this direction, we present a novel concept of cell-sweeping for coverage enhancement in cellular networks. One of the main objectives behind this mechanism is to overcome the cell-edge problem, which directly translates into better network coverage. Subsequently, the operation of the concept is introduced and compared to conventional static cell scenarios, targeting mostly the benefits at cell-edge locations. Additionally, the use of schedulers that take advantage of the sweeping system is expected to extend the cell-edge benefits to the entire network; this is observed when deploying cell-sweeping with the Proportional Fair (PF) scheduler. A 5th-percentile improvement of 125% and an average throughput increase of 35% were obtained through system level simulations. The preliminary results presented in this paper suggest that cell-sweeping can be adopted as an emerging technology for future Radio Access Network (RAN) deployments.
The book provides a unified view of essential topics, including: fundamental theories, channel coding and modulation, synchronization and parameter estimation, ...
Cross-layer scheduling is a promising solution for improving the efficiency of emerging broadband wireless systems. In this tutorial, various cross-layer design approaches are organized into three main categories, namely air interface-centric, user-centric and route-centric, and the general characteristics of each are discussed. Thereafter, focusing on the air interface-centric approach, it is shown that the resource allocation problem can be formulated as an optimization problem with a certain objective function and particular constraints. This is illustrated with the aid of a customer-provider model from the field of economics. Furthermore, the possible future evolution of scheduling techniques is described based on the characteristics of traffic and the air interface in emerging broadband wireless systems. Finally, some further challenges are identified. © 2009 IEEE.
This paper addresses the problem of joint backhaul and access link optimization in dense small cell networks, with special focus on the time division duplexing (TDD) mode of operation for backhaul and access link transmission. We propose a framework for joint radio resource management in which we systematically decompose the problem into backhaul and access links. To simplify the analysis, the procedure is tackled in two stages. In the first stage, the joint optimization problem is formulated for a point-to-point scenario where each small cell is associated with a single user. It is shown that the optimization can be decomposed into separate power and subchannel allocation in both backhaul and access links, where a set of rate-balancing parameters, in conjunction with the duration of transmission, governs the coupling across the two links. Moreover, a novel algorithm based on grouping the cells is proposed to achieve rate-balancing across different small cells. In the second stage, the problem is generalized to multi-access small cells, where each small cell serves multiple users. The optimization is similarly decomposed into separate sub-channel and power allocation by employing auxiliary slicing variables. It is shown that algorithms similar to those of the first stage are applicable with slight changes, with the aid of the slicing variables. Additionally, for the special case of line-of-sight backhaul links, simplified expressions for sub-channel and power allocation are presented. The developed concepts are evaluated by extensive simulations in different case studies, from full orthogonalization to dynamic clustering and full reuse in the downlink, and it is shown that the proposed framework provides significant improvement over the benchmark cases.
This paper investigates adaptive implementation of the linear minimum mean square error (MMSE) detector in code division multiple access (CDMA). From linear algebra, Cimmino's reflection method is proposed as a possible way of achieving the MMSE solution blindly. Simulation results indicate that the proposed method converges four times faster than the blind least mean squares (LMS) algorithm and has roughly the same convergence performance as the blind recursive least squares (RLS) algorithm. Moreover, the proposed algorithm is numerically more stable than the RLS algorithm and also exhibits parallelism for pipelined implementation. © 2009 IEEE.
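Cimmino's reflection method itself can be sketched on a small linear system (the 2x2 system below is a hypothetical example, not the blind CDMA detection problem of the paper): each iterate is reflected across every hyperplane `a_i . x = b_i` and the reflections are averaged.

```python
# Cimmino's reflection method for A x = b on an illustrative 2x2 system.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = [0.0, 0.0]

for _ in range(200):
    refl = [0.0, 0.0]
    for ai, bi in zip(A, b):
        norm2 = sum(a * a for a in ai)
        resid = (bi - sum(a * xi for a, xi in zip(ai, x))) / norm2
        for j in range(2):
            refl[j] += x[j] + 2 * resid * ai[j]   # reflection across the i-th hyperplane
    x = [r / len(A) for r in refl]                # average of the reflections

print(x)  # converges to the solution [1/11, 7/11] of the 2x2 system
```

The per-hyperplane reflections are independent of each other, which is the parallelism for pipelined implementation that the abstract highlights.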
It has been claimed that filter bank multicarrier (FBMC) systems suffer only negligible performance loss in moderately dispersive channels in the absence of guard time protection between symbols. However, a theoretical and systematic explanation/analysis of this statement has been missing in the literature to date. In this paper, based on one-tap minimum mean square error (MMSE) and zero-forcing (ZF) channel equalization, the impact of doubly dispersive channels on the performance of FBMC systems is analyzed in terms of the mean square error (MSE) of the received symbols. Based on this analytical framework, we prove that the circular convolution property between symbols and the corresponding channel coefficients in the frequency domain holds only loosely, with a set of inaccuracies. To facilitate the analysis, we first model the FBMC system in vector/matrix form and derive the estimated symbols as a sum of desired signal, noise, inter-symbol interference (ISI), inter-carrier interference (ICI), inter-block interference (IBI) and estimation bias in the MMSE equalizer. These terms are derived one by one and expressed as functions of the channel parameters. The numerical results reveal that in harsh channel conditions, e.g., with large Doppler spread or channel delay spread, the FBMC system performance may be severely deteriorated and an error floor will occur.
The recently proposed universal filtered multi-carrier (UFMC) system is not an orthogonal system in multipath channel environments, which can cause significant performance loss. In this paper, we propose a cyclic prefix (CP) based UFMC system and first analyze the conditions for interference-free one-tap equalization in the absence of transceiver imperfections. Then the corresponding signal model and output SNR (signal-to-noise ratio) expression are derived. In the presence of carrier frequency offset (CFO), timing offset (TO) and insufficient CP length, we establish an analytical system model as a summation of desired signal, inter-symbol interference (ISI), inter-carrier interference (ICI) and noise. New channel equalization algorithms are proposed based on the derived analytical signal model. Numerical results show that the derived model matches the simulation results precisely, and that the proposed equalization algorithms improve the UFMC system performance in terms of bit error rate (BER).
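The interference-free one-tap equalization enabled by a sufficient cyclic prefix can be illustrated on a toy multicarrier link (plain DFT multicarrier with a hypothetical 3-tap channel and no noise; the UFMC subband filtering itself is omitted for brevity):

```python
import cmath

N = 8                                    # subcarriers (illustrative)
h = [0.8, 0.5, 0.3]                      # channel impulse response (illustrative)
cp = len(h) - 1                          # CP long enough for interference-free one-tap EQ

def dft(x, sign=-1):
    """Naive DFT; sign=-1 forward (unnormalized), sign=+1 inverse (scaled by 1/n)."""
    n = len(x)
    out = [sum(x[m] * cmath.exp(sign * 2j * cmath.pi * k * m / n) for m in range(n))
           for k in range(n)]
    return out if sign == -1 else [v / n for v in out]

X = [1, -1, 1, 1, -1, 1, -1, -1]         # BPSK symbols on each subcarrier
x = dft(X, sign=+1)                      # IDFT to the time domain
tx = x[-cp:] + x                         # prepend the cyclic prefix

# Linear convolution with the channel (noise omitted for clarity)
rx = [sum(h[j] * tx[i - j] for j in range(len(h)) if 0 <= i - j < len(tx))
      for i in range(len(tx) + len(h) - 1)]

Y = dft(rx[cp:cp + N])                   # drop the CP, back to the frequency domain
H = dft(h + [0.0] * (N - len(h)))        # channel frequency response
est = [y / hk for y, hk in zip(Y, H)]    # one-tap zero-forcing equalization
print([round(e.real) for e in est])      # recovers the BPSK symbols exactly
```

With the CP at least as long as the channel memory, the channel acts as a circular convolution, so each subcarrier sees a single complex gain `H[k]`; this is the property whose breakdown (insufficient CP, CFO, TO) the paper models term by term.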
The design of efficient wireless fronthaul connections for future heterogeneous networks incorporating emerging paradigms such as cloud radio access network (C-RAN) has become a challenging task that requires the most effective utilization of fronthaul network resources. In this paper, we propose to use distributed compression to reduce the fronthaul traffic in uplink Coordinated Multi-Point (CoMP) for C-RAN. Unlike the conventional approach, in which each coordinating point independently quantizes and forwards its own observation to the processing centre, these observations are compressed in a distributed manner, exploiting their mutual correlation, before forwarding. At the processing centre, the decompression of the observations and the decoding of the user message are conducted successively. The essence of this approach is the optimization of the distributed compression via an iterative algorithm to achieve the maximal user rate under a given fronthaul rate; equivalently, for a target user rate the generated fronthaul traffic is minimized. Moreover, joint decompression and decoding is studied, and an iterative optimization algorithm is devised accordingly. Finally, the analysis is extended to the multi-user case, and our results reveal that, in both dense and ultra-dense urban deployment scenarios, distributed compression can efficiently reduce the required fronthaul rate, with a further reduction obtained through joint operation.
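To make the fronthaul-rate/user-rate trade-off concrete, the sketch below computes the user rate for the conventional baseline mentioned above: independent quantize-and-forward, where each point's quantization-noise variance follows from its fronthaul rate via a standard rate-distortion argument. This is a textbook baseline under simplifying assumptions (single user, Gaussian signalling), not the paper's optimized distributed-compression scheme; all numeric values are illustrative.

```python
import numpy as np

def quantize_forward_rate(p, h, noise_var, c_fronthaul):
    """User rate (bits/s/Hz) for independent quantize-and-forward uplink CoMP.

    Point i forwards its observation y_i = h_i*x + n_i over a fronthaul of
    rate c_i; rate-distortion gives quantization-noise variance
        q_i = (p*|h_i|**2 + noise_var) / (2**c_i - 1),
    and the processing centre combines the compressed observations.
    """
    h = np.asarray(h, dtype=complex)
    c = np.asarray(c_fronthaul, dtype=float)
    q = (p * np.abs(h) ** 2 + noise_var) / (2.0 ** c - 1.0)
    snr = np.sum(p * np.abs(h) ** 2 / (noise_var + q))
    return np.log2(1.0 + snr)

# More fronthaul capacity -> finer quantization -> higher user rate.
r_lo = quantize_forward_rate(1.0, [1.0, 0.8], 0.1, [2.0, 2.0])
r_hi = quantize_forward_rate(1.0, [1.0, 0.8], 0.1, [6.0, 6.0])
```

Distributed compression improves on this baseline precisely because the observations at different points are correlated (they all contain the same user signal), so jointly chosen codebooks need fewer fronthaul bits for the same effective quantization noise.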
In this paper, we investigate the design of a radio resource control (RRC) protocol within the 3rd Generation Partnership Project long-term evolution (LTE) framework for the provision of low-cost, low-complexity and low-energy-consumption machine-type communication (MTC), an enabling technology for the emerging paradigm of the Internet of Things. Because MTC devices are envisaged to operate unattended on battery power over long lifetimes, energy efficiency becomes extremely important. This paper reviews state-of-the-art approaches to low-energy MTC operation and proposes a novel RRC protocol design, namely semi-persistent RRC state transition (SPRST), in which the RRC state transition is no longer triggered by incoming traffic but instead depends on pre-determined parameters derived from the traffic pattern obtained by exploiting the network memory. The proposed RRC protocol can easily co-exist with the legacy RRC protocol in LTE. The design criterion of SPRST is derived and the signalling procedure is investigated accordingly. Simulation results show that SPRST significantly reduces both energy consumption and signalling overhead while guaranteeing the quality-of-service requirements.
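The core idea of a semi-persistent state transition can be sketched as a fixed schedule: the device alternates between RRC states according to pre-determined dwell times derived offline from the traffic pattern, rather than reacting to each packet arrival. The toy model below is hypothetical; the parameter names, state set and power figures are illustrative assumptions, not the paper's design criterion.

```python
from dataclasses import dataclass

@dataclass
class SprstConfig:
    """Hypothetical semi-persistent schedule: fixed dwell times per cycle."""
    t_connected: float          # seconds in RRC_CONNECTED per cycle
    t_idle: float               # seconds in RRC_IDLE per cycle
    p_connected: float = 100e-3  # assumed power draw in CONNECTED (W)
    p_idle: float = 1e-3         # assumed power draw in IDLE (W)

def state_at(cfg: SprstConfig, t: float) -> str:
    """RRC state at time t: determined by the schedule, not by traffic."""
    phase = t % (cfg.t_connected + cfg.t_idle)
    return "CONNECTED" if phase < cfg.t_connected else "IDLE"

def avg_power(cfg: SprstConfig) -> float:
    """Time-averaged power draw over one schedule cycle."""
    cycle = cfg.t_connected + cfg.t_idle
    return (cfg.p_connected * cfg.t_connected + cfg.p_idle * cfg.t_idle) / cycle

# Example: connect for 1 s per minute, matching a periodic MTC report.
cfg = SprstConfig(t_connected=1.0, t_idle=59.0)
```

Because no connection-setup signalling is exchanged per packet, both the signalling overhead and the energy spent in transient states shrink, provided the schedule is matched to the (largely deterministic) MTC traffic pattern.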
To flexibly support diverse communication requirements (e.g., throughput, latency, massive connectivity) in next-generation wireless communications, one viable solution is to divide the system bandwidth into several service subbands, one for each type of service. In such a multi-service (MS) system, each service has its own optimal frame structure, while the services are isolated by subband filtering. In this paper, a framework for MS systems is established based on subband filtered multi-carrier (SFMC) modulation. We consider both single-rate (SR) and multi-rate (MR) signal processing as two different MS-SFMC implementations with different performance and computational complexity: the SR system outperforms the MR system, while the MR system has significantly lower computational complexity. Numerical results show the effectiveness of our analysis and of the proposed systems. The proposed SR and MR MS-SFMC systems provide guidelines for frame structure optimization and algorithm design in next-generation wireless systems.
The high speed downlink packet access (HSDPA) system has been investigated for adaptation to the GEO satellite environment in order to achieve high per-user packet throughput and system efficiency. This paper discusses the performance of the so-called satellite-HSDPA (S-HSDPA) system, examining the impacts of power amplifier non-linearity, space time transmit diversity (STTD) and multicode transmission. The S-HSDPA performance is obtained from simulations with a modified terrestrial HSDPA link simulator in a rich-multipath urban environment with three intermediate module repeaters (IMR). The results guide an appropriate choice of system parameters for this environment. © 2008 IEEE.
In this paper, we investigate the throughput performance of single-packet and multi-packet hybrid automatic repeat request (HARQ) with blanking for downlink non-orthogonal multiple access (NOMA) systems. While conventional single-packet HARQ achieves high throughput at the expense of high latency, multi-packet HARQ, where several data packets are sent in the same channel block, can achieve high throughput with low latency. Previous works have shown that multi-packet HARQ outperforms single-packet HARQ in orthogonal multiple access (OMA) systems, especially in the moderate-to-high signal-to-noise ratio regime. This work amalgamates multi-packet HARQ with NOMA to achieve higher throughput than the conventional single-packet HARQ and the OMA adopted in legacy mobile networks. We conduct a theoretical analysis of the per-user throughput and also investigate the optimization of the power and rate allocations of the packets, in order to maximize the weighted-sum throughput. It is demonstrated that the gain of multi-packet HARQ over single-packet HARQ in NOMA systems is reduced compared to that obtained in OMA systems, due to inter-user interference. It is also shown that NOMA-HARQ cannot achieve any throughput gain over OMA-HARQ when the error propagation rate of the NOMA detector is above a certain threshold.
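The single-packet baseline against which the multi-packet scheme is compared can be illustrated with a short Monte Carlo sketch: under incremental redundancy, a packet is decoded once the mutual information accumulated over its retransmission rounds reaches the packet rate, and the long-term throughput follows from a renewal-reward argument. This is a simplified textbook model over i.i.d. Rayleigh block fading, not the paper's NOMA multi-packet scheme; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def harq_ir_throughput(rate, snr, max_rounds, n_trials=20000):
    """Monte Carlo long-term throughput (bits/s/Hz) of single-packet HARQ
    with incremental redundancy over i.i.d. Rayleigh block fading.

    A packet of `rate` bits/s/Hz succeeds once its accumulated mutual
    information reaches `rate`; throughput = delivered bits / channel uses.
    """
    g = rng.exponential(1.0, size=(n_trials, max_rounds))  # |h|^2 per round
    mi = np.cumsum(np.log2(1.0 + snr * g), axis=1)         # accumulated MI
    decoded = mi >= rate
    ever = decoded.any(axis=1)
    # Rounds consumed per packet: first success (1-based), else max_rounds.
    rounds = np.where(ever, decoded.argmax(axis=1) + 1, max_rounds)
    bits = np.where(ever, rate, 0.0)
    return bits.sum() / rounds.sum()

tput = harq_ir_throughput(rate=2.0, snr=4.0, max_rounds=4)
```

In the multi-packet variant the paper studies, several packets share the channel block (with blanking of already-decoded ones), so channel uses are not held hostage by a single packet's retransmissions; the analysis then additionally has to account for NOMA inter-user interference and detector error propagation.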
Seamless mobility support is a key technical requirement for the market acceptance of femtocells. The current 3GPP handover procedure may cause a large downlink service interruption time when users move from a macrocell to a femtocell, or vice versa, due to the data-forwarding operation. In this letter, a practical scheme is proposed to enable seamless handover by reactively bicasting the data to both the source cell and the target cell once the handover is actually initiated. Numerical results show that the proposed scheme can significantly reduce the downlink service interruption time while avoiding packet loss, with only limited extra resource requirements compared to the standard 3GPP scheme. © 2012 IEEE.