Femtocells are becoming a promising solution to the explosive growth of mobile broadband usage in cellular networks. While each femtocell only covers a small area, a massive deployment is expected in the near future, forming networked femtocells. An immediate challenge is to provide seamless mobility support for networked femtocells with minimal support from mobile core networks. In this paper, we propose efficient local mobility management schemes for networked femtocells based on X2 traffic forwarding under the 3GPP Long Term Evolution Advanced (LTE-A) framework. Instead of implementing the path switch operation at the core network entity for each handover, a local traffic forwarding chain is constructed to use the existing Internet backhaul and the local path between the local anchor femtocell and the target femtocell for ongoing session communications. Both analytical studies and simulation experiments are conducted to evaluate the proposed schemes and compare them with the original 3GPP scheme. The results indicate that the proposed schemes can significantly reduce the signaling cost and relieve the processing burden of mobile core networks at a reasonable distributed cost for local traffic forwarding. In addition, the proposed schemes can enable fast session recovery to adapt to the self-deployment nature of femtocells.
It has been envisaged that in future 5G networks user devices will become an integral part by participating in the transmission of mobile content traffic, typically through Device-to-Device (D2D) technologies. In this context, we promote the concept of Mobility as a Service (MaaS), where the content-aware mobile network edge is equipped with the necessary knowledge of device mobility in order to distribute popular mobile content items to interested clients via a small number of helper devices. Towards this end, we present a device-level Information Centric Networking (ICN) architecture that is able to perform intelligent content distribution operations according to necessary context information on mobile user mobility and content characteristics. Based on such a platform, we further introduce device-level online content caching and offline helper selection algorithms in order to optimise the overall system efficiency. In particular, this paper sheds distinct light on the importance of user mobility data analytics, based on which helper selection can lead to overall system optimality. Based on representative user mobility models, we conducted realistic simulation experiments and modelling, which demonstrate the scheme's efficiency in terms of both network traffic offloading gains and user-oriented performance improvements. In addition, we show how the framework can be flexibly configured to meet specific delay tolerance constraints according to specific context policies.
This article presents a comprehensive survey of the literature on the self-interference management schemes required to achieve single frequency full duplex communication in wireless networks. A single frequency full duplex system, often referred to as an in-band full duplex (FD) system, has emerged as an interesting solution for the next generation of mobile networks, where the scarcity of available radio spectrum is an important issue. Although studies on the mitigation of self-interference have been documented in the literature, this is the first holistic attempt to survey not only the various techniques available for handling the self-interference that arises when a full duplex device is enabled, but also the other system impairments that significantly affect self-interference management, covering not only terrestrial systems but also satellite communication systems. The survey provides a taxonomy of self-interference management schemes and shows, by means of comparisons, the strengths and limitations of the various schemes. It also quantifies the amount of self-interference cancellation required for different access schemes from the 1st generation to the candidate 5th generation of mobile cellular systems. Importantly, the survey summarises the lessons learnt and identifies and presents open research questions and key research areas for the future. This paper is intended to be a guide and take-off point for further work on self-interference management towards full duplex transmission in mobile networks, including heterogeneous cellular networks, which are undeniably the networks of future wireless systems.
A multi-service system is an enabler for flexibly supporting diverse communication requirements in next generation wireless communications. In such a system, multiple types of services co-exist in one baseband system, with each service operating on its own frequency band using its optimal frame structure and a low out-of-band emission (OoBE) waveform to reduce the inter-service-band-interference (ISvcBI). In this article, a framework for multi-service systems is established and the associated challenges and possible solutions are studied. The multi-service system implementation in both the time and frequency domains is discussed. Two representative subband filtered multicarrier (SFMC) waveforms, filtered orthogonal frequency division multiplexing (F-OFDM) and universal filtered multi-carrier (UFMC), are considered in this article. Specifically, the design methodology, criteria, orthogonality conditions and prospective application scenarios in the context of 5G are discussed. We consider both single-rate (SR) and multi-rate (MR) signal processing methods. Compared with the SR system, the MR system has significantly reduced computational complexity at the expense of performance loss due to inter-subband-interference (ISubBI). The ISvcBI and ISubBI in MR systems are investigated, and low-complexity interference cancelation algorithms are proposed to enable multi-service operation under low interference levels.
Recent advancements in sensing and networking technologies, and in collecting real-world data on a large scale and from various environments, have created an opportunity for new forms of real-world services and applications, known under the umbrella term of the Internet of Things (IoT). Physical sensor devices constantly produce very large amounts of data, and methods are needed that give the raw sensor measurements a meaningful interpretation for building automated decision support systems. To extract actionable information from real-world data, we propose a method that uncovers hidden structures and relations between multiple IoT data streams. Our novel solution uses Latent Dirichlet Allocation (LDA), a topic extraction method that is generally used in text analysis. We apply LDA on meaningful abstractions that describe the numerical data in human understandable terms. We use Symbolic Aggregate approXimation (SAX) to convert the raw data into string-based patterns and create higher level abstractions based on rules. We finally investigate how heterogeneous sensory data from multiple sources can be processed and analysed to create near real-time intelligence, and how our proposed method provides an efficient way to interpret patterns in the data streams. The proposed method uncovers the correlations and associations between different patterns in IoT data streams. The evaluation results show that the proposed solution is able to identify these correlations with high efficiency, with an F-measure of up to 90%.
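The SAX conversion mentioned above can be sketched in a few lines. The following is a minimal, illustrative implementation, not the paper's exact pipeline: it assumes a four-symbol alphabet with the standard Gaussian breakpoints, and the segment count and input trace are made-up examples.

```python
import numpy as np

# Gaussian breakpoints for a 4-symbol alphabet (equiprobable regions of N(0,1)).
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])

def sax(series, n_segments=8, alphabet="abcd"):
    """Symbolic Aggregate approXimation: z-normalise, reduce with Piecewise
    Aggregate Approximation (PAA), then map each segment mean to a symbol."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)            # z-normalisation
    segments = np.array_split(x, n_segments)           # PAA segments
    means = np.array([seg.mean() for seg in segments])
    idx = np.searchsorted(BREAKPOINTS, means)          # symbol index per segment
    return "".join(alphabet[i] for i in idx)

# A steadily rising sensor trace maps to a monotone string.
print(sax(np.linspace(18.0, 26.0, 64)))  # -> aabbccdd
```

The resulting strings are what higher-level, rule-based abstractions (and ultimately LDA) operate on in place of raw numeric samples.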
We derive the uplink system model for In-band and Guard-band narrowband Internet of Things (NB-IoT). The results reveal that the actual channel frequency response (CFR) is not a simple Fourier transform of the channel impulse response, due to sampling rate mismatch between the NB-IoT user and Long Term Evolution (LTE) base station. Consequently, a new channel equalization algorithm is proposed based on the derived effective CFR. In addition, the interference is derived analytically to facilitate the co-existence of NB-IoT and LTE signals. This work provides an example and guidance to support network slicing and service multiplexing in the physical layer.
A statistical model is derived for the equivalent signal-to-noise ratio (SNR) of the Source-to-Relay-to-Destination (S-R-D) link for Amplify-and-Forward (AF) relaying systems subject to block Rayleigh fading. The probability density function and the cumulative distribution function of the S-R-D link SNR involve modified Bessel functions of the second kind. Using fractional-calculus mathematics, a novel approach is introduced to rewrite these Bessel functions (and the statistical model of the S-R-D link SNR) in series form using simple elementary functions. Moreover, a statistical characterization of the total receive SNR at the destination, comprising the S-R-D and S-D link SNRs, is provided for a more general relaying scenario in which the destination receives signals from both the relay and the source and processes them using maximum ratio combining (MRC). Using this novel statistical model for the total receive SNR at the destination, accurate and simple analytical expressions for the outage probability, the bit error probability, and the ergodic capacity are obtained. The analytical results presented in this paper provide a theoretical framework to analyze the performance of AF cooperative systems with an MRC receiver.
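For context, the equivalent S-R-D link SNR that such statistical models characterise takes the well-known form below for variable-gain AF relaying (a standard result; the paper's exact notation may differ):

```latex
% Equivalent end-to-end SNR of the S-R-D link (variable-gain AF relaying)
\gamma_{SRD} = \frac{\gamma_{SR}\,\gamma_{RD}}{\gamma_{SR} + \gamma_{RD} + 1},
\qquad
\gamma_{\mathrm{tot}} = \gamma_{SD} + \gamma_{SRD} \quad \text{(MRC at the destination)}
```

Under block Rayleigh fading the per-hop SNRs $\gamma_{SR}$ and $\gamma_{RD}$ are exponentially distributed, and the density of $\gamma_{SRD}$ then involves modified Bessel functions of the second kind $K_n$, which is exactly what the series representation in elementary functions targets.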
This paper presents a novel approach to load balancing in ad hoc networks that utilizes properties of quantum game theory. The approach benefits from the instantaneous, information-less capability of entangled particles to synchronize the load balancing strategies in ad hoc networks. The Quantum Load Balancing (QLB) algorithm proposed in this work is implemented on top of OLSR as the baseline routing protocol; its performance is analyzed against the baseline OLSR, and considerable gain is reported in some of the main QoS metrics such as delay and jitter. Furthermore, it is shown that the QLB algorithm delivers a solid stability gain in terms of throughput, which stands as a proof of concept for the load-balancing properties of the proposed theory.
In this letter, we analyse the trade-off between collision probability and code-ambiguity, when devices transmit a sequence of preambles as a codeword, instead of a single preamble, to reduce collision probability during random access to a mobile network. We point out that the network may not have sufficient resources to allocate to every possible codeword, and if it does, then this results in low utilisation of allocated uplink resources. We derive the optimal preamble set size that maximises the probability of success in a single attempt, for a given number of devices and uplink resources.
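The collision side of this trade-off can be illustrated with a toy model: if each of N devices independently draws a codeword of L preambles uniformly from a set of M, a tagged device collides with probability 1 − (1 − 1/M^L)^(N−1). The sketch below uses hypothetical parameter values and models only collisions, not the code-ambiguity or resource-utilisation aspects analysed in the letter:

```python
import random

random.seed(1)

def collision_prob(n_devices, set_size, codeword_len, trials=5000):
    """Monte Carlo estimate of the probability that a tagged device's codeword
    (codeword_len preambles drawn uniformly from set_size preambles) is also
    chosen by at least one of the other devices."""
    hits = 0
    for _ in range(trials):
        draw = lambda: tuple(random.randrange(set_size)
                             for _ in range(codeword_len))
        tagged = draw()
        others = {draw() for _ in range(n_devices - 1)}
        hits += tagged in others
    return hits / trials

# Analytically, P(collision) = 1 - (1 - 1/M**L)**(N-1) under these assumptions.
for m in (8, 16, 32):
    print(m, round(collision_prob(100, m, 2), 3))
```

Enlarging the preamble set size M drives the collision probability down, but, as the letter points out, the network must then reserve uplink resources for many more possible codewords, which is what makes the set size an optimisation variable.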
Channel reciprocity in time-division duplexing (TDD) massive MIMO (multiple-input multiple-output) systems can be exploited to reduce the overhead required for the acquisition of channel state information (CSI). However, perfect reciprocity is unrealistic in practical systems due to random radio-frequency (RF) circuit mismatches in uplink and downlink channels. This can result in a significant degradation in the performance of linear precoding schemes, which are sensitive to the accuracy of the CSI. In this paper, we model and analyse the impact of RF mismatches on the performance of linear precoding in a TDD multi-user massive MIMO system, taking the channel estimation error into consideration. We use the truncated Gaussian distribution to model the RF mismatch, and derive closed-form expressions of the output SINR (signal-to-interference-plus-noise ratio) for maximum ratio transmission and zero forcing precoders. We further investigate the asymptotic behaviour of the derived expressions to provide valuable insights for practical system design, including useful guidelines for selecting effective precoding schemes. Simulation results are presented to demonstrate the validity and accuracy of the proposed analytical results.
This paper investigates full duplex wireless-powered two-way communication networks, where two hybrid access points (HAPs) and a number of amplify-and-forward (AF) relays all operate in full duplex mode. We use time switching (TS) and static power splitting (SPS) schemes in two-way full duplex wireless-powered networks as a benchmark. New time division duplexing static power splitting (TDD SPS) and full duplex static power splitting (FDSPS) schemes, as well as a simple relay selection strategy, are then proposed to improve the system performance. For TS, SPS and FDSPS, the best relay harvests energy from the RF signal received from the HAPs and uses the harvested energy to transmit to each HAP at the same frequency and time; therefore only partial self-interference (SI) cancellation needs to be considered in the FDSPS case. For the proposed TDD SPS, the best relay harvests energy from the HAP and from its own self-interference. We then derive closed-form expressions for the throughput and outage probability for delay-limited transmissions over Rayleigh fading channels. Simulation results are presented to evaluate the effectiveness of the proposed schemes under different key system parameters, such as time allocation, power splitting ratio and residual SI.
Flexibly supporting multiple services, each with different communication requirements and frame structure, has been identified as one of the most significant and promising characteristics of next generation and beyond wireless communication systems. However, integrating multiple frame structures with different subcarrier spacings in one radio carrier may result in significant inter-service-band-interference (ISBI). In this paper, a framework for multi-service (MS) systems is established based on the subband filtered multi-carrier system. The subband filtering implementations and both asynchronous and generalized synchronous (GS) MS subband filtered multi-carrier (SFMC) systems are proposed. Based on the GS-MS-SFMC system, the system model with ISBI is derived and a number of properties of ISBI are given. In addition, low-complexity ISBI cancelation algorithms are proposed by precoding the information symbols at the transmitter. For the asynchronous MS-SFMC system in the presence of transceiver imperfections including carrier frequency offset, timing offset and phase noise, a complete analytical system model is established in terms of desired signal, inter-symbol-interference, inter-carrier-interference, ISBI and noise. Thereafter, new channel equalization algorithms are proposed by considering these errors and imperfections. Numerical analysis shows that the analytical results match the simulation results, and the proposed ISBI cancelation and equalization algorithms can significantly improve the system performance in comparison with existing algorithms.
Network-enabled sensing and actuation devices are key enablers for connecting real-world objects to the cyber world. The Internet of Things (IoT) consists of the network-enabled devices and communication technologies that allow connectivity and integration of physical objects (Things) into the digital world (Internet). Enormous amounts of dynamic IoT data are collected from Internet-connected devices. IoT data usually consists of multi-variate streams that are heterogeneous, sporadic, multi-modal and spatio-temporal, can be disseminated with different granularities, and can have diverse structures, types and qualities. Dealing with the data deluge from heterogeneous IoT resources and services imposes new challenges on the indexing, discovery and ranking mechanisms that will allow building applications requiring on-line access and retrieval of ad-hoc IoT data. However, the existing IoT data indexing and discovery approaches are complex or centralised, which hinders their scalability. The primary objective of this paper is to provide a holistic overview of the state-of-the-art on indexing, discovery and ranking of IoT data. The paper aims to pave the way for researchers to design, develop, implement and evaluate techniques and approaches for on-line large-scale distributed IoT applications and services.
Energy efficiency (EE) is a key design criterion for the next generation of communication systems. Equally, cooperative communication is known to be very effective for enhancing the performance of such systems. This paper proposes a breakthrough approach for maximizing the EE of multiple-input multiple-output (MIMO) relay-based nonregenerative cooperative communication systems by optimizing both the source and relay precoders when both relay and direct links are considered. We prove that the corresponding optimization problem is at least strictly pseudo-convex, i.e. it has a unique solution, when the relay precoding matrix is known, and that its Lagrangian can be lower and upper bounded by strictly pseudo-convex functions when the source precoding matrix is known. Accordingly, we then derive EE-optimal source and relay precoding matrices that are jointly optimized through alternating optimization. We also provide a low-complexity alternative to the EE-optimal relay precoding matrix that exhibits close to optimal performance, but with a significantly reduced complexity. Simulation results show that our joint source and relay precoding optimization can improve the EE of MIMO-AF systems by up to 50% when compared to direct/relay link only precoding optimization.
This paper proposes a low-complexity hybrid beamforming design for multi-antenna communication systems. The hybrid beamformer comprises a baseband digital beamformer and a constant modulus analog beamformer in the radio frequency (RF) part of the system. As in Singular-Value-Decomposition (SVD) based beamforming, the hybrid beamforming design aims to generate parallel data streams in multi-antenna systems; however, due to the constant modulus constraint on the analog beamformer, the problem cannot be solved in the same way. To address this, mathematical expressions for the parallel data streams are derived in this paper, and the desired and interfering signals are specified per stream. The analog beamformers are designed by maximizing the power of the desired signal while minimizing the sum-power of the interfering signals. Finally, the digital beamformers are derived by defining the equivalent channel observed by the transmitter/receiver. Regardless of the number of antennas or the type of channel, the proposed approach can be applied to a wide range of MIMO systems with a hybrid structure wherein the number of antennas exceeds the number of RF chains. In particular, the proposed algorithm is verified for sparse channels that emulate mm-wave transmission as well as for rich scattering environments. To validate its optimality, the results are compared with those of the state-of-the-art, and it is demonstrated that the proposed method outperforms state-of-the-art techniques regardless of the type of channel and/or system configuration.
The filtered orthogonal frequency division multiplexing (F-OFDM) system is a promising waveform for 5G and beyond to enable multi-service systems and spectrum efficient network slicing. However, the performance of F-OFDM systems has not been systematically analyzed in the literature. In this paper, we first establish a mathematical model for the F-OFDM system and derive the conditions to achieve interference-free one-tap channel equalization. For practical cases (e.g., insufficient guard interval, asynchronous transmission, etc.), the analytical expressions for inter-symbol-interference (ISI), inter-carrier-interference (ICI) and adjacent-carrier-interference (ACI) are derived, where the last term is considered one of the key factors for asynchronous transmissions. Based on this framework, an optimal power compensation matrix is derived so that all subcarriers have the same ergodic performance. Another key contribution of the paper is a proposed multi-rate F-OFDM system that enables low-complexity, low-cost communication scenarios such as narrowband Internet of Things (IoT), at the cost of generating inter-subband-interference (ISubBI). Low computational complexity algorithms are proposed to cancel the ISubBI. The results show that the derived analytical expressions match the simulation results, and the proposed ISubBI cancelation algorithms can reduce the complexity of the original F-OFDM by up to 100 times without significant performance loss.
Frequent handovers (HOs) in dense small cell deployment scenarios could lead to a dramatic increase in signalling overhead. This suggests a paradigm shift towards a signalling conscious cellular architecture with intelligent mobility management. In this direction, a futuristic radio access network with a logical separation between control and data planes has been proposed in the research community. It aims to overcome the limitations of the conventional architecture by providing high data rate services under the umbrella of a coverage layer in a dual connection mode. This approach enables signalling efficient HO procedures, since the control plane remains unchanged when users move within the footprint of the same umbrella. Considering this configuration, we propose a core-network efficient radio resource control (RRC) signalling scheme for active state HO and develop an analytical framework to evaluate its signalling load as a function of network density, user mobility and session characteristics. In addition, we propose an intelligent HO prediction scheme with advance resource preparation in order to minimise the HO signalling latency. Numerical and simulation results show promising gains in terms of reduction in HO latency and signalling load as compared with conventional approaches.
It has been claimed that filter bank multicarrier (FBMC) systems suffer only negligible performance loss from moderately dispersive channels in the absence of guard time protection between symbols. However, a theoretical and systematic explanation/analysis of this statement is missing in the literature to date. In this paper, based on one-tap minimum mean square error (MMSE) and zero-forcing (ZF) channel equalization, the impact of doubly dispersive channels on the performance of FBMC systems is analyzed in terms of the mean square error (MSE) of received symbols. Based on this analytical framework, we prove that the circular convolution property between symbols and the corresponding channel coefficients in the frequency domain holds only loosely, with a set of inaccuracies. To facilitate the analysis, we first model the FBMC system in vector/matrix form and derive the estimated symbols as a sum of desired signal, noise, inter-symbol interference (ISI), inter-carrier interference (ICI), inter-block interference (IBI) and estimation bias in the MMSE equalizer. These terms are derived one by one and expressed as functions of channel parameters. The numerical results reveal that in harsh channel conditions, e.g., with large Doppler spread or channel delay spread, the FBMC system performance may be severely deteriorated and an error floor will occur.
Energy efficiency (EE) is a key enabler for the next generation of communication systems. Equally, resource allocation and cooperative communication are effective techniques for improving communication system performance. In this paper, we propose an optimal energy-efficient joint resource allocation method for the multi-hop multiple-input-multiple-output (MIMO) amplify-and-forward (AF) system. We define the joint source and multiple relays optimization problem and prove that its objective function, which is not generally quasiconvex, can be lower-bounded by a convex function. Moreover, all the minima of this objective function are strict minima. Based on these two properties, we then simplify the original multivariate optimization problem into a single variable problem and design a novel approach for optimally solving it in both the unconstrained and power-constrained cases. In addition, we provide a sub-optimal approach with reduced complexity; the latter reduces the computational complexity by a factor of up to 40 with near-optimal performance. We finally utilize our novel approach to compare the optimal energy-per-bit consumption of multi-hop MIMO-AF and MIMO systems; the results indicate that MIMO-AF can help to save energy when the direct link quality is poor.
The multiuser selection scheduling concept has recently been proposed in the literature in order to increase the multiuser diversity gain and overcome the significant feedback requirements of opportunistic scheduling schemes. The main idea is that reducing the feedback overhead saves per-user power that can potentially be added to the data transmission. In this work, we propose to integrate the principle of multiuser selection with the proportional fair scheduling scheme, aimed especially at power-limited, multi-device systems in non-identically distributed fading channels. For the performance analysis, we derive closed-form expressions for the outage probability and the average system rate of the delay-sensitive and the delay-tolerant systems, respectively, and compare them with full feedback multiuser diversity schemes. The discrete rate region is analytically presented, where the maximum average system rate can be obtained by properly choosing the number of partial devices. We jointly optimize the number of partial devices and the per-device power saving in order to maximize the average system rate under the power requirement. Through our results, we finally demonstrate that the proposed scheme, which leverages the saved feedback power for the data transmission, can outperform full feedback multiuser diversity under non-identical Rayleigh fading of the devices' channels.
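A threshold-based variant of this idea can be sketched as follows: each device feeds back only when its normalised rate exceeds a threshold, and the scheduler serves the best reporter. This is an illustrative toy, not the paper's exact scheme (which additionally reuses the saved feedback power for data transmission); the function name, threshold rule and example values are assumptions.

```python
import random

def pf_schedule_threshold(rates, avg_rates, threshold):
    """Proportional fair scheduling with selective feedback: a device reports
    CSI only when its normalised rate r_i / R_i exceeds the threshold (saving
    feedback power); the scheduler serves the best reporter, or falls back to
    a random device when nobody reports."""
    reporters = [(r / R, i)
                 for i, (r, R) in enumerate(zip(rates, avg_rates))
                 if r / R >= threshold]
    if reporters:
        return max(reporters)[1]
    return random.randrange(len(rates))

# Device 0 has normalised rate 2.0, the only one above the threshold.
print(pf_schedule_threshold([4.0, 1.0, 2.0], [2.0, 2.0, 2.0], 1.5))  # -> 0
```

Raising the threshold shrinks the expected number of reporting devices (saving feedback power) at the cost of occasionally missing the instantaneously best device, which is precisely the trade-off the closed-form rate expressions quantify.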
This paper investigates self-backhauling with dual antenna selection at multiple small cell base stations. Both half and full duplex transmissions at the small cell base station are considered. Depending on instantaneous channel conditions, the full duplex transmission can have higher throughput than the half duplex transmission, but this is not always the case. Closed-form expressions for the average throughput are obtained and validated by simulation results. In all cases, the dual receive and transmit antenna selection significantly improves backhaul and data transmission, making it an attractive solution in practical systems.
5G definition and standardization projects are well underway, and governing characteristics and major challenges have been identified. A critical network element impacting the potential performance of 5G networks is the backhaul, which is expected to expand in length and breadth to cater to the exponential growth of small cells while offering high throughput in the order of Gbps and less than one millisecond latency with high resilience and energy efficiency. Such performance may only be possible with direct optical fibre connections, which are often not available countrywide and are cumbersome and expensive to deploy. On the other hand, a prime 5G characteristic is diversity, which describes the radio access network, the backhaul, and also the types of user applications and devices. Thus, we propose a novel, distributed, self-optimized, end-to-end user-cell-backhaul association scheme that intelligently associates users with potential cells based on corresponding dynamic radio and backhaul conditions while abiding by users' requirements. Radio cells broadcast multiple bias factors, each reflecting a dynamic performance indicator (DPI) of the end-to-end network performance such as capacity, latency, resilience, and energy consumption. A given user employs these factors to derive a user-centric cell ranking that motivates it to select the cell whose radio and backhaul performance conforms to the user's requirements. Reinforcement learning is used at the radio cell to optimize the bias factors for each DPI in a way that maximizes the system throughput while minimizing the gap between the users' achievable and required end-to-end quality of experience (QoE). Preliminary results show considerable improvement in users' QoE and cumulative system throughput when compared to state-of-the-art user-cell association schemes.
One major advantage of the cloud/centralized radio access network (C-RAN) is the ease of implementing multicell coordination mechanisms to improve the system spectrum efficiency (SE). Theoretically, a large number of cooperative cells leads to a higher SE; however, it may also cause significant delay due to extra channel state information (CSI) feedback and joint processing computational needs at the cloud data center, which is likely to result in performance degradation. In order to investigate the delay impact on the throughput gains, we divide the network into multiple clusters of cooperative small cells and formulate a throughput optimization problem. We model various delay factors and the sum-rate of the network as functions of the cluster size, treating it as the main optimization variable. For our analysis, we consider both the base stations' and the users' geometric locations as random variables, for both linear and planar network deployments. The output SINR (signal-to-interference-plus-noise ratio) and ergodic sum-rate are derived based on the homogeneous Poisson point process (PPP) model. The sum-rate optimization problem in terms of the cluster size is formulated and solved. Simulation results show that the proposed analytical framework can be utilized to accurately evaluate the performance of practical cloud-based small cell networks employing clustered cooperation.
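PPP-based SINR analysis of this kind can be sanity-checked numerically. The sketch below is a generic single-tier downlink example, not the paper's clustered model: it assumes nearest-BS association, Rayleigh fading, a path-loss exponent of 4 and made-up density and noise values. For this setting the known interference-limited coverage probability at a 0 dB SINR threshold is 1/(1 + π/4), roughly 0.56.

```python
import numpy as np

rng = np.random.default_rng(0)

def downlink_sinr_samples(density=1e-5, radius=2000.0, alpha=4.0,
                          noise=1e-13, n_drops=2000):
    """Monte Carlo sketch: SINR at a typical user placed at the origin, with
    base stations drawn from a homogeneous PPP on a disc, nearest-BS
    association, Rayleigh fading and path-loss exponent alpha."""
    sinrs = []
    for _ in range(n_drops):
        n_bs = rng.poisson(density * np.pi * radius ** 2)
        if n_bs == 0:
            continue
        r = radius * np.sqrt(rng.random(n_bs))   # uniform distances on the disc
        h = rng.exponential(1.0, n_bs)           # Rayleigh power fading
        p_rx = h * r ** (-alpha)                 # unit transmit power
        serving = np.argmin(r)                   # nearest BS serves the user
        interference = p_rx.sum() - p_rx[serving]
        sinrs.append(p_rx[serving] / (interference + noise))
    return np.array(sinrs)

samples = downlink_sinr_samples()
print("coverage P(SINR > 1):", round((samples > 1).mean(), 3))
```

Extending such a simulator with clusters of cooperating cells and delay-dependent rate penalties is the kind of evaluation the paper's analytical framework replaces with closed-form expressions.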
The Internet-of-Things (IoT) paradigm envisions billions of devices all connected to the Internet, generating low-rate monitoring and measurement data to be delivered to application servers or end-users. Recently, the possibility of applying in-network data caching techniques to IoT traffic flows has been discussed in research forums. The main challenge, as opposed to the content typically cached at routers, e.g. multimedia files, is that IoT data are transient and therefore require different caching policies. In fact, emerging location-based services can also benefit from new caching techniques that are specifically designed for small transient data. This paper studies in-network caching of transient data at content routers, considering a key temporal data property: the data item lifetime. An analytical model that captures the trade-off between multihop communication costs and data item freshness is proposed. Simulation results demonstrate that caching transient data is a promising information-centric networking technique that can reduce the distance between content requesters and the location in the network where the content is fetched from. To the best of our knowledge, this is a pioneering research work aiming to systematically analyse the feasibility and benefit of using Internet routers to cache transient data generated by IoT applications.
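The core idea of lifetime-aware caching can be sketched with a toy content-router cache in which every item carries a lifetime and stale items are never served. The class, eviction rule and item names below are illustrative assumptions, not the paper's analytical model:

```python
import time

class TransientCache:
    """Toy content-router cache for transient IoT data: every item carries a
    lifetime and stale items are never served, so freshness takes priority
    over hit rate. Eviction removes the item closest to expiry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}                              # name -> (value, expiry)

    def put(self, name, value, lifetime, now=None):
        now = time.time() if now is None else now
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda k: self.store[k][1])
            del self.store[victim]                   # least residual freshness
        self.store[name] = (value, now + lifetime)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(name)
        if entry is None or entry[1] <= now:         # miss, or item went stale
            self.store.pop(name, None)
            return None
        return entry[0]

cache = TransientCache(capacity=2)
cache.put("temp/room1", 21.5, lifetime=10, now=0)
print(cache.get("temp/room1", now=5))    # fresh -> 21.5
print(cache.get("temp/room1", now=15))   # stale -> None
```

The trade-off the paper analyses appears here directly: a longer lifetime raises the hit rate (fewer multihop fetches) but risks serving outdated measurements.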
In this paper, using stochastic geometry, we investigate the average energy efficiency (AEE) of the user terminal (UT) in the uplink of a two-tier heterogeneous network (HetNet), where the two tiers operate on separate carrier frequencies. In such a deployment, a typical UT must periodically perform the inter-frequency small cell discovery (ISCD) process in order to discover small cells in its neighborhood and benefit from the high data rate and traffic offloading opportunity that small cells present. We assume that the base stations (BSs) of each tier and the UTs are randomly located, and we derive the average ergodic rate and UT power consumption, which are later used for our AEE evaluation. The AEE incorporates the percentage of time a typical UT misses the small cell offloading opportunity as a result of the periodicity of the ISCD process. The additional power consumed by the UT due to ISCD measurements is also included. Moreover, we derive the optimal ISCD periodicity based on the UT's average energy consumption (AEC) and AEE. Our results reveal that the ISCD periodicity must be selected with the objective of either minimizing the UT's AEC or maximizing the UT's AEE.
Information-centric networking (ICN) is an emerging networking paradigm that places content identifiers rather than host identifiers at the core of the mechanisms and protocols used to deliver content to end-users. Such a paradigm allows routers enhanced with content-awareness to play a direct role in the routing and resolution of content requests from users, without any knowledge of the specific locations of hosted content. However, to facilitate good network traffic engineering and satisfactory user QoS, content routers need to exchange advanced network knowledge to assist them with their resolution decisions. In order to maintain the location-independency tenet of ICNs, such knowledge (known as context information) needs to be independent of the locations of servers. To this end, we propose CAINE (Context-Aware Information-centric Network Ecosystem), which enables context-based operations to be intrinsically supported by the underlying ICN routing and resolution functions. Our approach has been designed to maintain the location-independence philosophy of ICNs by associating context information directly with content rather than with physical entities such as servers and network elements in the content ecosystem, while ensuring scalability. Through simulation, we show that based on such location-independent context information, CAINE is able to facilitate traffic engineering in the network, while not posing a significant control signalling burden on the network.
Most of the existing distributed beamforming algorithms for relay networks require global channel state information (CSI) at the relay nodes, and their overall computational complexity is high. In this paper, a new class of adaptive algorithms is proposed which can achieve a globally optimum solution by employing only local CSI. A reference signal based (RSB) scheme is first derived, followed by a constant modulus (CM) based scheme for when the reference signal is not available. Considering an individual transmit power constraint at each relay node, the corresponding constrained adaptive algorithms are also derived as an extension. An analysis of the overhead and step-size range for the derived algorithms is then provided, and the excess mean square error (EMSE) for the RSB case is studied based on the energy reservation method. As demonstrated by our simulation results, the proposed algorithms achieve better performance with very low computational complexity, and can be implemented on low-cost, low-processing-power devices.
In this paper, we consider multi-relay cooperative networks for the Rayleigh fading channel, where each relay, upon receiving its own channel observation, independently compresses it and forwards the compressed information to the destination. Although the compression at each relay is distributed using Wyner-Ziv coding, there exists an opportunity for jointly optimizing compression at multiple relays to maximize the achievable rate. Considering Gaussian signalling, a primal optimization problem is formulated accordingly. We prove that the primal problem can be solved by resorting to its Lagrangian dual problem and an iterative optimization algorithm is proposed. The analysis is further extended to a hybrid scheme, where the employed forwarding scheme depends on the decoding status of each relay. The relays that are capable of successful decoding perform decode-and-forward and the rest conduct distributed compression. The hybrid scheme allows the cooperative network to adapt to the changes of the channel conditions and benefit from an enhanced level of flexibility. Numerical results from both spectrum and energy efficiency perspectives show that the joint optimization improves efficiency of compression and identify the scenarios where the proposed schemes outperform the conventional forwarding schemes. The findings provide important insights into the optimal deployment of relays in a realistic cellular network.
This paper addresses the problem of joint backhaul and access link optimization in dense small cell networks, with special focus on the time division duplexing (TDD) mode of operation for backhaul and access link transmission. Here, we propose a framework for joint radio resource management where we systematically decompose the problem into backhaul and access links. To simplify the analysis, the procedure is tackled in two stages. At the first stage, the joint optimization problem is formulated for a point-to-point scenario where each small cell is associated with a single user. It is shown that the optimization can be decomposed into separate power and subchannel allocation in both backhaul and access links, where a set of rate-balancing parameters, in conjunction with the duration of transmission, governs the coupling across the two links. Moreover, a novel algorithm based on grouping the cells is proposed to achieve rate-balancing across different small cells. At the second stage, the problem is generalized to multi-access small cells, where each small cell is associated with multiple users. The optimization is similarly decomposed into separate sub-channel and power allocation by employing auxiliary slicing variables. It is shown that algorithms similar to those of the first stage remain applicable, with slight changes, with the aid of the slicing variables. Additionally, for the special case of line-of-sight backhaul links, simplified expressions for sub-channel and power allocation are presented. The developed concepts are evaluated by extensive simulations in different case studies, from full orthogonalization to dynamic clustering and full reuse in the downlink, and it is shown that the proposed framework provides significant improvement over the benchmark cases.
This paper examines the uplink of cellular systems employing base station cooperation for joint signal processing. We consider clustered cooperation and investigate effective techniques for managing inter-cluster interference to improve users' performance in terms of both spectral and energy efficiency. We use information theoretic analysis to establish general closed form expressions for the system achievable sum rate and the users' Bit-per-Joule capacity while adopting a realistic user device power consumption model. Two main inter-cluster interference management approaches are identified and studied, i.e., through: 1) spectrum re-use; and 2) users' power control. For the former case, we show that isolating clusters by orthogonal resource allocation is the best strategy. For the latter case, we introduce a mathematically tractable user power control scheme and observe that a green opportunistic transmission strategy can significantly reduce the adverse effects of inter-cluster interference while exploiting the benefits from cooperation. To compare the different approaches in the context of real-world systems and evaluate the effect of key design parameters on the users' energy-spectral efficiency relationship, we fit the analytical expressions into a practical macrocell scenario. Our results demonstrate that significant improvement in terms of both energy and spectral efficiency can be achieved by energy-aware interference management.
This paper proposes a novel graph-based multicell scheduling framework to efficiently mitigate downlink intercell interference in OFDMA-based small cell networks. We define a graph-based optimization framework based on the interference conditions between any two users in the network, assuming they are served on the same resources. Furthermore, we prove that the proposed framework obtains a tight lower bound for the conventional weighted sum-rate maximization problem in practical scenarios. Thereafter, we decompose the optimization problem into dynamic graph-partitioning-based subproblems across different subchannels and provide an optimal solution using a branch-and-cut approach. Subsequently, due to the high complexity of this solution, we propose heuristic algorithms that display near-optimal performance. At the final stage, we apply cluster-based resource allocation per subchannel to find the candidate users with the maximum total weighted sum-rate. A case study on networked small cells is also presented, with simulation results showing a significant improvement over state-of-the-art multicell scheduling benchmarks in terms of outage probability as well as average cell throughput.
In this paper, we consider the radio resource allocation problem for the uplink of an OFDMA system. The existing algorithms have been derived under the assumption of Gaussian inputs, owing to the closed-form expression of mutual information that this assumption affords. For the sake of practicality, we consider a system with Finite Symbol Alphabet (FSA) inputs, and solve the problem by capitalizing on the recently revealed relationship between mutual information and Minimum Mean-Square Error (MMSE). We first relax the problem to formulate it as a convex optimization problem, and then derive the optimal solution via decomposition methods. The optimal solution serves as an upper bound on the system performance. Due to the complexity of the optimal solution, a low-complexity suboptimal algorithm is proposed. Numerical results show that the presented suboptimal algorithm achieves performance very close to the optimal solution and outperforms the existing suboptimal algorithms. Furthermore, using our proposed algorithm, significant power savings can be achieved in comparison to the case when Gaussian inputs are assumed.
This letter proposes a novel graph-based multi-cell scheduling framework to efficiently mitigate downlink inter-cell interference in small cell OFDMA networks. This framework incorporates dynamic clustering combined with channel-aware resource allocation to provide tunable quality of service measures at different levels. Our extensive evaluation study shows that a significant improvement in user's spectral efficiency is achievable, while also maintaining relatively high cell spectral efficiency via empirical tuning of re-use factor across the cells according to the required QoS constraints.
Motivated by the increased interest in energy-efficient communication systems, the relation between energy efficiency (EE) and spectral efficiency (SE) for multiple-input multiple-output (MIMO) systems is investigated in this paper. To provide insight into the design of practical MIMO systems, we adopt a realistic power model and consider both independent Rayleigh fading and semicorrelated fading channels. We derive a novel closed-form upper bound for the system EE as a function of SE. This upper bound exhibits great accuracy for a wide range of SE values, and thus can be utilized for explicitly assessing the influence of SE on EE and for analytically addressing EE optimization problems. Using this tight EE upper bound, our analysis unfolds two EE optimization issues: given the number of transmit and receive antennas, an optimum value of SE is derived such that the overall EE can be maximized; given a specific value of SE, the optimal number of antennas is derived for maximizing the system EE.
Energy savings are becoming a global trend, hence the importance of energy efficiency (EE) as an alternative performance evaluation metric. This paper proposes an EE based resource allocation method for the broadcast channel (BC), where a linear power model is used to characterize the power consumed at the base station (BS). Having formulated our EE based optimization problem and objective function, we utilize standard convex optimization techniques to show the concavity of the latter, and thus, the existence of a unique globally optimal energy-efficient rate and power allocation. Our EE based resource allocation framework is also extended to incorporate fairness, and provide a minimum user satisfaction in terms of spectral efficiency (SE). We then derive the generic equation of the EE contours and use them to get insights about the EE-SE trade-off over the BC. The performances of the aforementioned resource allocation schemes are compared for different metrics against the number of users and cell radius. Results indicate that the highest EE improvement is achieved by using the unconstrained optimization scheme, which is obtained by significantly reducing the total transmit power. Moreover, the network EE is shown to increase with the number of users and decrease as the cell radius increases.
This letter presents a new posterior Cramér-Rao bound (PCRB) for inertial sensors enhanced mobile positioning, which performs hybrid data fusion of parameters including position estimates, pedestrian step size, pedestrian heading, and the knowledge of random walk motion model. Moreover, a non-matrix closed form of the PCRB is derived without position estimates. Finally, our numerical results show that when the accuracy of step size and heading measurements is high enough, the knowledge of random walk model becomes redundant.
This work addresses joint transceiver optimization for multiple-input, multiple-output (MIMO) systems. In practical systems, complete knowledge of channel state information (CSI) is rarely available at the transmitter. To tackle this problem, we resort to the codebook approach to precoding design, where the receiver selects a precoding matrix from a finite set of pre-defined precoding matrices based on the instantaneous channel condition and delivers the index of the chosen precoding matrix to the transmitter via a bandwidth-constrained feedback channel. We show that, when the symbol constellation is improper, the joint codebook-based precoding and equalization can be designed accordingly to achieve improved performance compared to the conventional system.
Besides the well-established spectral efficiency (SE), energy efficiency (EE) is currently becoming an important performance evaluation metric, which in turn makes the EE-SE trade-off a prominent criterion for efficiently designing future communication systems. In this letter, we propose a very tight closed-form approximation (CFA) of this trade-off over the single-input single-output (SISO) Rayleigh flat fading channel. We first derive an improved approximation of the SISO ergodic capacity by means of a parametric function and then utilize it to obtain our novel EE-SE trade-off CFA, which is also generalized to the symmetric multi-input multi-output channel. We compare our CFA with existing CFAs and show its improved accuracy over them.
Along with spectral efficiency (SE), energy efficiency (EE) is becoming one of the key performance evaluation criteria for communication systems. These two conflicting criteria can be linked through their trade-off. The EE-SE trade-off for the multi-input multi-output (MIMO) Rayleigh fading channel has been accurately approximated in the past, but only in the low-SE regime. In this paper, we propose a novel and more generic closed-form approximation of this trade-off which exhibits greater accuracy for a wider range of SE values and antenna configurations. Our expression is utilized here for analytically assessing the EE gain of MIMO over the single-input single-output (SISO) system for two different types of power consumption models (PCMs): the theoretical PCM, where only the transmit power is considered as consumed power; and a more realistic PCM accounting for the fixed consumed power and amplifier inefficiency. Our analysis reveals a large mismatch between the theoretical and practical MIMO vs. SISO EE gains: in theory, the EE gain increases both with the SE and with the number of antennas, which indicates that MIMO is a promising EE enabler; whereas it remains small and decreases with the number of transmit antennas when a realistic PCM is considered.
In cognitive radio networks, the licensed frequency bands of the primary users (PUs) are available to the secondary user (SU) provided that it does not cause significant interference to the PUs. In this study, the authors analysed the normalised throughput of the SU with multiple PUs coexisting under any frequency division multiple access communication protocol. The authors consider a cognitive radio transmission where the frame structure consists of sensing and data transmission slots. In order to achieve the maximum normalised throughput of the SU and control the interference level to the licensed PUs, the optimal frame length of the SU is found via simulation. In this context, a new analytical formula has been derived for the achievable normalised throughput of the SU with multiple PUs under perfect and imperfect spectrum sensing scenarios. Moreover, the impact of imperfect sensing, variable SU frame length and variable PU traffic loads on the normalised throughput has been critically investigated. It has been shown that the analytical and simulation results are in perfect agreement. The authors' analytical results are very useful for determining how to select the frame duration subject to the parameters of the cognitive radio network, such as network traffic load, achievable sensing accuracy and number of coexisting PUs.
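The sense-then-transmit frame structure described above admits a simple back-of-the-envelope throughput expression. The sketch below is a common textbook simplification, not necessarily the authors' exact formula: the SU transmits for the remaining T − τ fraction of the frame, and useful throughput accrues only when the channel is actually idle and the sensor raises no false alarm.

```python
def normalised_throughput(frame_len, sensing_len, p_idle, p_false_alarm):
    """Normalised SU throughput for a sense-then-transmit frame (sketch).

    frame_len:      total frame duration T
    sensing_len:    sensing slot duration tau (tau < T)
    p_idle:         probability that the PU band is idle
    p_false_alarm:  probability the sensor falsely declares the band busy
    """
    transmit_fraction = (frame_len - sensing_len) / frame_len
    # The SU only gains throughput when the band is idle and correctly sensed.
    return transmit_fraction * p_idle * (1.0 - p_false_alarm)
```

The expression makes the frame-length trade-off visible: a longer sensing slot lowers the false-alarm rate in practice but shrinks the transmit fraction (T − τ)/T, which is exactly why an optimal frame length exists.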
In this paper we present a novel framework for spectral efficiency enhancement on the access link between relay stations and their donor base station through Self-Organization (SO) of system-wide BS antenna tilts. The underlying idea of the framework is inspired by SO in biological systems. The proposed solution can improve the spectral efficiency by up to 1 bps/Hz.
Energy efficiency has become an important aspect of wireless communication, both economically and environmentally. This letter investigates the energy efficiency of downlink AWGN channels by employing multiple decoding policies. The overall energy efficiency of the system is based on the bits-per-joule metric, where energy efficiency contours are used to locate the optimal operating points based on the system requirements. Our novel approach uses a linear power model to define the total power consumed at the base station, encompassing the circuit and processing power, and amplifier efficiency, and ensures that the best energy efficiency value can be achieved whilst satisfying other system targets such as QoS and rate-fairness.
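The bits-per-joule metric under a linear power model can be sketched in a few lines (variable names are illustrative; the letter's contour analysis and decoding policies go further): the achievable rate is divided by the total consumed power, where the latter combines circuit/processing power with the transmit power scaled by amplifier efficiency.

```python
import math


def energy_efficiency(bandwidth_hz, snr_linear, p_tx_w, p_circuit_w, amp_eff):
    """Bits-per-joule under a linear BS power model (illustrative sketch).

    bandwidth_hz: channel bandwidth in Hz
    snr_linear:   received SNR (linear, not dB)
    p_tx_w:       radiated transmit power in watts
    p_circuit_w:  circuit and processing power in watts
    amp_eff:      power amplifier efficiency in (0, 1]
    """
    rate_bps = bandwidth_hz * math.log2(1.0 + snr_linear)  # Shannon rate, AWGN
    p_total_w = p_circuit_w + p_tx_w / amp_eff             # linear power model
    return rate_bps / p_total_w                            # bits per joule
```

Sweeping `p_tx_w` for fixed circuit power traces out one energy-efficiency contour: the fixed term makes very low transmit powers inefficient, while amplifier losses penalise very high ones, so an interior optimum exists.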
Hot spots in a wireless sensor network emerge as locations under heavy traffic load. Nodes in such areas quickly deplete energy resources, leading to disruption in network services. This problem is common for data collection scenarios in which Cluster Heads (CH) have a heavy burden of gathering and relaying information. The relay load on CHs especially intensifies as the distance to the sink decreases. To balance the traffic load and the energy consumption in the network, the CH role should be rotated among all nodes and the cluster sizes should be carefully determined at different parts of the network. This paper proposes a distributed clustering algorithm, Energy-efficient Clustering (EC), that determines suitable cluster sizes depending on the hop distance to the data sink, while achieving approximate equalization of node lifetimes and reduced energy consumption levels. We additionally propose a simple energy-efficient multihop data collection protocol to evaluate the effectiveness of EC and calculate the end-to-end energy consumption of this protocol; yet EC is suitable for any data collection protocol that focuses on energy conservation. Performance results demonstrate that EC extends network lifetime and achieves energy equalization more effectively than two well-known clustering algorithms, HEED and UCR.
In this paper, we propose novel Hybrid Automatic Repeat reQuest (HARQ) strategies used in conjunction with hybrid relaying schemes, named H^2-ARQ-Relaying. The strategies allow the relay to dynamically switch between amplify-and-forward/compress-and-forward and decode-and-forward schemes according to its decoding status. The performance analysis is conducted from both the spectrum and energy efficiency perspectives. The spectrum efficiency of the proposed strategies, in terms of the maximum throughput, is significantly improved compared with their non-hybrid counterparts under the same constraints. The consumed energy per bit is optimized by manipulating the node activation time, the transmission energy and the power allocation between the source and the relay. The circuitry energy consumption of all involved nodes is taken into consideration. Numerical results shed light on how and when the energy efficiency can be improved in cooperative HARQ. For instance, cooperative HARQ is shown to be energy efficient in long distance transmission only. Furthermore, we consider the fact that the compress-and-forward scheme requires instantaneous signal-to-noise ratios of all three constituent links. However, this requirement can be impractical in some cases. In this regard, we introduce an improved strategy where only partial and affordable channel state information feedback is needed.
This letter addresses energy-efficient design in multi-user, single-carrier uplink channels by employing multiple decoding policies. The comparison metric used in this study is based on average energy efficiency contours, where an optimal rate vector is obtained based on four system targets: Maximum energy efficiency, a trade-off between maximum energy efficiency and rate fairness, achieving energy efficiency target with maximum sum-rate and achieving energy efficiency target with fairness. The transmit power function is approximated using Taylor series expansion, with simulation results demonstrating the achievability of the optimal rate vector, and negligible performance difference in employing this approximation.
The mutual information (MI) of a multiple-input multiple-output (MIMO) system over the Rayleigh fading channel is known to asymptotically follow a normal probability distribution. In this paper, we first prove that the MI of a distributed MIMO (DMIMO) system is also asymptotically equivalent to a Gaussian random variable (RV) by deriving its moment generating function (MGF) and showing its equivalence with the MGF of a Gaussian RV. We then derive an accurate closed-form approximation of the outage probability for the DMIMO system by using the mean and variance of the MI, and show the uniqueness of its formulation. Finally, several applications of our analysis are presented.
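Once the mean and variance of the MI are available, the Gaussian approximation turns the outage probability into a single standard-normal CDF evaluation; a minimal sketch, assuming the target rate and the MI moments are given (the paper's closed forms for the moments themselves are not reproduced here):

```python
import math


def outage_probability(rate_target, mi_mean, mi_std):
    """Outage probability under the Gaussian approximation of the MI.

    P(MI < R) = Phi((R - mu) / sigma), with Phi the standard normal CDF,
    computed here via the error function.
    """
    z = (rate_target - mi_mean) / mi_std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

At a target rate equal to the mean MI the outage is exactly 1/2, and it falls off rapidly as the target rate drops below the mean, which is the regime of practical interest.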
A novel low-density signature (LDS) structure is proposed for the transmission and detection of symbol-synchronous communication over the memoryless Gaussian channel. Given N as the processing gain, under this new arrangement each user's symbols are spread over N chips, but only d_v < N of those chips contain nonzero values. The spread symbols are then uniquely interleaved so that the received signal, sampled at chip rate, contains contributions from only d_c < K users, where K denotes the total number of users in the system. Furthermore, a near-optimum chip-level iterative soft-in-soft-out (SISO) multiuser decoding (MUD) scheme, based on the message passing algorithm (MPA), is proposed to approximate optimum detection by efficiently exploiting the LDS structure. Given beta = K/N as the system loading, our simulations suggest that the proposed system, alongside the proposed detection technique, can achieve an overall performance in the AWGN channel that is close to single-user performance, even at 200% loading, i.e., when beta = 2. Its robustness against the near-far effect and its performance behaviour, which is very similar to that of optimum detection, are demonstrated in this paper. In addition, the complexity required for detection is exponential in d_c instead of K, as in the conventional code division multiple access (CDMA) structure employing the optimum multiuser detector.
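The sparsity pattern at the heart of LDS can be illustrated with a toy regular construction (an assumption made for illustration; the paper's signature and interleaver design is more elaborate): each of the K users occupies only d_v of the N chips, so each chip is hit by only d_c = K·d_v/N users, which is what keeps the per-chip detection complexity exponential in d_c rather than K.

```python
from math import sqrt


def lds_matrix(n_chips, n_users, dv):
    """Toy regular low-density signature matrix (N x K), illustrative only.

    Each user column carries dv < N nonzero, unit-energy chips placed
    cyclically, so every chip row is shared by d_c = K * dv / N users.
    """
    S = [[0.0] * n_users for _ in range(n_chips)]
    for k in range(n_users):
        for j in range(dv):
            chip = (k * dv + j) % n_chips  # cyclic chip placement
            S[chip][k] = 1.0 / sqrt(dv)    # unit-energy signature
    return S
```

With N = 4, K = 8 and d_v = 2 (i.e. beta = 2, the 200% loading case from the abstract), each chip sees only d_c = 4 of the 8 users instead of all of them.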
A novel Low-Density Signature (LDS) structure is proposed for synchronous Code Division Multiple Access (CDMA) uplink communication over the AWGN channel. It arranges the system such that the interference pattern seen by each user at each received sampled chip is different. Furthermore, a new near-optimum chip-level iterative multiuser decoder is suggested to exploit the proposed structure. It is shown via computer simulations that, without forward error correction (FEC) coding, the proposed LDS structure can achieve near single-user performance at up to 200% loading. As the proposed iterative decoding converges relatively fast, the complexity is kept much more affordable than that of optimum multiuser detection (MUD) with the conventional structure.
With the increased complexity of webpages nowadays, computation latency incurred by webpage processing during downloading operations has become a newly identified factor that may substantially affect user experience in a mobile network. In order to tackle this issue, we propose a simple but effective transport-layer optimization technique which requires necessary context information dissemination from the mobile edge computing (MEC) server to the user devices where the algorithm is actually executed. The key novelty in this case is the mobile edge's knowledge of webpage content characteristics, which is able to increase downloading throughput for user QoE enhancement. Our experiment results based on a real LTE-A test-bed show that, when the proportion of computation latency varies between 20% and 50% (which is typical for today's webpages), the downloading throughput can be improved by up to 34.5%, with downloading time reduced by up to 25.1%.
This paper presents details of indoor wideband and directional propagation measurements at 26 GHz, in which a wideband channel sounder using a millimeter-wave (mmWave) signal analyzer and vector signal generator was employed. The setup provided 2 GHz of bandwidth, and the mechanically steerable directional lens antenna with a 5-degree beamwidth provided 5 degrees of directional resolution over the azimuth. The measurements provide the path loss, delay and spatial spread of the channel. Angular and delay dispersion are presented for line-of-sight (LoS) and non-line-of-sight (NLoS) scenarios.
Wideband millimeter-wave (mmWave) directional propagation measurements were conducted in the 32 GHz and 39 GHz bands in outdoor line-of-sight (LoS) small cell scenarios. The measurements provide spatial and temporal statistics that will be useful for small-cell outdoor wireless networks in future mmWave bands. Measurements were performed at two outdoor environments and repeated for all polarization combinations. The results show little spread in the angular and delay domains for the LoS scenario. Moreover, the root-mean-square (RMS) delay spreads at different polarizations show small differences, which can be attributed to specific scatterers in the channel.
This paper presents empirically based large-scale propagation path loss models for small cell fifth generation (5G) cellular systems in the millimeter-wave bands, based on practical propagation channel measurements at 26 GHz, 32 GHz, and 39 GHz. To characterize path loss at these frequency bands for 5G small cell scenarios, extensive wideband and directional channel measurements have been performed on the campus of the University of Surrey. Close-in (CI) reference and 3GPP path loss models have been studied, and large-scale fading characteristics have been obtained and presented.
Nowadays, system architecture of the fifth generation (5G) cellular system is becoming of increasing interest. To reach the ambitious 5G targets, a dense base station (BS) deployment paradigm is being considered. In this case, the conventional always-on service approach may not be suitable due to the linear energy/density relationship when the BSs are always kept on. This suggests a dynamic on/off BS operation to reduce the energy consumption. However, this approach may create coverage holes and the BS activation delay in terms of hardware transition latency and software reloading could result in service disruption. To tackle these issues, we propose a predictive BS activation scheme under the control/data separation architecture (CDSA). The proposed scheme exploits user context information, network parameters, BS sleep depth and measurement databases to send timely predictive activation requests in advance before the connection is switched to the sleeping BS. An analytical model is developed and closed-form expressions are provided for the predictive activation criteria. Analytical and simulation results show that the proposed scheme achieves a high BS activation accuracy with low errors w.r.t. the optimum activation time.
Network slicing has been identified as one of the most important features for 5G and beyond, enabling operators to utilize networks on an as-a-service basis and meet a wide range of use cases. At the physical layer, the frequency and time resources are split into slices to cater for services with individual optimal designs, resulting in services/slices having different baseband numerologies (e.g., subcarrier spacing) and/or radio frequency (RF) front-end configurations. In such a system, multi-service signal multiplexing and isolation among the services/slices are critical for Physical-Layer Network Slicing (PNS), since orthogonality is destroyed and significant inter-service/slice band interference (ISBI) may be generated. In this paper, we first categorize four PNS cases according to the baseband and RF configurations among the slices. The system model is established by considering a low out-of-band emission (OoBE) waveform operating in the service/slice frequency band to mitigate the ISBI. The desired signal and interference for the two slices are derived. Consequently, one-tap channel equalization algorithms are proposed based on the derived model. The developed system models establish a framework for further interference analysis, ISBI cancellation algorithms, system design and parameter selection (e.g., guard band), to enable spectrum-efficient network slicing.
The Internet of Things (IoT) has become a new enabler for collecting real-world observation and measurement data from the physical world. The IoT allows objects with sensing and network capabilities (i.e. things and devices) to communicate with one another and with other resources (e.g. services) in the digital world. The heterogeneity, dynamicity and ad-hoc nature of the underlying data, and of the services published by most IoT resources, make accessing and processing the data and services a challenging task. The IoT demands distributed, scalable, and efficient indexing solutions for large-scale distributed IoT networks. We describe a novel distributed indexing approach for IoT resources and their published data. The index structure is constructed by encoding the locations of IoT resources into geohashes and then building a quadtree on the minimum bounding box of the geohash representations. This allows resources with similar geohashes to be aggregated, reducing the size of the index. We have evaluated our proposed solution on a large-scale dataset, and our results show that the proposed approach can efficiently index and enable discovery of IoT resources with 65% better response time than a centralised approach and with a high success rate (around 90% in the first few attempts).
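The encode-then-aggregate step can be sketched as follows. The code below uses hypothetical names and keeps only the core of geohash encoding (real geohashes map the interleaved bisection bits to base-32 characters, and the paper builds a quadtree over the resulting boxes); it shows why nearby resources share index prefixes and can therefore be aggregated under one index node.

```python
def geohash_bits(lat, lon, precision=20):
    """Interleave longitude/latitude bisection bits (the core of geohash
    encoding; real geohashes group these bits into base-32 characters)."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    bits = []
    for i in range(precision):
        rng, val = (lon_rng, lon) if i % 2 == 0 else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2.0
        if val >= mid:
            bits.append("1")
            rng[0] = mid  # keep the upper half
        else:
            bits.append("0")
            rng[1] = mid  # keep the lower half
    return "".join(bits)


class GeoIndex:
    """Toy prefix index: resources with nearby locations share geohash
    prefixes, so they aggregate naturally (cf. the paper's quadtree)."""

    def __init__(self):
        self.entries = []  # (geohash bit string, resource id)

    def add(self, resource_id, lat, lon):
        self.entries.append((geohash_bits(lat, lon), resource_id))

    def query_prefix(self, prefix):
        return [rid for h, rid in self.entries if h.startswith(prefix)]
```

Two sensors a few kilometres apart share a long common prefix, while a sensor on another continent diverges at the very first bit, so prefix queries naturally scope discovery to a geographic region.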
The current Web and data indexing and search mechanisms are mainly tailored to processing text-based data and are limited in addressing the intrinsic characteristics of distributed, large-scale and dynamic Internet of Things (IoT) data networks. The IoT demands novel indexing solutions for large-scale data to create an ecosystem of systems; however, IoT data are often numerical, multi-modal and heterogeneous. We propose a distributed and adaptable mechanism that allows indexing and discovery of real-world data in IoT networks. Compared to state-of-the-art approaches, our model does not require any prior knowledge about the data or their distributions. We address the problem of distributed, efficient indexing and discovery for voluminous IoT data by applying an unsupervised machine learning algorithm. The proposed solution aggregates and distributes the indexes in hierarchical networks. We have evaluated our distributed solution on a large-scale dataset, and the results show that our proposed indexing scheme is able to efficiently index and enable discovery of IoT data with 71% to 92% better response time than a centralised approach.
Due to dynamic wireless network conditions and heterogeneous mobile web content complexities, web-based content services in mobile network environments often suffer from long loading times. The new HTTP/2.0 protocol adopts only a single TCP connection, but recent research reveals that in real mobile environments, web downloading over a single connection experiences long idle time and low bandwidth utilization, in particular under dynamic network conditions and varying web page characteristics. In this paper, by leveraging the Mobile Edge Computing (MEC) technique, we present the framework of Mobile Edge Hint (MEH) to enhance mobile web downloading performance. Specifically, the mobile edge collects and caches the meta-data of frequently visited web pages and keeps monitoring the network conditions. Upon receiving requests for these popular web pages, the MEC server is able to hint back to the HTTP/2.0 clients the optimized number of TCP connections that should be established for downloading the content. From test results on a real LTE testbed equipped with MEH, we observed up to 34.5% time reduction, with a median improvement of 20.5%, compared to the plain over-the-top (OTT) HTTP/2.0 protocol.
In this paper, a novel low-complexity and spectrally efficient modulation scheme for visible light communication (VLC) is proposed. Our new spatial quadrature modulation (SQM) is designed to efficiently adapt traditional complex modulation schemes to VLC, i.e. converting multi-level quadrature amplitude modulation (M-QAM) symbols to real, unipolar symbols, making them suitable for transmission over light intensity. The proposed SQM relies on the spatial domain to convey the orthogonality and polarity of the complex signals, rather than mapping bits to symbols as in existing spatial modulation (SM) schemes. A detailed symbol error analysis of SQM is derived, and the derivation is validated with link-level simulation results. Using the simulation and derived results, we also provide a performance comparison between the proposed SQM and SM. Simulation results demonstrate that SQM can achieve better symbol error rate (SER) and/or data rate performance compared to the state of the art in SM; for instance, an Eb/N0 gain of at least 5 dB at a SER of 10^-4.
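One way to picture the SQM mapping is the following sketch, assuming four intensity channels (e.g. LED groups) where the chosen channel conveys the sign of each quadrature component and the light intensity conveys its magnitude; this channel arrangement is a simplifying assumption for illustration, not the scheme's specified hardware layout.

```python
def sqm_map(symbol):
    """Map one complex M-QAM symbol to four non-negative intensities
    [I+, I-, Q+, Q-]: the sign selects the channel, the magnitude the
    level, so every emitted value is real and unipolar."""
    i, q = symbol.real, symbol.imag
    return [max(i, 0.0), max(-i, 0.0), max(q, 0.0), max(-q, 0.0)]

def sqm_demap(intensities):
    """Recover the complex symbol: subtract the negative-polarity
    channel from the positive one for each quadrature component."""
    ip, im, qp, qm = intensities
    return complex(ip - im, qp - qm)
```

Because at most one channel of each polarity pair is active per symbol, the receiver can recover both polarity and amplitude without bipolar signalling.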
Decentralized dynamic spectrum allocation (DSA) that exploits adaptive antenna array interference mitigation (IM) diversity at the receiver is studied for interference-limited environments with a high level of frequency reuse. The system consists of base stations (BSs) that can optimize uplink frequency allocation to their user equipments (UEs) to minimize the impact of interference on the useful signal, assuming no control over the band allocation of other BSs sharing the same bands. To this end, “good neighbor” (GN) rules allow an effective trade-off between the equilibrium and transient decentralized DSA behavior if the performance targets are appropriate for the interference scenario. In this paper, we extend the GN rules by including a spectrum occupation control that allows adaptive selection of the performance targets corresponding to the potentially “interference free” DSA; define a semi-analytic absorbing Markov chain model for the GN DSA with occupation control and study its convergence properties, including the effects of possible breaks of the GN rules; and, for higher-dimension networks, develop simplified search GN algorithms with occupation and power control (PC) and demonstrate their efficiency by means of simulations in a scenario with unlimited requested network occupation.
To flexibly support diverse communication requirements (e.g., throughput, latency, massive connectivity) for the next generation of wireless communications, one viable solution is to divide the system bandwidth into several service subbands, each for a different type of service. In such a multi-service (MS) system, each service has its own optimal frame structure while the services are isolated by subband filtering. In this paper, a framework for the MS system is established based on subband filtered multi-carrier (SFMC) modulation. We consider both single-rate (SR) and multi-rate (MR) signal processing as two different MS-SFMC implementations, each with different performance and computational complexity. By comparison, the SR system outperforms the MR system in terms of performance, while the MR system has significantly lower computational complexity than the SR system. Numerical results show the effectiveness of our analysis and the proposed systems. The proposed SR and MR MS-SFMC systems provide guidelines for next-generation wireless frame structure optimization and algorithm design.
With the recent development of Device-to-Device (D2D) communication technologies, mobile devices will no longer be treated as pure “terminals”; they could become an integral part of the network in specific application scenarios. In this paper, we introduce a novel scheme that uses D2D communications to enable data relay services in partial Not-Spots, where a client without local network access may require data relay by other devices. Depending on the specific social application scenarios that can leverage the D2D technology, we consider tailored algorithms in order to achieve optimised data relay service performance on top of our proposed network-coordinated communication framework. The approach is to exploit the network’s knowledge of its local user mobility patterns in order to identify the best helper devices to participate in data relay operations. The framework also comes with our proposed helper selection optimisation algorithm based on the reactive predictability of individual users. According to our simulation analysis based on both theoretical mobility models and real human mobility data traces, the proposed scheme is able to flexibly support different service requirements in specific social application scenarios.
This work introduces MultiSphere, a method to massively parallelize the tree search of large sphere decoders in a nearly independent manner, without compromising their maximum-likelihood performance, while keeping the overall processing complexity at the levels of highly optimized sequential sphere decoders. MultiSphere employs a novel sphere decoder tree partitioning which can adjust to the transmission channel with a small latency overhead. It also utilizes a new method to distribute nodes to parallel sphere decoders and a new tree traversal and enumeration strategy which minimize redundant computations despite the nearly independent parallel processing of the subtrees. For an 8 × 8 MIMO spatially multiplexed system with 16-QAM modulation and 32 processing elements, MultiSphere can achieve a latency reduction of more than an order of magnitude, approaching the processing latency of linear detection methods, while its overall complexity can be even smaller than that of well-known sequential sphere decoders. For 8 × 8 MIMO systems, MultiSphere’s tree partitioning method can achieve the processing latency of other partitioning schemes using half the processing elements. In addition, it is shown that for a multi-carrier system with 64 subcarriers, performing sequential detection across subcarriers with MultiSphere on 8 processing elements achieves a smaller processing latency than parallelizing detection with a single processing element per subcarrier (64 in total).
In this paper, we investigate the optimal inter-frequency small cell discovery (ISCD) periodicity for small cells deployed on a carrier frequency other than that of the serving macro cell. We consider that the small cell and user terminal (UT) positions are modeled according to a homogeneous Poisson Point Process (PPP). We utilize polynomial curve fitting to approximate the percentage of time the typical UT misses a small cell offloading opportunity, for a fixed small cell density and fixed UT speed. We then derive analytically the optimal ISCD periodicity that minimizes the average UT energy consumption (EC). Furthermore, we also derive the optimal ISCD periodicity that maximizes the average energy efficiency (EE), i.e. bit-per-joule capacity. Results show that the EC-optimal ISCD periodicity always exceeds the EE-optimal ISCD periodicity, except when the average ergodic rates in both tiers are equal, in which case the two optimal periodicities coincide.
Most wireless systems, such as the Long Term Evolution (LTE), adopt a pilot symbol-aided channel estimation approach for data detection purposes. In this technique, some of the transmission resources are allocated to common pilot signals, which constitute a significant overhead in current standards. This can be traced to the worst-case design approach adopted in these systems, where the pilot spacing is chosen based on extreme condition assumptions. This suggests extending the set of parameters that can be adaptively adjusted to include the pilot density. In this paper, we propose an adaptive pilot pattern scheme that depends on estimating the channel correlation. A new system architecture with a logical separation between control and data planes is considered, and orthogonal frequency division multiplexing (OFDM) is chosen as the access technique. Simulation results show that the proposed scheme can provide a significant saving of the LTE pilot overhead with a marginal performance penalty.
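A minimal sketch of the correlation-driven pilot density idea: estimate the channel's correlation across symbol lags and widen the pilot spacing as long as adjacent samples remain strongly correlated. The fixed threshold and the simple largest-lag rule are illustrative assumptions, not the proposed scheme's actual criterion.

```python
def pilot_spacing(corr, threshold=0.9, max_spacing=16):
    """corr[k]: estimated channel time-correlation at lag k (corr[0] == 1).
    Return the largest spacing whose correlation stays above threshold,
    i.e. the widest pilot grid the channel still supports."""
    spacing = 1
    for k in range(1, min(len(corr), max_spacing + 1)):
        if corr[k] >= threshold:
            spacing = k
        else:
            break  # channel decorrelates beyond this lag
    return spacing
```

A slowly varying channel (high correlation at large lags) thus earns sparse pilots and low overhead, while a fast-fading channel falls back to dense, worst-case-like spacing.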
In the research community, a new radio access network architecture with a logical separation between the control plane (CP) and data plane (DP) has been proposed for future cellular systems. It aims to overcome limitations of the conventional architecture by providing high data rate services under the umbrella of a coverage layer in a dual connection mode. This configuration could provide significant savings in signalling overhead. In particular, mobility robustness with minimal handover (HO) signalling is considered one of the most promising benefits of this architecture. However, DP mobility remains an issue that needs to be investigated. We consider predictive DP HO management as a solution that could minimise the out-of-band signalling related to the HO procedure. Thus we propose a mobility prediction scheme based on Markov chains. The developed model predicts the user’s trajectory as a HO sequence in order to minimise the interruption time and the associated signalling when the HO is triggered. Depending on the prediction accuracy, numerical results show that the predictive HO management strategy can significantly reduce the signalling cost compared with the conventional non-predictive mechanism.
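The Markov-chain prediction can be sketched as a first-order model: count observed cell-to-cell handover transitions, then predict the most frequent successor of the current cell. This toy estimator (names and structure assumed here) only illustrates the principle; the paper's model predicts full HO sequences.

```python
from collections import defaultdict

def train(sequences):
    """Count cell-to-cell transitions from observed HO sequences,
    giving the (unnormalised) Markov transition matrix."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def predict(counts, cell):
    """Predict the next cell as the most frequent successor;
    None if this cell was never a source of a handover."""
    successors = counts.get(cell)
    return max(successors, key=successors.get) if successors else None
```

Chaining `predict` from the current cell yields a forecast HO sequence along which resources can be pre-established before the HO is triggered.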
As soon as 2020, network densification and spectrum extension will be the dominant themes in supporting enormous capacity and massive connectivity. However, this approach may not guarantee wide area coverage due to the poor propagation capabilities of high frequency bands. In addition, energy efficiency and signalling overhead will become critical considerations in ultra-dense deployment scenarios. This calls for a futuristic two-layer RAN architecture with dual connectivity, where the high frequency bands are used for data services, complemented by a coverage layer at conventional cellular bands. This separation of control and data planes will enable a transition from always-on to always-available systems and could result in order-of-magnitude savings in energy and signalling overhead.
Orthogonal Frequency Division Multiple Access (OFDMA), like other orthogonal multiple access techniques, fails to achieve the system capacity limit in the uplink due to the exclusivity in resource allocation. This issue is more prominent when fairness among the users is considered in the system. Current Non-Orthogonal Multiple Access (NOMA) techniques introduce redundancy by coding/spreading to facilitate the separation of the users' signals at the receiver, which degrades the system spectral efficiency. Hence, in order to achieve higher capacity, more efficient NOMA schemes need to be developed. In this paper, we propose an uplink NOMA scheme that removes the resource allocation exclusivity and allows more than one user to share the same subcarrier without any coding/spreading redundancy. Joint processing is implemented at the receiver to detect the users' signals. However, to control the receiver complexity, an upper limit on the number of users per subcarrier needs to be imposed. In addition, a novel subcarrier and power allocation algorithm is proposed for the new NOMA scheme that maximizes the users' sum rate. The link-level performance evaluation shows that the proposed scheme achieves a bit error rate close to the single-user case. Numerical results show that the proposed NOMA scheme can significantly improve the system performance in terms of spectral efficiency and fairness compared to OFDMA.
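A toy version of subcarrier sharing with a per-subcarrier user cap might look as follows; the greedy strongest-first order and the gain model are illustrative assumptions, not the paper's allocation algorithm (which jointly optimises subcarriers and power for the sum rate).

```python
def allocate(gains, max_users=2):
    """gains[u][c]: channel gain of user u on subcarrier c.
    Each user picks its best subcarrier that still has a free slot;
    the cap max_users bounds the joint-detection complexity.
    Assumes len(gains) <= max_users * number_of_subcarriers."""
    n_sc = len(gains[0])
    load = [0] * n_sc
    assign = {}
    # serve users in descending order of their best gain (strongest first)
    order = sorted(range(len(gains)), key=lambda u: -max(gains[u]))
    for u in order:
        free = [c for c in range(n_sc) if load[c] < max_users]
        best = max(free, key=lambda c: gains[u][c])
        assign[u] = best
        load[best] += 1
    return assign
```

Unlike OFDMA, two users may land on the same subcarrier (no exclusivity), and the receiver is then expected to separate them by joint processing.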
Conventional cellular systems are dimensioned according to a worst-case scenario, and they are designed to ensure ubiquitous coverage with an always-present wireless channel irrespective of the spatial and temporal demand of service. A more energy-conscious approach requires an adaptive system with a minimum amount of overhead that is available at all locations and at all times but becomes functional only when needed. This approach suggests a new clean-slate system architecture with a logical separation between the ability to establish availability of the network and the ability to provide functionality or service. Focusing on the physical layer frame of such an architecture, this paper discusses and formulates the overhead reduction that can be achieved in next generation cellular systems as compared with the Long Term Evolution (LTE). Considering channel estimation as a performance metric whilst conforming to the time and frequency constraints of pilot spacing, we show that the overhead gain does not come at the expense of performance degradation.
The requirement for low operating and deployment costs makes self-organisation a fast-approaching necessity in cellular networks. One key issue in this context is self-organised coverage estimation, which is based on signal strength measurements and the reported position information of system users. In this paper, the effect of inaccurate position estimation on self-organised coverage estimation is investigated. We derive the signal reliability expression (i.e. the probability of the received signal being above a certain threshold) and the cell coverage expressions that take the error in position estimation into consideration. This is done for both the shadowing and non-shadowing channel models. The accuracy of the modified reliability and cell coverage probability expressions is also numerically verified for both cases.
In this paper we extend the analysis of two-receiver broadcast channels with random parameters to the three-receiver case. Specifically, we base our work on Nair and El Gamal's results for the three-receiver discrete memoryless multilevel broadcast channel and assume that state information is available non-causally at the transmitter. We provide an achievable rate region for this setting and highlight its importance in the study of multiuser cognitive radio configurations.
Network performance optimization is among the most important tasks in the area of wireless communication networks. In a Self-Organizing Network (SON), with its capability of adaptively changing network parameters, optimization tasks are more feasible than in static networks. Moreover, with increasing OPEX and CAPEX in new-generation telecommunication networks, such optimization is inevitable. In this paper, it is proven that the similarity between target and network parameters can produce lower Uncertainty Entropy (UEN) in a self-organizing system as a higher degree of organization is attained. The optimization task is carried out with the Adaptive Simulated Annealing method, which is enhanced with a Similarity Measure (SM) in the proposed approach (EASA). A Markov model of EASA is provided to assess the proposed approach. We also demonstrate higher performance through simulation of an LTE network scenario.
In this paper, we evaluate the performance of Multicarrier-Low Density Spreading Multiple Access (MC-LDSMA) as a multiple access technique for mobile communication systems. The MC-LDSMA technique is compared with current multiple access techniques, OFDMA and SC-FDMA. The performance is evaluated in terms of cubic metric, block error rate, spectral efficiency and fairness. The aim is to investigate the expected gains of using MC-LDSMA in the uplink for next generation cellular systems. The simulation results of the link and system-level performance evaluation show that MC-LDSMA has significant performance improvements over SC-FDMA and OFDMA. It is shown that using MC-LDSMA can considerably reduce the required transmission power and increase the spectral efficiency and fairness among the users.
Energy consumption has become an increasingly important aspect of wireless communications, from both an economic and an environmental point of view. New enhancements are being placed on mobile networks to reduce the power consumption of both mobile terminals and base stations. This paper studies the achievable rate region of AWGN broadcast channels under Time-division, Frequency-division and Superposition coding, and locates the optimal energy-efficient rate pair according to a comparison metric based on the average energy efficiency of the system. In addition to the transmit power, circuit power and signalling power are also incorporated in the energy efficiency function. Simulation results verify that the Superposition coding scheme achieves the highest energy efficiency in an ideal but unrealistic scenario where the signalling power is zero. With moderate signalling power, the Frequency-division scheme is the most energy-efficient, with Superposition coding and Time-division second and third best, respectively. Conversely, when the signalling power is high, both the Time-division and Frequency-division schemes outperform Superposition coding. On the other hand, the Superposition coding scheme also incorporates rate fairness into the system, allowing both users to transmit whilst maximising the energy efficiency.
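For the superposition-coding case, the rate pair and an energy-efficiency ratio of the kind described above can be sketched as below (two-user degraded AWGN broadcast channel; the symbols and the simple sum-rate-over-total-power metric are standard textbook forms assumed here, not the paper's exact expressions).

```python
import math

def sc_rates(P, alpha, n1, n2):
    """Superposition coding on a two-user degraded AWGN BC (n1 < n2):
    the strong user decodes and cancels the weak user's signal; the weak
    user treats the strong user's signal as additional noise."""
    r1 = math.log2(1 + alpha * P / n1)                     # strong user
    r2 = math.log2(1 + (1 - alpha) * P / (alpha * P + n2)) # weak user
    return r1, r2

def energy_efficiency(P, alpha, n1, n2, p_circuit, p_signalling):
    """Average EE: achieved sum rate divided by total consumed power,
    including circuit and signalling power on top of transmit power."""
    r1, r2 = sc_rates(P, alpha, n1, n2)
    return (r1 + r2) / (P + p_circuit + p_signalling)
```

Sweeping the power split `alpha` traces the superposition boundary of the rate region; adding `p_signalling` to the denominator is what lets the comparison flip in favour of Frequency-division, as the abstract reports.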
In this paper we present the uDirect algorithm as a novel approach for mobile-phone-centric observation of a user's facing direction, through which the device and user orientations relative to the earth coordinate frame are estimated. While the device orientation estimation is based on accelerometer and magnetometer measurements in standing mode, the unique behavior of the measured acceleration during the stance phase of the human walking cycle is used for detecting the user's direction. Furthermore, the algorithm is independent of the initial orientation of the device, which gives the user greater freedom for long-term observations. As the algorithm relies only on the embedded accelerometer and magnetometer sensors of the mobile phone, it is not susceptible to shadowing effects, unlike GPS. In addition, by performing independent estimations during each step of walking, the model is robust to error accumulation. Evaluating the algorithm with 180 data samples from 10 participants has empirically confirmed the assumptions of our analytical model about the unique characteristics of the human stance phase for direction estimation. Moreover, our initial inspection has shown that a system based on our algorithm outperforms conventional GPS-based and PCA-analysis-based techniques for walking distances of more than 2 steps. © 2011 IEEE.
Spectrum sensing is one of the key enabling techniques for advanced radio technologies such as cognitive radios and ALOHA. This paper presents a novel non-cooperative spectrum sensing approach that can achieve a good trade-off between latency, reliability and computational complexity. Our major idea is to exploit the first-order cyclostationarity of the primary user's signal to reduce the noise-uncertainty problem inherent in the conventional energy detection approach. It is shown that the proposed approach is suitable for detecting the primary user's activity in the interweave paradigm of cognitive spectrum sharing, while the active primary user is periodically sending a training sequence. Computer simulations are carried out for the typical IEEE 802.11g system. It is observed that the proposed approach outperforms both the energy detection and the second-order cyclostationarity approaches when the observation period is more than 10 frames, corresponding to 0.56 ms. ©2010 IEEE.
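The first-order (periodic-mean) idea can be sketched as follows: average the received samples synchronously over the known training period, so the deterministic periodic component adds coherently while zero-mean noise averages out, then compare the residual energy with a threshold. The framing and threshold choice here are illustrative assumptions, not the paper's detector design.

```python
import numpy as np

def first_order_detect(x, period, threshold):
    """Synchronous averaging detector: reshape the signal into
    period-long frames, average them, and test the energy of the
    averaged frame against a threshold."""
    n_frames = len(x) // period
    frames = np.reshape(x[:n_frames * period], (n_frames, period))
    synced_mean = frames.mean(axis=0)           # noise shrinks as 1/n_frames
    stat = np.sum(np.abs(synced_mean) ** 2) / period
    return stat > threshold, stat
```

Because the statistic depends on the coherent periodic component rather than raw received energy, an uncertain noise floor inflates it far less than it inflates a plain energy detector.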
Mobile communications are increasingly contributing to global energy consumption. The EARTH (Energy Aware Radio and neTworking tecHnologies) project tackles the important issue of reducing CO2 emissions by enhancing the energy efficiency of cellular mobile networks. EARTH takes a holistic approach to developing a new generation of energy-efficient products, components, deployment strategies and energy-aware network management solutions. In this paper the holistic EARTH approach to energy-efficient mobile communication systems is introduced. Performance metrics are studied to assess the theoretical bounds of energy efficiency as well as the practically achievable limits. A vast potential for energy savings lies in the operation of radio base stations. In particular, base stations consume a considerable amount of the available power budget even when operating at low load. Energy-efficient radio resource management (RRM) strategies need to take into account slowly changing daily load patterns, as well as highly dynamic traffic fluctuations. Moreover, various deployment strategies are examined, focusing on their potential to reduce energy consumption whilst providing uncompromised coverage and user experience. This includes heterogeneous networks with a sophisticated mix of different cell sizes, which may be further enhanced by energy-efficient relaying and base station cooperation technologies. Finally, scenarios leveraging the capability of advanced terminals to operate on multiple radio access technologies (RATs) are discussed with respect to their energy savings potential. ©2010 IEEE.
Exploiting path diversity to enhance communication reliability is a key desired property in the Internet. While the existing routing architecture is reluctant to adopt changes, overlay routing has been proposed to circumvent the constraints of native routing by employing intermediary relays. However, selfish inter-domain relay placement may violate local routing policies at intermediary relays and thus affect their economic costs and performance. With the recent advance of the concept of network virtualization, it is envisioned that virtual networks should be provisioned in cooperation with infrastructure providers in a holistic view without compromising their profits. In this paper, the problem of policy-aware virtual relay placement is first studied to investigate the feasibility of provisioning policy-compliant multipath routing via virtual relays for inter-domain communication reliability. By evaluation on a real domain-level Internet topology, it is demonstrated that policy-compliant virtual relaying can achieve a similar protection gain against single link failures compared to its selfish counterpart. It is also shown that the presented heuristic placement strategies perform well, approaching the optimal solution.
In this paper, we propose a rate-adaptive bit and power loading approach for OFDM-based relaying communications. The cooperative relay operates in the half-duplex amplify-and-forward mode. The source and the relay have separate power constraints. Maximum-ratio combining is employed at the destination to maximize the received SNR. Assuming perfect channel knowledge is available at all nodes, the proposed approach maximizes the throughput (the number of bits/symbol) under the given power constraint and the target link performance. Unlike the water-filling method, the proposed approach does not need an iterative loading process, and can offer near-optimum performance. Computer simulations are used to test the proposed approach for various scenarios with respect to the relay location and the distributed power allocation. © 2008 IEEE.
Personal Network Federation (PN-F) aims to provide secure interactions between a subset of devices of different Personal Networks (PNs) for achieving a common goal or providing services in collaborative environments. Security and privacy are among the major concerns in the development and acceptance of PN-F-like collaborative networks, and as in any other security architecture, key management is the cornerstone of any possible solution. In this paper, we provide security mechanisms and protocols for key exchange and key management in PN Federations and specify how the established keys can be used to secure communications in different layers. © 2008 IEEE.
The log-normal probability distribution is commonly used in wireless communications to model the shadowing and, more recently, the small-scale fading for indoor ultra-wide-band communications. In this paper, an accurate closed-form approximation of the ergodic capacity over log-normal fading channels is derived. This expression can be easily used to evaluate and compare the ergodic capacity of communication systems operating over log-normal fading channels. © 2008 IEEE.
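While the paper derives its own closed-form approximation, a standard way to approximate the ergodic capacity over a log-normal SNR $\gamma = e^{X}$ with $X \sim \mathcal{N}(\mu, \sigma^2)$ is Gauss–Hermite quadrature, shown here only as an illustrative benchmark form rather than the paper's expression:

$$
C = \mathbb{E}\!\left[\log_2\!\left(1+\gamma\right)\right]
  \approx \frac{1}{\sqrt{\pi}} \sum_{i=1}^{N} w_i \,
    \log_2\!\left(1 + e^{\sqrt{2}\,\sigma x_i + \mu}\right),
$$

where $x_i$ and $w_i$ are the abscissas and weights of the $N$-point Gauss–Hermite rule. A closed-form approximation of the kind the paper derives removes the need to evaluate even this short sum when comparing systems.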
Doubly differential modulation is a promising technique for coping with unknown frequency offsets at the cost of a signal-to-noise ratio (SNR) penalty. In this paper, we propose to compensate the SNR loss by employing a detection-forward cooperative relay. The receiver can employ two kinds of combiners to attain the achievable spatial diversity gain. Performance analysis is carefully carried out for the Rayleigh-fading channel. It is shown that the SNR compensation is achieved in the large-SNR range.
This paper investigates spectrum sharing (in the form of code sharing) between two Universal Mobile Telecommunication System (UMTS) operators in the UMTS extension band (2500–2690 MHz) with equal and unequal numbers of proprietary carriers, respectively. The paper proposes a Dynamic Spectrum Allocation (DSA) algorithm to address the problem of spectrum sharing between two operators on a non-pool basis. It also investigates the impact of Adjacent Channel Interference (ACI) on the spectrum sharing gain. Additionally, an architecture that enables spectrum sharing between two or more UMTS operators is presented. The simulated performance of the proposed DSA algorithm shows that, under peak-hour loading, up to a 32% increase in capacity can be obtained compared to the currently used Fixed Spectrum Allocation (FSA). © 2008 IEEE.
This paper provides an efficient key management scheme for large scale personal networks (PN) and introduces the Certified PN Formation Protocol (CPFP) based on a personal public key infrastructure (personal PKI) concept and Elliptic Curve Cryptography (ECC) techniques. © 2008 IEEE.
In this paper, the efficiencies of different interference coordination schemes are evaluated for emerging wireless networks, and the possible impact on intra-cell scheduling is studied through extensive simulations. The results show that pure fractional frequency reuse can provide an improvement in cell-edge throughput similar to its power-coordinated counterpart at a lower cost in terms of overall throughput. Moreover, it can provide a fairer distribution of throughput in both central and cell-edge areas. However, this scheme cannot manage asymmetrical changes in the distribution of users across different cells in the entire system. As a result, a power coordination mechanism would still be necessary on top of such flexible frequency reuse schemes. © 2008 IEEE.
A novel approach for the implementation of opportunistic scheduling without explicit feedback channels is proposed in this paper, which exploits existing ARQ signals instead of feedback channels to reduce the complexity of implementation. Monte Carlo simulation results demonstrate the efficacy of the proposed approach in harvesting multiuser diversity gain. The proposed approach enables the implementation of opportunistic scheduling in a variety of wireless networks, such as IEEE 802.11, without feedback facilities for collecting partial channel state information from users.
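A sketch of the ARQ-implicit channel learning idea: treat each user's ACK/NACK history as partial channel state information via an exponentially weighted average, and schedule the user with the highest estimate. The EWMA estimator and the uninformative prior are illustrative assumptions, not the paper's exact statistic.

```python
def schedule(history, alpha=0.3):
    """history[u]: recent ACK (1) / NACK (0) outcomes for user u,
    oldest first. An EWMA of these outcomes serves as implicit partial
    CSI; picking the highest estimate harvests multiuser diversity."""
    def ewma(outcomes):
        est = 0.5  # uninformative prior before any ARQ feedback
        for outcome in outcomes:
            est = (1 - alpha) * est + alpha * outcome
        return est
    return max(history, key=lambda u: ewma(history[u]))
```

Because ACK/NACK signals already exist in systems such as IEEE 802.11, this style of scheduler needs no dedicated feedback channel at all.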
This paper describes a distributed, cooperative and real-time rental protocol for DCA operations in a multi-system and multi-cell context for OFDMA systems. A credit-token-based rental protocol using auctioning is proposed in support of dynamic spectrum sharing between cells. The proposed scheme can be tuned adaptively as a function of the context by specifying the credit token usage in the radio etiquette. The application of the rental protocol is illustrated with an ascending-bid auction. The paper also describes two approaches for BS-BS communications in support of the rental protocol. Finally, it is described how the proposed mechanisms contribute to the current approaches followed in the IEEE 802.16h and IEEE 802.22 standards efforts addressing cognitive radio. © 2006 IEEE.
Network scenarios beyond 3G assume the cooperation of operators with wireless access networks of different technologies in order to improve scalability and provide enhanced services to their mobile customers. While the selection of an optimised delivery path in such scenarios with multiple access networks is already a challenging task for unicast delivery, the problem becomes more severe for multicast services, where a potentially large group of heterogeneous receivers has to be served simultaneously via shared resources. In this paper we study the problem of selecting the optimal bearer paths for multicast services with groups of heterogeneous receivers in wireless networks with overlapping coverage. We propose an algorithm for bearer selection with different optimisation goals, demonstrating the existing tradeoff between user preference and resource efficiency.
The hype surrounding 5G mobile networks is well justified in view of the explosive increase in mobile traffic and the inclusion of massive “non-human” users that form the Internet of Things. Advanced radio features such as network densification, cloud radio access networks (C-RAN), and untapped frequency bands jointly succeed in increasing the radio capacity to accommodate the increasing traffic demand. However, a new challenge has arisen: the backhaul (BH), the transport network that connects radio cells to the core network. The BH needs to expand in a timely fashion to reach the fast-spreading small cells. Moreover, realistic BH solutions are unable to provide the unprecedented 5G performance requirements to every cell. To this end, this research addresses the gap between the 5G stipulated BH characteristics and the available BH capabilities. On the other hand, heterogeneity is a leading trait of 5G networks. First, the RAN is heterogeneous since it comprises different cell types, radio access technologies, and architectures. Second, the BH is composed of a mix of different wired and wireless technologies with different limitations. In addition, 5G users have a broader range of capabilities and requirements than in any incumbent mobile network. We exploit this trait and develop a novel scheme, termed User-Centric BH (UCB). The UCB targets the user association mechanism, which is traditionally blind to users’ needs and BH conditions. The UCB builds on the existing concept of cell range extension (CRE) and proposes multiple offset factors (CREO), where each reflects the cell's joint RAN and BH capability with respect to a defined attribute (e.g., throughput, latency, resilience, etc.). In parallel, users assign different weights to different attributes; hence, they can make a user-centric decision. The proposed scheme significantly outperforms the state of the art and unlocks the BH bottleneck by making existing but underutilised resources available to users in need.
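The user-centric association step can be sketched as a weighted sum over per-attribute, CREO-biased cell scores; the attribute names, weights, and the additive scoring rule below are assumptions for illustration, not the UCB scheme's specified formulation.

```python
def associate(user_weights, cells):
    """cells[c][attr]: score of cell c for attribute attr (higher is
    better), assumed to already include the cell's per-attribute CRE
    offset (CREO) reflecting joint RAN and BH capability.
    The user picks the cell maximising its own weighted sum."""
    def score(c):
        return sum(w * cells[c].get(attr, 0.0)
                   for attr, w in user_weights.items())
    return max(cells, key=score)
```

A latency-sensitive user can thus prefer a small cell with a lightly loaded BH even when a macro cell offers the strongest raw signal, which is exactly what attribute-blind association misses.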
There has recently been a real demand to design and deploy mobile communication networks that consume significantly less energy than existing systems. The main thrust of this research focuses on investigating the impact of radio resource allocation schemes in current state-of-the-art Orthogonal Frequency Division Multiple Access (OFDMA) systems on the energy efficiency (EE) of modern Radio Access Networks (RANs), as well as the design of effective solutions to reduce RAN energy consumption in such networks. Due to data traffic fluctuation in communication networks, there are often many unused radio resource blocks in OFDMA systems. Efficient allocation of these surplus resource blocks can lead to considerable energy savings. One of the key objectives of this thesis is to exploit this opportunity by designing practical and effective radio resource allocation techniques that exploit the fundamental trade-off between energy consumption and bandwidth, reducing the energy consumption of the RAN while providing the required quality of service (QoS) for the network users. The basic concept here is to exploit fluctuations of data traffic in the network. Specifically, a novel energy-efficient resource allocation technique for low-load traffic conditions is proposed. This technique is then applied to three bespoke scheduling schemes, namely Round Robin (RR), Best Channel Quality Indicator (BCQI), and Proportional Fair (PF), for performance assessment. Comprehensive evaluation of the proposed scheduling schemes demonstrates that adopting the proposed resource allocation technique significantly enhances the performance of the RAN in terms of energy consumption in comparison with conventional schemes such as the three aforementioned schedulers. Finding an optimal method for surplus resource allocation is first modelled as an optimisation problem, which is subsequently solved using dynamic programming.
In this context, a Knapsack Problem (KP) formulation is adopted to find an optimal solution for a single-cell scenario. The proposed heuristic method is simulated using Equal Power (EP) and Water Filling (WF) algorithms for surplus resource allocation. It is shown that the optimal solution is achieved using the WF algorithm, leading to an EE saving of 60% compared to the greedy KP solution whilst incurring significantly lower computational complexity. The optimality of the proposed algorithm is evaluated in a multi-cell scenario to take into account reali
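As an illustration of the water-filling step for allocating power across surplus resource blocks, the following sketch solves the standard single-cell water-filling problem; this is the textbook formulation, which we assume here, and the thesis's exact system model may differ:

```python
# Classic water-filling power allocation: pour a total power budget over
# resource blocks so that blocks with better channels (lower noise/gain
# level) receive more power, and poor blocks may receive none.
def water_filling(inv_gains, total_power):
    """inv_gains: per-block noise-to-gain level N_i / g_i
       total_power: total power budget P
       Returns per-block powers p_i = max(mu - inv_gains[i], 0)."""
    levels = sorted(inv_gains)
    # Try activating the k best blocks and check the water level is valid.
    for k in range(len(levels), 0, -1):
        mu = (total_power + sum(levels[:k])) / k  # candidate water level
        if mu > levels[k - 1]:
            break  # all k activated blocks sit below the water level
    return [max(mu - g, 0.0) for g in inv_gains]
```

Compared with Equal Power, which splits the budget uniformly, water-filling concentrates power where the channel is good, which is consistent with the EE advantage reported for WF above.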
It has been envisaged that in future 5G networks user devices will become an integral part of the network by participating in the transmission of mobile content traffic, typically through Device-to-Device (D2D) technologies. In this context, we promote the concept of Mobility as a Service (MaaS), where the mobile network edge is equipped with the necessary knowledge on device mobility in order to meet specific service requirements for clients via a small number of helper devices. In this thesis, we propose MaaS-based frameworks to address clients’ requirements for content offloading and connectivity relaying services via a network-assisted D2D communication framework. To address content traffic offloading, we present a device-level Information Centric Networking (ICN) architecture that is able to perform intelligent content distribution operations according to necessary context information on mobile user mobility and content characteristics. Based on such an architecture, we further introduce device-level online content caching and offline helper selection algorithms in order to optimise the overall system efficiency. In particular, this piece of work sheds distinct light on the importance of user mobility data analytics, based on which helper selection can lead to overall system optimality. Based on representative user mobility models, we conducted realistic simulation experiments and modelling which demonstrate the efficiency of the framework in terms of both network traffic offloading gains and user-oriented performance improvements. In addition, we show how the framework can be flexibly configured to meet specific delay tolerance constraints according to specific context policies. With regard to the connectivity relaying service, we introduce a novel scheme of using D2D communications for enabling data relay services in partial Not-Spots, where a client without local network access may require data relay by other devices.
Depending on specific social application scenarios, this piece of work introduces tailored algorithms in order to achieve optimised data relay service performance. The approach is to exploit the network’s knowledge of its local user mobility patterns to identify the best helper devices for participating in data relay operations. This framework is also supported by our proposed helper selection optimisation algorithm based on prediction of individual user mobility. According to our simulation analysis, based on both theoretical mobility
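A hedged sketch of mobility-aware helper selection (our own illustration under an assumed contact-probability model, not the thesis algorithm) greedily picks the helpers that maximise the expected number of clients reached:

```python
# Greedy helper selection: given contact probabilities derived from
# mobility analytics, repeatedly pick the helper that adds the largest
# expected number of newly covered clients.
def select_helpers(contact_prob, k):
    """contact_prob: helper -> {client: probability of D2D contact}
       k: number of helper devices to select."""
    clients = {c for probs in contact_prob.values() for c in probs}
    miss = {c: 1.0 for c in clients}  # prob. that no chosen helper meets c
    chosen = []
    for _ in range(min(k, len(contact_prob))):
        def expected_gain(h):
            # Expected number of clients newly reached by adding h.
            return sum(miss[c] * p for c, p in contact_prob[h].items())
        best = max((h for h in contact_prob if h not in chosen),
                   key=expected_gain)
        chosen.append(best)
        for c, p in contact_prob[best].items():
            miss[c] *= 1.0 - p
    return chosen
```

Because expected coverage is submodular in the chosen set, this style of greedy selection carries the usual (1 − 1/e) approximation guarantee, which is why it is a common baseline for helper and relay selection problems.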
Multi-user (MU) massive multiple-input-multiple-output (MIMO) is one of the promising technologies for the 5th generation of wireless communication systems. However, as an emerging technology, various technical challenges that hinder the practical use of massive MIMO need to be addressed, e.g., imperfections in channel estimation and channel reciprocity. The overall objective of the proposed research is to investigate some of the key practical challenges in implementing massive MIMO systems and to propose effective solutions to those problems. First, in order to realise the promised benefits of massive MIMO, there is a need for a highly accurate technique for the provisioning of channel state information (CSI). However, the acquisition of CSI can be considerably influenced by imperfect channel estimation in practice. We therefore analyse the impact of channel estimation error on the performance of massive MIMO uplinks, taking into consideration the channel correlation over space. We then propose a novel antenna selection scheme that exploits the sparsity of the channel gain matrix at the receiver end, which significantly reduces implementation overhead and complexity compared to the well-adopted scheme, without degrading system performance. Second, it is known that channel reciprocity in time-division duplexing (TDD) massive MIMO systems can be exploited to reduce the overhead required for the acquisition of CSI. However, perfect reciprocity is unrealistic in practical systems due to random radio-frequency (RF) circuit mismatches in the uplink and downlink channels. We model and analyse the impact of these RF mismatches, taking into account the channel estimation error.
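The receive-side antenna selection idea can be sketched as follows; selecting antennas by the energy of their rows in the channel gain matrix is our simplified stand-in for the proposed sparsity-based scheme, not the thesis algorithm itself:

```python
# Illustrative antenna selection: when the channel gain matrix is sparse,
# most of the received energy concentrates on a few antennas, so keeping
# only the strongest rows cuts RF-chain cost with little performance loss.
def select_antennas(H, num_selected):
    """H: list of rows (one per receive antenna) of channel gains.
       Returns the indices of the num_selected highest-energy antennas."""
    energy = [(sum(abs(g) ** 2 for g in row), idx)
              for idx, row in enumerate(H)]
    # Keep the strongest antennas; report indices in ascending order.
    keep = sorted(idx for _, idx in sorted(energy, reverse=True)[:num_selected])
    return keep
```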
We derive closed-form expressions of the output signal-to-interference-plus-noise ratio for typical linear precoding schemes, and further investigate the asymptotic performance of the considered precoding schemes to provide insights into practical system designs, including guidelines for the selection of effective precoding schemes. Third, our theoretical model for analysing the effect of channel reciprocity error on massive MIMO systems reveals that imperfections in channel reciprocity may become a performance-limiting factor. In order to compensate for these imperfections, we present and investigate two calibration schemes for TDD-based MU massive MIMO systems, namely relative calibration and inverse calibration. In particular, the design of the proposed inverse calibration takes into account the compound effect of channel reciprocity error and channel estimation error. To compare the two calibration schemes, we derive closed-form expressions for the ergodic sum-rate and the receive mean-square error for the downlink. We demonstrate that the proposed inverse calibration outperforms the relative calibration, thanks to its greater robustness to the compound effect of both errors.
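As a toy illustration of the relative-calibration idea, under an assumed per-antenna multiplicative mismatch model where the downlink channel is the uplink estimate scaled by an RF mismatch ratio (the names below are ours, not the thesis's notation):

```python
# Toy reciprocity calibration sketch: model the TDD reciprocity error as
# per-antenna multiplicative RF mismatches, so h_dl_i = c_i * h_ul_i.
# Relative calibration estimates the ratios c_i from sounding and applies
# them to the uplink channel estimate before downlink precoding.
def calibrate(h_ul_est, mismatch_ratios):
    """h_ul_est: per-antenna uplink channel estimates (complex)
       mismatch_ratios: estimated calibration coefficients c_i
       Returns the calibrated downlink channel estimate."""
    return [c * h for c, h in zip(mismatch_ratios, h_ul_est)]
```

In this simplified model, perfectly estimated ratios recover the downlink channel exactly; in practice both the ratios and the uplink estimate carry errors, which is the compound effect the inverse calibration above is designed to withstand.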