Professor Rahim Tafazolli FREng
Academic and research departments: Institute for Communication Systems.
Rahim Tafazolli is Regius Professor of Electronic Engineering, Professor of Mobile and Satellite Communications, and Founder and Director of the 5GIC, 6GIC and ICS (Institute for Communication Systems) at the University of Surrey. He has over 30 years of experience in digital communications research and teaching. He has authored and co-authored more than 1,000 research publications and is regularly invited to deliver keynote talks and distinguished lectures at international conferences and workshops.
Affiliations and memberships
01 OCT 2018
Surrey’s 5G technologies give conference goers immersive close-up 3D holographic teleportation experience
12 MAR 2018
The University of Surrey 5G Innovation Centre (5GIC) helping the UK take the lead in the global 5G race
26 FEB 2018
British universities to debut world’s first 5G end-to-end network at Mobile World Congress
15 DEC 2016
University of Surrey 5G Innovation Centre welcomes the UK Government’s commitment to fibre and 5G
27 JUL 2016
5G Innovation Centre and Digital Greenwich form partnership to create pioneering smart city technology
Indicators of esteem
Professor Tafazolli was awarded the 28th KIA Laureate Award in 2015 for his contribution to communications technology.
Laureates of the Khwarizmi International Award (KIA) of Iran are selected from internationally distinguished scientists and researchers whose contributions to the advancement of science and technology are confirmed by the scientific committee of the Iranian Research Organisation of Science and Technology (IROST).
MIMO mobile systems, with a large number of antennas at the base-station side, enable the concurrent transmission of multiple, spatially separated information streams, and therefore improve network throughput and connectivity in both uplink and downlink transmissions. Traditionally, such MIMO transmissions adopt linear base-station processing, which translates the MIMO channel into several single-antenna channels. While such approaches are relatively easy to implement, they can leave a significant amount of MIMO capacity and connectivity capability unexploited. Recently proposed non-linear base-station processing methods claim this unexplored capacity and promise substantially increased network throughput and connectivity. Still, to the best of the authors' knowledge, non-linear base-station processing methods not only have yet to be adopted by actual systems, they have not even been evaluated in a standard-compliant framework involving all the algorithmic modules required by a practical system. In this work, for the first time, we incorporate and evaluate non-linear base-station processing in a 3GPP standard environment. We outline the required research-platform modifications and verify that significant throughput gains can be achieved, both in indoor and outdoor settings, even when the number of base-station antennas is much larger than the number of transmitted information streams. We then identify the missing algorithmic components that need to be developed to make non-linear base-station processing practical, and discuss future research directions towards potentially transformative next-generation (i.e., 6G) mobile systems and base-stations that exploit currently untapped non-linear processing gains.
A cell-free massive multiple-input multiple-output (MIMO) system is considered, where the access points (APs) are linked to a central processing unit (CPU) via limited-capacity fronthaul links, so that only a quantized version of the weighted signals is available at the CPU. The achievable rate of limited-fronthaul cell-free massive MIMO with local minimum mean square error (MMSE) detection is studied. We examine the assumption of uncorrelated quantization distortion, which is commonly used in the literature, and show that it does not affect the validity of the insights obtained in our work. To investigate this, we compare the uplink per-user rate under different system parameters for two scenarios: 1) the exact uplink per-user rate, and 2) the uplink per-user rate when the correlation between the quantizer inputs is ignored. Finally, we present the conditions under which the quantization distortions across APs can be assumed to be uncorrelated.
In this letter, we analyse the trade-off between collision probability and code-ambiguity, when devices transmit a sequence of preambles as a codeword, instead of a single preamble, to reduce collision probability during random access to a mobile network. We point out that the network may not have sufficient resources to allocate to every possible codeword, and if it does, then this results in low utilisation of allocated uplink resources. We derive the optimal preamble set size that maximises the probability of success in a single attempt, for a given number of devices and uplink resources.
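The trade-off this letter analyses can be illustrated with a toy probability model (our own simplification, not the letter's exact derivation): with K preambles available per transmission and a codeword made of L consecutive preambles, a device succeeds in a single attempt when no other contending device picks the same codeword.

```python
def single_attempt_success(num_preambles, codeword_len, num_devices):
    """P(a tagged device's codeword collides with no other device),
    assuming each device picks a codeword uniformly and independently.
    Illustrative toy model; the parameter names are our own."""
    codewords = num_preambles ** codeword_len
    return (1 - 1 / codewords) ** (num_devices - 1)

# Longer codewords enlarge the codeword space and cut collisions, at the
# cost of having to reserve uplink resources for many more codewords:
p_single = single_attempt_success(num_preambles=54, codeword_len=1, num_devices=30)
p_pair = single_attempt_success(num_preambles=54, codeword_len=2, num_devices=30)
```

The letter's contribution is to balance this collision gain against code-ambiguity and uplink-resource utilisation, which the toy model above deliberately ignores.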
Cloud-envisioned Cyber-Physical Systems (CCPS) is a practical technology that relies on interaction among cyber elements, such as mobile users, to transfer data in cloud computing. In CCPS, cloud storage applies data deduplication techniques to save storage and bandwidth for real-time services: deduplication eliminates duplicate data and thereby increases the performance of the CCPS application, but it also incurs security threats and privacy risks. Several studies have addressed this area; nevertheless, they suffer from a lack of security, limited performance, or poor applicability. Motivated by this, we propose a message Lock Encryption with neVer-decrypt homomorphic EncRyption (LEVER) protocol between the uploading CCPS user and cloud storage to reconcile encryption and data deduplication. Interestingly, LEVER is the first brute-force-resilient encrypted deduplication scheme requiring only cryptographic two-party interactions.
Quantization characterizes the analogue-to-digital converters (ADCs) in massive MIMO systems. The design of the quantization function, or quantization thresholds, depends on the quantization step, the factor that adapts to changes in transmit power and noise variance. Since the purpose of low-resolution ADCs is to reduce the cost of massive MIMO, we investigate whether an adaptive-threshold quantization function is actually necessary. We find that when maximum-likelihood (ML) detection is employed, keeping the quantization thresholds of low-resolution ADCs fixed does not cause significant performance loss. Moreover, such a fixed-threshold quantization function requires no information about the signal power, which can further reduce the hardware cost of ADCs. Simulations comparing fixed-threshold and adaptive-threshold quantization across various factors are presented in this paper.
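A generic sketch of why the quantization step is usually tied to signal power (this is background intuition, not the paper's ML-detection analysis): a uniform quantizer whose step tracks the signal's standard deviation avoids the heavy clipping that a badly mismatched fixed step causes.

```python
import numpy as np

def mid_rise_quantize(x, bits, step):
    """Uniform mid-rise quantizer with clipping (illustrative sketch)."""
    levels = 2 ** bits
    idx = np.clip(np.floor(x / step), -levels // 2, levels // 2 - 1)
    return (idx + 0.5) * step

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, 100_000)                          # samples, std = 2
adaptive = mid_rise_quantize(x, bits=3, step=x.std() / 2)  # step tracks power
fixed = mid_rise_quantize(x, bits=3, step=0.3)             # power-agnostic, too small
mse_adaptive = np.mean((x - adaptive) ** 2)
mse_fixed = np.mean((x - fixed) ** 2)
```

In mean-square-error terms the mismatched fixed step is far worse; the paper's point is that under ML detection this distortion-level gap need not translate into a significant detection-performance loss.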
In this paper, a novel terahertz (THz) spectroscopy technique and a new graphene-based sensor are proposed. The sensor consists of a graphene-based metasurface (MS) that operates in reflection mode over a broad frequency band (0.2–6 THz) and can detect relative permittivities of up to 4 with a resolution of 0.1, and thicknesses ranging from 5 μm to 600 μm with a resolution of 0.5 μm. To the best of the authors' knowledge, a THz sensor with such capabilities has not previously been reported. Additionally, an equivalent circuit of the novel unit cell is derived and compared with two conventional grooved structures to showcase the superiority of the proposed unit cell. The proposed spectroscopy technique utilizes unique spectral features of a broadband reflected wave, namely Accumulated Spectral Power (ASP) and Averaged Group Delay (AGD), which are independent of resonance frequencies and can operate over a broad range of the spectrum. ASP and AGD can be combined to analyse the magnitude and phase of the reflection diagram as a coherent sensing technique. This enables different analytes to be distinguished with high precision, which, to the best of the authors' knowledge, is accomplished here for the first time.
Volunteer computing is an Internet-based distributed computing paradigm in which volunteers share their spare resources to manage large-scale tasks. However, computing devices in a Volunteer Computing System (VCS) are highly dynamic and heterogeneous in terms of processing power, monetary cost, and data-transfer latency. To ensure both high Quality of Service (QoS) and low cost across different requests, all available computing resources must be used efficiently. Task scheduling is an NP-hard problem and one of the main challenges in a heterogeneous VCS. In this paper, we therefore design two task scheduling algorithms for VCSs, named Min-CCV and Min-V, whose goal is to jointly minimize the computation, communication and delay-violation costs of Internet of Things (IoT) requests. Extensive simulations show that the proposed algorithms allocate tasks to volunteer fog/cloud resources more efficiently than the state-of-the-art: they raise the deadline-satisfaction rate to around 99.5% and decrease the total cost by 15 to 53% in comparison with a genetic-based algorithm.
Recent telecommunication paradigms, such as big data, the Internet of Things (IoT), ubiquitous edge computing (UEC), and machine learning, face a tremendous number of complex applications with differing priorities and resource demands. These applications usually consist of a set of virtual machines (VMs) with predefined traffic loads between them. The efficiency of a cloud data center (CDC), a prominent component of UEC, depends significantly on the VM placement algorithm it applies. However, VM placement is an NP-hard problem, so practically no optimal solution exists. Motivated by this, we propose a priority-, power- and traffic-aware approach for efficiently solving the VM placement problem in a CDC. Our approach jointly minimizes power consumption, network consumption and resource wastage in a multi-dimensional, heterogeneous CDC. To evaluate its performance, we compared it with the state-of-the-art on a fat-tree topology under various experiments. Results demonstrate that the proposed method reduces total network consumption by up to 29%, power consumption by up to 18%, and resource wastage by up to 68%, compared with the second-best results.
Softwarization has been deemed a key feature of 5G networking, in the sense that support for network functions migrates from traditional hardware-based solutions to software-based ones. While the main rationale of 5G softwarization is to achieve a high degree of flexibility and programmability as well as a reduction in total cost of ownership (TCO), how to strike a desirable balance between system openness and necessary standardization remains an interesting and significant issue in the context of 5G. The aim of this article is to systematically survey the enabling technologies, platforms and tools for 5G softwarization, together with ongoing standardization activities at the relevant Standards Developing Organizations (SDOs). On this basis, we aim to shed light on the future evolution of 5G technologies in terms of softwarization versus standardization requirements and options.
Being able to accommodate multiple simultaneous transmissions on a single channel, non-orthogonal multiple access (NOMA) appears to be an attractive solution to support massive machine type communication (mMTC), which faces a massive number of devices competing for a limited number of shared radio resources. In this paper, we first analytically study the throughput performance of NOMA-based random access (RA), namely NOMA-RA. We show that while increasing the number of power levels in NOMA-RA yields a further gain in maximum throughput, the throughput gain grows more slowly than linearly. This is due to the higher-power dominance characteristic of power-domain NOMA known in the literature, and we explicitly quantify the throughput gain for the very first time in this paper. With our analytical model, we verify the performance advantage of the NOMA-RA scheme by comparing it with the baseline multi-channel slotted ALOHA (MS-ALOHA), with and without the capture effect. Despite the higher-power dominance effect, the maximum throughput of NOMA-RA with four power levels is over three times that of MS-ALOHA. However, our analytical results also reveal the sensitivity of NOMA-RA throughput to the offered load. To cope with the potentially bursty traffic of mMTC scenarios, we propose adaptive load regulation through a practical user barring algorithm. By estimating the current load from the observable channel feedback, the algorithm adaptively controls user access to maintain the optimal channel loading and achieve maximum throughput. When the proposed user barring algorithm is applied, simulations demonstrate that the instantaneous throughput of NOMA-RA always remains close to the maximum, confirming the effectiveness of our load regulation.
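The gain over MS-ALOHA can be reproduced with a small Monte-Carlo sketch under one simplified successive-interference-cancellation (SIC) model (our own illustrative model, not the paper's analytical one): on each channel, power levels are decoded from the highest downwards, and decoding stops at the first level where two or more users collide.

```python
import numpy as np

def throughput(num_users, num_channels, num_levels, trials=1000, seed=1):
    """Monte-Carlo throughput (successful decodes per channel per slot).
    Users pick a channel and a power level uniformly at random.
    Simplified SIC: a level with exactly one user is decoded and cancelled;
    a collided level blocks all lower-power levels on that channel."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(trials):
        ch = rng.integers(num_channels, size=num_users)
        lv = rng.integers(num_levels, size=num_users)
        for c in range(num_channels):
            counts = np.bincount(lv[ch == c], minlength=num_levels)
            for n in counts[::-1]:      # highest power level first
                if n == 1:
                    total += 1          # decoded and cancelled; continue down
                elif n > 1:
                    break               # collision blocks lower levels
    return total / (trials * num_channels)

aloha = throughput(48, 48, num_levels=1)   # one level reduces to MS-ALOHA
noma = throughput(48, 48, num_levels=4)    # four power levels
```

At unit load the single-level case lands near the classical slotted-ALOHA peak of 1/e per channel, while the four-level case sits clearly above it, though, as the abstract notes, well short of a four-fold gain.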
Software-Defined Networking (SDN) is a promising paradigm for computer networks, offering a programmable and centralised network architecture. However, although this technology can handle network traffic dynamically through real-time, flexible traffic control, SDN-based networks can be vulnerable to dynamic changes of flow control rules, which cause transmission disruption and packet loss in SDN hardware switches. This problem can be critical because the interruption and packet loss in SDN switches bring additional performance degradation to SDN-controlled traffic flows in the data plane. In this paper, we propose a novel robust flow control mechanism, referred to as Priority-based Flow Control (PFC), for dynamic but disruption-free flow management when flow control rules must be changed on the fly. PFC minimizes the complexity of the flow modification process in SDN switches by temporarily adapting the priority of flow rules, substantially reducing the time spent on control-plane processing at run-time. Measurement results show that PFC successfully prevents the transmission disruption and packet loss caused by traffic path changes, thus offering dynamic and lossless traffic control for SDN switches.
Deep learning is driving a radical paradigm shift in wireless communications, all the way from the application layer down to the physical layer. Despite this, there is an ongoing debate as to what additional value artificial intelligence (or machine learning) could bring, particularly to physical-layer design, and what penalties it may incur. These questions motivate a fundamental rethinking of wireless modem design in the artificial intelligence era. Through several physical-layer case studies, we argue for a significant role that machine learning could play, for instance in parallel error-control coding and decoding, channel equalization, interference cancellation, and multiuser and multiantenna detection. In addition, we discuss the fundamental bottlenecks of machine learning and their potential solutions.
In this paper, unsupervised deep learning solutions for multiuser single-input multiple-output (MU-SIMO) coherent detection are extensively investigated. According to how the channel state information at the receiver (CSIR) is utilized, the solutions fall into two groups. One group, called equalization-and-learning, utilizes the CSIR for channel equalization and then employs deep learning for multiuser detection (MUD). The other, called direct learning, feeds the CSIR directly, together with the received signal, into deep neural networks (DNNs) to conduct the MUD. We find that the direct-learning solutions outperform the equalization-and-learning solutions thanks to their better exploitation of the sequence-detection gain. On the other hand, the direct-learning solutions do not scale with the size of the SIMO network, as current DNN architectures cannot efficiently handle many co-channel interferers. Motivated by this observation, we propose a novel direct-learning approach that combines the merits of feedforward DNNs and parallel interference cancellation. The proposed approach trades complexity for learning scalability, and the complexity remains manageable thanks to the parallel network architecture.
In conventional hybrid beamforming approaches, the number of radio-frequency (RF) chains is the bottleneck on the achievable spatial multiplexing gain. Recent studies have overcome this limitation by increasing the update rate of the RF beamformer. This paper presents a framework for designing and evaluating such approaches, which we refer to as agile RF beamforming, from theoretical and practical points of view. In this context, we consider the impact of the number of RF chains and of phase-shifter speed and resolution on the design of agile RF beamformers. Our analysis and simulations indicate that even an RF-chain-free transmitter, whose beamformer has no RF chains, can deliver promising performance compared with fully digital systems and significantly outperform conventional hybrid beamformers. We then show that the phase shifters' limited switching speed can result in signal aliasing, in-band distortion, and out-of-band emissions. We introduce performance metrics and approaches to measure such effects, and compare the proposed agile beamformers using the Gram-Schmidt orthogonalization process. Although this paper aims to present a generic framework for deploying agile RF beamformers, it also reports extensive performance evaluations in communication systems in terms of adjacent channel leakage ratio, sum-rate, power efficiency, error vector magnitude, and bit-error rate.
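The role of phase-shifter resolution mentioned above can be made concrete with a minimal sketch (our own illustration of a standard uniform-linear-array model, not the paper's agile beamformer): steering a phase-only beamformer with B-bit phase shifters quantizes each antenna phase to a multiple of 2π/2^B and loses part of the array gain.

```python
import numpy as np

def array_gain(steer_deg, n_ant=16, bits=None):
    """Gain of a half-wavelength-spaced ULA phase-only beamformer steered
    toward `steer_deg`, with optional B-bit phase-shifter quantization."""
    k = np.pi * np.sin(np.deg2rad(steer_deg))   # per-element phase progression
    phases = -k * np.arange(n_ant)              # ideal conjugate phases
    if bits is not None:
        step = 2 * np.pi / 2 ** bits
        phases = np.round(phases / step) * step # quantized phase shifters
    w = np.exp(1j * phases) / np.sqrt(n_ant)    # unit-power beamformer
    a = np.exp(1j * k * np.arange(n_ant))       # array response vector
    return abs(w @ a) ** 2

ideal = array_gain(25.0)             # unquantized: full gain of n_ant
coarse = array_gain(25.0, bits=2)    # 2-bit phase shifters: reduced gain
```

With unquantized phases the gain equals the number of antennas; coarse 2-bit shifters give strictly less, which is one of the resolution effects the paper's framework quantifies alongside switching speed.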
We are on the brink of a new era for wireless telecommunications, an era that will change the way business is done. The fifth generation (5G) systems will be the first realization of this new digital era, in which various networks are interconnected to form a unified system. With support for higher capacity as well as low-delay and machine-type communication services, 5G networks will significantly improve on the performance of current fourth generation (4G) systems and will offer seamless connectivity to numerous devices by integrating different technologies, intelligence, and flexibility. In addition to the ongoing 5G standardization activities and technologies under consideration in the Third Generation Partnership Project (3GPP), technologies from the Institute of Electrical and Electronics Engineers (IEEE) operating in unlicensed bands will also be an integral part of the 5G ecosystem. Along with 3GPP-based cellular technology, IEEE standards and technologies are evolving to keep pace with user demands and new 5G services. In this article, we provide an overview of the evolution of the cellular and Wi-Fi standards over the last decade, with particular focus on the Medium Access Control (MAC) and Physical (PHY) layers, and highlight the ongoing activities in both camps driven by 5G requirements and use cases.
A machine learning (ML) technique is used to synthesize a linear millimetre-wave (mmWave) phased-array antenna via the phase-only synthesis approach. For the first time, a gradient boosting tree (GBT) is applied to estimate the phase values of a 16-element array antenna to generate different far-field radiation patterns. The GBT predicts the phases while the amplitudes are set equal, generating different beam patterns for various 5G mmWave transmission scenarios such as multicast, unicast, broadcast and unmanned aerial vehicle (UAV) applications.
The sheer volume of IIoT malware is one of the most serious security threats in today's interconnected world, with new types of advanced persistent threats and advanced forms of obfuscation. This paper presents a robust federated-learning-based architecture, called Fed-IIoT, for detecting Android malware applications in the Industrial Internet of Things (IIoT). Fed-IIoT consists of two parts: i) the participant side, where the data are poisoned by two dynamic attacks based on a generative adversarial network (GAN) and a federated GAN (FedGAN); and ii) the server side, which monitors the global model and shapes a robust collaborative training model by avoiding anomalies in aggregation with a GAN network (A3GAN) and by adjusting two GAN-based countermeasure algorithms. One of the main advantages of Fed-IIoT is that devices can safely participate in the IIoT and efficiently communicate with each other with no privacy issues. We evaluate our solution through experiments on various features using three IoT datasets. The results confirm the high accuracy of our attack and defence algorithms and show that the A3GAN defence preserves the robustness of data privacy for Android mobile users, achieving about 8% higher accuracy than existing state-of-the-art solutions.
Network performance optimization is among the most important tasks in wireless communication networks. In a Self-Organizing Network (SON), with its capability of adaptively changing network parameters, optimization tasks are more feasible than in static networks. Moreover, with the increase of OPEX and CAPEX in new-generation telecommunication networks, such optimization tasks are inevitable. In this paper, it is proven that similarity between target and network parameters produces lower Uncertainty Entropy (UEN) in a self-organizing system as a higher degree of organization is attained. The optimization task is carried out with the Adaptive Simulated Annealing method, enhanced with a Similarity Measure (SM) in the proposed approach (EASA). A Markov model of EASA is provided to assess the proposed approach. We also demonstrate its higher performance through a simulation based on an LTE network scenario.
Multi-access edge computing for mobile computing-task offloading is driving the extreme utilization of the available degrees of freedom (DoF) for ultra-reliable low-latency downlink communications. The fundamental aim of this work is to find latency-constrained transmission protocols that achieve a very low outage probability (e.g. 0.001%). Our investigation builds on the Polyanskiy-Poor-Verdú formula for finite-length coded channel capacity, which we extend from the quasi-static fading channel to the frequency-selective channel. Moreover, the choice of duplexing mode is also critical to downlink reliability. Specifically, time-division duplexing (TDD) outperforms frequency-division duplexing (FDD) in terms of frequency diversity gain. On the other hand, FDD has the advantage of more temporal DoF in the downlink, which can be exchanged for spatial diversity gain through space-time coding. A numerical study compares the reliability of FDD and TDD under various latency constraints.
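The finite-blocklength starting point can be sketched with the widely used normal approximation to the Polyanskiy-Poor-Verdú result for a complex AWGN channel, R ≈ C − √(V/n)·Q⁻¹(ε) bits per channel use (this is the textbook AWGN form only; the paper's extension to fading and frequency-selective channels is not reproduced here):

```python
from math import log2, sqrt
from statistics import NormalDist

LOG2E = 1.4426950408889634  # log2(e)

def achievable_rate(snr, n, eps):
    """Normal approximation R ~= C - sqrt(V/n) * Qinv(eps) for a complex
    AWGN channel: C is Shannon capacity, V the channel dispersion,
    n the blocklength, eps the target error (outage) probability."""
    C = log2(1 + snr)
    V = (1 - 1 / (1 + snr) ** 2) * LOG2E ** 2
    q_inv = -NormalDist().inv_cdf(eps)          # Q^{-1}(eps)
    return C - sqrt(V / n) * q_inv

# Short latency-constrained blocks pay a visible rate penalty at eps = 1e-5:
r_short = achievable_rate(snr=10.0, n=200, eps=1e-5)
r_long = achievable_rate(snr=10.0, n=2000, eps=1e-5)
```

Shrinking the blocklength tenfold widens the gap to Shannon capacity by roughly √10, which is exactly why latency constraints and ultra-low outage targets interact so strongly in this setting.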
The fifth-generation (5G) mobile communication technology, with higher capacity and data rates, ultra-low device-to-device (D2D) latency, and massive device connectivity, will greatly promote the development of vehicular ad hoc networks (VANETs). Meanwhile, new challenges such as security, privacy and efficiency arise. In this article, a hybrid D2D message authentication (HDMA) scheme is proposed for 5G-enabled VANETs, in which a novel group-signature-based algorithm is used for mutual authentication in vehicle-to-vehicle (V2V) communication. In addition, a pre-computed lookup table is adopted to reduce the computation overhead of the modular exponentiation operation. Security analysis shows that HDMA is robust against various security attacks, and performance analysis shows that, with the help of the pre-computed lookup table, HDMA authentication is more efficient than traditional schemes in V2V and vehicle-to-infrastructure (V2I) communication.
Authentication protocols are powerful tools for ensuring confidentiality, an important feature of the Internet of Things (IoT). The Denial-of-Service (DoS) attack is one of the most significant threats to availability, another essential feature of IoT, as it deprives users of services by consuming the energy of IoT nodes. Computational intelligence algorithms, in turn, can be applied to solve such issues in the network and cyber domains, and this article links these concepts. To do so, we analyze two lightweight authentication protocols, present a DoS attack inspired by users' misbehavior, and suggest a defence based on received signal strength, which is easy to compute, applicable to resisting different kinds of vulnerabilities in Internet protocols, and feasible for practical implementation. We implement it in two scenarios for locating attackers, investigate the effect of IoT devices' internal error on localization, and propose an optimization problem for finding the exact location of attackers, which is efficiently solvable by computational intelligence algorithms such as TLBO. In addition, we analyze the unreliable results produced by otherwise accurate devices and provide a solution that detects attackers with less than 12 cm of error and a false-alarm probability of 0.7%.
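The localization step can be illustrated with the classical linearized trilateration solved by least squares (a generic stand-in for intuition, not the article's TLBO-based optimization): given range estimates to known anchors, subtracting the first range equation from the others yields a linear system in the unknown position.

```python
import numpy as np

def locate(anchors, dists):
    """Linearized least-squares trilateration. Subtracting the first
    circle equation (x-xi)^2 + (y-yi)^2 = di^2 from the others gives a
    linear system A p = b in the unknown position p = (x, y)."""
    x0, y0 = anchors[0]
    d0 = dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    p, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return p

# Four anchors at the corners of a 10 m square; ranges measured to a
# device at (3, 4) -- noiseless here, so recovery is exact:
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
target = np.array([3.0, 4.0])
dists = [float(np.hypot(*(np.array(a) - target))) for a in anchors]
est = locate(anchors, dists)
```

With noisy RSS-derived ranges the same system becomes an over-determined least-squares fit; the article's contribution lies in handling those measurement errors, for which it turns to computational intelligence.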
Orthogonal frequency division multiplexing (OFDM) with index modulation (IM), or OFDM-IM, which employs the activated sub-carrier indices to convey information, exhibits higher energy efficiency and a lower peak-to-average power ratio (PAPR) than conventional OFDM systems. To further improve the throughput of discrete Fourier transform (DFT) based OFDM-IM (DFT-OFDM-IM), discrete cosine transform (DCT) based OFDM-IM (DCT-OFDM-IM) can be employed with double the subcarriers in the same bandwidth. However, one of the main disadvantages of DCT-OFDM-IM is its lack of the circular convolution property over a dispersive channel. To address this issue, an enhanced DCT-OFDM-IM (EDCT-OFDM-IM) system has been proposed that introduces a symmetric prefix and suffix at the transmitter and a pre-filter at the receiver, leading to better bit error rate (BER) performance than DFT-OFDM-IM. However, due to its special structure, it is difficult to derive an accurate upper bound on the average bit error probability (ABEP), which is essential for performance evaluation. In this paper, a tight ABEP upper bound is derived using the moment-generating function (MGF). Our theoretical analysis is validated by simulation results and proven to be very accurate. Consequently, the advantages of the EDCT-OFDM-IM system over the classic OFDM-IM system are further demonstrated analytically.
Seamless and ubiquitous coverage is a key requirement for future cellular networks. Although capacity and data rates are the main topics under discussion when envisioning the Fifth Generation (5G) of mobile communications and beyond, network coverage remains one of the major issues, since coverage quality strongly affects system performance and end-user experience. The increasing number of base stations and user terminals is anticipated to degrade network coverage through increased interference. Furthermore, the "ubiquitous coverage" use cases, including rural and isolated areas, present a significant challenge for mobile communication technologies. This survey presents an overview of the concept of coverage, highlighting how it is studied and measured and how it impacts network performance. Additionally, the most important key performance indicators influenced by coverage, which may affect the envisioned use cases with respect to throughput, latency, and massive connectivity, are discussed. Moreover, the main existing developments and deployments expected to augment network coverage and meet the requirements of emerging systems are presented, along with their implementation challenges.
Network-enabled sensing and actuation devices are key enablers for connecting real-world objects to the cyber world. The Internet of Things (IoT) consists of the network-enabled devices and communication technologies that allow connectivity and integration of physical objects (Things) into the digital world (Internet). Enormous amounts of dynamic IoT data are collected from Internet-connected devices. IoT data usually consists of multi-variant streams that are heterogeneous, sporadic, multi-modal and spatio-temporal; it can be disseminated at different granularities and has diverse structures, types and qualities. Dealing with the data deluge from heterogeneous IoT resources and services imposes new challenges on the indexing, discovery and ranking mechanisms needed to build applications that require on-line access and retrieval of ad-hoc IoT data. However, the existing IoT data indexing and discovery approaches are complex or centralised, which hinders their scalability. The primary objective of this paper is to provide a holistic overview of the state-of-the-art in indexing, discovery and ranking of IoT data. The paper aims to pave the way for researchers to design, develop, implement and evaluate techniques and approaches for on-line large-scale distributed IoT applications and services.
In this paper, the selection criteria of Forward Error Correction (FEC) codes, in particular convolutional codes, are evaluated for a novel air interface scheme called Low Density Signature Orthogonal Frequency Division Multiple Access (LDS-OFDM). To this end, the mutual-information transfer characteristics of the turbo Multiuser Detector (MUD) are investigated using Extrinsic Information Transfer (EXIT) charts. LDS-OFDM uses a low density signature structure to spread the data symbols in the frequency domain. This technique benefits from frequency diversity in addition to its ability to support more parallel data streams than the number of subcarriers (the overloaded condition). The turbo MUD couples the LDS data-symbol detector with the users' FEC decoders through the message-passing principle.
It is well documented that the achievable throughput of MIMO systems that employ linear beamforming can degrade significantly when the number of concurrently transmitted information streams approaches the number of base-station antennas. To increase the number of supported streams, and therefore the achievable net throughput, non-linear beamforming techniques have been proposed. These approaches are typically evaluated via simulations or simplified over-the-air experiments that suffice to validate their basic principles, but neither provide insight into the practical challenges of adopting them in a standards-compliant framework nor indicate the achievable performance when they are part of a standards-compliant protocol stack. In this work, for the first time, we evaluate non-linear beamforming in a 3GPP standards-compliant framework, using our recently proposed SWORD research platform. SWORD is a flexible, open-for-research, software-driven platform that enables the rapid evaluation of advanced algorithms without the extensive hardware optimizations that can prevent promising algorithms from being evaluated in a standards-compliant stack. We show that in an indoor environment, vector-perturbation-based non-linear beamforming can provide up to 46% throughput gains over linear approaches for 4×4 MIMO systems, and still provides gains of nearly 10% even when the number of base-station antennas is doubled.
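The degradation of linear beamforming at full spatial load can be reproduced numerically (this sketch shows only the well-known zero-forcing power penalty over i.i.d. Rayleigh channels; it is not the paper's vector-perturbation algorithm): the per-stream transmit-power cost of inverting the channel, trace((HHᴴ)⁻¹)/K, explodes as the number of streams K approaches the number of antennas M.

```python
import numpy as np

def zf_power_penalty(m_ant, k_streams, trials=2000, seed=0):
    """Average per-stream power penalty trace((H H^H)^-1) / K of
    zero-forcing beamforming over i.i.d. complex Gaussian channels.
    Larger values mean more transmit power wasted on channel inversion."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((k_streams, m_ant))
             + 1j * rng.standard_normal((k_streams, m_ant))) / np.sqrt(2)
        total += np.trace(np.linalg.inv(H @ H.conj().T)).real / k_streams
    return total / trials

loaded = zf_power_penalty(m_ant=6, k_streams=4)    # streams close to antennas
relaxed = zf_power_penalty(m_ant=12, k_streams=4)  # many excess antennas
```

The penalty shrinks quickly as excess antennas are added, which is why the linear baseline recovers ground when the antenna count doubles, and why the non-linear gains reported above shrink from 46% to about 10% in that regime.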
With the advent of Network Function Virtualization (NFV) techniques, a subset of the Internet traffic will be treated by a chain of virtual network functions (VNFs) during its journey, while the rest of the background traffic will still be carried based on traditional routing protocols. In such a multi-service network environment, we consider the co-existence of heterogeneous traffic control mechanisms, including flexible, dynamic service function chaining (SFC) traffic control and static, legacy IP routing, for the aforementioned two types of traffic that share common network resources. Depending on the traffic patterns of the background traffic, which is statically routed through the traditional IP routing platform, we aim to perform dynamic service function chaining for the foreground traffic requiring VNF treatment, so that both the end-to-end SFC performance and the overall network resource utilization can be optimized. Towards this end, we propose a deep reinforcement learning based scheme to enable intelligent SFC routing decision-making under dynamic network conditions. The proposed scheme is ready to be deployed on both hybrid SDN/IP platforms and future advanced IP environments. Based on the real GEANT network topology and its one-week traffic traces, our experiments show that the proposed scheme is able to improve significantly on the traditional routing paradigm and quickly achieve close-to-optimal performance while satisfying the end-to-end SFC requirements.
This paper introduces a millimeter-wave multiple-input multiple-output (MIMO) antenna for autonomous (self-driving) cars. The antenna is a modified four-port balanced antipodal Vivaldi which produces four directional beams and provides pattern diversity to cover a 90 deg angle of view. By using four antennas of this kind on the four corners of the car's bumper, it is possible to have a full 360 deg view around the car. The designed antenna is simulated with two commercial full-wave packages, and the results indicate that the proposed design successfully provides the required 90 deg angle of view.
This paper presents empirically based ultra-wideband and directional channel measurements performed in the Terahertz (THz) frequency range over a 250 GHz bandwidth, from 500 GHz to 750 GHz. A measurement-setup calibration technique is presented for free-space measurements taken in Line-of-Sight (LoS) between the transmitter (Tx) and receiver (Rx) in an indoor environment. The atmospheric effects on signal propagation, in terms of molecular absorption by oxygen and water molecules, are calculated and normalized. Channel impulse responses (CIRs) are acquired for the LoS scenario for different antenna separation distances. From the CIRs, the Power Delay Profile (PDP) is presented, in which multiple delay taps caused by group-delay products and reflections from the measurement bench can be observed.
Recently, the fifth-generation (5G) cellular system has been standardised. As opposed to legacy cellular systems geared towards broadband services, the 5G system identifies key use cases for ultra-reliable and low latency communications (URLLC) and massive machine-type communications (mMTC). These intrinsic 5G capabilities enable promising sensor-based vertical applications and services such as industrial process automation. The latter includes autonomous fault detection and prediction, optimised operations and proactive control. Such applications enable equipping industrial plants with a sixth sense (6S) for optimised operations and fault avoidance. In this direction, we introduce an inter-disciplinary approach integrating wireless sensor networks with machine learning-enabled industrial plants as a step towards developing this 6S technology. We develop a modular system that can be adapted to vertical-specific elements. Without loss of generality, exemplary use cases are developed and presented, including a fault detection/prediction scheme and a sensor-density-based boundary between orthogonal and non-orthogonal transmissions. The proposed schemes and modelling approach are implemented in a real chemical plant for testing purposes, and high fault detection and prediction accuracy is achieved, coupled with optimised sensor density analysis.
With the increased complexity of webpages nowadays, the computation latency incurred by webpage processing during downloading operations has become a newly identified factor that may substantially affect user experience in a mobile network. In order to tackle this issue, we propose a simple but effective transport-layer optimization technique which requires the dissemination of necessary context information from the mobile edge computing (MEC) server to the user devices where the algorithm is actually executed. The key novelty in this case is the mobile edge's knowledge of webpage content characteristics, which makes it possible to increase downloading throughput for user QoE enhancement. Our experiment results based on a real LTE-A test-bed show that, when the proportion of computation latency varies between 20% and 50% (which is typical for today's webpages), the downloading throughput can be improved by up to 34.5%, with downloading time reduced by up to 25.1%.
This work introduces MultiSphere, a method to massively parallelize the tree search of large sphere decoders in a nearly-independent manner, without compromising their maximum-likelihood performance, and while keeping the overall processing complexity at the level of highly-optimized sequential sphere decoders. MultiSphere employs a novel sphere decoder tree partitioning which can adjust to the transmission channel with a small latency overhead. It also utilizes a new method to distribute nodes to parallel sphere decoders and a new tree traversal and enumeration strategy which minimize redundant computations despite the nearly-independent parallel processing of the subtrees. For an 8 × 8 MIMO spatially multiplexed system with 16-QAM modulation and 32 processing elements, MultiSphere can achieve a latency reduction of more than an order of magnitude, approaching the processing latency of linear detection methods, while its overall complexity can be even smaller than that of well-known sequential sphere decoders. For 8 × 8 MIMO systems, MultiSphere's sphere decoder tree partitioning method can match the processing latency of other partitioning schemes using half the processing elements. In addition, it is shown that for a multi-carrier system with 64 subcarriers, performing sequential detection across subcarriers and using MultiSphere with 8 processing elements to parallelize detection achieves a smaller processing latency than parallelizing the detection process by using a single processing element per subcarrier (64 in total).
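The sequential tree search that MultiSphere parallelizes can be illustrated with a minimal depth-first sphere decoder: after a QR decomposition of the channel, symbols are fixed layer by layer, and any branch whose accumulated distance already exceeds the best full solution is pruned. This pure-Python sketch (real-valued model, no enumeration ordering or tree partitioning) shows only the baseline sequential search; it is not the MultiSphere algorithm itself.

```python
import math


def qr_decompose(H):
    """Classical Gram-Schmidt QR for a small real matrix (list of rows)."""
    m, n = len(H), len(H[0])
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [H[i][j] for i in range(m)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * H[i][j] for i in range(m))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(m)]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        for i in range(m):
            Q[i][j] = v[i] / R[j][j]
    return Q, R


def sphere_decode(H, y, constellation):
    """Exact ML detection, min ||y - H s||^2, via depth-first tree search."""
    n = len(H[0])
    Q, R = qr_decompose(H)
    # rotate the receive vector: ||y - H s||^2 = ||Q^T y - R s||^2 (+ const)
    yt = [sum(Q[i][k] * y[i] for i in range(len(y))) for k in range(n)]
    best = [float("inf"), None]
    s = [0.0] * n

    def search(k, dist):
        if dist >= best[0]:
            return                          # prune: branch outside the sphere
        if k < 0:
            best[0], best[1] = dist, s[:]   # new ML candidate, shrink radius
            return
        for c in constellation:
            s[k] = c
            residual = yt[k] - sum(R[k][j] * s[j] for j in range(k, n))
            search(k - 1, dist + residual * residual)

    search(n - 1, 0.0)
    return best[1], best[0]
```

Each pruned branch is work that never has to be done; MultiSphere's contribution is splitting this single search tree across many processing elements without re-exploring the same nodes.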
Nowadays, dense network deployment is considered one of the effective strategies to meet the capacity and connectivity demands of the fifth generation (5G) cellular system. Among several challenges, energy consumption will be a critical consideration in the 5G era. In this direction, base station on/off operation, i.e., sleep mode, is an effective technique to mitigate the excessive energy consumption in ultra-dense cellular networks. However, current implementations of this technique are unsuitable for dynamic networks with fluctuating traffic profiles, due to coverage constraints, quality-of-service requirements and hardware switching latency. To address this, we propose an energy/load proportional approach for 5G base stations with control/data plane separation. The proposed approach relies on multi-step sleep mode profiling and predicts the base station vacation time in advance. Such a prediction enables selecting the best sleep mode strategy whilst minimizing the effect of base station activation/reactivation latency, resulting in significant energy saving gains.
This paper presents a fully-transparent and novel frequency selective surface (FSS) that can be deployed instead of conventional glass to reduce the penetration loss encountered by millimeter wave (mmWave) frequencies in typical outdoor-to-indoor (O2I) communication scenarios. The presented design uses a 0.035 mm thick layer of indium tin oxide (ITO), a transparent conducting oxide (TCO) deposited on the surface of the glass, thereby ensuring the transparency of the structure. The paper also presents a novel unit cell that has been used to design the hexagonal lattice of the FSS structure. The dispersion and transmission characteristics of the proposed design are presented and compared with conventional glass. The presented FSS can be used for both the 26 GHz and 28 GHz bands of the mmWave spectrum and offers a lower transmission loss compared to conventional glass, without any considerable impact on the aesthetics of the building infrastructure.
The launch of the StarLink Project has recently stimulated a new wave of research on integrating Low Earth Orbit (LEO) satellite networks with the terrestrial Internet infrastructure. In this context, one distinct technical challenge to be tackled is the frequent topology change caused by the constellation behaviour of LEO satellites. Frequent change of the peering IP connection between the space and terrestrial Autonomous Systems (ASes) inevitably disrupts Border Gateway Protocol (BGP) routing stability at the network boundaries, which can be further propagated into the internal routing infrastructures within ASes. To tackle this problem, we introduce the Geosynchronous Network Grid Addressing (GNGA) scheme, which decouples IP addresses from physical network elements such as a LEO satellite. Specifically, according to the density of LEO satellites on the orbits, the IP addresses are allocated to a number of stationary "grids" in the sky and dynamically bound to the interfaces of the specific satellites moving into the grids over time. Such a scheme allows a static peering connection between a terrestrial BGP speaker and a fixed external BGP (e-BGP) peer in space, and hence is able to circumvent the exposure of routing disruptions to the legacy terrestrial ASes. This work-in-progress specifically addresses a number of fundamental technical issues pertaining to the design of the GNGA scheme.
In this paper, an 8×8 Multiple Input Multiple Output (MIMO) antenna design for Fifth Generation (5G) sub-6 GHz smartphone applications is presented. The antenna elements are based on a folded quarter-wavelength monopole that operates at 3.4-3.8 GHz. Isolation between antenna elements is provided through physical distancing. The fabricated antenna prototype's outer casing is made from Rogers RO4003C, with dimensions based on future 5G-enabled phones. Measured results show an operating bandwidth of 3.32 to 3.925 GHz (S11 < -6 dB) with a transmission coefficient < -14.7 dB. A high total efficiency for an antenna array of 70-85.6% is also obtained. The design is suitable for MIMO communications, as exhibited by an Envelope Correlation Coefficient (ECC) < 0.014. To conclude, a Specific Absorption Rate (SAR) model has been constructed and presented, showing the user's effects on the antenna's S-parameter results. The amount of power absorbed by the head and hand during operation has also been simulated.
One of the key research issues in wireless systems is how to improve the system capacity, and MIMO has been proven an effective method to achieve this. Previously, the focus of MIMO-OFDM research in High Performance Metropolitan Area Network (HIPERMAN) systems was on Space Time Coding (STC) and beamforming. Recently, Multi-User Detection (MUD) has emerged as a novel approach in MIMO-OFDM-based HIPERMAN systems. In this paper, we propose a new MAC design, which includes a new and flexible MAC frame structure and an efficient dynamic resource allocation algorithm, in order to accommodate MUD techniques in uplink transmission in HIPERMAN systems. The performance of the new MAC design has been evaluated via simulations. The simulation results show that the new MAC design based on MUD can significantly increase the system capacity.
In this letter, a dual-band 8×8 antenna array that operates in the sub-6 GHz spectrum for future 5G multiple-input multiple-output (MIMO) smartphone applications is presented. The design consists of a fully grounded plane with closely spaced orthogonal pairs of antennas placed symmetrically along the long edges and on the corners of the smartphone. The orthogonal pairs are connected by a 7.8 mm short neutral line for mutual coupling reduction in both bands. Each antenna element consists of a folded monopole with dimensions of 17.85 × 5 mm² and can operate in 3100-3850 MHz for the low band and 4800-6000 MHz for the high band (|S11| < -10 dB). The fabricated antenna prototype is tested and offers good performance in terms of Envelope Correlation Coefficient (ECC), Mean Effective Gain (MEG), total efficiency and channel capacity. Finally, the user effects on the antenna and the Specific Absorption Rate (SAR) are also presented.
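ECC figures like those quoted here are commonly estimated from measured S-parameters. A minimal sketch of the standard two-port S-parameter approximation follows; it assumes lossless, well-matched antennas, and the port values in the example are invented for illustration, not taken from this design.

```python
def ecc_from_s_params(s11, s21, s12, s22):
    """Envelope correlation coefficient between two antenna ports from
    complex S-parameters (lossless-antenna approximation)."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2)
           * (1 - abs(s22) ** 2 - abs(s12) ** 2))
    return num / den


# illustrative values: ports matched to -20 dB and well isolated
ecc = ecc_from_s_params(0.1 + 0j, 0.01 + 0j, 0.01 + 0j, 0.1 + 0j)
```

With good matching and isolation the numerator is a product of small terms, which is why closely spaced elements can still achieve ECC well below the usual 0.5 diversity threshold.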
This letter describes the impact of unknown channel access delay on the timeline of the Hybrid Automatic Repeat Request (HARQ) process in the 3rd Generation Partnership Project Long Term Evolution (3GPP LTE) system when a Relay Node (RN) is used for coverage extension of Machine Type Communication (MTC) devices. A solution is also proposed for determining the unknown channel access delay when the RN operates in an unlicensed spectrum band. The proposed mechanism is expected to help MTC operation in typical coverage-hole areas, such as for smart meters located in building basements.
The fifth-generation (5G) new radio (NR) cellular system promises a significant increase in capacity with reduced latency. However, the 5G NR system will be deployed alongside legacy cellular systems such as the long-term evolution (LTE). Scarcity of spectrum resources in low frequency bands motivates adjacent-/co-carrier deployments. This approach comes with a wide range of practical benefits and improves spectrum utilization by re-using the LTE bands. However, such deployments restrict the 5G NR flexibility in terms of frame allocations to avoid the most critical mutual adjacent-channel interference. This in turn prevents achieving the promised 5G NR latency figures. In this paper, we tackle this issue by proposing to use the mini-slot uplink feature of 5G NR to perform uplink acknowledgement and feedback, reducing the frame latency, with selective blind retransmission to overcome the effect of interference. Extensive system-level simulations under realistic scenarios show that the proposed solution can reduce the peak frame latency for feedback and acknowledgment by up to 33%, and for retransmission by up to 25%, at a marginal cost of an up to 3% reduction in throughput.
This paper proposes a low-complexity hybrid beamforming design for multi-antenna communication systems. The hybrid beamformer comprises a baseband digital beamformer and a constant-modulus analog beamformer in the radio frequency (RF) part of the system. As in Singular-Value-Decomposition (SVD) based beamforming, the hybrid beamforming design aims to generate parallel data streams in multi-antenna systems; however, due to the constant-modulus constraint of the analog beamformer, the problem cannot be solved in the same way. To address this problem, mathematical expressions for the parallel data streams are derived in this paper, and the desired and interfering signals are specified per stream. The analog beamformers are designed by maximizing the power of the desired signal while minimizing the sum-power of the interfering signals. Finally, the digital beamformers are derived by defining the equivalent channel observed by the transmitter/receiver. Regardless of the number of antennas or the type of channel, the proposed approach can be applied to a wide range of MIMO systems with a hybrid structure wherein the number of antennas exceeds the number of RF chains. In particular, the proposed algorithm is verified for sparse channels that emulate mm-wave transmission as well as for rich scattering environments. To validate its optimality, the results are compared with those of the state of the art, and it is demonstrated that the proposed method outperforms state-of-the-art techniques, regardless of the type of channel and/or system configuration.
Orthogonal Frequency Division Multiple Access (OFDMA), as well as other orthogonal multiple access techniques, fails to achieve the system capacity limit in the uplink due to the exclusivity in resource allocation. This issue is more prominent when fairness among the users is considered in the system. Current Non-Orthogonal Multiple Access (NOMA) techniques introduce redundancy by coding/spreading to facilitate the separation of the users' signals at the receiver, which degrades the system's spectral efficiency. Hence, in order to achieve higher capacity, more efficient NOMA schemes need to be developed. In this paper, we propose an uplink NOMA scheme that removes the resource allocation exclusivity and allows more than one user to share the same subcarrier without any coding/spreading redundancy. Joint processing is implemented at the receiver to detect the users' signals. However, to control the receiver complexity, an upper limit on the number of users per subcarrier needs to be imposed. In addition, a novel subcarrier and power allocation algorithm is proposed for the new NOMA scheme that maximizes the users' sum-rate. The link-level performance evaluation shows that the proposed scheme achieves a bit error rate close to the single-user case. Numerical results show that the proposed NOMA scheme can significantly improve the system performance in terms of spectral efficiency and fairness compared to OFDMA.
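The capacity advantage of subcarrier sharing can be illustrated with successive interference cancellation (SIC): the receiver decodes the strongest user first, treating the rest as noise, then subtracts it and repeats. The sketch below (unit-bandwidth rates, invented powers and gains) illustrates this generic principle, not the paper's joint detector or its allocation algorithm.

```python
import math


def sic_rates_per_subcarrier(users, noise=1.0):
    """users: list of (tx_power, channel_gain) pairs sharing one subcarrier.
    Returns per-user rates in bits/s/Hz under an ideal SIC decoding order
    (strongest received signal decoded and cancelled first)."""
    received = sorted((p * g for p, g in users), reverse=True)
    rates = []
    for i, sig in enumerate(received):
        interference = sum(received[i + 1:])  # users not yet cancelled
        rates.append(math.log2(1 + sig / (noise + interference)))
    return rates
```

A well-known property visible in this sketch: the SIC sum-rate telescopes to log2(1 + total received power / noise), the multiple-access sum capacity, which exclusive orthogonal allocation generally cannot reach.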
By performing a Floquet-mode analysis of a periodic slotted waveguide, a multiple-beam leaky wave antenna is proposed in the millimetre-wave (mmW) band. By considering the direction of the surface current lines on the broad/side-walls of the waveguide, the polarization of the constructed beams is also controlled. The simulation results match the initial mathematical analysis well.
A compact, dual-band wearable antenna for off-body communication operating in both the 2.45 and 5.8 GHz industrial, scientific, and medical (ISM) bands is presented. The antenna is a printed monopole on an FR4 substrate with a modified loaded ground plane to keep the antenna profile compact. The antenna's radiation characteristics have been optimized with the antenna placed close to the human forearm. The fabricated antenna operating on the forearm has been measured to verify the simulation results.
In this paper, we study the Gaussian Cognitive Z-interference channel (GCZIC) and its multiuser extension, the Gaussian Cognitive Z-broadcast interference channel (GCZBIC). We review some known capacity results and bounds for the GCZIC for various levels of interference. We derive a new, improved inner bound for the GCZIC under conditions which intersect with those for which the capacity is not known. We then derive capacity results and bounds for the GCZBIC when the broadcast component of the channel is a degraded broadcast channel.
The Self-Organizing Network (SON) has been seen as one of the promising means to save OPerational EXpenditure (OPEX) and to bring real efficiency to wireless networks. Though studies in the literature address local interaction and distributed structures for SON, its coherent pattern has not yet been well studied. We consider a target-following regime and propose a novel goal-attainment approach using a Similarity Measure (SM) for the Coverage & Capacity Optimization (CCO) use case in SON. The methodology is based on a self-optimization algorithm which optimizes the multiple objective functions of UE throughput and fairness using a performance measure, computed as the SM between target and measured KPIs. After a certain number of epochs, the optimum results are used in the adjustment and updating modules of goal attainment. To investigate the proposed approach, a simulation of the LTE downlink has been set up. In a scenario including a congested cell with a hotspot, joint antenna tilt/azimuth parameters using a 3D beam pattern are considered. The final CDF results show a noticeable migration of hotspot UEs to higher throughputs, while no UE is worse off.
This paper addresses the problem of joint backhaul and access link optimization in dense small cell networks, with special focus on the time division duplexing (TDD) mode of operation for backhaul and access link transmission. We propose a framework for joint radio resource management in which we systematically decompose the problem into backhaul and access links. To simplify the analysis, the procedure is tackled in two stages. In the first stage, the joint optimization problem is formulated for a point-to-point scenario where each small cell is simply associated with a single user. It is shown that the optimization can be decomposed into separate power and sub-channel allocation in both backhaul and access links, where a set of rate-balancing parameters, in conjunction with the duration of transmission, governs the coupling across both links. Moreover, a novel algorithm based on grouping the cells is proposed to achieve rate-balancing across different small cells. In the second stage, the problem is generalized to multi-access small cells, where each small cell serves multiple users. The optimization is similarly decomposed into separate sub-channel and power allocation by employing auxiliary slicing variables. It is shown that algorithms similar to those of the first stage are applicable, with slight changes, with the aid of the slicing variables. Additionally, for the special case of line-of-sight backhaul links, simplified expressions for sub-channel and power allocation are presented. The developed concepts are evaluated by extensive simulations in different case studies, from full orthogonalization to dynamic clustering and full reuse in the downlink, and it is shown that the proposed framework provides significant improvement over the benchmark cases.
This paper studies the optimum user selection scheme in a hybrid-duplex device-to-device (D2D) cellular network. We derive an analytical integral-form expression of the cumulative distribution function (CDF) of the received signal-to-interference-plus-noise ratio (SINR) at the D2D node, based on which a closed form of the outage probability is obtained. The analysis shows that the proposed user selection scheme achieves the best SINR at the D2D node while the interference to the base station is limited to a pre-defined level. Hybrid-duplex D2D can be switched between half and full duplex according to the residual self-interference to enhance the throughput of the D2D pair. Simulation results are presented to validate the analysis.
This letter proposes a novel graph-based multi-cell scheduling framework to efficiently mitigate downlink inter-cell interference in small cell OFDMA networks. This framework incorporates dynamic clustering combined with channel-aware resource allocation to provide tunable quality-of-service measures at different levels. Our extensive evaluation study shows that a significant improvement in user spectral efficiency is achievable while maintaining relatively high cell spectral efficiency, via empirical tuning of the re-use factor across the cells according to the required QoS constraints.
This paper addresses the problem of opportunistic spectrum access in support of mission-critical ultra-reliable and low latency communications (URLLC). Considering the need to support short packet transmissions in URLLC scenarios, a new capacity metric in the finite blocklength regime is introduced, as traditional performance metrics such as ergodic capacity and outage capacity are no longer applicable. We focus on an opportunistic spectrum access system in which the secondary user (SU) opportunistically occupies the frequency resources of the primary user (PU) and transmits reliable short packets to its destination. An achievable rate maximization problem is then formulated for the SU in support of URLLC services, subject to a probabilistic received-power constraint at the PU receiver and imperfect channel knowledge of the SU-PU link. To tackle this problem, an optimal power allocation policy is proposed. Closed-form expressions are then derived for the maximum achievable rate in the finite blocklength regime, the approximate transmission rate at high signal-to-noise ratios (SNRs) and the optimal average power. Numerical results validate the accuracy of the proposed closed-form expressions and further reveal the impact of channel estimation error, block error probability, finite blocklength and the received-power constraint.
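The finite-blocklength metric referred to here is usually computed through the normal approximation R ≈ C − sqrt(V/n)·Q⁻¹(ε), where C is the Shannon capacity, V the channel dispersion, n the blocklength and ε the block-error probability. A self-contained sketch for an AWGN channel follows (Q⁻¹ is computed by bisection to avoid external libraries); it illustrates the generic metric only, not the paper's closed-form expressions for the cognitive setting.

```python
import math


def q_function(x):
    """Gaussian tail probability Q(x) = P(Z > x) for standard normal Z."""
    return 0.5 * math.erfc(x / math.sqrt(2))


def q_inverse(eps, lo=-12.0, hi=12.0):
    """Invert the (strictly decreasing) Q-function by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if q_function(mid) > eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)


def finite_blocklength_rate(snr, n, eps):
    """Normal approximation of the maximal achievable rate (bits/channel use)
    at blocklength n and block-error probability eps over AWGN."""
    cap = math.log2(1 + snr)
    dispersion = (1 - 1 / (1 + snr) ** 2) * math.log2(math.e) ** 2
    return cap - math.sqrt(dispersion / n) * q_inverse(eps)
```

The back-off term sqrt(V/n)·Q⁻¹(ε) vanishes as n grows, recovering the ergodic view, but for URLLC-scale packets (hundreds of channel uses, ε near 1e-5) it is a non-negligible rate penalty.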
In cognitive radio networks, the licensed frequency bands of the primary users (PUs) are available to the secondary user (SU) provided that it does not cause significant interference to the PUs. In this study, the authors analyse the normalised throughput of the SU coexisting with multiple PUs under any frequency division multiple access communication protocol. The authors consider a cognitive radio transmission where the frame structure consists of sensing and data transmission slots. In order to achieve the maximum normalised throughput of the SU and control the interference level to the legal PUs, the optimal frame length of the SU is found via simulation. In this context, a new analytical formula is derived for the achievable normalised throughput of the SU with multiple PUs under perfect and imperfect spectrum sensing scenarios. Moreover, the impact of imperfect sensing, variable SU frame length and variable PU traffic loads on the normalised throughput is critically investigated. It is shown that the analytical and simulation results are in perfect agreement. The authors' analytical results are useful for determining how to select the frame duration subject to the parameters of the cognitive radio network, such as the network traffic load, achievable sensing accuracy and the number of coexisting PUs.
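The frame-length trade-off at the heart of this study can be shown with a toy model: a longer frame wastes a smaller fraction of its time on sensing, but raises the chance that a PU returns mid-frame. The sketch below (exponential PU idle times, fixed sensing slot, invented parameter values, and deliberately simpler than the paper's formula with imperfect sensing and multiple PUs) exhibits the interior optimum that the paper locates via simulation.

```python
import math


def su_normalised_throughput(frame_len, sense_len, snr, mean_pu_idle):
    """Toy normalised SU throughput: data-slot fraction, times the probability
    that the PU stays idle for the whole frame (exponential idle times)."""
    data_fraction = (frame_len - sense_len) / frame_len
    p_pu_idle_whole_frame = math.exp(-frame_len / mean_pu_idle)
    return data_fraction * p_pu_idle_whole_frame * math.log2(1 + snr)


# sweep candidate frame lengths to locate the optimum frame duration
best_rate, best_T = max(
    (su_normalised_throughput(T, 1.0, 10.0, 50.0), T) for T in range(2, 200)
)
```

Very short frames lose everything to sensing overhead and very long frames almost surely collide with returning PU traffic, so the sweep finds a finite optimum that depends on the PU traffic load, mirroring the paper's conclusion.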
The coordinated multi-point (CoMP) architecture has proved very effective for improving the user fairness and spectral efficiency of cellular communication systems; however, its energy efficiency remains to be evaluated. In this paper, the CoMP system is idealized as a distributed antenna system by assuming perfect backhauling and cooperative processing. This simplified model allows us to express the capacity of the idealized CoMP system with a simple and accurate closed-form approximation. In addition, a framework for the energy efficiency analysis of CoMP systems is introduced, which includes a power consumption model and an energy efficiency metric, i.e. bit-per-joule capacity. This framework, along with our closed-form approximation, is utilized for assessing both the channel and bit-per-joule capacities of the idealized CoMP system. Results indicate that multi-base-station cooperation can be energy efficient for cell-edge communication and that the backhauling and cooperative processing power should be kept low. Overall, it is shown that the potential improvement of CoMP in terms of bit-per-joule capacity is not as high as that in terms of channel capacity, due to the energy cost associated with cooperative processing and backhauling.
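The bit-per-joule metric is simply delivered capacity divided by total consumed power. A toy calculation (generic additive power model with invented numbers, not the paper's calibrated model or closed-form approximation) shows why cooperation can lose in energy efficiency even while winning in capacity: the capacity gain is logarithmic in SNR, while backhaul and processing power grow linearly with the number of cooperating base stations.

```python
import math


def bit_per_joule(bandwidth_hz, snr, n_bs, p_tx, p_circuit, p_backhaul):
    """Energy efficiency (bits/joule) of an idealized cooperative link:
    Shannon capacity over a simple additive per-base-station power model."""
    capacity = bandwidth_hz * math.log2(1 + snr)           # bits/s
    total_power = n_bs * (p_tx + p_circuit + p_backhaul)   # watts
    return capacity / total_power


# one cell vs. two cooperating cells that (optimistically) double the SNR
ee_single = bit_per_joule(1e6, 10.0, 1, 10.0, 5.0, 2.0)
ee_comp = bit_per_joule(1e6, 20.0, 2, 10.0, 5.0, 2.0)
```

Even with the optimistic SNR doubling, the doubled power budget outweighs the log-capacity gain in this example, echoing the paper's finding that backhauling and processing power must be kept low for CoMP to pay off energetically.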
The recent paradigm shift towards the transmission of large numbers of mutually interfering information streams, as in the case of aggressive spatial multiplexing, combined with requirements towards very low processing latency despite the frequency plateauing of traditional processors, initiates a need to revisit the fundamental maximum-likelihood (ML) and, consequently, the sphere-decoding (SD) detection problem. This work presents the design and VLSI architecture of MultiSphere; the first method to massively parallelize the tree search of large sphere decoders in a nearly-concurrent manner, without compromising their maximum-likelihood performance, and by keeping the overall processing complexity comparable to that of highly-optimized sequential sphere decoders. For a 10 × 10 MIMO spatially multiplexed system with 16-QAM modulation and 32 processing elements, our MultiSphere architecture can reduce latency by 29× against well-known sequential SDs, approaching the processing latency of linear detection methods, without compromising ML optimality. In MIMO multicarrier systems targeting exact ML decoding, MultiSphere achieves processing latency and hardware efficiency that are orders of magnitude improved compared to approaches employing one SD per subcarrier. In addition, for 16 × 16 both “hard”- and “soft”-output MIMO systems, approximate MultiSphere versions are shown to achieve error rate performance similar to that of state-of-the-art approximate SDs having akin parallelization properties, by using only one tenth of the processing elements, and to achieve up to approximately 9× increased energy efficiency.
In this letter, we study the beamforming design in a lens-antenna-array-based joint multicast-unicast millimeter wave massive MIMO system, where simultaneous wireless information and power transfer at the users is considered. First, we develop a beam selection scheme based on the structure of the lens antenna array, and then zero-forcing precoding is adopted to cancel the inter-unicast interference among users. Next, we formulate a sum-rate maximization problem by jointly optimizing the unicast power, multicast beamforming and power splitting ratio, while imposing the maximum transmit power constraint at the base station and the minimum harvested energy for each user. By employing the successive convex approximation technique, we transform the original optimization problem into a convex one and propose an iterative algorithm to solve it. Finally, simulations are conducted to verify the effectiveness of the proposed schemes.
Dynamic spectrum allocation (DSA) seeks to exploit the variations in the loads of various radio-access networks to allocate the spectrum efficiently. Here, a spectrum manager implements DSA by periodically auctioning short-term spectrum licenses. We solve analytically the problem of the operator of a CDMA cell populated by delay-tolerant terminals operating at various data rates on the downlink and representing users with dissimilar "willingness to pay" (WtP), where WtP is the most a user would pay for a correctly transferred information bit. The operator finds a revenue-maximising internal pricing and service-priority policy, along with a bid for spectrum. Our clear and specific analytical results apply to a wide variety of physical layer configurations. The optimal operating point can be easily obtained from the frame-success rate function. At the optimum (with a convenient time scale), a terminal's contribution to revenues is the product of its WtP and its data rate, and the product of its WtP and its channel gain determines its service priority ("revenue per Hertz"). Assuming a second-price auction, the operator's optimal bid for a certain spectrum band equals the sum of the individual revenue contributions of the additional terminals that could be served if the band were won.
Clustering algorithms have been extensively applied for energy conservation in wireless sensor networks (WSNs). Cluster-heads (CHs) play an important role and drain energy more rapidly than the other member nodes. Numerous mechanisms to optimize CH selection and cluster formation during the set-up phase have been proposed for extending the stable operation period of the network until any node depletes its energy. However, the existing mechanisms assume that the traffic load contributed by each node is the same; in other words, that the same amount of data is sent to the CH by each member node during each scheduled round. This paper assumes the nodes contribute traffic load at different rates, and consequently proposes an energy-efficient clustering algorithm that considers both the residual node energy and the traffic load contribution of each node during the set-up phase. The proposed algorithm gives nodes with more residual energy and less traffic load contribution a greater chance to become CHs. Furthermore, clusters are adaptively organized so that the deviation of the ratio between the total cluster energy and the total cluster traffic load (ETRatio) is limited, in order to balance energy usage among the clusters. Performance evaluation shows that the proposed algorithm extends the stable operation period of the network significantly.
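The CH-selection idea can be sketched as weighted random selection, where each node's weight grows with residual energy and shrinks with its traffic-load contribution. The following is a minimal, illustrative sketch: the weight E/L and the selection mechanics are our simplification, not the paper's exact probability rule or its ETRatio cluster balancing.

```python
import random


def select_cluster_heads(nodes, num_ch, rng=None):
    """nodes: list of (node_id, residual_energy, traffic_load) tuples.
    Picks num_ch cluster heads without replacement, favouring nodes with
    high residual energy and low traffic-load contribution."""
    rng = rng or random.Random()
    # weight grows with energy and shrinks with traffic contribution
    pool = {nid: energy / max(load, 1e-9) for nid, energy, load in nodes}
    heads = []
    for _ in range(num_ch):
        ids = list(pool)
        picked = rng.choices(ids, weights=[pool[i] for i in ids], k=1)[0]
        heads.append(picked)
        del pool[picked]   # a node heads at most one cluster per round
    return heads


nodes = [("a", 5.0, 1.0), ("b", 1.0, 4.0), ("c", 4.0, 1.0), ("d", 0.5, 5.0)]
heads = select_cluster_heads(nodes, 2, random.Random(7))
```

Randomising the choice (rather than always picking the top-weighted nodes) spreads the CH burden across rounds, which is the usual rationale in LEACH-style protocols.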
This work addresses joint transceiver optimization for multiple-input, multiple-output (MIMO) systems. In practical systems, complete knowledge of channel state information (CSI) is rarely available at the transmitter. To tackle this problem, we resort to the codebook approach to precoding design, where the receiver selects a precoding matrix from a finite set of pre-defined precoding matrices based on the instantaneous channel condition and delivers the index of the chosen precoding matrix to the transmitter via a bandwidth-constrained feedback channel. We show that, when the symbol constellation is improper, joint codebook-based precoding and equalization can be designed accordingly to achieve improved performance compared to the conventional system.
The spatially-incoherent radiators in visible light communication (VLC) constrain the optical carrier to be driven only by a real electrical sub-carrier, which cannot be quadrature modulated as in classic RF-based systems. This restriction, in turn, severely limits the transmission throughput of VLC systems. To overcome this technical challenge, we propose a novel coherent transmission scheme for VLC, in which the optical carrier is treated purely as an amplitude-modulated carrier capable of transmitting two-dimensional (2D) symbols (e.g. quadrature-modulated symbols). The ability of our new coherent transmission scheme to transmit 2D symbols is validated through analytical symbol error rate derivation and Matlab simulations. Results show that our scheme can improve both the spectral and energy efficiency of VLC systems, either doubling the spectral efficiency or achieving more than 45% energy efficiency improvement compared to its existing counterparts.
A novel low-density signature (LDS) structure is proposed for the transmission and detection of symbol-synchronous communication over the memoryless Gaussian channel. With N denoting the processing gain, under this new arrangement each user's symbols are spread over N chips, of which only d_v < N contain nonzero values. The spread symbol is then uniquely interleaved so that the received signal, sampled at chip rate, contains contributions from only d_c < K users, where K denotes the total number of users in the system. Furthermore, a near-optimum chip-level iterative soft-in-soft-out (SISO) multiuser detection (MUD) scheme, based on the message passing algorithm (MPA), is proposed to approximate optimum detection by efficiently exploiting the LDS structure. With beta = K/N as the system loading, our simulations suggest that the proposed system, together with the proposed detection technique, can achieve an overall performance in the AWGN channel that is close to single-user performance, even at 200% loading, i.e., when beta = 2. Its robustness against the near-far effect and its performance behavior, which closely resembles that of optimum detection, are demonstrated in this paper. In addition, the detection complexity is now exponential in d_c instead of K, as in the conventional code division multiple access (CDMA) structure employing the optimum multiuser detector.
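A toy instance of the sparse structure described above (the matrix and sizes are illustrative, not from the paper): K = 6 users over N = 4 chips, each user occupying d_v = 2 chips and each chip hit by d_c = 3 users, so a per-chip MPA detector works over 2^3 user hypotheses instead of 2^6.

```python
import numpy as np

# Toy low-density signature (support) matrix: rows are chips, columns are
# users. A 1 marks a nonzero spreading value -- illustrative layout only.
S = np.array([
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1],
    [0, 0, 0, 1, 1, 1],
])
N, K = S.shape
d_v = S.sum(axis=0)   # nonzero chips per user (2 each)
d_c = S.sum(axis=1)   # interfering users per chip (3 each)
beta = K / N          # system loading: 1.5, i.e. 150% loaded

# Per-chip detection complexity is exponential in d_c, not in K.
per_chip_hypotheses = 2 ** int(d_c[0])   # 8, versus 2**K = 64
```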
With the recent development of Device-to-Device (D2D) communication technologies, mobile devices will no longer be treated as pure “terminals”; they could become an integral part of the network in specific application scenarios. In this paper, we introduce a novel scheme of using D2D communications for enabling data relay services in partial Not-Spots, where a client without local network access may require data relay by other devices. Depending on the specific social application scenarios that can leverage D2D technology, we consider tailored algorithms in order to achieve optimised data relay service performance on top of our proposed network-coordinated communication framework. The approach is to exploit the network’s knowledge of its local user mobility patterns in order to identify the best helper devices participating in data relay operations. This framework also comes with our proposed helper selection optimization algorithm based on the reactive predictability of individual users. According to our simulation analysis based on both theoretical mobility models and real human mobility data traces, the proposed scheme is able to flexibly support different service requirements in specific social application scenarios.
Holographic-type Communication (HTC) is widely deemed an emerging type of augmented reality (AR) media that offers Internet users deeply immersive experiences. In contrast to traditional video content transmissions, the characteristics and network requirements of HTC have been much less studied in the literature. Due to the high bandwidth requirements and the various limitations of today’s HTC platforms, large-scale HTC streaming has never been systematically attempted and comprehensively evaluated until now. In this paper, we introduce a novel HTC-based teleportation platform leveraging cloud-based remote production functions, supported by newly proposed adaptive frame buffering and end-to-end signalling techniques against network uncertainties, which for the first time is able to provide assured user experiences at the public Internet scale. Through real-life experiments based on strategically deployed cloud sites for remote production functions, we demonstrate the feasibility of supporting assured user performance for such applications at the global Internet scale.
The design of efficient wireless fronthaul connections for future heterogeneous networks incorporating emerging paradigms such as the cloud radio access network (C-RAN) has become a challenging task that requires the most effective utilization of fronthaul network resources. In this paper, we propose to use distributed compression to reduce the fronthaul traffic in uplink Coordinated Multi-Point (CoMP) for C-RAN. Unlike the conventional approach, where each coordinating point quantizes and forwards its own observation to the processing centre, these observations are compressed before forwarding. At the processing centre, the decompression of the observations and the decoding of the user message are conducted in a successive manner. The essence of this approach is the optimization of the distributed compression using an iterative algorithm to achieve the maximal user rate with a given fronthaul rate; in other words, for a target user rate the generated fronthaul traffic is minimized. Moreover, joint decompression and decoding is studied, and an iterative optimization algorithm is devised accordingly. Finally, the analysis is extended to the multi-user case, and our results reveal that, in both dense and ultra-dense urban deployment scenarios, distributed compression can efficiently reduce the required fronthaul rate, with a further reduction obtained through joint operation.
This paper proposes a novel graph-based multicell scheduling framework to efficiently mitigate downlink intercell interference in OFDMA-based small cell networks. We define a graph-based optimization framework based on the interference condition between any two users in the network, assuming they are served on the same resources. Furthermore, we prove that the proposed framework obtains a tight lower bound for the conventional weighted sum-rate maximization problem in practical scenarios. Thereafter, we decompose the optimization problem into dynamic graph-partitioning-based subproblems across different subchannels and provide an optimal solution using a branch-and-cut approach. Subsequently, due to the high complexity of this solution, we propose heuristic algorithms that display near-optimal performance. At the final stage, we apply cluster-based resource allocation per subchannel to find candidate users with the maximum total weighted sum-rate. A case study on networked small cells is also presented, with simulation results showing a significant improvement over state-of-the-art multicell scheduling benchmarks in terms of outage probability as well as average cell throughput.
A key requirement for ease of migration from legacy to ambient networks is the elimination of dependencies between functionalities. Currently, in the case of designs for QoS and mobility in IP networks, it is apparent that there is a strong coupling between the two functions. One way to reduce this coupling is to remove the need to hold QoS state within the network. This can be done if applications are able to manage the QoS parameters themselves. The basic idea is that the application is responsible for keeping the network in a congestion-free state in order to minimize the loss and delay it experiences, much as TCP does today for best-effort traffic. On-line estimation of end-to-end packet loss can be used to monitor wireless link performance and to help adaptive applications make better use of network resources. Existing work has focused on measuring and modeling packet loss in the Internet, but most of these techniques do not address end-to-end path performance. This paper proposes an on-line stripe packet-pair probing approach to estimate the packet loss rate that applications may suffer. The approach uses a second-order Markov chain to model the loss rate and loss burstiness. To reduce the computational complexity, we employ maximum entropy to estimate the parameters. The paper validates the model using existing loss traces.
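The second-order Markov model mentioned above conditions the loss probability on the two preceding packet outcomes; a minimal sketch of fitting such a model from a binary loss trace follows (the packet-pair probing and maximum-entropy estimation steps are not reproduced, and the trace is made up).

```python
from collections import Counter

def fit_second_order(trace):
    """Estimate P(loss | previous two outcomes) from a 0/1 loss trace
    (1 = packet lost), i.e. a second-order Markov loss model."""
    counts, losses = Counter(), Counter()
    for i in range(2, len(trace)):
        ctx = (trace[i - 2], trace[i - 1])   # previous two outcomes
        counts[ctx] += 1
        if trace[i] == 1:
            losses[ctx] += 1
    return {ctx: losses[ctx] / counts[ctx] for ctx in counts}

trace = [0, 0, 1, 1, 0, 0, 0, 1, 0, 0]   # hypothetical probe outcomes
model = fit_second_order(trace)
# e.g. after two deliveries (0, 0) the estimated loss probability is 2/3
```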
The performance of SIR-based closed-loop power control (CLPC) is analysed analytically, using the standard deviation of the power control error (PCE) as the performance metric. A non-linear control theory method is applied to the feedback system under fast fading, and an analytical expression for the CLPC under fast fading is derived. Finally, a quantized step-size power control algorithm replacing the hard limiter is considered. The proposed method is found to work considerably better for high-speed MSs, as well as being a powerful tool for optimising loop performance.
Device-to-device (D2D) communication is considered an important traffic-offloading mechanism for future cellular networks. Coupled with proactive device caching, it offers huge potential for capacity and coverage enhancements. To ensure maximum capacity enhancement, the number of nodes for direct communication needs to be identified. In this paper, we derive an analytical expression that relates the number of D2D nodes (i.e., the D2D user density) to the average coverage probability of a reference D2D receiver. Using stochastic geometry and a Poisson point process, we introduce a retention probability within the cooperation region and a shortest-distance-based selection criterion to precisely quantify the interference due to D2D pairs in the coverage area. The simulation setup and numerical evaluation validate the closed-form expression.
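A Monte-Carlo sketch of the kind of coverage-probability computation described above, with D2D interferers drawn from a Poisson point process and Rayleigh fading on every link; all parameters are illustrative, and the paper's retention-probability and shortest-distance refinements are omitted.

```python
import numpy as np

def d2d_coverage(lam, region_radius, link_dist, theta, alpha=4.0,
                 trials=2000, seed=0):
    """Fraction of trials in which the SIR at a reference D2D receiver
    (placed at the origin) exceeds threshold theta. Sketch only."""
    rng = np.random.default_rng(seed)
    area = np.pi * region_radius ** 2
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * area)                  # PPP interferer count
        r = region_radius * np.sqrt(rng.random(n))   # uniform in the disc
        interference = np.sum(
            rng.exponential(size=n) * np.maximum(r, 1.0) ** -alpha)
        signal = rng.exponential() * link_dist ** -alpha
        if signal > theta * interference:
            covered += 1
    return covered / trials

# hypothetical density (nodes/m^2), region, link distance and threshold
p_cov = d2d_coverage(lam=1e-4, region_radius=200.0, link_dist=20.0, theta=1.0)
```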
In this paper, using stochastic geometry, we investigate the average energy efficiency (AEE) of the user terminal (UT) in the uplink of a two-tier heterogeneous network (HetNet), where the two tiers are operated on separate carrier frequencies. In such a deployment, a typical UT must periodically perform an inter-frequency small cell discovery (ISCD) process in order to discover small cells in its neighborhood and benefit from the high data rates and traffic-offloading opportunities that small cells present. We assume that the base stations (BSs) of each tier and the UTs are randomly located, and we derive the average ergodic rate and UT power consumption, which are later used for our AEE evaluation. The AEE incorporates the percentage of time a typical UT misses a small cell offloading opportunity as a result of the periodicity of the ISCD process. In addition, the extra power consumed by the UT due to ISCD measurements is also included. Moreover, we derive the optimal ISCD periodicity based on the UT’s average energy consumption (AEC) and AEE. Our results reveal that the ISCD periodicity must be selected with the objective of either minimizing the UT’s AEC or maximizing its AEE.
Session Initiation Protocol (SIP) is an application layer signalling protocol used in the IP-based UMTS network for establishing multimedia sessions. With a satellite component identified to play an integral role in UMTS, there is a need to support SIP-based session establishment over Satellite-UMTS (S-UMTS) as well. Due to the inherent characteristics of SIP, the transport of SIP over an unreliable wireless link with a large propagation delay is inefficient. To improve the session setup performance, a link layer retransmission based on the Radio Link Control acknowledgement mode (RLC-AM) mechanism is utilised. However, the current UMTS RLC-AM procedure is found to cause undesirable redundant retransmissions when applied over the satellite link. This paper therefore proposes an enhancement to the RLC protocol through a timer-based retransmission scheme. Simulation results reveal that this redundant-retransmission avoidance scheme not only improves system capacity but also yields better system performance in terms of session setup delay and failure.
Vehicular Ad-Hoc Networks (VANETs) are a critical component of Intelligent Transportation Systems (ITS), which apply advanced information processing, communications, sensing, and control technologies in an integrated manner to improve the functionality and safety of transportation systems, providing drivers with timely information on road and traffic conditions and achieving smooth traffic flow on the roads. Recently, the security of VANETs has attracted major attention owing to the possible presence of malicious elements and of messages altered by channel errors during transmission. In order to provide reliable and secure communications, Intrusion Detection Systems (IDSs) can serve as a second defense wall after prevention-based approaches, such as encryption. This chapter first presents the state-of-the-art literature on intrusion detection in VANETs. Next, the detection of illicit wireless transmissions from the physical layer perspective is investigated, assuming the presence of regular ongoing legitimate transmissions. Finally, a novel cooperative intrusion detection scheme from the MAC sub-layer perspective is discussed.
The multiuser selection scheduling concept has recently been proposed in the literature to increase the multiuser diversity gain and to overcome the significant feedback requirements of opportunistic scheduling schemes. The main idea is that reducing the feedback overhead saves per-user power that can be reallocated to data transmission. In this work, we propose to integrate the multiuser selection principle with the proportional fair scheduling scheme, aimed especially at power-limited, multi-device systems in non-identically distributed fading channels. For the performance analysis, we derive closed-form expressions for the outage probability and the average system rate of delay-sensitive and delay-tolerant systems, respectively, and compare them with full-feedback multiuser diversity schemes. The discrete rate region is presented analytically, where the maximum average system rate can be obtained by properly choosing the number of partial devices. We jointly optimize the number of partial devices and the per-device power saving in order to maximize the average system rate under the power constraint. Our results demonstrate that the proposed scheme, which reallocates the saved feedback power to data transmission, can outperform full-feedback multiuser diversity in non-identical Rayleigh fading of the devices’ channels.
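The selection-plus-proportional-fair idea can be sketched as follows: only the k users with the best instantaneous rates feed back, and the proportional-fair metric is applied within that subset. User names and rates are illustrative, not from the paper.

```python
def pf_partial_feedback(rates, avg_rates, k):
    """Only the k users with the best instantaneous rates feed back
    (multiuser selection); proportional fair then schedules the user
    maximizing rate/average within that subset. Illustrative sketch."""
    fed_back = sorted(rates, key=rates.get, reverse=True)[:k]
    return max(fed_back, key=lambda u: rates[u] / avg_rates[u])

# instantaneous and long-term average rates (hypothetical numbers)
rates = {"u1": 4.0, "u2": 3.0, "u3": 2.5, "u4": 1.0}
avg   = {"u1": 4.0, "u2": 1.5, "u3": 1.0, "u4": 0.5}
scheduled = pf_partial_feedback(rates, avg, k=3)
# subset {u1, u2, u3}; PF metrics 1.0, 2.0, 2.5 -> "u3" is scheduled
```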
In this paper we present a novel framework for spectral efficiency enhancement on the access link between relay stations and their donor base station through self-organization (SO) of system-wide BS antenna tilts. The underlying idea of the framework is inspired by SO in biological systems. The proposed solution can improve the spectral efficiency by up to 1 bps/Hz.
Network virtualization has been recognized as a promising solution to enable the rapid deployment of customized services by building multiple Virtual Networks (VNs) on a shared substrate network. Whereas various VN embedding schemes have been proposed to allocate substrate resources to each VN request, little work has been done on backup mechanisms for substrate network failures. In a virtualized infrastructure, a single substrate failure will affect all the VNs sharing that resource, yet provisioning a dedicated backup network for each VN is not efficient in terms of substrate resource utilization. In this paper, we investigate the problem of shared backup network provision for VN embedding and propose two schemes: shared on-demand and shared pre-allocation backup. Simulation experiments show that both proposed schemes make better use of substrate resources than the dedicated backup scheme without sharing, while each has its own advantages.
Cooperative communications can exploit distributed spatial diversity gain to improve link performance. When the message is coded at a low rate, the source and relay can send different parts of a codeword to the destination; this is referred to as coded cooperation. In this paper, we propose two novel coded cooperation schemes for three-node relay networks: adaptive coded cooperation and ARQ-based coded cooperation. The former requires channel quality information at the source, and the codeword is split adaptively to minimize the overall BER. The latter is devised for relay networks with erasures. In the first time slot, the source sends a high-rate sub-codeword. Once the destination reports decoding errors, either the source or the relay can send one or two new bits selected from the mother codeword. Unlike random rateless erasure codes, such as Fountain codes, the proposed scheme is based on a deterministic code generator and puncture pattern. It is experimentally shown that the proposed schemes offer improved throughput in comparison with the conventional approach.
A multi-service system is an enabler for flexibly supporting diverse communication requirements in next-generation wireless communications. In such a system, multiple types of services co-exist in one baseband system, with each service having its optimal frame structure and low out-of-band emission (OoBE) waveforms operating on the service frequency band to reduce the inter-service-band interference (ISvcBI). In this article, a framework for multi-service systems is established and the challenges and possible solutions are studied. Multi-service system implementation in both the time and frequency domains is discussed. Two representative subband filtered multicarrier (SFMC) waveforms, filtered orthogonal frequency division multiplexing (F-OFDM) and universal filtered multi-carrier (UFMC), are considered in this article. Specifically, the design methodology, criteria, orthogonality conditions and prospective application scenarios in the context of 5G are discussed. We consider both single-rate (SR) and multi-rate (MR) signal processing methods. Compared with the SR system, the MR system has significantly reduced computational complexity at the expense of a performance loss due to inter-subband interference (ISubBI). The ISvcBI and ISubBI in MR systems are investigated, with low-complexity interference cancellation algorithms proposed to enable multi-service operation at low interference levels.
Full-duplex transceivers enable transmission and reception at the same time on the same frequency, and have the potential to double the spectral efficiency of wireless systems. Recent studies have shown the feasibility of full-duplex transceivers. In this paper, we address the radio resource allocation problem for full-duplex systems. Due to self-interference and inter-user interference, the problem is coupled between the uplink and downlink channels and can be formulated as a joint uplink and downlink sum-rate maximization. As the problem is non-convex, an iterative algorithm is proposed based on game theory, modelling the problem as a non-cooperative game between the uplink and downlink channels. The algorithm iteratively carries out optimal uplink and downlink resource allocation until a Nash equilibrium is reached. Simulation results show that the algorithm achieves fast convergence and can significantly improve full-duplex performance compared to the equal resource allocation approach. Furthermore, the full-duplex system with the proposed algorithm achieves considerable gains in spectral efficiency, of up to 40%, compared to the half-duplex system.
The most common use of formal verification methods and tools so far has been in identifying whether livelock and/or deadlock situations can occur during protocol execution, process, or system operation. In this work we aim to show that an additional, equally important and useful application of formal verification tools can be in protocol design and protocol selection in terms of performance-related metrics. This can be achieved by using the tools in a rather different context compared to their traditional use: not only as model-checking tools that assess the correctness of a protocol in terms of the absence of livelock and deadlock, but as tools capable of building profiles of protocol operations, assessing their performance, and identifying operational patterns and possible bottleneck operations. This process can provide protocol designers with insight into protocol behavior and guide them towards further design optimizations. It can also assist network operators and service providers in selecting the most suitable protocol for specific network and service configurations. We illustrate these principles by showing how formal verification tools can be applied in this protocol profiling and performance assessment context, using some existing protocols as case studies.
In this paper, we propose a finite-state Markov model for the per-user service of an opportunistic scheduling scheme over Rayleigh fading channels, where a single base station serves an arbitrary number of users. By approximating the power gain of Rayleigh fading channels as finite-state Markov processes, we develop an algorithm to obtain a dynamic stochastic model of the transmission service received by an individual user in a saturated scenario, where user data queues are highly loaded. The proposed analytical model is a finite-state Markov process. We provide a comprehensive comparison between the results predicted by the proposed analytical model and simulation results, which demonstrates a high degree of agreement between the two.
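A common way to build such a finite-state Markov chain is to partition the exponentially distributed power gain of a Rayleigh channel into equal-probability intervals; the sketch below computes those partition thresholds under that standard assumption (the paper's transition-probability derivation is not reproduced).

```python
import math

def fsmc_thresholds(n_states, mean_gain=1.0):
    """Thresholds splitting the exponential power-gain distribution
    (Rayleigh amplitude) into n_states equal-probability FSMC states,
    via the inverse CDF 1 - exp(-x/mean_gain)."""
    return [-mean_gain * math.log(1.0 - k / n_states) for k in range(n_states)]

th = fsmc_thresholds(4)
# each state interval [th[k], th[k+1]) carries steady-state probability 1/4
p_state_1 = math.exp(-th[1]) - math.exp(-th[2])   # probability of state 1
```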
Decentralized dynamic spectrum allocation (DSA) that exploits adaptive antenna array interference mitigation diversity at the receiver is studied for interference-limited environments with a high level of frequency reuse. The system consists of base stations (BSs) that can optimize the uplink frequency allocation to their user equipments (UEs) to minimize the impact of interference on the useful signal, assuming no control over the resource allocation of other BSs sharing the same bands. To this end, “good neighbor” (GN) rules allow an effective trade-off between the equilibrium and transient decentralized DSA behavior if the performance targets are adequate to the interference scenario. In this paper, we 1) extend the GN rules by including a spectrum occupation control that allows adaptive selection of the performance targets; 2) derive estimates of absorbing state statistics that allow formulation of applicability areas for different DSA algorithms; 3) define a semi-analytic absorbing Markov chain model and study convergence probabilities and rates of DSA with occupation control, including networks with possible partial breaks of the GN rules. For higher-dimension networks, we develop simplified search GN algorithms with occupation and power control and demonstrate their efficiency by means of simulations.
Cognitive radio has emerged as a promising paradigm to improve spectrum usage efficiency and to cope with the spectrum scarcity problem by dynamically detecting and re-allocating white spaces in licensed radio bands to unlicensed users. However, cognitive radio may cause extra energy consumption because it relies on new and additional technologies and algorithms. The main objective of this work is to enhance the energy efficiency, defined as bits/Joule/Hz, of the proposed cellular cognitive radio network (CRN). In this paper, a typical frame structure of a secondary user (SU) is considered, consisting of sensing and data transmission slots. We analyze and derive the expression for the energy efficiency of the proposed CRN as a function of the sensing and data transmission durations. The optimal frame structure for maximum bits per Joule is investigated under practical network traffic environments. The impact of the optimal sensing time and frame length on the achievable energy efficiency, throughput and interference is investigated and verified by simulation results, compared with the relevant state of the art. Our analytical results are in perfect agreement with the empirical results and provide useful insights on how to select the sensing length and frame length subject to the network environment and required network performance.
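The sensing/transmission trade-off underlying this kind of frame optimization can be illustrated with a stylized grid search: longer sensing leaves less time for data but (in this toy model) lowers the false-alarm probability. All functional forms and constants below are assumptions, not the paper's derivation.

```python
import math

T = 0.1                       # frame length in seconds (assumed)
P_SENSE, P_TX = 0.1, 1.0      # sensing / transmit power in watts (assumed)

def throughput(tau, rate=6.66, a=50.0):
    """Bits per second averaged over the frame: the transmission share
    (T - tau)/T times a stylized sensing gain (1 - Pf), with false-alarm
    probability Pf = exp(-a * tau) -- a toy model, not the paper's."""
    return (T - tau) / T * rate * (1.0 - math.exp(-a * tau))

def energy(tau):
    """Average power over one frame split into sensing and transmission."""
    return tau * P_SENSE + (T - tau) * P_TX

def best_sensing_time(taus):
    """Grid search for the bits-per-Joule-maximizing sensing duration."""
    return max(taus, key=lambda t: throughput(t) / energy(t))

tau_opt = best_sensing_time([k * 0.001 for k in range(1, 100)])
# an interior optimum: sense long enough to transmit reliably, no longer
```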
This paper investigates a secure wireless-powered integrated service system with full-duplex self-energy recycling. Specifically, an energy-constrained information transmitter (IT), powered wirelessly by a power station (PS), broadcasts two types of services to all users: a multicast service intended for all users, and a confidential unicast service subscribed to by only one user, which must be protected from the unsubscribed users and an eavesdropper. Our goal is to jointly design the optimal input covariance matrices for the energy beamforming, the multicast service, the confidential unicast service, and the artificial noise from the PS and the IT, such that the secrecy-multicast rate region (SMRR) is maximized subject to the transmit power constraints. Due to the non-convexity of the SMRR maximization (SMRRM) problem, we employ a semidefinite programming-based two-level approach to solve it and find all of its Pareto optimal points. In addition, we extend the SMRRM problem to the imperfect channel state information case, where a worst-case SMRRM formulation is investigated. Moreover, we characterize the optimized transmission strategies for the confidential service and energy transfer by analyzing their rank-one profiles. Finally, numerical results are provided to validate the proposed schemes.
Database-aided user association, where users are associated with data base stations (BSs) based on a database storing their geographical location with signal-to-noise-ratio tagging, will play a vital role in the futuristic cellular architecture with separated control and data planes. However, such an approach can lead to inaccurate user-data BS association as a result of inaccuracies in the positioning technique, and thus to sub-optimal performance. In this paper, we investigate the impact of the database-aided user association approach on the average spectral efficiency (ASE). We model the data-plane base stations using their fluid-model equivalent and derive the ASE for a channel model with pathloss only and with shadowing incorporated. Our results show that the ASE in database-aided networks degrades as the accuracy of the user positioning technique decreases. Hence, system specifications for database-aided networks must take account of inaccuracies in positioning techniques.
A batch Kalman-based blind adaptive multiuser detection (K-BA-MUD) scheme with multiple receiver (Rx) antennas is investigated for asynchronous CDMA systems in the uplink direction. In this paper, we consider two receiver structures: the Independent and the Cooperative structure. Previous results had stated that the Cooperative structure always outperforms the Independent one. However, with a limited number of samples available for signal detection, we need to examine how cooperative the processing should be for that statement to hold. Toward this end, we propose the Partially Cooperative structure, which relaxes the Identifiability Condition (IC) of a single-Rx-antenna K-BA-MUD. It is concluded that the proposed structure outperforms the Fully Cooperative one whenever the number of samples is small and the IC is not violated. Finally, by reducing the size of the steering vector, we also reduce the computational complexity of updating the detector parameters.
This paper proposes an analytical model for the throughput of the enhanced distributed channel access (EDCA) mechanism in the IEEE 802.11p medium access control (MAC) sublayer. EDCA features such as the different contention windows (CW) and arbitration interframe spaces (AIFS) for each access category (AC), as well as internal collisions, are taken into account. The analytical model is suitable for both the basic access and the request-to-send/clear-to-send (RTS/CTS) access modes. Unlike most existing 3-D or 4-D Markov-chain-based analytical models for IEEE 802.11e EDCA, the proposed model is explicitly solvable without numerical computation and applies to all four access categories of traffic in IEEE 802.11p. The proposed model can be used for large-scale network analysis and for the validation of network simulators under saturated traffic conditions. Simulation results are given to demonstrate the accuracy of the analytical model. In addition, we investigate the service differentiation capabilities of the IEEE 802.11p MAC sublayer.
The vision, as we move to future wireless communication systems, embraces diverse qualities targeting significant enhancements from the spectrum, to user experience. Newly-defined air-interface features, such as large number of base station antennas and computationally complex physical layer approaches come with a non-trivial development effort, especially when scalability and flexibility need to be factored in. In addition, testing those features without commercial, off-the-shelf equipment has a high deployment, operational and maintenance cost. On one hand, industry-hardened solutions are inaccessible to the research community due to restrictive legal and financial licensing. On the other hand, research-grade real-time solutions are either lacking versatility, modularity and a complete protocol stack, or, for those that are full-stack and modular, only the most elementary transmission modes are on offer (e.g., very low number of base station antennas). Aiming to address these shortcomings towards an ideal research platform, this paper presents SWORD, a SoftWare Open Radio Design that is flexible, open for research, low-cost, scalable and software-driven, able to support advanced large and massive Multiple-Input Multiple-Output (MIMO) approaches. Starting with just a single-input single-output air-interface and commercial off-the-shelf equipment, we create a software-intensive baseband platform that, together with an acceleration/profiling framework, can serve as a research-grade base station for exploring advancements towards future wireless systems and beyond.
This letter presents a reduced-complexity algorithm for coordinated beamforming aimed at solving the multicell downlink max-min signal-to-interference-plus-noise ratio (SINR) problem under per-base-station power constraints. It is shown that the proposed algorithm achieves performance close to that of the optimum algorithm, with faster convergence and lower complexity. © 2014 IEEE.
It has been envisaged that in future 5G networks user devices will become an integral part of the network by participating in the transmission of mobile content traffic, typically through Device-to-Device (D2D) technologies. In this context, we promote the concept of Mobility as a Service (MaaS), where a content-aware mobile network edge is equipped with the necessary knowledge of device mobility in order to distribute popular mobile content items to interested clients via a small number of helper devices. Towards this end, we present a device-level Information Centric Networking (ICN) architecture that is able to perform intelligent content distribution operations according to necessary context information on user mobility and content characteristics. On top of this platform, we further introduce device-level online content caching and offline helper selection algorithms in order to optimise overall system efficiency. In particular, this paper sheds distinct light on the importance of user mobility data analytics, based on which helper selection can lead to overall system optimality. Based on representative user mobility models, we conducted realistic simulation experiments and modelling, which demonstrate the efficiency of the framework in terms of both network traffic offloading gains and user-oriented performance improvements. In addition, we show how the framework can be flexibly configured to meet specific delay tolerance constraints according to specific context policies.
This letter presents a novel opportunistic cooperative positioning approach for orthogonal frequency-division multiple access (OFDMA) systems. The basic idea is to allow idle mobile terminals (MTs) to opportunistically estimate the arrival timing of the uplink-synchronization training sequences transmitted by active MTs. The major advantage of the proposed approach over the state of the art is that the positioning-related measurements among MTs are performed without incurring additional training overhead. Moreover, the Cramér-Rao lower bound (CRLB) is utilized to derive the positioning accuracy limit of the proposed approach, and the numerical results show that the proposed approach improves upon the accuracy of non-cooperative approaches given a-priori stochastic knowledge of the clock bias among idle MTs.
5G is the next cellular generation and is expected to quench the growing thirst for taxing data rates and to enable the Internet of Things. Focused research and standardization work have been addressing the corresponding challenges from the radio perspective while employing advanced features, such as network densification, massive multiple-input-multiple-output antennae, coordinated multi-point processing, intercell interference mitigation techniques, carrier aggregation, and new spectrum exploration. Nevertheless, a new bottleneck has emerged: the backhaul. The ultra-dense and heavy-traffic cells should be connected to the core network through the backhaul, often with extreme requirements in terms of capacity, latency, availability, energy, and cost efficiency. This pioneering survey explains the 5G backhaul paradigm, presents a critical analysis of legacy and cutting-edge solutions and new trends in backhauling, and proposes a novel consolidated 5G backhaul framework. A new joint radio access and backhaul perspective is proposed for the evaluation of backhaul technologies, which reinforces the belief that no single solution can solve the holistic 5G backhaul problem. This paper also reveals hidden advantages and shortcomings of backhaul solutions, which are not evident when backhaul technologies are inspected as an independent part of the 5G network. This survey is key in identifying essential catalysts that are believed to jointly pave the way to solving the beyond-2020 backhauling challenge. Lessons learned, unsolved challenges, and a new consolidated 5G backhaul vision are thus presented.
Network slicing has been identified as one of the most important features of 5G and beyond, enabling operators to offer networks on an as-a-service basis and meet a wide range of use cases. In the physical layer, frequency and time resources are split into slices to cater for services with individually optimal designs, resulting in services/slices with different baseband numerologies (e.g., subcarrier spacing) and/or radio frequency (RF) front-end configurations. In such a system, multi-service signal multiplexing and isolation among the services/slices are critical for Physical-Layer Network Slicing (PNS), since orthogonality is destroyed and significant inter-service/slice-band interference (ISBI) may be generated. In this paper, we first categorize four PNS cases according to the baseband and RF configurations of the slices. The system model is established by considering a low out-of-band emission (OoBE) waveform operating in the service/slice frequency band to mitigate the ISBI. The desired signal and interference for the two slices are derived. Consequently, one-tap channel equalization algorithms are proposed based on the derived model. The developed system models establish a framework for further interference analysis, ISBI cancellation algorithms, system design and parameter selection (e.g., guard band), to enable spectrum-efficient network slicing.
In this paper we extend the analysis of two-receiver broadcast channels with random parameters to the three-receiver case. Specifically, we base our work on Nair and El Gamal's results for the three-receiver discrete memoryless multilevel broadcast channel and assume that state information is available non-causally at the transmitter. We provide an achievable rate region for this setting and highlight its importance in the study of multiuser cognitive radio configurations.
The Internet of Things (IoT) has become a new enabler for collecting real-world observation and measurement data from the physical world. The IoT allows objects with sensing and network capabilities (i.e. things and devices) to communicate with one another and with other resources (e.g. services) in the digital world. The heterogeneity, dynamicity and ad-hoc nature of the underlying data and services published by most IoT resources make accessing and processing them a challenging task. The IoT demands distributed, scalable, and efficient indexing solutions for large-scale distributed IoT networks. We describe a novel distributed indexing approach for IoT resources and their published data. The index structure is constructed by encoding the locations of IoT resources into geohashes and then building a quadtree on the minimum bounding box of the geohash representations. This makes it possible to aggregate resources with similar geohashes and to reduce the size of the index. We have evaluated our proposed solution on a large-scale dataset, and our results show that the proposed approach can efficiently index and enable discovery of IoT resources with 65% better response time than a centralised approach and with a high success rate (around 90% within the first few attempts).
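The geohash encoding step described above follows a standard algorithm: alternately bisect the longitude and latitude ranges, emit one bit per bisection, and pack five bits per base-32 character. A minimal sketch of that encoder (the paper's quadtree built over minimum bounding boxes is not reproduced here):

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # standard geohash alphabet

def geohash(lat, lon, precision=8):
    """Standard geohash: alternately bisect the longitude and latitude
    ranges, emit one bit per step, and pack 5 bits per base-32 char."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    bits, even = [], True          # even-indexed bits refine longitude
    while len(bits) < precision * 5:
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2.0
        if val >= mid:
            bits.append(1)
            rng[0] = mid
        else:
            bits.append(0)
            rng[1] = mid
        even = not even
    return "".join(BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
                   for i in range(0, len(bits), 5))
```

Because nearby locations share hash prefixes by construction, sorting or grouping resources by geohash prefix clusters co-located resources together, which is what allows the index to aggregate them and shrink.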
Seamless mobility support is a key technical requirement for the market acceptance of femtocells. The current 3GPP handover procedure may cause a large downlink service interruption time when users move from a macrocell to a femtocell, or vice versa, due to the data forwarding operation. In this letter, a practical scheme is proposed to enable seamless handover by reactively bicasting the data to both the source cell and the target cell after the handover is actually initiated. Numerical results show that the proposed scheme can significantly reduce the downlink service interruption time while still avoiding packet loss, with only limited extra resource requirements compared to the standard 3GPP scheme. © 2012 IEEE.
In existing energy-efficient clustering algorithms for Wireless Sensor Networks (WSNs), individual nodes usually experience significant differences in lifetime. The issue of some nodes depleting their energy earlier than others is usually referred to as the hot-spot issue in WSNs; it dramatically shortens the stable operation period of a network, i.e. the period during which all nodes remain alive with residual energy. This paper addresses the hot-spot issue by equalizing individual node lifetimes throughout the network. In our algorithm, the probability of a node becoming a cluster head (CH) depends on its distance to the sink and is subject to individual node-lifetime equalization. When selecting CHs, the residual node energy is considered as well. Performance evaluation illustrates the effectiveness of our algorithm in extending the stable operation period of clustered WSNs. Copyright © 2010 The authors.
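The abstract does not spell out the exact election rule, so the sketch below is a hypothetical instance of the idea only: a base CH probability `p_opt` is weighted by residual energy and shaped by distance to the sink (here CHs are elected more densely near the sink, the usual unequal-clustering choice for relieving relay-loaded nodes). All names and weights are assumptions, not the paper's formulas.

```python
import random

def ch_election_prob(e_res, e_init, d_sink, d_max, p_opt=0.1):
    """Hypothetical CH election probability: favour nodes with more
    residual energy, and elect CHs more densely near the sink so the
    heavily relay-loaded region is split into smaller clusters."""
    energy_w = e_res / e_init                 # residual-energy weight in [0, 1]
    dist_w = 1.0 - 0.5 * (d_sink / d_max)     # 1.0 at the sink .. 0.5 at the edge
    return min(1.0, p_opt * energy_w * dist_w)

def elect_chs(nodes, seed=42):
    # nodes: dicts with "e_res", "e_init", "d_sink" keys
    rng = random.Random(seed)
    d_max = max(n["d_sink"] for n in nodes)
    return [n for n in nodes
            if rng.random() < ch_election_prob(n["e_res"], n["e_init"],
                                               n["d_sink"], d_max)]
```

With this shaping, an energy-rich node near the sink volunteers as CH most often, while a depleted node at the network edge rarely does, which is the qualitative behaviour lifetime equalization requires.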
The first 5G (5th generation wireless systems) New Radio specification, Release 15, was recently completed. However, the specification only considers unicast technologies; the extension to point-to-multipoint (PTM) scenarios is not yet covered. To this end, we first present a technical overview of the state-of-the-art LTE (Long Term Evolution) PTM technology, i.e., eMBMS (evolved Multimedia Broadcast Multicast Services), and investigate its physical-layer performance via link-level simulations. Based on the simulation analysis, we then discuss potential improvements to the two current eMBMS solutions, i.e., MBSFN (MBMS over Single Frequency Networks) and SCPTM (Single-Cell PTM). This work explicitly focuses on equipping the current eMBMS solutions with 5G candidate techniques, e.g., multiple antennas and millimeter wave, and on their potential to meet the requirements of next-generation PTM transmissions.
In this paper, we consider multi-relay cooperative networks over the Rayleigh fading channel, where each relay, upon receiving its own channel observation, independently compresses it and forwards the compressed information to the destination. Although the compression at each relay is performed distributively using Wyner-Ziv coding, there exists an opportunity to jointly optimize the compression at multiple relays to maximize the achievable rate. Considering Gaussian signalling, a primal optimization problem is formulated accordingly. We prove that the primal problem can be solved by resorting to its Lagrangian dual problem, and an iterative optimization algorithm is proposed. The analysis is further extended to a hybrid scheme, where the forwarding scheme employed depends on the decoding status of each relay: relays that decode successfully perform decode-and-forward, and the rest conduct distributed compression. The hybrid scheme allows the cooperative network to adapt to changes in channel conditions and benefit from an enhanced level of flexibility. Numerical results from both spectrum and energy efficiency perspectives show that the joint optimization improves the efficiency of compression, and they identify the scenarios in which the proposed schemes outperform conventional forwarding schemes. The findings provide important insights into the optimal deployment of relays in a realistic cellular network.
This letter addresses energy-efficient design in multi-user, single-carrier uplink channels employing multiple decoding policies. The comparison metric used in this study is based on average energy-efficiency contours, where an optimal rate vector is obtained for four system targets: maximum energy efficiency; a trade-off between maximum energy efficiency and rate fairness; achieving an energy-efficiency target with maximum sum-rate; and achieving an energy-efficiency target with fairness. The transmit power function is approximated using a Taylor series expansion, with simulation results demonstrating the achievability of the optimal rate vector and a negligible performance difference when employing this approximation.
The aim of this letter is to exhibit some advantages of using real constellations in large multi-user (MU) MIMO systems. It is shown that a widely linear zero-forcing (WLZF) receiver with M-ASK modulation enjoys a spatial-domain diversity gain that increases linearly with the MIMO size, even in fully- and over-loaded systems. Using the WLZF decision as the initial state, the likelihood ascent search (LAS) achieves near-optimal BER performance in fully-loaded large MIMO systems. Interestingly, for coded systems, WLZF shows a BER much closer to that of WLZF-LAS, with a gap of only 0.9-2 dB in SNR.
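The widely linear step behind this result can be sketched: because M-ASK symbols are real-valued, stacking the real and imaginary parts of the observation turns an N x K complex channel into a 2N x K real one, doubling the effective receive dimension, which is where the extra diversity comes from. A minimal numpy sketch (an illustrative reconstruction, not the authors' detector):

```python
import numpy as np

def wlzf_detect(H, y, levels=(-3.0, -1.0, 1.0, 3.0)):
    """Widely linear ZF for real (M-ASK) symbols: stack Re/Im so the
    N x K complex model becomes a 2N x K real one, zero-force, then
    slice each estimate to the nearest ASK level."""
    levels = np.asarray(levels)
    Hr = np.vstack([H.real, H.imag])        # (2N, K) real channel
    yr = np.concatenate([y.real, y.imag])   # (2N,)  real observation
    x_zf, *_ = np.linalg.lstsq(Hr, yr, rcond=None)
    return levels[np.argmin(np.abs(x_zf[:, None] - levels[None, :]), axis=1)]

# fully-loaded 4x4 example with 4-ASK symbols: noiseless detection is exact
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
x = np.array([3.0, -1.0, 1.0, -3.0])
x_hat = wlzf_detect(H, H @ x)
```

Note that the fully-loaded case (K = N) leaves the stacked real system overdetermined (2N equations, K unknowns), which is why plain ZF still works where a complex-valued ZF would be at its diversity limit.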
In this paper, we consider the radio resource allocation problem for uplink OFDMA systems. Existing algorithms have been derived under the assumption of Gaussian inputs, for which mutual information has a closed-form expression. For the sake of practicality, we consider a system with Finite Symbol Alphabet (FSA) inputs and solve the problem by capitalizing on the recently revealed relationship between mutual information and Minimum Mean-Square Error (MMSE). We first relax the problem to formulate it as a convex optimization problem, and then derive the optimal solution via decomposition methods. The optimal solution serves as an upper bound on system performance. Due to the complexity of the optimal solution, a low-complexity suboptimal algorithm is proposed. Numerical results show that the presented suboptimal algorithm achieves performance very close to the optimal solution and outperforms existing suboptimal algorithms. Furthermore, using our proposed algorithm, significant power savings can be achieved in comparison to the case where Gaussian inputs are assumed.
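The mutual-information/MMSE relationship referred to here is the Guo-Shamai-Verdú identity: for a real-valued Gaussian channel, dI/dsnr = mmse(snr)/2 (in nats). For Gaussian inputs both sides have closed forms, so the identity can be checked numerically; for FSA inputs the same identity is what makes the MMSE-based formulation tractable. A minimal check, illustrative only:

```python
import math

def mutual_info(snr):
    # Real-valued AWGN channel, Gaussian input: I(snr) = 0.5*ln(1 + snr) nats
    return 0.5 * math.log(1.0 + snr)

def mmse(snr):
    # MMSE of estimating the Gaussian input from y = sqrt(snr)*x + n
    return 1.0 / (1.0 + snr)

# I-MMSE identity (real-valued channel): dI/dsnr = mmse(snr) / 2
snr, h = 2.0, 1e-6
num_deriv = (mutual_info(snr + h) - mutual_info(snr - h)) / (2.0 * h)
```

The central difference of I at snr = 2 matches mmse(2)/2 = 1/6 to numerical precision, confirming the identity for this closed-form case.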
Energy consumption has become an increasingly important aspect of wireless communications, from both economic and environmental points of view. New enhancements are being introduced in mobile networks to reduce the power consumption of both mobile terminals and base stations. This paper studies the achievable rate region of AWGN broadcast channels under time-division, frequency-division and superposition coding, and locates the optimal energy-efficient rate pair according to a comparison metric based on the average energy efficiency of the system. In addition to the transmit power, circuit power and signalling power are incorporated in the energy-efficiency function, with simulation results verifying that superposition coding achieves the highest energy efficiency in an ideal but unrealistic scenario where the signalling power is zero. With moderate signalling power, the frequency-division scheme is the most energy-efficient, with superposition coding and time-division second and third best. Conversely, when the signalling power is high, both the time-division and frequency-division schemes outperform superposition coding. On the other hand, superposition coding also incorporates rate fairness into the system, allowing both users to transmit whilst maximising the energy efficiency.
The requirement for low operating and deployment costs motivates the need for self-organisation in cellular networks; self-organising networks are fast becoming a necessity. One key issue in this context is self-organised coverage estimation, which is performed using signal strength measurements and the reported position information of system users. In this paper, the effect of inaccurate position estimation on self-organised coverage estimation is investigated. We derive the signal reliability expression (i.e. the probability of the received signal being above a certain threshold) and the cell coverage expressions that take the error in position estimation into consideration. This is done for both shadowing and non-shadowing channel models. The accuracy of the modified reliability and cell coverage probability expressions is also numerically verified for both cases.
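In the shadowing case, the baseline reliability expression has a standard closed form: with log-normal shadowing of spread sigma dB, the received power in dB is Gaussian around its path-loss mean, so P(received > threshold) is a Q-function of the dB margin. A sketch of that baseline (illustrative; the paper's modified expressions additionally fold in the position-error statistics, which are omitted here):

```python
import math

def q_func(z):
    # Gaussian tail probability Q(z) = P(N(0,1) > z)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def reliability(p_rx_mean_dbm, p_th_dbm, sigma_db):
    """P(received power > threshold) under log-normal shadowing:
    dB-domain received power ~ N(p_rx_mean_dbm, sigma_db**2)."""
    return q_func((p_th_dbm - p_rx_mean_dbm) / sigma_db)

# a 10 dB margin with 8 dB shadowing spread gives roughly 89% reliability
margin_rel = reliability(-90.0, -100.0, 8.0)
```

Position error effectively perturbs the path-loss mean inside the Q-function, which is why inaccurate user positions bias the resulting coverage estimate.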
It is foreseen that the next generation of cellular networks will integrate relaying, or multihop, schemes. In a multihop cellular architecture, users are not only able to communicate directly with the base station (BS) but can also use relay stations to relay their data to the BS. In such an architecture, a relayed user may hand over to another relay station during its communication: this process is called inter-relay handoff. The main objective of this paper is to study how frequently inter-relay handoff occurs and its impact on relaying system performance. To this end, different algorithms for deciding when a user should perform an inter-relay handover are proposed and tested in a dynamic system-level simulator. We compare the capacity gain of the different algorithms against conventional cellular networks using the UMTS FDD mode. The results show that with an appropriate inter-relay handoff scheme, an uplink capacity gain of 35% is readily achievable.
Interference forwarding has been shown to be beneficial in the interference channel with a relay, as it enlarges the strong interference region, allowing the decoding of the interference at the receivers for larger ranges of the channel gains. In this work we demonstrate the benefit of adding a relay to the cognitive interference channel, paying special attention to the effect of interference forwarding in this configuration. Two setups are presented: in the first, the interference forwarded by the relay is the primary user's signal; in the second, it is the cognitive user's signal. We characterise the capacity regions of these two models in the case of strong interference. We show that, as opposed to the first setup, in the second setup the capacity region is enlarged, compared to the capacity region of the cognitive interference channel, when the relay does not help the intended receiver.
This paper proposes a two-stage algorithm to address spectrum sharing between two Universal Mobile Telecommunication System (UMTS) operators. The algorithm combines a genetic algorithm with load-balancing techniques. The first stage uses the genetic algorithm to optimize the allocation when the correlation of traffic is low; the second stage uses a load-balancing scheme in the highly correlated traffic region. The simulation results show that significant spectrum-sharing gains, of up to 26 percent and 20 percent respectively, can be obtained on the two networks using the proposed algorithm.
In this paper, a high, flat-gain waveguide-fed aperture antenna is proposed. Two layers of FR4 dielectric acting as superstrates are located in front of the aperture to enhance the bandwidth and the gain of the antenna. Moreover, a conductive shield, connected to the edges of the ground plane and surrounding the aperture and superstrates, is applied to the proposed structure to improve its radiation characteristics. The proposed antenna has been simulated with HFSS and optimized through a parametric study, with the following results. A maximum gain of 13.0 dBi and a 0.5-dB gain bandwidth of 25.9% (8.96-11.63 GHz) have been achieved. The 3-dB gain bandwidth of the proposed antenna is 40.7% (8.07-12.20 GHz), with a suitable reflection coefficient (≤ −10 dB) over the whole bandwidth. The antenna has a compact size (1.5λ × 1.5λ), a simple structure and low-cost fabrication.
This paper describes a mechanism for forwarding secure state information associated with communication sessions between middleboxes belonging to different Radio Access Networks (RANs). The transfer of state information among RANs can support service integrity and continuity by maintaining a mobile user's multimedia sessions, which might otherwise be dropped, and can also minimize security vulnerabilities. The paper demonstrates how the context transfer protocol can be employed for this purpose: certain security information is forwarded from the old middlebox to the new one to support multimedia session maintenance during mobility, while the previous middlebox is notified to close unnecessary open ports, improving security and resolving vulnerabilities. A number of test scenarios are used to demonstrate how middleboxes can intervene in multimedia sessions during mobility, and to show how context transfer can improve the performance of multimedia session re-establishment as well as enhance middlebox security. Copyright 2006 ACM.
The ever-growing computation and storage capabilities of mobile phones have given rise to mobile-centric context recognition systems, which are able to sense and analyze the context of the carrier so as to provide an appropriate level of service. As nonintrusive autonomous sensing and context recognition are desirable characteristics of a personal sensing system, efforts have been made to develop opportunistic sensing techniques on mobile phones. The resulting combination of these approaches has ushered in a new realm of applications: opportunistic user context recognition with mobile phones. This article surveys the existing research and approaches towards the realization of such systems. In doing so, the typical architecture of a mobile-centric user context recognition system is introduced as a sequential process of sensing, preprocessing, and context recognition phases. The main techniques used for the realization of the respective processes during these phases are described, and their strengths and limitations are highlighted. In addition, lessons learned from previous approaches are presented as motivation for future research. Finally, several open challenges are discussed as possible ways to extend the capabilities of current systems and improve their real-world experience.
Node clustering has been widely studied in recent years for Wireless Sensor Networks (WSN) as a technique to form a hierarchical structure and prolong network lifetime by reducing the number of packet transmissions. Cluster Heads (CH) are elected in a distributed way among sensors but are often highly overloaded, so re-clustering operations should be performed to share the resource-intensive CH role. Existing protocols involve periodic network-wide re-clustering operations that are performed simultaneously, which requires global time synchronisation. To address this issue, some recent studies have proposed asynchronous node clustering for networks with direct links from CHs to the data sink. However, for large-scale WSNs, multihop packet delivery to the sink is required, since long-range transmissions are costly for sensor nodes. In this paper, we present an asynchronous node clustering protocol designed for multihop WSNs, considering dynamic conditions such as residual node energy levels and the unbalanced data traffic loads caused by packet forwarding. Simulation results demonstrate that it is possible to achieve similar levels of lifetime extension by re-clustering a multihop WSN via independently made decisions at CHs, without the need for time synchronisation required by existing synchronous protocols.
This paper describes several communication categories for personal and body-centric communications. It uses several application scenarios to give examples of these categories and thereby make them concrete. Further, the paper presents a first set of analyses for off-body communications.
To minimize the downloading time of short-lived applications such as web browsing, web applications and short video clips, the recently standardized HTTP/2 adopts stream multiplexing over a single TCP connection. However, aggregating all content objects within one connection suffers from the head-of-line blocking issue. QUIC, which eliminates this issue by building on UDP, is expected to further reduce content downloading time. However, in mobile network environments, the single-connection strategy still leads to degraded and highly variable completion times, due to the unexpected hindrance of congestion-window growth caused by the common but uncertain fluctuations in round-trip time and random loss events at the air interface. To keep the congestion window resilient against such network fluctuations, we propose an intelligent connection management scheme based on QUIC which not only adaptively employs multiple connections but also conducts a tailored state and congestion-window synchronization between these parallel connections upon the detection of network fluctuation events. According to performance evaluation results obtained from an LTE-A/Wi-Fi test network, the proposed multiple-QUIC scheme can effectively overcome the limitations of different congestion control algorithms (e.g. the loss-based New Reno/CUBIC and the rate-based BBR), achieving substantial performance improvements in both median (up to 59.1%) and 95th-percentile (up to 72.3%) completion time. The significance of this work is in achieving highly robust short-lived content downloading performance against various uncertainties in network conditions and across different congestion control schemes.
Energy efficiency (EE) is a key design criterion for the next generation of communication systems. Equally, cooperative communication is known to be very effective for enhancing the performance of such systems. This paper proposes a new approach for maximizing the EE of multiple-input multiple-output (MIMO) relay-based nonregenerative cooperative communication systems by optimizing both the source and relay precoders when both relay and direct links are considered. We prove that the corresponding optimization problem is at least strictly pseudo-convex, i.e. has a unique solution, when the relay precoding matrix is known, and that its Lagrangian can be lower and upper bounded by strictly pseudo-convex functions when the source precoding matrix is known. Accordingly, we derive EE-optimal source and relay precoding matrices that are jointly optimized through alternating optimization. We also provide a low-complexity alternative to the EE-optimal relay precoding matrix that exhibits close-to-optimal performance with significantly reduced complexity. Simulation results show that our joint source and relay precoding optimization can improve the EE of MIMO-AF systems by up to 50% compared to direct/relay-link-only precoding optimization.
Hot spots in a wireless sensor network emerge as locations under heavy traffic load. Nodes in such areas quickly deplete energy resources, leading to disruption in network services. This problem is common for data collection scenarios in which Cluster Heads (CH) have a heavy burden of gathering and relaying information. The relay load on CHs especially intensifies as the distance to the sink decreases. To balance the traffic load and the energy consumption in the network, the CH role should be rotated among all nodes and the cluster sizes should be carefully determined at different parts of the network. This paper proposes a distributed clustering algorithm, Energy-efficient Clustering (EC), that determines suitable cluster sizes depending on the hop distance to the data sink, while achieving approximate equalization of node lifetimes and reduced energy consumption levels. We additionally propose a simple energy-efficient multihop data collection protocol to evaluate the effectiveness of EC and calculate the end-to-end energy consumption of this protocol; yet EC is suitable for any data collection protocol that focuses on energy conservation. Performance results demonstrate that EC extends network lifetime and achieves energy equalization more effectively than two well-known clustering algorithms, HEED and UCR.
Conventional cellular systems are dimensioned for a worst-case scenario and are designed to ensure ubiquitous coverage, with an always-present wireless channel irrespective of the spatial and temporal demand for service. A more energy-conscious approach requires an adaptive system with a minimum amount of overhead that is available at all locations and at all times but becomes functional only when needed. This approach suggests a new clean-slate system architecture with a logical separation between the ability to establish availability of the network and the ability to provide functionality or service. Focusing on the physical-layer frame of such an architecture, this paper discusses and formulates the overhead reduction that can be achieved in next-generation cellular systems compared with Long Term Evolution (LTE). Using channel estimation as a performance metric, whilst conforming to the time and frequency constraints on pilot spacing, we show that the overhead gain does not come at the expense of performance degradation.
In this paper, we consider multigroup multicast transmissions with different types of service messages in an overloaded multicarrier system, where the number of transmitter antennas is insufficient to mitigate all inter-group interference. We show that employing a rate-splitting based multiuser beamforming approach enables a simultaneous delivery of the multiple service messages over the same time-frequency resources in a non-orthogonal fashion. Such an approach, taking into account transmission power constraints which are inevitable in practice, outperforms classic beamforming methods as well as current standardized multicast technologies, in terms of both spectrum efficiency and the flexibility of radio resource allocation.
Motivated by increased interest in energy-efficient communication systems, the relation between energy efficiency (EE) and spectral efficiency (SE) for multiple-input multiple-output (MIMO) systems is investigated in this paper. To provide insight into the design of practical MIMO systems, we adopt a realistic power model and consider both independent Rayleigh fading and semicorrelated fading channels. We derive a novel, closed-form upper bound for the system EE as a function of SE. This upper bound exhibits great accuracy for a wide range of SE values, and can thus be used to explicitly assess the influence of SE on EE and to analytically address EE optimization problems. Using this tight EE upper bound, our analysis addresses two EE optimization issues: given the number of transmit and receive antennas, an optimum value of SE is derived such that the overall EE is maximized; and given a specific value of SE, the optimal number of antennas is derived for maximizing the system EE.
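The existence of an EE-optimal SE can be illustrated with a single-link toy model (not the paper's MIMO bound; all parameters are hypothetical): transmit power follows the inverse Shannon formula, so EE = SE/(Pc + Pt(SE)) first rises while fixed circuit power dominates and then falls as transmit power grows exponentially in SE, giving an interior optimum as in the paper's first optimization issue.

```python
import numpy as np

def energy_efficiency(se, g=1.0, n0b=1.0, p_c=1.0):
    """Toy single-link EE(SE): the transmit power needed for spectral
    efficiency se over an AWGN link with gain g and noise power n0b
    is the inverse Shannon formula; p_c is the fixed circuit power."""
    p_t = (2.0 ** se - 1.0) * n0b / g     # inverse of se = log2(1 + g*p_t/n0b)
    return se / (p_c + p_t)

se = np.linspace(0.01, 10.0, 2000)
se_opt = se[np.argmax(energy_efficiency(se))]   # interior EE-optimal SE
```

With the default parameters the denominator collapses to 2**se, so EE(SE) = SE/2**SE, which is maximized at SE = 1/ln 2 ≈ 1.44 bit/s/Hz; changing the circuit power p_c shifts this optimum, mirroring the role the power model plays in the paper's analysis.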
The Datagram Congestion Control Protocol (DCCP) has recently been proposed as a new transport protocol suitable for applications such as multimedia streaming. Wireless mesh networks have promising commercial potential for a large variety of applications. In this paper, we evaluate the performance of DCCP with TCP-Friendly Rate Control (TFRC) in wireless mesh networks using ns-2 simulations, in terms of fairness and throughput smoothness. Our results show that in wireless mesh networks DCCP shares the limited wireless channel bandwidth fairly with competing flows, and provides better throughput smoothness than TCP when flows are in isolation, i.e. with no competing flows. However, DCCP loses its ability to maintain this smoothness for streaming media applications when there are competing flows in the network. Copyright 2006 ACM.
Energy efficiency (EE) is growing in importance as a system design criterion for power-unlimited systems such as cellular systems. Equally, resource allocation is a well-known method for improving the performance of the latter. In this paper, we propose two novel coordinated resource allocation strategies for jointly optimizing the resources of three sectors/cells in an energy-efficient manner in the downlink of multi-cell/sector systems. Given that this optimization problem is non-convex, it can only be solved optimally by high-complexity exhaustive search. Here, we propose two practical approaches for allocating resources with low complexity. We then compare our novel approaches against existing non-coordinated and coordinated ones in order to highlight their benefit. Our results indicate that our first approach performs best in terms of EE but with a low level of fairness in the user rate allocation, whereas our second approach provides a good trade-off between EE and fairness. Overall, base station selection, i.e. allowing only one sector to transmit at a time, is a very energy-efficient approach when sleeping power is considered in the base station power model.
Virtual multiple-input-multiple-output (MIMO) systems using multiple antennas at the transmitter and a single antenna at each of the receivers have recently emerged as an alternative to point-to-point MIMO systems. This paper investigates the relationship between energy efficiency (EE) and spectral efficiency (SE) for a virtual-MIMO system that has one destination and one relay using compress-and-forward (CF) cooperation. To capture the cost of cooperation, the power allocation (between the transmitter and the relay) and the bandwidth allocation (between the data and cooperation channels) are studied. This paper derives a tight upper bound for the overall system EE as a function of SE, which exhibits good accuracy for a wide range of SE values. The EE upper bound is used to formulate an EE optimization problem. Given a target SE, the optimal power and bandwidth allocation can be derived such that the overall EE is maximized. Results indicate that the EE performance of virtual-MIMO is sensitive to many factors, including resource-allocation schemes and channel characteristics. When an out-of-band cooperation channel is considered, the performance of virtual-MIMO is close to that of the MIMO case in terms of EE. Considering a shared-band cooperation channel, virtual-MIMO with optimal power and bandwidth allocation is more energy efficient than the noncooperation case under most SE values.
This paper proposes a framework for spectrum sharing between multiple Universal Mobile Telecommunication System (UMTS) operators in the UMTS extension band. An algorithm is proposed, and its performance is investigated under uniform and non-uniform traffic conditions, including the impact of call setup messages. The results show that dynamic spectrum allocation (DSA) gains in the region of 7% and 2% can be obtained under uniform and non-uniform traffic conditions, respectively.
In mobile ad hoc networks (MANETs), accurate throughput-constrained Quality of Service (QoS) routing and admission control have proven difficult to achieve, mainly due to node mobility and contention for channel access. In this paper, we propose a solution to these problems, utilising the Dynamic Source Routing (DSR) protocol for basic routing. Our design considers the throughput requirements of data sessions and how these are affected by protocol overheads and contention between nodes. Furthermore, in contrast to previous work, the time wasted at the MAC layer by collisions and channel access delays is also considered. Simulation results show that in a stationary scenario with high offered load, and at the cost of increased routing overhead, our protocol more than doubles the session completion rate (SCR) and reduces average packet delay by a factor of seven compared to classic DSR. Even in a highly mobile scenario, it can double the SCR and cut average packet delay to a third.
Recent advancements in sensing and networking technologies, and in collecting real-world data on a large scale and from various environments, have created an opportunity for new forms of real-world services and applications, known under the umbrella term of the Internet of Things (IoT). Physical sensor devices constantly produce very large amounts of data. Methods are needed which give the raw sensor measurements a meaningful interpretation for building automated decision support systems. To extract actionable information from real-world data, we propose a method that uncovers hidden structures and relations between multiple IoT data streams. Our novel solution uses Latent Dirichlet Allocation (LDA), a topic extraction method that is generally used in text analysis. We apply LDA to meaningful abstractions that describe the numerical data in human-understandable terms. We use Symbolic Aggregate approXimation (SAX) to convert the raw data into string-based patterns and create higher-level abstractions based on rules. We finally investigate how heterogeneous sensory data from multiple sources can be processed and analysed to create near real-time intelligence, and how our proposed method provides an efficient way to interpret patterns in the data streams. The proposed method uncovers correlations and associations between different patterns in IoT data streams. The evaluation results show that the proposed solution is able to identify these correlations with high efficiency, with an F-measure of up to 90%.
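The SAX step used above, converting a numeric series into a string, can be sketched as follows. This is a minimal illustration of the standard SAX pipeline (z-normalization, piecewise aggregation, Gaussian breakpoints), not the authors' implementation; the segment count and alphabet are illustrative assumptions.

```python
import math

def znormalize(series):
    """Z-normalize a time series to zero mean, unit variance."""
    n = len(series)
    mean = sum(series) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / n) or 1.0
    return [(x - mean) / std for x in series]

def paa(series, segments):
    """Piecewise Aggregate Approximation: mean of each equal-length segment."""
    n = len(series)
    out = []
    for i in range(segments):
        lo, hi = i * n // segments, (i + 1) * n // segments
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# Breakpoints dividing N(0,1) into 4 equiprobable regions (alphabet size 4).
BREAKPOINTS = [-0.6745, 0.0, 0.6745]

def sax(series, segments=8, alphabet="abcd"):
    """Convert a numeric series into a SAX word."""
    word = ""
    for v in paa(znormalize(series), segments):
        idx = sum(1 for b in BREAKPOINTS if v > b)  # which region v falls in
        word += alphabet[idx]
    return word
```

For example, a steadily increasing series maps to a word that walks up the alphabet: `sax(list(range(16)), 4)` yields `"abcd"`.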
Multiuser multiple-input multiple-output (MU-MIMO) nonlinear precoding techniques face the problem of poor computational scalability with the size of the network. In this paper, the fundamental problem of MU-MIMO scalability is tackled through a novel signal-processing approach called degree-2 vector perturbation (D2VP). Unlike conventional VP approaches, which aim at minimizing the transmit-to-receive energy ratio by searching over an N-dimensional Euclidean space, D2VP pursues the same target through an iterative optimization procedure. Each iteration performs vector perturbation over two optimally selected subspaces. By this means, the computational complexity is kept in the cubic order of the size of the MU-MIMO system, and this mainly comes from the inverse of the channel matrix. In terms of performance, it is shown that D2VP offers a bit-error-rate comparable to the sphere encoding approach for small MU-MIMO. For medium and large MU-MIMO, where sphere encoding does not apply due to its unimplementable complexity, D2VP outperforms lattice reduction VP by around 5-10 dB in Eb/No and 10-50 dB in normalized computational complexity.
In this paper, a cooperative iterative water-filling approach is investigated for the two-user Gaussian interference channel. State-of-the-art approaches only maximize each individual user's own rate and always model interference as noise. Our proposed approach establishes user cooperation through sharing network side information. It iteratively maximizes the sum-rate of both users subject to distributed power constraints. Interference is optimally treated as either message or noise. Three efficient rate-sharing schemes between the two users, based on priority, are also investigated. Numerical experiments are performed in a frequency-selective environment. It is observed that the proposed approach offers significant performance improvement in comparison with conventional iterative water-filling approaches.
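As background for the approach above, the per-user building block, classic water-filling over parallel channels, can be sketched as follows. This is a generic single-user illustration under assumed normalized noise-to-gain levels, not the paper's cooperative algorithm.

```python
def water_filling(noise, budget, tol=1e-9):
    """Classic water-filling: allocate power p_i = max(mu - n_i, 0)
    over parallel channels with noise-to-gain levels n_i, so that
    sum(p_i) equals the power budget. The water level mu is found
    by bisection, since total allocated power is monotone in mu."""
    lo, hi = min(noise), max(noise) + budget
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(mu - n, 0.0) for n in noise)
        if used > budget:
            hi = mu  # water level too high
        else:
            lo = mu  # water level too low (or exact)
    mu = (lo + hi) / 2
    return [max(mu - n, 0.0) for n in noise]
```

For noise levels `[0.1, 0.5, 1.0]` and a unit budget, the water level settles at 0.8, so the best channel receives 0.7, the middle one 0.3, and the worst none.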
Ultra densification in heterogeneous networks (HetNets) and the advent of millimeter wave (mmWave) technology for fifth generation (5G) networks have led researchers to redesign the existing resource management techniques. A salient feature of this activity is to accentuate the importance of computationally intelligent (CI) resource allocation schemes offering less complexity and overhead. This paper overviews the existing literature on resource management in mmWave-based HetNets with a special emphasis on CI techniques and further proposes frameworks that ensure quality-of-service requirements for all network entities. More specifically, HetNets with mmWave-based small cells pose different challenges as compared to an all-microwave-based system. Similarly, various modes of small cell access policies and operations of base stations in dual mode, i.e., operating both mmWave and microwave links simultaneously, offer unique challenges to resource allocation. Furthermore, the use of multi-slope path loss models becomes inevitable for analysis owing to irregular cell patterns and blocking characteristics of mmWave communications. This paper amalgamates the unique challenges posed by the aforementioned recent developments and proposes various CI-based techniques, including game theory and optimization routines, to perform efficient resource management.
A novel approach for the implementation of opportunistic scheduling without explicit feedback channels is proposed in this paper, which exploits the existing ARQ signals instead of feedback channels to reduce the complexity of implementation. Monte Carlo simulation results demonstrate the efficacy of the proposed approach in harvesting multiuser diversity gain. The proposed approach enables the implementation of opportunistic scheduling in a variety of wireless networks, such as IEEE 802.11, without feedback facilities for collecting partial channel state information from users.
A novel reconfigurable dielectric resonator antenna (DRA) employing a T-shaped microstrip-fed structure to excite the dielectric resonator is presented. By carefully adjusting the location of the inverted U-shaped slot, the switches, and the length of the arms, the proposed antenna can support WLAN wireless systems. In addition, the presented DRA is suitable for cognitive radio, since it can switch between wideband and narrowband operation. The proposed reconfigurable DRA consists of a Rogers substrate with relative permittivity 3 and a size of 20 mm × 30 mm × 0.75 mm, and a dielectric resonator (DR) with a thickness of 9 mm and an overall size of 18 mm × 18 mm. The antenna has been fabricated and tested, and the measured results show good agreement with the simulated ones. The measured and simulated results also demonstrate the reconfigurability of the proposed DRA, which provides dual-mode operation and three different resonance frequencies as a result of switching the position of the arms.
The widely accepted OFDMA air interface technology has recently been adopted in most mobile standards by the wireless industry. However, as in other frequency-time multiplexed systems, its performance is limited by inter-cell interference. To address this performance degradation, interference mitigation can be employed to maximize the potential capacity of such interference-limited systems. This paper surveys key issues in mitigating interference and gives an overview of the recent developments of a promising mitigation technique, namely, interference avoidance through inter-cell interference coordination (ICIC). By using optimization theory, an ICIC problem is formulated in a multi-cell OFDMA-based system and some research directions in simplifying the problem and associated challenges are given. Furthermore, we present the main trends of interference avoidance techniques that can be incorporated in the main ICIC formulation. Although this paper focuses on 3GPP LTE/LTE-A mobile networks in the downlink, a similar framework can be applied to any typical multi-cellular environment based on OFDMA technology. Some promising future directions are identified and, finally, the state-of-the-art interference avoidance techniques are compared under LTE-system parameters.
Femtocell is becoming a promising solution to face the explosive growth of mobile broadband usage in cellular networks. While each femtocell only covers a small area, a massive deployment is expected in the near future, forming networked femtocells. An immediate challenge is to provide seamless mobility support for networked femtocells with minimal support from mobile core networks. In this paper, we propose efficient local mobility management schemes for networked femtocells based on X2 traffic forwarding under the 3GPP Long Term Evolution Advanced (LTE-A) framework. Instead of implementing the path switch operation at the core network entity for each handover, a local traffic forwarding chain is constructed to use the existing Internet backhaul and the local path between the local anchor femtocell and the target femtocell for ongoing session communications. Both analytical studies and simulation experiments are conducted to evaluate the proposed schemes and compare them with the original 3GPP scheme. The results indicate that the proposed schemes can significantly reduce the signaling cost and relieve the processing burden of mobile core networks, with a reasonable distributed cost for local traffic forwarding. In addition, the proposed schemes can enable fast session recovery to adapt to the self-deployment nature of the femtocells.
In this paper we present a novel distributed Inter-Cell Interference Coordination (ICIC) scheme for interference-limited heterogeneous cellular networks (HetNet). We reformulate our problem in such a way that it can be decomposed into a number of small sub-problems, which can be solved independently through an iterative subgradient method. The proposed dual decomposition method can also address problems with binary-valued variables. The proposed algorithm is compared with some reference schemes in terms of cell-edge and total cell throughput.
In this paper, we present a novel random access method for future mobile cellular networks that support machine-type communications (MTC). Traditionally, such networks establish connections with devices using a random access procedure; however, massive machine-type communication poses several challenges to the design of random access for current systems. State-of-the-art random access techniques rely on predicting the traffic load to adjust the number of users allowed to attempt the random access preamble phase; however, this delays network access and is highly dependent on the accuracy of traffic prediction and fast signalling. We change this paradigm by using the preamble phase to estimate traffic and then adapting the network resources to the estimated load. We introduce Preamble Barring, which uses probabilistic resource separation to allow load estimation under a wide range of load conditions, and combine it with multiple random access responses. The result is a load-adaptive method that can deliver near-optimal performance under any load condition without the need for traffic prediction or signalling, making it a promising solution for avoiding network congestion and achieving fast uplink access for massive MTC.
Cross-layer scheduling is a promising solution for improving the efficiency of emerging broadband wireless systems. In this tutorial, various cross-layer design approaches are organized into three main categories, namely air interface-centric, user-centric and route-centric, and the general characteristics of each are discussed. Thereafter, by focusing on the air interface-centric approach, it is shown that the resource allocation problem can be formulated as an optimization problem with a certain objective function and some particular constraints. This is illustrated with the aid of a customer-provider model from the field of economics. Furthermore, the possible future evolution of scheduling techniques is described based on the characteristics of traffic and air interface in emerging broadband wireless systems. Finally, some further challenges are identified.
The emergence of the Internet of Things (IoT) has led to the production of huge volumes of real-world streaming data. We need effective techniques to process IoT data streams and to gain insights and actionable information from real-world observations and measurements. Most existing approaches are application or domain dependent. We propose a method which determines how many different clusters can be found in a stream based on the data distribution. After selecting the number of clusters, we use an online clustering mechanism to cluster the incoming data from the streams. Our approach remains adaptive to drifts by adjusting itself as the data changes. We benchmark our approach against state-of-the-art stream clustering algorithms on data streams with data drift. We show how our method can be applied in a use case scenario involving near real-time traffic data. Our results make it possible to cluster, label and interpret IoT data streams dynamically according to the data distribution, enabling adaptive online processing of large volumes of dynamic data based on the current situation. We show how our method adapts itself to changes, and demonstrate how the number of clusters in a real-world data stream can be determined by analysing the data distributions.
There has been keen interest in detecting abrupt sequential changes in streaming data obtained from sensors in Wireless Sensor Networks (WSNs) for Internet of Things (IoT) applications such as fire/fault detection, activity recognition and environmental monitoring. Such applications require (near) online detection of instantaneous changes. This paper proposes an Online, adaptive Filtering-based Change Detection (OFCD) algorithm. Our method is based on a convex combination of two decoupled Least Mean Square (LMS) windowed filters with differing sizes. Both filters are applied independently to data streams obtained from sensor nodes, and their convex combination parameter is employed as an indicator of abrupt changes in mean values. An extension of our method based on a cooperative scheme between multiple sensors (COFCD) is also presented; it enhances both the convergence and the steady-state accuracy of the convex weight parameter. Our experiments show that our approach can be applied in distributed networks in an online fashion. It also provides better performance and lower complexity than the state-of-the-art for both single and multiple sensors.
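The core idea, a convex combination of a fast and a slow adaptive mean tracker whose mixing weight reacts to abrupt changes, can be sketched as follows. This is a simplified single-sensor illustration using exponential (LMS-style) mean trackers rather than the paper's windowed filters; the step sizes and clipping range are illustrative assumptions.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def ofcd_sketch(stream, mu_fast=0.2, mu_slow=0.01, mu_a=1.0):
    """Track the stream mean with a fast and a slow LMS-style filter
    and adapt the convex mixing weight lam = sigmoid(a). After an
    abrupt mean change the fast filter tracks better, pushing lam
    towards 1, so lam itself serves as the change indicator."""
    y_fast = y_slow = None
    a = 0.0
    lams = []
    for d in stream:
        if y_fast is None:
            y_fast = y_slow = d  # initialise both trackers at first sample
        lam = sigmoid(a)
        y = lam * y_fast + (1 - lam) * y_slow
        e = d - y
        # standard convex-combination update (gradient on the mixing weight)
        a += mu_a * e * (y_fast - y_slow) * lam * (1 - lam)
        a = max(-4.0, min(4.0, a))  # clip so sigmoid stays responsive
        y_fast += mu_fast * (d - y_fast)
        y_slow += mu_slow * (d - y_slow)
        lams.append(lam)
    return lams
```

On a stream that jumps from a mean of 0 to a mean of 5, the mixing weight sits at 0.5 during the stationary segment and climbs above 0.9 within a few samples of the jump, flagging the change.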
It has been claimed that filter bank multicarrier (FBMC) systems suffer negligible performance loss in moderately dispersive channels even in the absence of guard time protection between symbols. However, a theoretical and systematic explanation/analysis of this statement is missing in the literature to date. In this paper, based on one-tap minimum mean square error (MMSE) and zero-forcing (ZF) channel equalization, the impact of doubly dispersive channels on the performance of FBMC systems is analyzed in terms of the mean square error (MSE) of the received symbols. Based on this analytical framework, we prove that the circular convolution property between symbols and the corresponding channel coefficients in the frequency domain holds only loosely, with a set of inaccuracies. To facilitate the analysis, we first model the FBMC system in vector/matrix form and derive the estimated symbols as a sum of desired signal, noise, inter-symbol interference (ISI), inter-carrier interference (ICI), inter-block interference (IBI) and estimation bias in the MMSE equalizer. These terms are derived one by one and expressed as functions of the channel parameters. The numerical results reveal that in harsh channel conditions, e.g., with large Doppler spread or channel delay spread, the FBMC system performance may be severely deteriorated and an error floor will occur.
The current Web and data indexing and search mechanisms are mainly tailored to processing text-based data and are limited in addressing the intrinsic characteristics of distributed, large-scale and dynamic Internet of Things (IoT) data networks. The IoT demands novel indexing solutions for large-scale data to create an ecosystem of systems; however, IoT data are often numerical, multi-modal and heterogeneous. We propose a distributed and adaptable mechanism that allows indexing and discovery of real-world data in IoT networks. Compared to state-of-the-art approaches, our model does not require any prior knowledge about the data or their distributions. We address the problem of distributed, efficient indexing and discovery for voluminous IoT data by applying an unsupervised machine learning algorithm. The proposed solution aggregates and distributes the indexes in hierarchical networks. We have evaluated our distributed solution on a large-scale dataset, and the results show that our proposed indexing scheme is able to efficiently index and enable discovery of IoT data with 71% to 92% better response time than a centralised approach.
As soon as 2020, network densification and spectrum extension will be the dominant themes for supporting enormous capacity and massive connectivity. However, this approach may not guarantee wide-area coverage due to the poor propagation characteristics of high frequency bands. In addition, energy efficiency and signalling overhead will become critical considerations in ultra-dense deployment scenarios. This calls for a futuristic two-layer RAN architecture with dual connectivity, where the high frequency bands are used for data services, complemented by a coverage layer at conventional cellular bands. This separation of control and data planes will enable a transition from always-on to always-available systems and could result in order-of-magnitude savings in energy and signalling overhead.
Energy efficiency (EE) is a key enabler for the next generation of communication systems. Equally, resource allocation and cooperative communication are effective techniques for improving communication system performance. In this paper, we propose an optimal energy-efficient joint resource allocation method for the multi-hop multiple-input-multiple-output (MIMO) amplify-and-forward (AF) system. We define the joint source and multiple-relay optimization problem and prove that its objective function, which is not generally quasiconvex, can be lower-bounded by a convex function. Moreover, all the minima of this objective function are strict minima. Based on these two properties, we then simplify the original multivariate optimization problem into a single-variable problem and design a novel approach for optimally solving it in both the unconstrained and power-constrained cases. In addition, we provide a sub-optimal approach with reduced complexity; the latter reduces the computational complexity by a factor of up to 40 with near-optimal performance. We finally utilize our novel approach to compare the optimal energy-per-bit consumption of multi-hop MIMO-AF and MIMO systems; results indicate that MIMO-AF can help to save energy when the direct link quality is poor.
The success of the deployment of GPRS will be significantly influenced by the introduction of efficient and flexible QoS management and supporting mechanisms. Although QoS profiles for a number of GPRS service classes have been specified by ETSI, implementation issues play a major role in achieving them, including QoS management in the areas of traffic scheduling, traffic shaping and call admission control techniques. QoS in GPRS is defined as the collective effect of service performances, which determines the degree of satisfaction of a user of the service; QoS enables differentiation between provided services. Increasing demand and the limited bandwidth available for mobile communication services require efficient use of radio resources among diverse services. In future wireless packet networks, it is anticipated that a wide variety of data applications, ranging from WWW browsing to email, and real-time services such as packetized voice and videoconferencing, will be supported with varying levels of QoS. There is therefore a need for packet and service scheduling schemes that effectively provide QoS guarantees and are also simple to implement. This paper describes a novel dynamic admission control and scheduling technique based on genetic algorithms, focusing on static and dynamic parameters of the service classes. The performance of this technique on a GPRS system is evaluated against data services and also against a traffic mix comprising voice and data.
Device-to-device (D2D) communication has huge potential for capacity and coverage enhancements in next generation cellular networks. The number of potential nodes for D2D communication is an important parameter that directly impacts the system capacity. In this letter, we derive an analytic expression for the average coverage probability of a cellular user and the corresponding number of potential D2D users. In this context, the mature framework of stochastic geometry and the Poisson point process is used. The retention probability is incorporated in the Laplace functional to capture reduced path loss and shortest-distance-based D2D pairing. The numerical results show a close match between the analytic expression and the simulation setup.
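The kind of stochastic-geometry coverage result described above is typically validated against Monte-Carlo simulation; a generic SIR coverage sketch (interferers drawn from a Poisson point process in a disc, unit-mean Rayleigh fading) is given below. All parameter values and the system geometry here are illustrative assumptions, not the letter's exact model.

```python
import math
import random

def poisson(rng, mean):
    """Sample a Poisson variate by multiplying uniforms (Knuth's method)."""
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def coverage_probability(lam, radius, d, alpha, theta, trials=4000, seed=0):
    """Monte-Carlo SIR coverage: receiver at the origin, desired
    transmitter at distance d, interferers from a PPP of density lam
    in a disc of the given radius; all links see independent
    unit-mean exponential (Rayleigh) fading; path-loss exponent alpha."""
    rng = random.Random(seed)
    area = math.pi * radius ** 2
    covered = 0
    for _ in range(trials):
        signal = rng.expovariate(1.0) * d ** (-alpha)
        interference = 0.0
        for _ in range(poisson(rng, lam * area)):
            r = radius * math.sqrt(rng.random())  # uniform point in the disc
            interference += rng.expovariate(1.0) * max(r, 1e-3) ** (-alpha)
        if signal > theta * interference:
            covered += 1
    return covered / trials
```

With a fixed seed, raising the SIR threshold `theta` can only lower the estimated coverage, which gives a quick sanity check on the simulator.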
This patent is based on our novel data discovery mechanism for large-scale, highly distributed and heterogeneous data networks. Managing big data harvested from IoT environments is an example application.
In this paper, we propose a rate-adaptive bit and power loading approach for OFDM-based relaying communications. The cooperative relay operates in the half-duplex amplify-and-forward mode. The source and the relay have separate power constraints. Maximum-ratio combining is employed at the destination to maximize the received SNR. Assuming perfect channel knowledge is available at all nodes, the proposed approach maximizes the throughput (the number of bits per symbol) under the given power constraint and target link performance. Unlike the water-filling method, the proposed approach does not need an iterative loading process and can offer sub-optimal performance. Computer simulations are used to test the proposed approach in various scenarios with respect to the relay location and the distributed power allocation.
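For context, a common baseline for discrete bit and power loading in OFDM is greedy (Hughes-Hartogs-style) loading, sketched below. This is a generic illustration under an uncoded QAM power model, not necessarily the authors' loading rule; the gain values and bit cap are illustrative assumptions.

```python
def greedy_bit_loading(gains, budget, max_bits=6):
    """Hughes-Hartogs-style greedy loading: repeatedly grant one more
    bit to the subcarrier whose next bit costs the least extra power,
    until the power budget is exhausted. Under an uncoded QAM model,
    b bits on a channel with gain-to-noise g need (2**b - 1) / g power,
    so the incremental cost of bit b+1 is 2**b / g."""
    bits = [0] * len(gains)
    power = [0.0] * len(gains)
    spent = 0.0
    while True:
        best, cost = None, None
        for i, g in enumerate(gains):
            if bits[i] >= max_bits:
                continue
            inc = (2 ** bits[i]) / g  # power needed for one more bit here
            if cost is None or inc < cost:
                best, cost = i, inc
        if best is None or spent + cost > budget:
            break  # budget exhausted or all subcarriers full
        bits[best] += 1
        power[best] += cost
        spent += cost
    return bits, power
```

For gains `[4.0, 1.0]` and a budget of 2.0, the stronger subcarrier absorbs three bits (costing 0.25, 0.5 and 1.0) before the weaker one's first bit would exceed the budget, giving `bits == [3, 0]`.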
In this paper, hybrid relaying schemes are investigated in the two-way relay channel, where the relay node is able to adaptively switch between different forwarding schemes based on the current channel state and its decoding status, and thus provides more flexibility as well as improved performance. The analysis is conducted from the energy efficiency perspective for two transmission protocols, distinguished by whether or not they exploit the direct link between the two main communicating nodes (the source and destination nodes, and vice versa, since the communication is two-way). A realistic power model taking the circuitry power consumption of all involved nodes into account is employed. The energy efficiency is optimized in terms of consumed energy per bit subject to a Quality of Service (QoS) constraint. Numerical results show that the hybrid schemes are able to achieve the highest energy efficiency due to their capability of adapting to channel variations, and that the protocol in which the direct link is exploited is more energy efficient.
The first generation of femtocells is evolving to the next generation with many more capabilities in terms of better utilisation of radio resources and support of high data rates. It is thus logical to conjecture that with these abilities and their inherent suitability for home environment, they stand out as an ideal enabler for delivery of high efficiency multimedia services. This paper presents a comprehensive vision towards this objective and extends the concept of femtocells from indoor to outdoor environments, and strongly couples femtocells to emergency and safety services. It also presents and identifies relevant issues and challenges that have to be overcome in realization of this vision.
While many studies have concentrated on providing theoretical analysis for relay-assisted compress-and-forward (CF) systems, little effort has yet been made towards the construction and evaluation of a practical system. In this paper, a practical CF system incorporating an error-resilient multilevel Slepian-Wolf decoder is introduced, and a novel iterative processing structure which allows information exchange between the Slepian-Wolf decoder and the forward error correction decoder of the main source message is proposed. In addition, a new quantization scheme is incorporated to avoid the complexity of reconstructing the relay signal at the final decoder of the destination. The results demonstrate that the iterative structure not only reduces the decoding loss of the Slepian-Wolf decoder, but also improves the decoding performance of the main message from the source.
The doubly differential modem turns out to be a promising technology for coping with unknown frequency offsets, at the cost of a signal-to-noise ratio (SNR) loss. In this paper, we propose to compensate the SNR loss by employing a detection-forward cooperative relay. The receiver can employ two kinds of combiners to attain the achievable spatial diversity gain. Performance analysis is carefully conducted for the Rayleigh-fading channel. It is shown that the SNR loss is compensated in the large-SNR range.
The elasticity of transmission control protocol (TCP) traffic complicates attempts to provide performance guarantees to TCP flows. The existence of different types of networks and environments on the connections’ paths only aggravates this problem. In this paper, simulation is the primary means for investigating the specific problem in the context of bandwidth on demand (BoD) geostationary satellite networks. Proposed transport-layer options and mechanisms for TCP performance enhancement, studied in the single connection case or without taking into account the media access control (MAC)-shared nature of the satellite link, are evaluated within a BoD-aware satellite simulation environment. Available capabilities at MAC layer, enabling the provision of differentiated service to TCP flows, are demonstrated and the conditions under which they perform efficiently are investigated. The BoD scheduling algorithm and the policy regarding spare capacity distribution are two MAC-layer mechanisms that appear to be complementary in this context; the former is effective at high levels of traffic load, whereas the latter drives the differentiation at low traffic load. When coupled with transport layer mechanisms they can form distinct bearer services over the satellite network that increase the differentiation robustness against the TCP bias against connections with long round-trip times. We also explore the use of analytical, fixed-point methods to predict the performance at transport level and link level. The applicability of the approach is mainly limited by the lack of analytical models accounting for prioritization mechanisms at the MAC layer and the nonuniform distribution of traffic load among satellite terminals.
Decentralized joint transmit power and beamforming selection for multiple antenna wireless ad hoc networks operating in a multi-user interference environment is considered. An important feature of the considered environment is that altering the transmit beamforming pattern at some node generally creates more significant changes to interference scenarios for neighboring nodes than variation of the transmit power. Based on this premise, a good neighbor algorithm is formulated in such a way that at the sensing node, a new beamformer is selected only if it needs less than a given portion of the transmit power required for the current beamformer. Otherwise, it keeps the current beamformer and achieves the performance target only by means of power adaptation. Equilibrium performance and convergence behavior of the proposed algorithm compared to the best response and regret matching solutions are demonstrated by means of semi-analytic Markov chain performance analysis for small-scale networks and simulations for large-scale networks.
Energy efficiency (EE) is undoubtedly an important criterion for designing power-limited systems, and yet in a context of global energy saving, its relevance for power-unlimited systems is steadily growing. Equally, resource allocation is a well-known method for improving the performance of cellular systems. In this paper, we propose an EE optimization framework for the downlink of planar cellular systems over frequency-selective channels. Relying on this framework, we design two novel low-complexity resource allocation algorithms for the single-cell and coordinated multi-cell scenarios, which are EE-optimal and EE-suboptimal, respectively. We then utilize our algorithms for comparing the EE performance of the classic non-coordinated, orthogonal and coordinated multi-cell approaches in realistic power and system settings. Our results show that coordination can be a simple and effective method for improving the EE of cellular systems, especially for medium to large cell sizes. Indeed, by using a coordinated rather than a non-coordinated resource allocation approach, the per-sector energy consumption and transmit power can be reduced by up to 15% and more than 90%, respectively.
Cooperative communication is an effective approach for increasing the spectral efficiency and/or the coverage of cellular networks as well as reducing the cost of network deployment. However, it remains to be seen how energy efficient it is. In this paper, we assess the energy efficiency of the conventional amplify-and-forward (AF) scheme in an in-building relaying scenario. This scenario simplifies the mutual information formulation of the AF system and allows us to express its channel capacity with a simple and accurate closed-form approximation. In addition, a framework for the energy efficiency analysis of the AF system is introduced, which includes a power consumption model and an energy efficiency metric, i.e. the bit-per-joule capacity. This framework, along with our closed-form approximation, is utilized for assessing both the channel and bit-per-joule capacities of the AF system in an in-building scenario. Our results indicate that transmitting with maximum power is not energy efficient and that the AF system is more energy efficient than point-to-point communication at low transmit powers and signal-to-noise ratios.
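The bit-per-joule metric mentioned above can be sketched for a single link. The sketch below uses a Shannon-rate numerator and a realistic consumption model (fixed circuit power plus amplifier inefficiency); all numbers are hypothetical and do not reproduce the paper's AF formulation, but they illustrate why transmitting at maximum power is not energy efficient.

```python
import numpy as np

# Bit-per-joule energy efficiency of a single link under a realistic
# power consumption model. All parameter values are illustrative only.
B = 1e6            # bandwidth in Hz (hypothetical)
g_over_N0B = 10.0  # effective channel gain over noise power, per watt (hypothetical)
P_fixed = 1.0      # fixed circuit power in W (cooling, processing, ...)
eta = 0.5          # power-amplifier efficiency

def bits_per_joule(p_tx):
    """Shannon rate divided by total consumed power (bit-per-joule capacity)."""
    rate = B * np.log2(1.0 + g_over_N0B * p_tx)   # bit/s
    power = P_fixed + p_tx / eta                  # W, realistic consumption model
    return rate / power                           # bit/J

p = np.linspace(0.01, 10.0, 1000)
ee = bits_per_joule(p)
p_best = p[np.argmax(ee)]
print(f"EE-optimal transmit power ~ {p_best:.2f} W; "
      f"EE at max power = {bits_per_joule(10.0):.3e} bit/J")
```

With a non-zero fixed power term, the EE curve peaks at an interior transmit power rather than at the maximum, which matches the abstract's conclusion.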
Hybrid networks consisting of both millimeter wave (mmWave) and microwave (μW) capabilities are strong contenders for next-generation cellular communications. A parallel avenue of current research is device-to-device (D2D) communication, where users establish direct links with each other rather than communicating through central base stations (BSs). However, a hybrid network in which D2D transmissions coexist requires special attention in terms of efficient resource allocation. This paper investigates dynamic resource sharing between network entities in a downlink (DL) transmission scheme to maximize the energy efficiency (EE) of the cellular users (CUs) served by either μW macrocells or mmWave small cells, while maintaining a minimum quality-of-service (QoS) for the D2D users. To address this problem, firstly a self-adaptive power control mechanism for the D2D pairs is formulated, subject to an interference threshold for the CUs while satisfying their minimum QoS level. Subsequently, an EE optimization problem, which aims to maximize the EE for both CUs and D2D pairs, is solved. Simulation results demonstrate the effectiveness of our proposed algorithm and reveal the inherent trade-offs between system EE, system sum rate and outage probability for various QoS levels and varying densities of D2D pairs and CUs.
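The interference-threshold power control idea can be sketched minimally: each D2D pair transmits at the largest power that keeps its interference at the cellular user below a threshold, and is admitted only if its own minimum-rate QoS still holds. All channel gains and thresholds below are assumed values, not the paper's.

```python
import numpy as np

# Minimal sketch of interference-limited D2D power control with a QoS check.
# All parameter values are hypothetical illustrations.
P_MAX = 0.2    # W, maximum D2D transmit power
I_TH = 1e-9    # W, interference threshold at the cellular user
NOISE = 1e-12  # W, receiver noise power
R_MIN = 1.0    # bit/s/Hz, minimum D2D spectral efficiency (QoS)

def d2d_power(g_d2d_to_cu, g_d2d_link):
    """Return (power, admitted): cap the power by the CU interference budget,
    then check the D2D pair's own minimum-rate constraint."""
    p = min(P_MAX, I_TH / g_d2d_to_cu)            # interference-limited power
    rate = np.log2(1.0 + p * g_d2d_link / NOISE)  # resulting D2D rate
    return p, rate >= R_MIN

p, ok = d2d_power(g_d2d_to_cu=1e-7, g_d2d_link=1e-8)
print(p, ok)
```

A pair with a strong cross-link to the CU is forced to a low power and may fail its own QoS check, capturing the trade-off the abstract studies.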
When channel state information (CSI) is not available to the transmitter, outage events may happen, and Automatic Repeat reQuest (ARQ) is implemented to ensure reliable transmission in such cases. In this paper, we consider a three-node relay system with a hybrid relay scheme, where the relay, based on its decoding status, can switch between decode-and-forward (DF) and compress-and-forward (CF) adaptively. We note that CSI is required when CF is deployed, and we address practical implementation issues by enhancing the feedback channel from the destination to the relay to convey a few extra bits (only 2 bits in this paper) in addition to the ACK/NACK bit, proposing a new ARQ scheme. The modified scheme allows the relay to utilize the various relay schemes more flexibly according to its decoding status and the extra feedback bits. ARQ strategies with hybrid relay schemes exhibit superior performance over direct transmission and pure DF, especially when the relay is close to the destination.
Simultaneous improvement of matching and isolation for a modified two-element microstrip patch antenna array is proposed. Two simple patch antennas in a linear array structure are designed, wherein the impedance matching and isolation are improved without using any conventional matching networks. The presented low-profile, multifunctional, via-less structure comprises only two narrow T-shaped stubs connected to the feed lines, a narrow rectangular stub between them, and a narrow rectangular slot on the ground plane. This design provides a simple, compact structure with low mutual coupling and low cost, and no adverse effects on the radiation and resonance. To validate the design, a compact very-closely-spaced antenna array prototype is fabricated at 5.5 GHz, which is suitable for multiple-input-multiple-output (MIMO) systems. The measured and simulated results are in good agreement, with improvements of 16 dB and 40 dB in the matching and isolation, respectively.
This paper presents a novel approach for mobile positioning in IEEE 802.11a wireless LANs with acceptable computational complexity. The approach improves the positioning accuracy by utilizing the time and frequency domain channel information obtained from the orthogonal frequency-division multiplexing (OFDM) signals. The simulation results show that the proposed approach outperforms the multiple signal classification (MUSIC) algorithm and Ni's algorithm, achieving a positioning accuracy of 1 m with a 97% probability in an indoor scenario.
Conventional cellular systems are designed to ensure ubiquitous coverage with an always present wireless channel irrespective of the spatial and temporal demand of service. This approach raises several problems due to the tight coupling between network and data access points, as well as the paradigm shift towards data-oriented services, heterogeneous deployments and network densification. A logical separation between control and data planes is seen as a promising solution that could overcome these issues, by providing data services under the umbrella of a coverage layer. This article presents a holistic survey of existing literature on the control-data separation architecture (CDSA) for cellular radio access networks. As a starting point, we discuss the fundamentals, concept and general structure of the CDSA. Then, we point out limitations of the conventional architecture in futuristic deployment scenarios. In addition, we present and critically discuss the work that has been done to investigate potential benefits of the CDSA, as well as its technical challenges and enabling technologies. Finally, an overview of standardisation proposals related to this research vision is provided.
Along with spectral efficiency (SE), energy efficiency (EE) is becoming one of the key performance evaluation criteria for communication systems. These two criteria, which are conflicting, can be linked through their trade-off. The EE-SE trade-off for the multi-input multi-output (MIMO) Rayleigh fading channel has been accurately approximated in the past, but only in the low-SE regime. In this paper, we propose a novel and more generic closed-form approximation of this trade-off, which exhibits greater accuracy for a wider range of SE values and antenna configurations. Our expression is utilized here for analytically assessing the EE gain of MIMO over single-input single-output (SISO) systems for two different types of power consumption models (PCMs): the theoretical PCM, where only the transmit power is considered as consumed power; and a more realistic PCM accounting for the fixed consumed power and amplifier inefficiency. Our analysis unveils the large mismatch between theoretical and practical MIMO vs. SISO EE gains: the EE gain increases both with the SE and the number of antennas in theory, which indicates that MIMO is a promising EE enabler, whereas it remains small and decreases with the number of transmit antennas when a realistic PCM is considered.
In this paper, we propose novel Hybrid Automatic Repeat reQuest (HARQ) strategies used in conjunction with hybrid relaying schemes, named H^2-ARQ-Relaying. The strategies allow the relay to dynamically switch between amplify-and-forward/compress-and-forward and decode-and-forward schemes according to its decoding status. The performance analysis is conducted from both the spectrum and energy efficiency perspectives. The spectrum efficiency of the proposed strategies, in terms of the maximum throughput, is significantly improved compared with their non-hybrid counterparts under the same constraints. The consumed energy per bit is optimized by manipulating the node activation time, the transmission energy and the power allocation between the source and the relay. The circuitry energy consumption of all involved nodes is taken into consideration. Numerical results shed light on how and when the energy efficiency can be improved in cooperative HARQ; for instance, cooperative HARQ is shown to be energy efficient in long-distance transmission only. Furthermore, we consider the fact that the compress-and-forward scheme requires the instantaneous signal-to-noise ratios of all three constituent links, a requirement that can be impractical in some cases. In this regard, we introduce an improved strategy where only partial and affordable channel state information feedback is needed.
A Scalable Resource Allocation (ScRA) algorithm is developed to improve mobile network radio resource utilization [1]. Traditional mobile network dimensioning is based on the "busy hour" traffic intensity, and this Static Resource Allocation (StRA) methodology cannot provide efficient radio resource utilization for the present and future multi-service environment, with its expected spatially and temporally varying loads. This hinders the introduction of wireless IP-based services, for which demand is rapidly increasing. This paper provides an extended analysis by incorporating single-slot FIFO and single-slot Round Robin (RR), blocked-call cleared (BCC) and blocked-call delayed (BCD) strategies in the ScRA scheme. By employing the ScRA scheme in an example GSM and GPRS network, we specifically investigate and evaluate the system throughput for both the circuit- and packet-switched networks. The findings show that the single-slot FIFO and single-slot RR ScRA schemes yield no difference in system throughput. On the other hand, when BCD is implemented in the ScRA scheme, there is a significant throughput gain.
A device-to-device (D2D) ultra-reliable low-latency communications (URLLC) network is investigated in this paper. Specifically, a D2D transmitter opportunistically accesses the radio resource provided by a cellular network and directly transmits short packets to its destination. A performance metric suited to finite block-length codes is adopted. We quantify the maximum achievable rate for the D2D network, subject to a probabilistic interference power constraint based on imperfect channel state information (CSI). First, we perform a convexity analysis, which reveals that the finite block-length rate for the D2D pair in short-packet transmission is not always concave. To address this issue, we propose two effective resource allocation schemes using a successive convex approximation (SCA)-based iterative algorithm. To gain more insight, we exploit the monotonicity of the average finite block-length rate. By capitalizing on this property, an optimal power control policy is proposed, followed by closed-form expressions and approximations for the optimal average power and the maximum achievable average rate in the finite block-length regime. Numerical results are provided to confirm the effectiveness of the proposed resource allocation schemes and validate the accuracy of the derived theoretical results.
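The finite block-length rate underlying this kind of analysis is commonly computed via the normal approximation of Polyanskiy, Poor and Verdú: R ≈ C − sqrt(V/n)·Q⁻¹(ε) for an AWGN channel with SNR γ and dispersion V. The sketch below computes it for illustrative parameter values; the paper's D2D-specific constraints are not modelled.

```python
import math

# Normal approximation to the maximal coding rate at finite block-length n
# and target error probability eps, for an AWGN channel with SNR gamma.

def Q(x):
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_inv(eps):
    """Inverse Q-function by bisection (Q is strictly decreasing)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if Q(mid) > eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def fbl_rate(gamma, n, eps):
    """Achievable rate (bit/channel use) at block-length n, error prob eps."""
    C = math.log2(1.0 + gamma)                                        # capacity
    V = (1.0 - 1.0 / (1.0 + gamma) ** 2) * math.log2(math.e) ** 2    # dispersion
    return C - math.sqrt(V / n) * Q_inv(eps)

# Short packets pay a visible rate penalty relative to Shannon capacity:
print(fbl_rate(gamma=10.0, n=100, eps=1e-5), math.log2(11.0))
```

The rate approaches capacity as n grows, which is the monotonic behaviour the abstract exploits.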
Network densification with small cell deployment is being considered as one of the dominant themes in the fifth generation (5G) cellular system. Despite the capacity gains, such deployment scenarios raise several challenges from a mobility management perspective. The small cell size, which implies a short cell residence time, will increase the handover (HO) rate dramatically. Consequently, HO latency will become a critical consideration in the 5G era, requiring an intelligent, fast and light-weight HO procedure with minimal signalling overhead. In this direction, we propose a memory-full context-aware HO scheme with mobility prediction to achieve these objectives. We consider a dual connectivity radio access network architecture with logical separation between control and data planes, because it offers relaxed constraints in implementing predictive approaches. The proposed scheme predicts future HO events, along with the expected HO time, by combining radio frequency performance and physical proximity with the user context in terms of speed, direction and HO history. To minimise the processing and storage requirements whilst improving the prediction performance, a user-specific prediction triggering threshold is proposed. The prediction outcome is utilised to perform advance HO signalling whilst suspending the periodic transmission of measurement reports. Analytical and simulation results show that the proposed scheme provides promising gains over the conventional approach.
A low-complexity interference cancellation scheme, using a modified suboptimum search algorithm in conjunction with a primary stage of reduced-rank linear (RRL) multiuser detection, is proposed for the mobile uplink. The initial stage is improved through mathematical analysis based on the Gershgorin circle theorem of linear algebra, and the RRLG detector is introduced. The complexity of the initial stage is evaluated and compared to the recently reported low-complexity Fourier interference cancellation method. Depending on the value of the spreading factor of the active users in the system, RRLG outperforms the Fourier algorithm in terms of complexity. The structure of the RRLG method and the suboptimum search algorithm are well suited to each other, allowing them to work collaboratively without incurring a high level of complexity. Considering the power profile of the users in the suboptimum search algorithm led to even lower complexity while keeping the performance almost the same. The performance of the structure is obtained via simulations and compared to the partial parallel interference cancellation (PPIC) method. A good improvement in performance is achieved in the low-SNR regions, which is difficult to achieve with conventional multiuser detectors and is also important since actual systems are likely to operate in these regions. All the techniques and modifications introduced in this work treat complexity as an important issue, which makes them suitable for industry and implementation purposes. Another important feature is that the techniques operate on canonical matrix formulations of the system, so they can be applied to MC-CDMA and MIMO systems as well.
This letter presents a new posterior Cramér-Rao bound (PCRB) for inertial-sensor-enhanced mobile positioning, which performs hybrid data fusion of parameters including position estimates, pedestrian step size, pedestrian heading, and the knowledge of a random walk motion model. Moreover, a non-matrix closed form of the PCRB is derived without position estimates. Finally, our numerical results show that when the accuracy of the step size and heading measurements is high enough, the knowledge of the random walk model becomes redundant.
This paper investigates the downlink handover (soft/softer/hard) performance of the Wideband Code Division Multiple Access (WCDMA) based 3rd-generation Universal Mobile Telecommunication System (UMTS), as it is known that the downlink capacity of UMTS is very sensitive to the extent of the overlap area between adjacent cells and the power margin between them. Factors influencing the handover performance, such as the correlation between the multipath radio channels of the two links, the limited number of Rake fingers in a handset and imperfect channel estimation, that cannot be modeled adequately in system-level simulations are investigated via link-level simulations. It is also shown that the geometry factor has an influence on the handover performance and exhibits a threshold value (which depends on the correlation between the multipath channels associated with the two links in a handover) above which the performance starts degrading. The variation of the handover gain with the closed loop power control (CLPC) step size and space-time transmit diversity (STTD) is also quantified. These comprehensive results can be used as guidelines for more accurate coverage and capacity planning of UMTS networks.
This paper presents a parallel computing approach that is employed to reconstruct original information bits from a non-recursive convolutional codeword in noise, with the goal of reducing the decoding latency without compromising the performance. This goal is achieved by cutting a received codeword into a number of sub-codewords (SCWs) and feeding them into a two-stage decoder. At the first stage, SCWs are decoded in parallel using the Viterbi algorithm or, equivalently, the brute-force algorithm. A major challenge arises when determining the initial state of the trellis diagram for each SCW, which is uncertain except for the first one; this results in multiple decoding outcomes for every SCW. To eliminate, or more precisely exploit, the uncertainty, a Euclidean-distance minimization algorithm is employed to merge neighboring SCWs; this is called the merging stage, which can also run in parallel. Our work reveals that the proposed two-stage decoder is optimal and has its latency growing logarithmically, instead of linearly as for the Viterbi algorithm, with respect to the codeword length. Moreover, it is shown that the decoding latency can be further reduced by employing artificial neural networks for the SCW decoding. Computer simulations are conducted for two typical convolutional codes, and the results confirm our theoretical analysis.
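The two-stage idea can be illustrated on a toy scale. The sketch below uses the standard rate-1/2 (7,5) convolutional code over a noiseless channel (Hamming distance stands in for Euclidean distance): each SCW is brute-force decoded for every possible initial state, and the merging stage stitches SCWs together by matching boundary states at minimal total distance. The code choice, SCW length and channel are illustrative assumptions, not the paper's setup.

```python
from itertools import product

# Toy two-stage decoder for the (7,5) rate-1/2 non-recursive convolutional code.

def encode(bits, state=(0, 0)):
    """(7,5) encoder: generators 1+D+D^2 and 1+D^2; returns coded bits, end state."""
    out, (s1, s2) = [], state
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]
        s1, s2 = b, s1
    return out, (s1, s2)

def decode_scw(rx, n_bits):
    """Stage 1: for each assumed (start, end) state pair, the best bit pattern."""
    table = {}
    for s0 in product((0, 1), repeat=2):
        for bits in product((0, 1), repeat=n_bits):
            coded, s_end = encode(bits, s0)
            d = sum(a != b for a, b in zip(coded, rx))  # Hamming distance
            key = (s0, s_end)
            if key not in table or d < table[key][0]:
                table[key] = (d, list(bits))
    return table

def merge(tables):
    """Stage 2: chain SCWs so neighbouring boundary states agree, at min distance."""
    dp = {(0, 0): (0, [])}            # the overall trellis starts in state 00
    for table in tables:
        new_dp = {}
        for (s0, s_end), (d, bits) in table.items():
            if s0 in dp:
                cand = (dp[s0][0] + d, dp[s0][1] + bits)
                if s_end not in new_dp or cand[0] < new_dp[s_end][0]:
                    new_dp[s_end] = cand
        dp = new_dp
    return min(dp.values())[1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded, _ = encode(msg)
scws = [coded[:8], coded[8:]]         # cut the codeword into two 4-bit SCWs
decoded = merge([decode_scw(r, 4) for r in scws])
print(decoded == msg)
```

Stage 1 tables can be built in parallel, one per SCW, which is the source of the latency reduction the abstract describes.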
In this paper, we analyze the block error rate (BLER) and bit error rate (BER) of soft decode-and-forward (SDF) using distributed Turbo codes, which was recently proposed to mitigate the error propagation caused by decoding errors at the relay node. A union bound (UB) in fading channels is derived and compared with simulation results. In order to obtain tight bounds for the block-fading case, a limit-before-average technique is used. Furthermore, we extend our analysis to the space-time cooperation framework. The analysis and simulations show that the derived bound is very tight.
The filtered orthogonal frequency division multiplexing (F-OFDM) system is a promising waveform for 5G and beyond, enabling multi-service systems and spectrum-efficient network slicing. However, the performance of F-OFDM systems has not been systematically analyzed in the literature. In this paper, we first establish a mathematical model for the F-OFDM system and derive the conditions to achieve interference-free one-tap channel equalization. For practical cases (e.g., insufficient guard interval, asynchronous transmission, etc.), analytical expressions for inter-symbol interference (ISI), inter-carrier interference (ICI) and adjacent-carrier interference (ACI) are derived, where the last term is considered one of the key factors in asynchronous transmissions. Based on this framework, an optimal power compensation matrix is derived to give all of the subcarriers the same ergodic performance. Another key contribution of the paper is that we propose a multi-rate F-OFDM system to enable low-complexity, low-cost communication scenarios such as narrowband Internet of Things (IoT), at the cost of generating inter-subband interference (ISubBI). Low-computational-complexity algorithms are proposed to cancel the ISubBI. The results show that the derived analytical expressions match the simulation results, and the proposed ISubBI cancellation algorithms can significantly reduce the original F-OFDM complexity (by up to 100 times) without significant performance loss.
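The core F-OFDM operation, subband filtering of a plain OFDM signal, can be sketched with NumPy alone. The numerology, filter length and window below are illustrative choices, not the paper's design; the point is that the filtered signal leaks far less out-of-band (OOB) power than plain OFDM.

```python
import numpy as np

# Sketch of F-OFDM subband filtering: a windowed-sinc lowpass applied to a
# plain OFDM signal suppresses out-of-band emission. Parameters are illustrative.
rng = np.random.default_rng(0)
N, half, n_sym = 64, 12, 50   # FFT size, occupied subcarriers per side, symbols

def ofdm_signal():
    """Plain OFDM: QPSK on subcarriers -half..-1 and 1..half, no filtering."""
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    sym = []
    for _ in range(n_sym):
        X = np.zeros(N, complex)
        X[1:half + 1] = rng.choice(qpsk, half)
        X[N - half:] = rng.choice(qpsk, half)
        sym.append(np.fft.ifft(X))
    return np.concatenate(sym)

def oob_ratio(sig):
    """Fraction of signal power falling far out of band (0.30-0.45 cycles/sample)."""
    S = np.abs(np.fft.fft(sig)) ** 2
    M = len(sig)
    return S[int(0.30 * M):int(0.45 * M)].sum() / S.sum()

x = ofdm_signal()
fc = (half + 2) / N                        # cutoff just outside the subband
L = 129
n = np.arange(L) - (L - 1) / 2
h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(L)   # windowed-sinc subband filter
y = np.convolve(x, h, mode="same")         # the subband-filtering step of F-OFDM

print(oob_ratio(x), oob_ratio(y))          # filtered signal leaks far less power
```

The filter's passband covers the occupied subcarriers, so in-band power is essentially preserved while the OOB floor drops by tens of dB.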
This paper presents a novel approach to load balancing in ad hoc networks utilizing the properties of quantum game theory. This approach benefits from the instantaneous and information-less capability of entangled particles to synchronize the load-balancing strategies in ad hoc networks. The Quantum Load Balancing (QLB) algorithm proposed in this work is implemented on top of OLSR as the baseline routing protocol; its performance is analyzed against the baseline OLSR, and considerable gain is reported in some of the main QoS metrics such as delay and jitter. Furthermore, it is shown that the QLB algorithm provides a solid stability gain in terms of throughput, which stands as a proof of concept for the load-balancing properties of the proposed theory.
In this paper, a novel low-complexity and spectrally efficient modulation scheme for visible light communication (VLC) is proposed. Our new spatial quadrature modulation (SQM) is designed to efficiently adapt traditional complex modulation schemes to VLC, i.e. converting multi-level quadrature amplitude modulation (M-QAM) to real, unipolar symbols, making it suitable for transmission over light intensity. The proposed SQM relies on the spatial domain to convey the orthogonality and polarity of the complex signals, rather than mapping bits to symbols as in existing spatial modulation (SM) schemes. A detailed symbol error analysis of SQM is derived, and the derivation is validated with link-level simulation results. Using the simulation and derived results, we also provide a performance comparison between the proposed SQM and SM. Simulation results demonstrate that SQM can achieve a better symbol error rate (SER) and/or data rate performance compared to the state of the art in SM; for instance, an Eb/N0 gain of at least 5 dB at a SER of 10^-4.
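One plausible reading of the SQM mapping (an illustrative reconstruction from the abstract's description, not the paper's exact design) is: a complex M-QAM symbol is split into its real and imaginary parts, each part's magnitude is sent as a non-negative intensity, and both the sign and the I/Q identity are conveyed by which of four LEDs carries it.

```python
import numpy as np

# Hypothetical SQM-style mapping: complex symbol -> real, unipolar intensities
# on 4 spatially separate LEDs. The LED assignment is an illustrative assumption.
LEDS = {("I", +1): 0, ("I", -1): 1, ("Q", +1): 2, ("Q", -1): 3}

def sqm_map(s):
    """Map a complex symbol to non-negative intensities on 4 LEDs."""
    tx = np.zeros(4)
    tx[LEDS[("I", 1 if s.real >= 0 else -1)]] = abs(s.real)
    tx[LEDS[("Q", 1 if s.imag >= 0 else -1)]] = abs(s.imag)
    return tx

def sqm_demap(tx):
    """Recover the complex symbol from the active-LED pattern."""
    re = tx[0] - tx[1]   # LED 0 carries +Re, LED 1 carries -Re
    im = tx[2] - tx[3]   # LED 2 carries +Im, LED 3 carries -Im
    return complex(re, im)

s = complex(-3, 1)       # a 16-QAM constellation point
print(sqm_demap(sqm_map(s)))
```

All transmitted values are real and non-negative, as light intensity requires, while polarity travels in the spatial index rather than in the amplitude.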
The aim of this paper is to handle the multi-frequency synchronization problem inherent in orthogonal frequency-division multiple access (OFDMA) uplink communications, where the carrier frequency offset (CFO) for each user may be different and can hardly be compensated at the receiver side. Our major contribution lies in the development of a novel OFDM receiver that is resilient to unknown random CFOs thanks to the use of a CFO-compensator bank. Specifically, the whole CFO range is evenly divided into a set of sub-ranges, each supported by a dedicated CFO compensator. Given that the optimization of the CFO compensator is an NP-hard problem, a deep-learning approach is proposed to yield a good sub-optimal solution. It is shown that the proposed receiver is able to offer inter-carrier-interference-free performance for OFDMA systems operating at a wide range of SNRs.
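The compensator-bank concept can be sketched directly: the CFO range is divided into a uniform grid, each branch de-rotates the received OFDM symbol by its candidate offset, and the branch whose FFT output best matches known pilots wins. The paper selects the compensator with a learned model; exhaustive search over the bank is used here instead, and all parameters are illustrative.

```python
import numpy as np

# Sketch of a CFO-compensator bank for one OFDM symbol (branch selection by
# exhaustive pilot matching; parameters are illustrative assumptions).
rng = np.random.default_rng(1)
N = 64
pilots = rng.choice([1.0, -1.0], N)       # known BPSK pilot symbol
n = np.arange(N)
eps_true = 0.23                           # CFO in units of subcarrier spacing

# Received time-domain symbol, rotated by the unknown CFO:
rx = np.fft.ifft(pilots) * np.exp(2j * np.pi * eps_true * n / N)

candidates = np.arange(-0.5, 0.5, 0.05)   # one compensator per CFO sub-range
errors = []
for eps in candidates:
    comp = rx * np.exp(-2j * np.pi * eps * n / N)   # branch de-rotation
    errors.append(np.sum(np.abs(np.fft.fft(comp) - pilots) ** 2))
eps_hat = candidates[int(np.argmin(errors))]
print(eps_hat)   # nearest grid point to the true CFO
```

The residual offset after branch selection is bounded by half the sub-range width, which is what limits the remaining inter-carrier interference.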
In time-division-duplexing (TDD) massive multiple-input multiple-output (MIMO) systems, channel reciprocity is exploited to overcome the overwhelming pilot training and feedback overhead. However, in practical scenarios, the imperfections in channel reciprocity, mainly caused by radio-frequency mismatches among the antennas at the base station side, can significantly degrade the system performance and might become a performance-limiting factor. In order to compensate for these imperfections, we present and investigate two new calibration schemes for TDD-based massive multi-user MIMO systems, namely, relative calibration and inverse calibration. In particular, the design of the proposed inverse calibration takes into account the compound effect of channel reciprocity error and channel estimation error. We further derive closed-form expressions for the ergodic sum rate, assuming maximum ratio transmission with the compound effect of both errors. We demonstrate that the inverse calibration scheme outperforms the traditional relative calibration scheme. The proposed analytical results are also verified by simulated illustrations.
This paper presents a novel mechanism that increases mobile terminal battery performance. It supports the cell reselection algorithm, which decides which cell a user equipment (UE) camps on when in idle mode (i.e., when there is no active radio connection with the mobile network). The study is based on real 3G UTRA network measurements. The authors propose a technique to reduce UE current consumption in idle mode based on dynamic optimization of the Sintrasearch neighbour cell measurement threshold. The system analysis covers both UTRA and E-UTRA - Long Term Evolution (LTE) - technologies.
Due to dynamic wireless network conditions and heterogeneous mobile web content complexities, web-based content services in mobile network environments always suffer from long loading times. The new HTTP/2.0 protocol adopts only a single TCP connection, but recent research reveals that in real mobile environments, web downloading over a single connection experiences long idle times and low bandwidth utilization, in particular with dynamic network conditions and varying web page characteristics. In this paper, by leveraging the Mobile Edge Computing (MEC) technique, we present the framework of Mobile Edge Hint (MEH) to enhance mobile web downloading performance. Specifically, the mobile edge collects and caches the meta-data of frequently visited web pages and keeps monitoring the network conditions. Upon receiving requests for these popular web pages, the MEC server hints back to the HTTP/2.0 clients the optimized number of TCP connections that should be established for downloading the content. From test results on a real LTE testbed equipped with MEH, we observed up to a 34.5% reduction in loading time, and in the median case the improvement is 20.5%, compared to the plain over-the-top (OTT) HTTP/2.0 protocol.
Millimeter wave (mmWave) systems with effective beamforming capability play a key role in fulfilling the high data-rate demands of current and future wireless technologies. Hybrid analog-digital beamformers have been identified as a cost-effective and energy-efficient solution for deploying such systems. Most of the existing hybrid beamforming architectures rely on a sub-connected phase shifter network with a large number of antennas. Such approaches, however, cannot fully exploit the advantages of large arrays. On the other hand, current fully-connected beamformers accommodate only a small number of antennas, which substantially limits their beamforming capabilities. In this paper, we present a mmWave hybrid beamformer testbed with a fully-connected network of phase shifters and adjustable attenuators and a large number of antenna elements. To our knowledge, this is the first platform that connects two RF inputs from the baseband to a 16 × 8 antenna array, and it operates at 26 GHz with a 2 GHz bandwidth. It provides a wide scanning range of 60° and the flexibility to control both the phase and the amplitude of the signals between each of the RF chains and antennas. This beamforming platform can be used in both short- and long-range communications, with linear equivalent isotropically radiated power (EIRP) variation between 10 dBm and 60 dBm. In this paper, we present the design, calibration procedures and evaluation of this complex system, as well as a discussion of the critical factors to consider for practical implementation.
The choice of a suitable waveform is a key factor in the design of the 5G physical layer. New waveforms must be capable of supporting a greater density of users and higher data throughput, and should provide more efficient utilization of the available spectrum to support the 5G vision of "everything everywhere and always connected" with a "perception of infinite capacity". Although orthogonal frequency division multiplexing (OFDM) has been adopted as the transmission waveform in wired and wireless systems for years, it has several limitations that make it unsuitable for the future 5G air interface. In this chapter, we investigate and analyse alternative waveforms that are promising candidate solutions to address the challenges of the diverse applications and scenarios of 5G.
Along with spectral efficiency (SE), energy efficiency (EE) is becoming one of the main performance evaluation criteria in communications. These two conflicting criteria can be linked through their trade-off. As far as MIMO is concerned, a closed-form approximation of the EE-SE trade-off has recently been proposed and has proved useful for analyzing the impact of using multiple antennas on the EE. In this paper, we use this closed-form approximation for assessing and comparing the EE gain of MIMO over SISO systems when different power consumption models (PCMs) are considered at the transmitter. The EE of a communication system is closely related to its power consumption. In theory, only the transmit power is considered as consumed power, whereas in a practical setting the consumed power is the sum of two terms: the fixed consumed power, which accounts for cooling, processing, etc., and the variable consumed power, which varies as a function of the transmit power. Our analysis unveils the large mismatch between the theoretical and practical EE gain of MIMO over SISO systems: in theory, the EE gain increases both with the SE and the number of antennas, and hence the potential of MIMO for EE improvement is very large in comparison with SISO; on the contrary, the EE gain is small and decreases as the number of transmit antennas increases when realistic PCMs are considered.
Network scenarios beyond 3G assume the cooperation of operators with wireless access networks of different technologies in order to improve scalability and provide enhanced services to their mobile customers. While the selection of an optimised delivery path in such scenarios with multiple access networks is already a challenging task for unicast delivery, the problem becomes more severe for multicast services, where a potentially large group of heterogeneous receivers has to be served simultaneously via shared resources. In this paper we study the problem of selecting the optimal bearer paths for multicast services with groups of heterogeneous receivers in wireless networks with overlapping coverage. We propose an algorithm for bearer selection with different optimisation goals, demonstrating the existing tradeoff between user preference and resource efficiency.
This paper investigates the adaptive implementation of the linear minimum mean square error (MMSE) detector in code division multiple access (CDMA). Drawing on linear algebra, Cimmino's reflection method is proposed as a possible way of achieving the MMSE solution blindly. Simulation results indicate that the proposed method converges four times faster than the blind least mean squares (LMS) algorithm and has roughly the same convergence performance as the blind recursive least squares (RLS) algorithm. Moreover, the proposed algorithm is numerically more stable than the RLS algorithm and also exhibits parallelism suited to pipelined implementation.
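Cimmino's method itself is a classical row-projection iteration for a linear system A w = b: each step averages the projections of the current iterate onto the hyperplanes defined by the rows. The CDMA-specific construction of A (a covariance matrix) and b is not reproduced here; a small symmetric positive-definite system illustrates the iteration.

```python
import numpy as np

# Cimmino's averaged-projection method for A w = b, the linear-algebra
# primitive behind the blind MMSE adaptation discussed above.
def cimmino(A, b, iters=200, relax=1.0):
    """w_{k+1} = w_k + (relax/m) * sum_i ((b_i - a_i.w_k)/||a_i||^2) a_i."""
    m, _ = A.shape
    w = np.zeros(A.shape[1])
    row_norm2 = np.sum(A ** 2, axis=1)
    for _ in range(iters):
        resid = (b - A @ w) / row_norm2   # per-row normalized residuals
        w = w + (relax / m) * (A.T @ resid)
    return w

A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 3.0])
w = cimmino(A, b)
print(w)   # converges to the exact solution [1, 1]
```

Each row's projection is independent of the others within an iteration, which is the parallelism the abstract highlights for pipelined implementation.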
This paper proposes an analytical model for the throughput of the Enhanced Distributed Channel Access (EDCA) mechanism in the IEEE 802.11p MAC sub-layer. Features of EDCA such as the different Contention Windows (CW) and Arbitration Interframe Space (AIFS) for each Access Category (AC), as well as internal collisions, are taken into account. The analytical model is suitable for both the basic access mode and the Request-To-Send/Clear-To-Send (RTS/CTS) access mode. The proposed analytical model is validated against simulation results to demonstrate its accuracy.
Previous work on cooperative localization in cellular networks usually assumes that a centralized processor (CP) is available for location estimation. This paper considers cooperative localization in a distributed base station (BS) scenario, where there is no CP and the distributed BSs are responsible for location estimation. In this scenario, Global Positioning System (GPS)-enabled mobile terminals (MTs), i.e., located MTs, are employed as reference nodes. Several located MTs can then help to find the location of an un-located MT by estimating their distances to the un-located MT using received-signal-strength techniques. Two localization approaches are proposed: the first requires only one BS to collect all the assistance information and estimate the location; the second distributes the location estimation task across several BSs. The communication overhead between distributed BSs is investigated for both approaches. Moreover, by taking into account the effect of imperfect location knowledge of the located MTs, the accuracy limits of both approaches are derived. The simulation results show that, compared with the first approach, the second approach can reduce the communication overhead between distributed BSs at the price of accuracy.
In this paper, we propose a data cell outage detection scheme for heterogeneous networks (HetNets) with separated control and data planes. We consider a HetNet where the Control Network Layer (CNL) provides ubiquitous network access while the Data Network Layer (DNL) provides high-data-rate transmission to low-mobility User Terminals (UTs). Furthermore, network functionalities such as paging and system information broadcast are provided by the CNL to all active UTs; hence, the CNL is aware of all active UT associations. Based on this observation, we divide our data cell outage detection scheme into a trigger phase and a detection phase. In the former, the CNL monitors all UT-to-data-base-station associations and triggers detection when irregularities occur in the associations, while the latter utilizes a grey prediction model on the UTs' reference signal received power (RSRP) statistics to determine the existence of an outage. The simulation results indicate that the proposed scheme can detect the data cell outage problem in a reliable manner.
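Grey prediction on short time series is typically done with the GM(1,1) model, which fits an exponential trend to an accumulated version of the series. The sketch below assumes the standard GM(1,1) formulation rather than the paper's exact detector; the function name is illustrative:

```python
import math

def gm11_forecast(x0, steps=1):
    """One-variable grey model GM(1,1): fit on a short positive series x0
    and forecast `steps` values ahead. Standard textbook formulation;
    illustrative of the grey prediction step, not the paper's detector."""
    n = len(x0)
    # accumulated generating operation (AGO): running sums of the raw series
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    # background values and least-squares fit of the grey equation
    # x0(k) = -a * z(k) + b, where z(k) is the mean of consecutive x1 values
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    m = n - 1
    szz = sum(v * v for v in z)
    sz, sy = sum(z), sum(y)
    szy = sum(zv * yv for zv, yv in zip(z, y))
    det = szz * m - sz * sz
    a = -(m * szy - sz * sy) / det          # development coefficient
    b = (szz * sy - sz * szy) / det         # grey input
    # time-response function on the accumulated series, then inverse AGO
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]
```

In an outage detector of this kind, a large gap between the forecast RSRP statistic and the observed one would flag a potential data cell outage.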
Energy efficiency (EE) is a key figure of merit for designing the next generation of communication systems. Meanwhile, relay-based cooperative communication, through machine-to-machine and other related technologies, is also playing an important part in the development of these systems. This paper designs an energy efficient precoding method for optimizing the EE/energy consumption of two-way multi-input multi-output (MIMO)-amplify-and-forward (AF) relay systems by using pseudo-convexity analysis to design EE-optimal precoding matrices. More precisely, we derive an EE-optimal source precoding matrix in closed-form, design a numerical approach for obtaining an optimal relay precoding matrix, prove the optimality of these matrices, when treated separately, and provide low-complexity bespoke algorithms to generate them. These matrices are then jointly optimized through an alternating optimization process that is proved to be systematically convergent. Performance evaluation indicates that our method can be globally optimal in some scenarios and that it is significantly more energy efficient (i.e. up to 60% more energy efficient) than existing EE-based one-way or two-way MIMO-AF precoding methods.
Data discovery for sensor data in an M2M network uses probabilistic models, such as Gaussian Mixture Models (GMMs), to represent attributes of the sensor data. The parameters of the probabilistic models can be provided to a discovery server (DS) that responds to queries concerning the sensor data. Since the parameters are compressed compared to the attributes of the sensor data itself, this simplifies the distribution of discovery data. A hierarchical arrangement of discovery servers can also be used, with multiple levels in which higher-level discovery servers use more generic probabilistic models.
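To illustrate why the compressed GMM parameters suffice for discovery, a server holding only the weights, means and standard deviations of a one-dimensional mixture can answer a range query analytically, without the raw sensor data. This is a hypothetical sketch; the actual query interface is not specified in the summary above:

```python
import math

def gmm_prob_in_range(weights, means, stds, lo, hi):
    """Probability that a value drawn from a 1-D Gaussian mixture lies in
    [lo, hi], computed purely from the compressed model parameters.
    Illustrative of parameter-only discovery queries."""
    def cdf(x, mu, sigma):
        # standard normal CDF via the error function
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return sum(w * (cdf(hi, mu, s) - cdf(lo, mu, s))
               for w, mu, s in zip(weights, means, stds))
```

A discovery server could, for example, rank sensors by how likely their readings fall in a queried range, using only these few parameters per sensor.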
In this paper, metamaterial loading on loop and open-loop microstrip filters is investigated, where both rectangular loop and open-loop structures are considered. Spiral resonators are loaded on the four sides of the square loop, resulting in greater size reduction compared to conventional split ring resonators with identical structural parameters. It is shown that, for both proposed filters, metamaterial loading provides size reduction due to the lower resonant frequency of the spiral resonators. The structures are analytically investigated through the transmission matrix method. In the designed rectangular loop filters, there are two nulls on both sides of the pass-band, which provide high out-of-band rejection and are preserved in the corresponding miniaturized metamaterial-loaded structures. However, open-loop resonators provide lower resonant frequencies and thus more compact filters. The proposed filter is fabricated and tested, and the measured results are in good agreement with the simulated ones.
Energy savings are becoming a global trend, hence the importance of energy efficiency (EE) as an alternative performance evaluation metric. This paper proposes an EE-based resource allocation method for the broadcast channel (BC), where a linear power model is used to characterize the power consumed at the base station (BS). Having formulated our EE-based optimization problem and objective function, we utilize standard convex optimization techniques to show the concavity of the latter, and thus the existence of a unique globally optimal energy-efficient rate and power allocation. Our EE-based resource allocation framework is also extended to incorporate fairness and provide a minimum user satisfaction in terms of spectral efficiency (SE). We then derive the generic equation of the EE contours and use them to gain insight into the EE-SE trade-off over the BC. The performance of the aforementioned resource allocation schemes is compared for different metrics against the number of users and cell radius. Results indicate that the highest EE improvement is achieved by the unconstrained optimization scheme, which obtains it by significantly reducing the total transmit power. Moreover, the network EE is shown to increase with the number of users and decrease as the cell radius increases.
In this paper, we investigate the design of a radio resource control (RRC) protocol in the framework of the Long-Term Evolution (LTE) of the 3rd Generation Partnership Project, regarding the provision of low-cost/complexity and low-energy-consumption machine-type communication (MTC), which is an enabling technology for the emerging paradigm of the Internet of Things. Due to the nature and envisaged battery-operated long-life operation of MTC devices without human intervention, energy efficiency becomes extremely important. This paper elaborates on the state-of-the-art approaches to addressing the challenge of low-energy-consumption operation of MTC devices, and proposes a novel RRC protocol design, namely, semi-persistent RRC state transition (SPRST), where the RRC state transition is no longer triggered by incoming traffic but depends on pre-determined parameters based on the traffic pattern obtained by exploiting the network memory. The proposed RRC protocol can easily co-exist with the legacy RRC protocol in LTE. The design criterion of SPRST is derived and the signalling procedure is investigated accordingly. Simulation results show that SPRST significantly reduces both the energy consumption and the signalling overhead while at the same time guaranteeing the quality of service requirements.
The concept of Ultra Dense Networks (UDNs) is often seen as a key enabler of the next generation of mobile networks. The massive number of BSs in UDNs represents a deployment challenge, and there is a need to understand the performance behaviour and benefit of a network when BS locations are carefully selected. This can be of particular importance to network operators who deploy their networks in large indoor open spaces such as exhibition halls, airports or train stations, where the locations of BSs often follow a regular pattern. In this paper we study the downlink performance of UDNs for regular networks produced by careful BS site selection and compare it to that of irregular networks with random BS placement. We first develop an analytical model describing the performance of regular networks, which exhibit many performance behaviours similar to those of the irregular networks widely studied in the literature. We also show the potential performance gain resulting from proper site selection. Our analysis further yields the interesting finding that even in over-densified regular networks, non-negligible system performance can still be achieved.
A Ka-band inset-fed microstrip patch linear antenna array is presented for fifth generation (5G) applications in different countries. The bandwidth is enhanced by stacking parasitic patches on top of each inset-fed patch. The array employs 16 elements in a new H-plane configuration. The radiating patches and their feed lines are arranged in an alternating out-of-phase 180-degree rotating sequence to decrease the mutual coupling and improve the radiation pattern symmetry. A 24.4% measured bandwidth (24.35 to 31.13 GHz) is achieved with -15 dB reflection coefficients and 20 dB mutual coupling between the elements. With uniform amplitude distribution, a maximum broadside gain of 19.88 dBi is achieved. Scanning the main beam to 49.5° from broadside achieved 18.7 dBi gain with a -12.1 dB sidelobe level (SLL). These characteristics are in good agreement with the simulations, making the antenna a good candidate for 5G applications.
Future wireless local area networks (WLANs) are expected to serve thousands of users in diverse environments. To address the new challenges that WLANs will face, and to overcome the limitations introduced by previous IEEE standards, a new IEEE 802.11 amendment is under development. IEEE 802.11ax aims to enhance spectrum efficiency in dense deployments, thereby improving system throughput. Dynamic Sensitivity Control (DSC) and BSS Color are the main schemes under consideration in IEEE 802.11ax for improving spectrum efficiency. In this paper, we evaluate the DSC and BSS Color schemes when physical layer capture (PLC) is modelled. PLC refers to the case in which a receiver successfully decodes the stronger frame when a collision occurs. It is shown that PLC could potentially lead to fairness issues and higher throughput in specific cases. We study PLC in small- and large-scale scenarios, and show that PLC could also improve fairness in specific scenarios.
Coping with the extreme growth in the number of users is one of the main challenges for future IEEE 802.11 networks. The high interference level, along with the conventional standardized carrier sensing approaches, will degrade network performance. To tackle these challenges, the Dynamic Sensitivity Control (DSC) and BSS Color schemes are considered in IEEE 802.11ax and IEEE 802.11ah, respectively. The main purpose of these schemes is to enhance network throughput and improve spectrum efficiency in dense networks. In this paper, we evaluate the DSC and BSS Color schemes, along with the PARTIAL-AID (PAID) feature introduced in IEEE 802.11ac, in terms of throughput and fairness. We also explore the performance when the aforementioned techniques are combined. The simulations show a significant gain in total throughput when these techniques are applied.
5G New Radio (NR) Release 15 was specified in June 2018. It introduces numerous changes and potential improvements for physical layer data transmissions, although only point-to-point (PTP) communications are considered. In order to use physical data channels such as the Physical Downlink Shared Channel (PDSCH), it is essential to guarantee successful transmission of control information via the Physical Downlink Control Channel (PDCCH). Taking into account these two aspects, in this paper, we first analyze the PDCCH processing chain in NR PTP as well as in the state-of-the-art Long Term Evolution (LTE) point-to-multipoint (PTM) solution, i.e., evolved Multimedia Broadcast Multicast Service (eMBMS). Then, via link-level simulations, we compare the performance of the two technologies, observing the Bit/Block Error Rate (BER/BLER) in various scenarios. The objective is to identify the performance gap brought by physical layer changes in the NR PDCCH as well as to provide insightful guidelines on control channel configuration towards NR PTM scenarios.
Physical layer security (PLS) technologies have attracted much attention in recent years for their potential to provide information-theoretically secure communications. Artificial Noise (AN)-aided transmission is considered one of the most practicable PLS technologies, as it is claimed to realize secure transmission independent of the eavesdropper's channel status. In this paper, we reveal that AN transmission does in fact depend on the eavesdropper's channel condition, by introducing a proposed attack method based on a supervised-learning algorithm which utilizes the modulation scheme, available from known packet preamble and/or header information, as the supervisory signal of the training data. Numerical simulation results, compared with conventional clustering methods, show that our proposed method improves the success probability of the attack from 4.8% to at most 95.8% for QPSK modulation. This implies that transmissions to cell-edge receivers using low-order modulation can be cracked if the eavesdropper's channel is good enough, for example when it employs more antennas than the transmitter. This work brings new insights into the effectiveness of AN schemes and provides useful guidance for the design of robust PLS techniques for practical wireless systems.
This paper presents a new multi-channel MAC protocol for Vehicular Ad Hoc Networks, namely Asynchronous Multi-Channel MAC (AMCMAC). AMCMAC supports simultaneous transmissions on different service channels, while allowing other nodes to make rendezvous with their provider/receiver or broadcast emergency messages on the control channel. We compare the performance of the proposed protocol with that of IEEE 1609.4 and the Asynchronous Multichannel Coordination Protocol (AMCP), in terms of throughput on control and service channels, channel utilization, and the penetration rate of successfully broadcast emergency messages. We demonstrate that AMCMAC outperforms IEEE 1609.4 and AMCP in terms of system throughput by increasing the utilization of the control and service channels. In addition, AMCMAC mitigates both the multi-channel hidden terminal and missing receiver problems which occur in asynchronous multi-channel MAC protocols. © 2011 IEEE.
Many methods have been applied previously to improve the fairness of wireless communication systems. In this paper, we propose using hybrid schemes, where more than one transmission scheme is used in a single system, to achieve this objective. These schemes consist of cooperative transmission schemes, namely maximal ratio transmission and interference alignment, and non-cooperative schemes, namely orthogonal and non-orthogonal schemes, used alongside one another and in combination in the same system to improve fairness. We provide different weight calculation methods to vary the outcome of the fairness problem. We present the solution of the radio resource allocation problem for the transmission schemes used. Finally, simulation results are provided to show the fairness achieved, in terms of Jain's fairness index, by applying the proposed hybrid schemes and the different weight calculation methods at different inter-site distances.
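Jain's fairness index, the evaluation metric used above, is straightforward to compute from per-user allocations:

```python
def jain_fairness(x):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Returns 1.0 when all users receive equal allocations and 1/n when a
    single user receives everything."""
    n = len(x)
    s = sum(x)
    s2 = sum(v * v for v in x)
    return (s * s) / (n * s2)
```

For example, four users with equal rates score 1.0, while an allocation giving everything to one of four users scores 0.25.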
Energy efficiency (EE) is growing in importance as a key performance indicator for designing the next generation of communication systems. Equally, resource allocation is an effective approach for improving the performance of communication systems. In this paper, we propose a low-complexity energy-efficient resource allocation method for the orthogonal multi-antenna multi-carrier channel. We derive explicit formulations of the optimal rate and energy-per-bit consumption for the per-antenna transmit-power-constrained and per-antenna rate-constrained EE optimization problems, as well as provide a low-complexity algorithm for optimally allocating resources over the orthogonal multi-antenna multi-carrier channel. We then compare our approach against a classic optimization tool in terms of energy efficiency as well as complexity, and the results indicate the optimality and low complexity of our approach. Comparing the EE-optimal allocation with spectral-efficiency-optimal and power-optimal allocation approaches over the orthogonal multi-antenna multi-carrier channel indicates that the former provides a good trade-off between power consumption and sum-rate performance.
Spectrum sensing is one of the key enabling techniques for advanced radio technologies such as cognitive radios and ALOHA. This paper presents a novel non-cooperative spectrum sensing approach that can achieve a good trade-off between latency, reliability and computational complexity. Our main idea is to exploit the first-order cyclostationarity of the primary user's signal to reduce the noise-uncertainty problem inherent in the conventional energy detection approach. It is shown that the proposed approach is suitable for detecting the primary user's activity in the interweave paradigm of cognitive spectrum sharing, while the active primary user is periodically sending a training sequence. Computer simulations are carried out for the typical IEEE 802.11g system. It is observed that the proposed approach outperforms both the energy detection and the second-order cyclostationarity approaches when the observation period is more than 10 frames, corresponding to 0.56 ms. ©2010 IEEE.
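The first-order-cyclostationarity idea can be sketched as follows (illustrative only, not the paper's detector): a periodically repeated training sequence makes the received signal's mean periodic, so synchronously averaging over the known period suppresses zero-mean noise before an energy statistic is formed, easing the noise-uncertainty problem of plain energy detection:

```python
def periodic_mean_statistic(samples, period):
    """Detection statistic exploiting first-order cyclostationarity.

    Averages the received samples synchronously over the known repetition
    period, then measures the power of the estimated periodic mean. Noise
    (zero-mean) is attenuated by the number of averaged periods, whereas a
    deterministic repeated training sequence is not. Illustrative sketch."""
    n_periods = len(samples) // period
    stat = 0.0
    for k in range(period):
        # synchronous average of the k-th sample position across periods
        acc = sum(samples[p * period + k] for p in range(n_periods))
        stat += (acc / n_periods) ** 2
    return stat / period  # power of the estimated periodic mean
```

When a repeated training sequence is present, this statistic stays near the sequence's mean power, while for noise alone it shrinks with the number of averaged periods.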
Energy efficiency has become an important aspect of wireless communication, both economically and environmentally. This letter investigates the energy efficiency of downlink AWGN channels employing multiple decoding policies. The overall energy efficiency of the system is based on the bits-per-joule metric, where energy efficiency contours are used to locate the optimal operating points based on the system requirements. Our novel approach uses a linear power model to define the total power consumed at the base station, encompassing the circuit and processing power as well as the amplifier efficiency, and ensures that the best energy efficiency value can be achieved whilst satisfying other system targets such as QoS and rate-fairness.
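The bits-per-joule metric under a linear power model can be sketched as follows; the parameter values and function names are illustrative, and the letter's exact model details may differ:

```python
import math

def energy_efficiency(p_tx, bandwidth, gain, noise_psd, p_circuit, amp_eff):
    """Bits-per-joule EE with a linear base-station power model:
    achievable rate divided by (circuit power + transmit power / amplifier
    efficiency). Illustrative sketch."""
    rate = bandwidth * math.log2(1.0 + p_tx * gain / (noise_psd * bandwidth))
    return rate / (p_circuit + p_tx / amp_eff)

def ee_optimal_power(p_grid, **kw):
    """Coarse grid search for the EE-optimal transmit power (the EE metric
    above is quasi-concave in the transmit power)."""
    return max(p_grid, key=lambda p: energy_efficiency(p, **kw))
```

With a non-zero circuit power the optimum sits at a finite transmit power: transmitting harder eventually adds joules faster than it adds bits.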
This paper considers cooperative localization in cellular networks. In this scenario, several located mobile terminals (MTs) are employed as reference nodes to find the location of an un-located MT. The located MTs send training sequences in the uplink, and the un-located MT then performs distance estimation using received signal strength techniques. The localization accuracy of the un-located MT is characterized in terms of the squared position error bound (SPEB). By taking into account the imperfect a priori location knowledge of the located MTs, the SPEB is derived in closed form. The closed form indicates that the effect of imperfect location knowledge on the SPEB is equivalent to an increase in the variance of the distance estimation. Moreover, based on the obtained closed form, an MT selection scheme is proposed to decrease the number of located MTs sending training sequences, thus reducing the training overhead for localization. The simulation results show that the proposed scheme can reduce the training overhead at the cost of accuracy, and that, for the same training overhead, the accuracy of the proposed scheme is better than that of random selection. © 2011 IEEE.
One of the major challenges of cellular-network-based localization techniques is the lack of hearability between mobile terminals (MTs) and base stations (BSs), which limits the number of available anchors. In order to solve the hearability problem, previous work assumes that some of the MTs have location information via the Global Positioning System (GPS). These located MTs can be utilized to find the location of an un-located MT without a GPS receiver. However, performance is still limited by the number of located MTs available for cooperation. This paper considers a practical scenario in which hearability is only possible between an MT and its home BS. Only one located MT, together with the home BS, is utilized to find the location of the un-located MT. A hybrid cooperative localization approach is proposed that combines time-of-arrival and received-signal-strength-based fingerprinting techniques. Simulations show that the proposed hybrid approach outperforms stand-alone time-of-arrival or received-signal-strength-based fingerprinting techniques in the considered scenario. It is also found that the proposed approach offers better accuracy with a larger distance between the located MT and the home BS. © 2011 IEEE.
The conventional transmit diversity schemes, such as the Alamouti scheme, use several radio frequency (RF) chains to transmit signals simultaneously from multiple antennas. In this paper, we propose a low-complexity repetition time-switched transmit diversity (RTSTD) algorithm, which employs only one RF chain together with a low-complexity switch for transmission. A mathematical model is developed to assess the performance of the proposed scheme. In order to make it applicable to practical systems, we also investigate its joint application with orthogonal frequency division multiplexing (OFDM) and channel coding techniques to combat frequency-selective fading. © 2011 IEEE.
This paper presents a novel frequency-domain energy detection scheme based on extreme statistics for robust sensing of OFDM sources in the low-SNR region. The basic idea is to exploit the frequency diversity gain offered by frequency-selective channels with the aid of extreme statistics of the differential energy spectral density (ESD). Thanks to the differential stage, the proposed spectrum sensing scheme is robust to the noise-uncertainty problem. The low computational complexity of the proposed technique makes it suitable even for machine-to-machine sensing. Analytical performance analysis is carried out in terms of two classical metrics, i.e. probability of detection and probability of false alarm. Computer simulations further show that the proposed scheme outperforms the energy detection and second-order cyclostationarity based approaches by up to 10 dB in the low-SNR range. © 2011 IEEE.
IEEE 802.11ax Spatial Reuse (SR) is a new category in the IEEE 802.11 family, aiming to improve spectrum efficiency and network performance in dense deployments. The main and perhaps the only SR technique in that amendment is the Basic Service Set (BSS) Color. It aims to increase the number of concurrent transmissions in a specific area, based on a newly defined Overlapping BSS/Preamble-Detection (OBSS/PD) threshold and the Received Signal Strength Indication (RSSI) from Overlapping BSSs (OBSSs). In this paper, we propose a Control OBSS/PD Sensitivity Threshold (COST) algorithm for adjusting the OBSS/PD threshold based on the interference level and the RSSI from the associated recipient(s). In contrast to the Dynamic Sensitivity Control (DSC) algorithm that was proposed for setting the OBSS/PD threshold, COST is fully aware of any changes in OBSSs and can be applied to any IEEE 802.11ax node. Simulation results in various scenarios show a clear performance improvement, with up to 57% gain in throughput over a conservative fixed OBSS/PD threshold for the legacy BSS Color scheme and over DSC.
In broadcast wireless networks, the options for reliable delivery are limited when there is no return link, or when a return link is not deemed cost-efficient due to the system resource requirements it introduces. In this paper we focus on two reliable transport mechanisms that are relevant for the non-real-time delivery of files: packet-level Forward Error Correction (FEC) and data carousels. The two techniques perform error recovery at the expense of redundant data transmission and content repetition, respectively. We demonstrate that their joint design may lead to significant resource savings.
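A minimal illustration of packet-level FEC without a return link: one XOR parity packet per source block lets a receiver rebuild any single lost packet. Deployed systems use far stronger codes (e.g. Raptor codes); this sketch only illustrates the redundancy-for-reliability trade mentioned above:

```python
def xor_parity(packets):
    """Packet-level FEC: build one XOR parity packet over a source block of
    equal-length packets. Any single lost packet of the block can then be
    recovered without a return link. Illustrative sketch."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Recover the single missing packet of a block: XOR of the surviving
    packets with the parity cancels the survivors, leaving the lost one."""
    return xor_parity(list(received) + [parity])
```

A data carousel complements this by simply retransmitting the block cyclically, so a receiver that loses more than one packet per block can catch the missing data on a later pass.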
The IEEE 802.15.4 protocol is widely adopted as the MAC sub-layer standard for wireless sensor networks, with two available modes: beacon-enabled and non-beacon-enabled. The non-beacon-enabled mode is simpler and does not require time synchronisation; however, it lacks an explicit energy saving mechanism that is crucial for its deployment on energy-constrained sensors. This paper proposes a distributed sleep mechanism for non-beacon-enabled IEEE 802.15.4 networks which provides energy savings to energy-limited nodes. The proposed mechanism introduces a sleep state that follows each successful packet transmission. Besides energy savings, the mechanism produces a traffic shaping effect that reduces the overall contention in the network, effectively improving packet delivery ratio. Based on the traffic arrival rate and the level of network contention, a node can adjust its sleep period to achieve the highest packet delivery ratio. Performance results obtained by ns-3 simulations validate these improvements as compared to the IEEE 802.15.4 standard.
Due to the rise of energy efficiency (EE) as a system performance evaluation criterion, the EE-spectral efficiency (SE) trade-off is becoming a key tool for gaining insight into how to efficiently design future communication systems. As far as the single-input single-output (SISO) Rayleigh fading channel is concerned, the EE-SE trade-off has previously been accurately approximated, but only at low SE. In this paper, we propose a novel and more generic closed-form approximation (CFA) of this EE-SE trade-off which is very accurate for any SE value. We compare our CFA with two existing CFAs and show its greater accuracy over a wider range of SE. As an application, we use our CFA to study the variation of the EE-SE trade-off when a realistic power model is assumed, and to compare the energy consumption of SISO against a 2x2 multi-input multi-output (MIMO) system over the Rayleigh fading channel.
This paper investigates an intra-cell overlay opportunistic spectrum sharing scheme employing 1-bit feedback beamforming. The scenario of interest is one in which a base station broadcasts independent signal messages to two relay stations (RS-1 and RS-2). RS-2 decodes the signal messages in sub-cell 2 and attempts to share the spectrum of sub-cell 1 for its own transmission. For this reason, RS-2 makes a deal with RS-1 in sub-cell 1 to help RS-1 send its signal messages. As presented in the paper, by employing 1-bit feedback transmit beamforming, RS-2 can further improve RS-1's achievable rate and automatically eliminate the interference from RS-2 to sub-cell 1. Meanwhile, the achievable sum-rate upper bound of RS-2 is also analyzed. © VDE VERLAG GMBH.
The random access (RA) mechanism of Long Term Evolution (LTE) networks is prone to congestion when a large number of devices attempt RA simultaneously, due to the limited set of preambles. If each RA attempt is made by transmitting multiple consecutive preambles (codewords) picked from a subset of preambles, as previously proposed, collision probability can be significantly reduced. Selection of an optimal preamble set size can maximise RA success probability in the presence of a trade-off between codeword ambiguity and code collision probability, depending on load conditions. In light of this finding, this paper provides an adaptive algorithm, called Multipreamble RA, to dynamically determine the preamble set size under different load conditions, using only the minimum necessary uplink resources. This provides high RA success probability, and makes it possible to isolate different network service classes by separating the whole preamble set into subsets, each associated with a different service class; a technique that cannot be applied effectively in LTE due to increased collision probability. This motivates the idea that preamble allocation could be implemented as a virtual network function, called vPreamble, as part of a radio access network (RAN) slice. The parameters of a vPreamble instance can be configured and modified according to the load conditions of the service class it is associated with.
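The code-collision side of the trade-off described above admits a simple closed form (illustrative only; it deliberately ignores codeword ambiguity, the competing effect the paper balances against): with codewords of `codeword_len` consecutive preambles drawn uniformly from a subset of `subset_size` preambles, a tagged device succeeds on this count when no other device picks the same codeword:

```python
def code_collision_free_prob(num_devices, subset_size, codeword_len):
    """Probability that a tagged device's codeword (codeword_len preambles
    drawn uniformly from a subset of subset_size preambles) is not also
    picked by any of the other contending devices. Illustrative model
    assuming independent uniform codeword choices."""
    codeword_space = subset_size ** codeword_len
    return (1.0 - 1.0 / codeword_space) ** (num_devices - 1)
```

Lengthening the codeword enlarges the codeword space multiplicatively, which is why even a small preamble subset can keep code collisions rare under moderate load.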
Institute of Electrical and Electronics Engineers (IEEE) 802.11ax Spatial Reuse (SR) is a new feature in the IEEE 802.11 family, aiming to improve spectrum efficiency and network performance in dense deployments. The main and perhaps the only SR technique in that amendment is the Basic Service Set (BSS) Color. It aims to increase the number of concurrent transmissions in a specific area, based on a newly defined Overlapping BSS/Preamble-Detection threshold. In this paper, we overview the latest developments introduced in IEEE 802.11ax for SR and propose a rate control algorithm developed to exploit the BSS Color scheme. Our proposed algorithm, Damysus, is specifically designed to function in dense environments where other off-the-shelf algorithms show poor performance. Simulation results in various dense scenarios show a clear performance improvement, with up to 113% gain in throughput over the well-known MinstrelHT algorithm.
This paper presents two contributions towards incremental decode-forward relaying over asymmetric fading channels. The first concerns the outage probability of an incremental relay network accommodating i.n.d. cooperative paths; our contribution is mainly the formulation of a closed-form expression for the outage probability through the use of the inverse Laplace transform and Euler summation. The second is the proposal of a transmit-power-efficient relay-selection strategy that exploits the relationship between the position of relays and the outage probability.
Considering a densely populated area where a mobile device with a single RF chain shares its message with a set of mobile devices through a narrowband mmWave channel, an analogue-beam splitting approach is proposed to achieve a good capacity and coverage trade-off. The proposed approach aims at maximizing the capacity of the mmWave multicast channel through antenna-element grouping and adaptive phase shifting, taking into account the inter-beam interference. When receivers are randomly distributed on a circle centered at the transmitter, according to the uniform distribution, it is found that the impact of inter-beam interference on the channel capacity can be negligibly small, and thus the analogue-beam splitting approach can be largely simplified in practice. Computer simulations are carried out to support our theoretical study and demonstrate the considerable advantages of the proposed analogue-beam splitting approach.
The mutual information (MI) of a multiple-input multiple-output (MIMO) system over the Rayleigh fading channel is known to asymptotically follow a normal probability distribution. In this paper, we first prove that the MI of a distributed MIMO (DMIMO) system is also asymptotically equivalent to a Gaussian random variable (RV) by deriving its moment generating function (MGF) and showing its equivalence with the MGF of a Gaussian RV. We then derive an accurate closed-form approximation of the outage probability for the DMIMO system by using the mean and variance of the MI, and show the uniqueness of its formulation. Finally, several applications of our analysis are presented.
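Under the Gaussian approximation of the MI, the outage probability reduces to a Q-function of the normalized rate threshold, evaluated directly from the MI mean and variance. This is a generic sketch of that reduction, not the paper's specific closed form:

```python
import math

def outage_prob_gaussian(rate_threshold, mi_mean, mi_var):
    """Outage probability under the Gaussian approximation of the mutual
    information I ~ N(mi_mean, mi_var):
    P(I < R) = Q((mi_mean - R) / sigma), with Q expressed via erfc.
    Generic illustration of the Gaussian-MI approach."""
    sigma = math.sqrt(mi_var)
    return 0.5 * math.erfc((mi_mean - rate_threshold) / (sigma * math.sqrt(2.0)))
```

For instance, a rate threshold equal to the MI mean gives an outage probability of exactly one half, and a threshold one standard deviation below the mean gives roughly 0.159.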
Low Density Signature-Orthogonal Frequency Division Multiplexing (LDS-OFDM) has recently been introduced as an efficient multiple access technique. In this paper, we focus on the subcarrier and power allocation scheme for the uplink LDS-OFDM system. Since the resource allocation problem is not convex, due to the discrete nature of subcarrier allocation, the complexity of finding the optimal solution is extremely high. We propose a heuristic subcarrier and power allocation algorithm to maximize the weighted sum-rate. The simulation results show that the proposed algorithm can significantly increase the spectral efficiency of the system. Furthermore, it is shown that the LDS-OFDM system can achieve an outage probability much lower than that of an OFDMA system.
Decentralized dynamic spectrum allocation (DSA) that exploits adaptive antenna array interference mitigation (IM) diversity at the receiver is studied for interference-limited environments with high levels of frequency reuse. The system consists of base stations (BSs) that can optimize the uplink frequency allocation of their user equipments (UEs) to minimize the impact of interference on the useful signal, assuming no control over the band allocation of other BSs sharing the same bands. To this end, “good neighbor” (GN) rules allow an effective trade-off between the equilibrium and transient decentralized DSA behaviour if the performance targets are adequate for the interference scenario. In this paper, we extend the GN rules by including a spectrum occupation control that allows adaptive selection of the performance targets corresponding to potentially “interference-free” DSA; we define a semi-analytic absorbing Markov chain model for GN DSA with occupation control and study its convergence properties, including the effects of possible breaks of the GN rules; and, for higher-dimension networks, we develop simplified-search GN algorithms with occupation and power control (PC) and demonstrate their efficiency by means of simulations in a scenario with unlimited requested network occupation.
Mobile Ad Hoc Networks: Challenges and Solutions for Providing Quality of Service Assurances. Lajos Hanzo (II.) and Rahim Tafazolli, University of Surrey, ...
Multicarrier-Low Density Spreading Multiple Access (MC-LDSMA) is a promising technique for high data rate mobile communications. In this paper, the suitability of using MC-LDSMA in the uplink for next generation cellular systems is investigated. The performance of MC-LDSMA is evaluated and compared with current multiple access techniques, OFDMA and SC-FDMA. Specifically, Peak to Average Power Ratio (PAPR), Bit Error Rate (BER), spectral efficiency and fairness are considered as performance metrics. The link and system-level simulation results show that MC-LDSMA has significant performance improvements over SC-FDMA and OFDMA. It is shown that using MC-LDSMA can significantly improve the system performance in terms of required transmission power, spectral efficiency and fairness among the users.
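Of the metrics above, PAPR is the simplest to make concrete: it is the ratio of peak to mean instantaneous power of the time-domain signal. A minimal sketch for one OFDM symbol, using a naive IDFT (illustrative only, unrelated to the paper's specific waveforms):

```python
import math, cmath

def papr_db(freq_symbols):
    """PAPR of one OFDM symbol in dB: peak instantaneous power over mean
    power of the time-domain signal from a naive IDFT (fine for small N)."""
    n = len(freq_symbols)
    time = [sum(X * cmath.exp(2j * cmath.pi * k * t / n)
                for k, X in enumerate(freq_symbols)) / n
            for t in range(n)]
    powers = [abs(x) ** 2 for x in time]
    return 10 * math.log10(max(powers) / (sum(powers) / n))
```

The worst case is all subcarriers in phase: the IDFT collapses into a single peak, giving 10·log10(N) dB, which is why multicarrier waveforms need PAPR-aware design.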
Multi-band and multi-tier network densification is considered the most promising solution to the capacity crunch problem of cellular networks. In this direction, small cells (SCs) are being deployed within the macro cell (MC) coverage to off-load some of the users associated with the MCs. This deployment scenario raises several problems; among others, signalling overhead and mobility management will become critical considerations. Frequent handovers (HOs) in ultra-dense SC deployments could lead to a dramatic increase in signalling overhead, which suggests a paradigm shift towards a signalling-conscious cellular architecture with smart mobility management. In this regard, the control/data separation architecture (CDSA) with dual connectivity is being considered for the future radio access. Taking the CDSA as the radio access network (RAN) architecture, we quantify the reduction in HO signalling w.r.t. the conventional approach. We develop analytical models which compare the signalling generated during various HO scenarios in CDSA-based and conventionally deployed networks. New parameters are introduced which, when optimally set, can significantly reduce the HO signalling load. The derived model includes HO success and HO failure scenarios, along with specific derivations for continuous and non-continuous mobility users. Numerical results show promising CDSA gains in terms of savings in HO signalling overhead.
The effect of the vehicle’s proximity on the radiation pattern, when the RADAR antenna is mounted on the body of an autonomous car, is analysed. Two directional radiation patterns with different specifications are placed at different locations on a realistic car body model. The simulation is performed using a ray-tracing method at 77 GHz, the standard frequency for self-driving applications. It is shown that, to obtain a robust RADAR sensor, it is preferable for the antenna radiation pattern to have relatively higher gain and a lower side-lobe level (SLL), rather than a narrower half-power beamwidth (HPBW) and a higher front-to-back (F/B) ratio. Both academia and industry can benefit from this study.
Channel reciprocity in time-division duplexing (TDD) massive MIMO (multiple-input multiple-output) systems can be exploited to reduce the overhead required for the acquisition of channel state information (CSI). However, perfect reciprocity is unrealistic in practical systems due to random radio-frequency (RF) circuit mismatches in the uplink and downlink channels. This can result in a significant degradation in the performance of linear precoding schemes, which are sensitive to the accuracy of the CSI. In this paper, we model and analyse the impact of RF mismatches on the performance of linear precoding in a TDD multi-user massive MIMO system, taking the channel estimation error into consideration. We use the truncated Gaussian distribution to model the RF mismatch, and derive closed-form expressions of the output SINR (signal-to-interference-plus-noise ratio) for maximum ratio transmission and zero forcing precoders. We further investigate the asymptotic performance of the derived expressions, to provide valuable insights into practical system designs, including useful guidelines for the selection of effective precoding schemes. Simulation results are presented to demonstrate the validity and accuracy of the proposed analytical results.
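To see why reciprocity errors hurt, consider maximum ratio transmission: the array gain is maximized only when the precoder is built from the true downlink channel, and any per-antenna mismatch rotation strictly reduces it (Cauchy-Schwarz). A minimal sketch with made-up example channels (generic MRT, not the paper's truncated-Gaussian analysis):

```python
def mrt_gain(h_down, h_est):
    """Array gain |sum_i h_down[i] * w[i]|^2 of maximum ratio transmission
    when the unit-norm precoder w is built from the channel estimate h_est
    (which equals h_down only under perfect reciprocity)."""
    norm = sum(abs(x) ** 2 for x in h_est) ** 0.5
    w = [x.conjugate() / norm for x in h_est]   # MRT: conjugate of the estimate
    return abs(sum(h * wi for h, wi in zip(h_down, w))) ** 2
```

With a perfect estimate the gain equals the channel energy; a severe per-antenna phase mismatch can destroy the coherent combining entirely.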
Besides the well-established spectral efficiency (SE), energy efficiency (EE) is currently becoming an important performance evaluation metric, which in turn makes the EE-SE trade-off a prominent criterion for efficiently designing future communication systems. In this letter, we propose a very tight closed-form approximation (CFA) of this trade-off over the single-input single-output (SISO) Rayleigh flat fading channel. We first derive an improved approximation of the SISO ergodic capacity by means of a parametric function, and then utilize it to obtain our novel EE-SE trade-off CFA, which is also generalized to the symmetric multi-input multi-output channel. We compare our CFA with existing CFAs and show its improved accuracy.
This paper presents details of indoor wideband and directional propagation measurements at 26 GHz, in which a wideband channel sounder built from a millimeter wave (mmWave) signal analyzer and vector signal generator was employed. The setup provided 2 GHz of bandwidth, and a mechanically steerable directional lens antenna with a 5-degree beamwidth provided 5 degrees of directional resolution over the azimuth. The measurements provide the path loss, delay and spatial spread of the channel. Angular and delay dispersion are presented for line-of-sight (LoS) and non-line-of-sight (NLoS) scenarios.
This paper presents a novel design of trapped microstrip-ridge gap waveguide obtained by partially filling the air gaps in a conventional microstrip-ridge gap waveguide. The proposed method offers an applicable solution that obviates frustrating assembly processes for standalone high-frequency circuits employing low temperature co-fired ceramics technology, which supports buried cavities. To show the practicality of the proposed approach, the propagation characteristics of the trapped microstrip and the microstrip-ridge gap waveguide are compared first. Then, a right-angle bend is introduced, followed by the design of a power divider. These components are used to feed a linear 4-element array antenna. The bandwidth of the proposed array is 13 GHz, from 64 to 76 GHz, with a realized gain of over 10 dBi and a total efficiency of about 80% throughout the operational band. The antenna is an appropriate candidate for the upper bands of WiGig (63.72 to 70.2 GHz) and the FCC-approved 70 GHz band (71 to 76 GHz).
Energy is a critical resource in the design of wireless networks since wireless devices are usually powered by batteries. Without new approaches for energy saving, 4G mobile users will relentlessly be searching for power outlets rather than network access, becoming once again bound to a single location. To avoid this so-called 4G "energy trap" and to help wireless devices become more environmentally friendly, there is a clear need for disruptive strategies that address all aspects of power efficiency, from the user devices through to the core infrastructure of the network, and how these devices and equipment interact with each other. The ICT-C2POWER project addresses these issues through cognitive techniques and cooperation. The C2POWER case study is to research, develop and demonstrate energy saving technologies for multi-standard wireless mobile devices, exploiting the combination of cognitive radio and cooperative strategies, while still enabling the required performance in terms of data rate and QoS to support active applications.
In this paper, the mutual information transfer characteristics of the turbo Multiuser Detector (MUD) for a novel air interface scheme, called Low Density Signature Orthogonal Frequency Division Multiplexing (LDS-OFDM), are investigated using Extrinsic Information Transfer (EXIT) charts. LDS-OFDM uses a low density signature structure for spreading the data symbols in the frequency domain. This technique benefits from frequency diversity, in addition to its ability to support more parallel data streams than the number of subcarriers (the overloaded condition). The turbo MUD couples the symbol detector of the LDS scheme with the users' FEC (Forward Error Correction) decoders through the message passing principle. The effect of overloading on the LDS scheme's performance is evaluated using the EXIT chart. The results show that at an Eb/N0 as low as 0.3, LDS-OFDM can support loads of up to 300%.
Minimization of Drive Tests (MDT) has recently been standardized by 3GPP as a key self organizing network (SON) feature. MDT allows coverage to be estimated at the base station (BS) using user equipment (UE) measurement reports, with the objective of eliminating the need for drive tests. However, most MDT-based coverage estimation methods recently proposed in the literature assume that the UE position is known at the BS with 100% accuracy, an assumption that does not hold in reality. In this paper we develop an analytical model that allows the quantification of error in MDT-based autonomous coverage estimation (ACE) as a function of error in UE as well as BS positioning. Our model also allows characterization of the error in ACE as a function of the standard deviation of shadowing.
Underlay cognitive beamforming allows secondary transmitters to suppress interference to the primary users whilst maintaining their own quality of service. This paper aims at investigating the joint power and interference trade-off inherent in the underlay cognitive beamforming scheme. It is shown that the problem of interest leads to a non-convex optimization problem, which can be resolved by employing second-order cone programming. It is theoretically proved that imposing zero interference on the primary user does not always lead to system optimality; moreover, we exhibit two conditions under which the interference should be treated as noise in order to maximize the sum-rate of the considered beamforming system.
This letter proposes a novel carrier frequency offset (CFO) estimation method for generalized multicarrier code-division multiple access systems in unknown frequency-selective channels, utilizing hidden pilots. It is established that the CFO is identifiable in the frequency domain by employing cyclic statistics (CS) and linear regression (LR) algorithms. We show that the CS-based estimator is capable of estimating the normalized CFO (NCFO) to within a small error. The LR-based estimator can then be employed to offer a more accurate estimate by removing the residual quantization error left by the CS-based estimator. Simulation results are presented together with the theoretical analysis, and a good match between them is observed.
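The general principle such estimators rely on can be sketched simply: a frequency offset rotates the lag-L autocorrelation of a signal whose pilot pattern repeats every L samples, so the offset can be read from the phase of that correlation. This is the generic repeated-pilot estimator, not the paper's CS/LR algorithm:

```python
import cmath, math

def cfo_estimate(r, L):
    """Frequency-offset estimate (in cycles per sample) from a received
    signal whose underlying pilot repeats with period L samples: each term
    conj(r[n]) * r[n+L] carries the phase 2*pi*f*L."""
    corr = sum(r[n].conjugate() * r[n + L] for n in range(len(r) - L))
    return cmath.phase(corr) / (2 * math.pi * L)
```

The estimate is unambiguous only for |f| < 1/(2L); the noiseless case recovers the offset exactly.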
In this paper, a single-input multiple-output (SIMO) system employing a massive binary array receiver is investigated, where constructive noise is observed in the single-user system when detecting higher-order QAM modulated signals. To fully understand this interesting phenomenon, a mathematical model is established and analyzed. Theorems on signal detectability are studied to identify the best operating signal-to-noise ratio (SNR) range based on the error behaviour of the single-user SIMO system. Building on this analysis, a novel multiuser SIMO structure with a binary array receiver is proposed, which can be considered as a solution to the high complexity that the traditional model incurs when using maximum likelihood (ML) detection. The key idea of this approach is to cast the multiuser multiple-input multiple-output (MIMO) model into a frequency division multiple access (FDMA) scenario and regard each user as a single-user SIMO system, thereby reducing the complexity of ML detection, which otherwise grows exponentially with the number of users. Numerical results show that each user in this system can achieve a promising error behaviour in the specific best operating SNR range.
This paper proposes a novel approach for enhancing video popularity prediction models. Using the proposed approach, we enhance three popularity prediction techniques so that they outperform the prior state-of-the-art solutions in accuracy. The major components of the proposed approach are two novel mechanisms for "user grouping" and "content classification". The user grouping method is an unsupervised clustering approach that divides the users into an adequate number of user groups with similar interests. The content classification approach identifies the classes of videos with similar popularity growth trends. To predict the popularity of newly-released videos, our proposed popularity prediction model trains its parameters in each user group and its associated video popularity classes. Evaluations are performed through 5-fold cross validation on a dataset containing one month of video request records from 26,706 BBC iPlayer users. Using the proposed grouping technique, user groups of similar interest and up to 2 video popularity classes per user group were detected. Our analysis shows that the accuracy of the proposed solution outperforms the state-of-the-art, including the SH, ML and MRBF models, on average by 45%, 33% and 24%, respectively. Finally, we discuss how various systems in the network and service management domain, such as cache deployment, advertising and video broadcasting technologies, can benefit from our findings.
A pilot-based spectrum sensing approach in the presence of unknown timing and frequency offset is proposed in this paper. Our main idea is to utilize second-order statistics of the received samples, such as the autocorrelation, to avoid the frequency offset problem. Based on the property of the pilot symbols, whereby different symbol blocks usually carry the same pilot symbols, some nonzero terms appear in the frequency domain. To test the proposed approach, computer simulations are carried out for a typical orthogonal frequency-division multiplexing (OFDM) system. It is observed that the proposed approach always outperforms the classic time-domain Neyman-Pearson approach by at least 4 dB. Moreover, the proposed approach achieves the same performance as the weighted-linear-combination-based approach when the transmitted data block size is 2048, while keeping the computational cost small. Therefore, the proposed approach achieves a good trade-off between reliability, latency and computational cost when the transmitted data block size of the primary system is larger than 1000.
Energy efficiency (EE) is becoming an important performance indicator for ensuring both the economic and environmental sustainability of the next generation of communication networks. Equally, cooperative communication is an effective way of improving communication system performance. In this paper, we propose a near-optimal energy-efficient joint resource allocation algorithm for multi-hop multiple-input-multiple-output (MIMO) amplify-and-forward (AF) systems. We first show how to simplify the multivariate unconstrained EE-based problem, based on the fact that this problem has a unique optimal solution, and then solve it by means of a low-complexity algorithm. We compare our approach with classic optimization tools in terms of energy efficiency as well as complexity, and the results indicate the near-optimality and low complexity of our approach. As an application, we use our approach to compare the EE of multi-hop MIMO-AF with MIMO systems, and our results show that the former outperforms the latter mainly when the direct link quality is poor.
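EE objectives of this fractional rate-over-power form are commonly solved by Dinkelbach's method. A minimal single-link toy, maximizing log2(1+g·p)/(p+Pc) with a fixed circuit power Pc (an illustrative sketch of the generic technique, not the paper's multi-hop MIMO-AF algorithm):

```python
import math

def ee_optimal_power(g, p_circuit, tol=1e-9):
    """Dinkelbach iteration for max_p log2(1 + g*p) / (p + p_circuit).
    Each inner problem max_p log2(1 + g*p) - lam*(p + p_circuit) has the
    water-filling-style solution p = max(0, 1/(lam*ln2) - 1/g)."""
    lam = 1e-6                       # small positive start keeps p > 0
    for _ in range(200):
        p = max(0.0, 1.0 / (lam * math.log(2)) - 1.0 / g)
        ee = math.log2(1 + g * p) / (p + p_circuit)
        if abs(ee - lam) < tol:      # converged: lam equals the optimal EE
            break
        lam = max(ee, 1e-12)
    return p, ee
```

For g = 1 and Pc = 1 the objective reduces to log2(x)/x with x = 1 + p, whose maximum is at x = e, so the iteration should return p = e - 1.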
We derive the uplink system model for In-band and Guard-band narrowband Internet of Things (NB-IoT). The results reveal that the actual channel frequency response (CFR) is not a simple Fourier transform of the channel impulse response, due to sampling rate mismatch between the NB-IoT user and Long Term Evolution (LTE) base station. Consequently, a new channel equalization algorithm is proposed based on the derived effective CFR. In addition, the interference is derived analytically to facilitate the co-existence of NB-IoT and LTE signals. This work provides an example and guidance to support network slicing and service multiplexing in the physical layer.
In this paper, we study an enhanced subspace-based approach for the mitigation of multiple access interference (MAI) in direct-sequence code-division multiple-access (DS-CDMA) systems over frequency-selective channels. Blind multiuser detection based on signal subspace estimation is of special interest for mitigating MAI in CDMA systems, since it is impractical to assume perfect knowledge of parameters such as the spreading codes, time delays and amplitudes of all the users in a rapidly changing mobile environment. We develop a new blind multiuser detection scheme which only needs a priori knowledge of the signature waveform and timing of the user of interest. By exploiting the improper nature of the MAI and intersymbol interference (ISI), the enhanced detector shows clear superiority over the conventional subspace-based blind multiuser detector. The performance advantages are shown to be more pronounced in heavily loaded systems where the number of active users is large.
This paper proposes a low-complexity joint source and relay energy-efficient resource allocation scheme for the two-hop multiple-input-multiple-output (MIMO) amplify-and-forward (AF) system when channel state information is available. We first simplify the multivariate unconstrained energy efficiency (EE)-based problem and derive a convex closed-form approximation of its objective function, as well as closed-form expressions of the subchannel rates in both the unconstrained and power-constrained cases. We then rely on these expressions to design a low-complexity energy-efficient joint resource allocation algorithm. Our approach has been compared with a generic nonlinear constrained optimization solver, and the results indicate its low complexity and accuracy. As an application, we have also compared our EE-based approach against the optimal spectral efficiency (SE)-based joint resource allocation approach, and the results show that our EE-based approach provides a good trade-off between power consumption and SE.
Flexibly supporting multiple services, each with different communication requirements and frame structures, has been identified as one of the most significant and promising characteristics of next-generation and beyond wireless communication systems. However, integrating multiple frame structures with different subcarrier spacings in one radio carrier may result in significant inter-service-band interference (ISBI). In this paper, a framework for multi-service (MS) systems is established based on the subband filtered multi-carrier system. The subband filtering implementations and both asynchronous and generalized synchronous (GS) MS subband filtered multi-carrier (SFMC) systems are proposed. Based on the GS-MS-SFMC system, the system model with ISBI is derived and a number of properties of ISBI are given. In addition, low-complexity ISBI cancellation algorithms are proposed by precoding the information symbols at the transmitter. For the asynchronous MS-SFMC system in the presence of transceiver imperfections, including carrier frequency offset, timing offset and phase noise, a complete analytical system model is established in terms of the desired signal, intersymbol interference, inter-carrier interference, ISBI and noise. Thereafter, new channel equalization algorithms are proposed that account for these errors and imperfections. Numerical analysis shows that the analytical results match the simulation results, and that the proposed ISBI cancellation and equalization algorithms can significantly improve the system performance in comparison with existing algorithms.
In this paper, we evaluate the performance of Multicarrier-Low Density Spreading Multiple Access (MC-LDSMA) as a multiple access technique for mobile communication systems. The MC-LDSMA technique is compared with current multiple access techniques, OFDMA and SC-FDMA. The performance is evaluated in terms of cubic metric, block error rate, spectral efficiency and fairness. The aim is to investigate the expected gains of using MC-LDSMA in the uplink for next generation cellular systems. The simulation results of the link and system-level performance evaluation show that MC-LDSMA has significant performance improvements over SC-FDMA and OFDMA. It is shown that using MC-LDSMA can considerably reduce the required transmission power and increase the spectral efficiency and fairness among the users.
In this paper, a high-gain phased array antenna with wide-angle beam-scanning capability is proposed for fifth-generation (5G) millimeter-wave applications. First, a novel end-fire, dual-port antenna element with the dual functionality of radiator and power splitter is designed. The element is composed of a substrate integrated cavity (SIC) and a dipole based on it. The resonant frequencies of the SIC and the dipole can be independently tuned to broaden the impedance bandwidth. Based on this dual-port element, a 4-element subarray can be easily constructed without resorting to a complicated feeding network. The end-fire subarray features a broad beamwidth of over 180 degrees, high isolation, and a low profile, rendering it suitable for wide-angle beam-scanning applications in the H-plane. In addition, methods of steering the radiation pattern downwards or upwards in the E-plane are investigated. As a proof of concept, two phased array antennas, each consisting of eight subarrays, are designed and fabricated to achieve broadside and wide-angle beam-scanning radiation. Thanks to the elimination of the surface wave, the mutual coupling between the subarrays can be reduced, improving the scanning angle while suppressing the side-lobe level. The predictions are validated by measurement results, showing that the beam of the antenna can be scanned up to 65 degrees with a scanning loss of only 3.7 dB and grating lobes below -15 dB.
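Beam scanning by progressive phase can be checked numerically: for a uniform linear array, the array factor peaks where the steering phase cancels the spatial phase. A small sketch with idealised isotropic elements (not the fabricated SIC-dipole subarray):

```python
import cmath, math

def steered_peak(n_elems, spacing_wl, steer_deg):
    """Scan angle (degrees) at which the array factor of an n-element
    uniform linear array is maximum, given a progressive phase shift
    set for steer_deg; spacing_wl is the element spacing in wavelengths."""
    beta = -2 * math.pi * spacing_wl * math.sin(math.radians(steer_deg))
    def af(theta_deg):
        psi = 2 * math.pi * spacing_wl * math.sin(math.radians(theta_deg)) + beta
        return abs(sum(cmath.exp(1j * k * psi) for k in range(n_elems)))
    angles = [a / 10.0 for a in range(-900, 901)]   # 0.1-degree grid
    return max(angles, key=af)
```

With half-wavelength spacing there are no grating lobes, so the main beam lands exactly on the commanded angle.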
Conventional mobility management schemes tend to hit the core network with increased signaling load as the cell size shrinks and user mobility speed increases. To mitigate this problem, the research community has proposed various intelligent mobility management schemes that take advantage of the predictability of users' mobility patterns. However, most of the proposed solutions focus only on active-state signaling (i.e., handover signaling), while proposals for improving idle-state signaling have been limited and not well received by industrial practitioners. This paper first surveys the major shortcomings of the existing proposals for idle-mode mobility management and then proposes a new architecture, namely predictive mobility management (PrMM), to mitigate the identified challenges. An analytical framework is developed and a closed-form solution for the expected signaling overhead of PrMM is presented. The results of numerical evaluations confirm that, depending on user mobility and network configuration, the efficiency of PrMM can surpass the long term evolution (LTE) 4G signaling scheme by over 90%. Analysis of the results shows that the best performance is achieved in highly dense paging areas and at lower cell crossing rates.
In this paper, we propose a novel position-based routing protocol designed to anticipate the characteristics of an urban VANET environment. The proposed algorithm utilizes the prediction of the node's position and navigation information to improve the efficiency of the routing protocol in a vehicular network. In addition, we use information about link-layer quality, in terms of SNIR and MAC frame error rate, to further improve the efficiency of the proposed routing protocol; this in particular helps to decrease end-to-end delay. Finally, a carry-and-forward mechanism is employed as a repair strategy in sparse networks. It is shown that this technique increases the packet delivery ratio, but it also increases end-to-end delay and is therefore not recommended for QoS-constrained services. Our results suggest that, compared with GPSR, our proposal demonstrates better performance in the urban environment.
Orthogonal relay based cooperative communication enjoys distributed spatial diversity gain at the price of spectral efficiency. This work aims at improving the spectral efficiency of orthogonal opportunistic decode-and-forward (DF) relaying through a novel adaptive modulation scheme. The proposed scheme allows the source and relay to transmit information in different modulation formats, while a MAP receiver is employed at the destination for diversity combining. Given the individual power constraint and target bit-error rate (BER), the proposed scheme can significantly improve the spectral efficiency in comparison with non-adaptive DF relaying and adaptive direct transmission.
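The modulation-format choice under a target BER can be sketched with the common approximation BER ≈ 0.2·exp(-1.5·γ/(M-1)) for M-QAM at SNR γ: pick the largest constellation that still meets the target. This is a standard textbook approximation, not the paper's MAP-receiver design:

```python
import math

def select_modulation(snr_linear, target_ber, orders=(4, 16, 64, 256)):
    """Pick the largest M-QAM order meeting a target BER, using the
    common approximation BER ~= 0.2 * exp(-1.5 * SNR / (M - 1))."""
    best = None
    for m in orders:
        if 0.2 * math.exp(-1.5 * snr_linear / (m - 1)) <= target_ber:
            best = m
    return best   # None: even the smallest order misses the target
```

At 20 dB SNR and a 1e-3 BER target this selects 16-QAM; at 0 dB no listed order qualifies and the link would fall back to the most robust mode.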
Automatic Repeat reQuest (ARQ) is implemented to ensure reliable transmission when channel state information (CSI) is not available at the source and the selected transmission rate is not supported by the current channel realization. We consider a relay system with a hybrid relay scheme, where the relay switches between decode-and-forward (DF) and compress-and-forward (CF) according to the decoding status. For this setting, we propose a new ARQ strategy and analyze its performance in terms of maximum throughput, average reward and inter-renewal time. Compared with pure DF, the hybrid relay scheme shows considerable gain.
Cooperative transmission can be used in a multicell scenario where base stations are connected to a central processing unit. This cooperation can improve fairness for users with poor channel conditions (critical users). This paper investigates using cooperative transmission alongside the orthogonal OFDM scheme to improve fairness, through careful selection of the critical users and through resource allocation and resource division between the two schemes. A solution for power and subcarrier allocation is provided, together with a solution for the selection of the critical users. Simulation results are provided to show the fairness achieved by the proposed critical-user selection method, resource allocation and resource division method under the stated assumptions.
Hybrid systems, where more than one transmission scheme is used within the same cluster, can improve spectral efficiency for the system as a whole and, more importantly, for the cell-edge users. In this paper, we propose a frequency reuse method that groups the users into two classes, critical and non-critical users. Each user group is served by a transmission scheme, with the most vulnerable users served by schemes that avoid, make use of, or orthogonalise the interference. These schemes include cooperative maximal ratio transmission and the non-cooperative orthogonal and non-orthogonal schemes. Radio resource allocation is studied, and a solution is given for maximal ratio transmission and interference alignment. Simulation results are given showing the performance of each scheme when all users are considered critical and a single scheme is used, as well as the performance of our proposed frequency reuse scheme with different percentages of users considered critical.
Recent research on Frequency Reuse (FR) schemes for OFDM/OFDMA-based cellular networks (OCN) suggests that a single fixed FR cannot cope optimally with the spatiotemporal dynamics of traffic and cellular environments in a spectrally and energy efficient way. To address this issue, this paper introduces a novel Self Organizing framework for adaptive Frequency Reuse and Deployment (SO-FRD) for future OCN, including both cellular (e.g. LTE) and relay-enhanced cellular networks (e.g. LTE-Advanced). An optimization problem is first formulated to find the optimal frequency reuse factor, number of sectors per site and number of relays per site. The goal is expressed as an adaptive utility function which incorporates three major system objectives: 1) spectral efficiency, 2) fairness, and 3) energy efficiency. An appropriate metric for each of the three constituent objectives of the utility function is then derived. A solution is provided by evaluating these metrics through a combination of analysis and extensive system-level simulations for all feasible FRDs. The proposed SO-FRD framework uses this flexible utility function to switch to the particular FRD strategy suitable for the system's current state, according to a predefined or self-learned performance criterion. The proposed metrics capture the effect of all major optimization parameters, such as the frequency reuse factor, the number of sectors and relays per site, and adaptive coding and modulation. Based on the results obtained, interesting insights into the trade-off among these factors are also provided.
Despite years of physical-layer research, the capacity enhancement potential of relays is limited by the additional spectrum required for Base Station (BS)-Relay Station (RS) links. This paper presents a novel distributed solution by exploiting a system-level perspective instead. Building on a realistic system model with impromptu RS deployments, we develop an analytical framework for tilt optimization that can dynamically maximize the spectral efficiency of both the BS-RS and BS-user links in an online manner. To obtain a distributed self-organizing solution, the large-scale system-wide optimization problem is decomposed into local small-scale subproblems by applying the design principles of self-organization in biological systems. The local subproblems are non-convex but, being of very small scale, can be solved via standard nonlinear optimization techniques such as sequential quadratic programming. The performance of the developed solution is evaluated through extensive simulations for an LTE-A type system and compared against a number of benchmarks, including a centralized solution obtained via brute force, which also gives an upper bound to assess the optimality gap. Results show that the proposed solution can enhance average spectral efficiency by up to 50% compared to fixed tilting, with negligible signaling overheads. The key advantage of the proposed solution is its potential for autonomous and distributed implementation.
Most of the existing distributed beamforming algorithms for relay networks require global channel state information (CSI) at the relay nodes, and their overall computational complexity is high. In this paper, a new class of adaptive algorithms is proposed which can achieve a globally optimum solution by employing only local CSI. A reference signal based (RSB) scheme is first derived, followed by a constant modulus (CM) based scheme for when the reference signal is not available. Considering an individual power transmission constraint at each relay node, the corresponding constrained adaptive algorithms are also derived as an extension. An analysis of the overhead and step-size range for the derived algorithms is then provided, and the excess mean square error (EMSE) for the RSB case is studied based on the energy conservation method. As demonstrated by our simulation results, the proposed algorithms achieve better performance with very low computational complexity, and can be implemented on low-cost devices with low processing power.
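The reference-signal-based idea can be illustrated with the plain LMS recursion it builds on: weights are adapted from local snapshots and a shared reference only. A textbook LMS sketch on a toy two-antenna channel (not the paper's relay-network algorithm or its constrained variants):

```python
def lms_beamformer(snapshots, reference, mu=0.05):
    """Reference-signal-based LMS: adapt complex weights w so the array
    output w^H x tracks the known reference; only local data is needed."""
    n_ant = len(snapshots[0])
    w = [0j] * n_ant
    for x, d in zip(snapshots, reference):
        y = sum(wi.conjugate() * xi for wi, xi in zip(w, x))  # array output
        e = d - y                                             # output error
        w = [wi + mu * e.conjugate() * xi for wi, xi in zip(w, x)]
    return w
```

On a noiseless toy channel with steering vector [1, j] and an alternating ±1 reference, the error shrinks geometrically and the trained output matches the reference.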
Soaring capacity and coverage demands dictate that future cellular networks will need to migrate soon toward ultra-dense networks. However, network densification comes with a host of challenges, including compromised energy efficiency, complex interference management, cumbersome mobility management, burdensome signaling overheads, and higher backhaul costs. Interestingly, most of the problems that beleaguer network densification stem from one common feature of legacy networks, i.e., the tight coupling between the control and data planes, regardless of their degree of heterogeneity and cell density. Consequently, in the wake of 5G, the control and data planes separation architecture (SARC) has recently been conceived as a promising paradigm with the potential to address most of the aforementioned challenges. In this survey, we review the various proposals that have been presented in the literature so far to enable SARC. More specifically, we analyze how, and to what degree, the various SARC proposals address the four main challenges in network densification, namely energy efficiency, system-level capacity maximization, interference management, and mobility management. We then focus on two salient features of future cellular networks that have not yet been adopted in legacy networks at wide scale and thus remain a hallmark of 5G, i.e., coordinated multipoint (CoMP) and device-to-device (D2D) communications. After providing the necessary background on CoMP and D2D, we analyze how SARC can act as a major enabler for CoMP and D2D in the context of 5G. This article thus serves as both a tutorial and an up-to-date survey on SARC, CoMP, and D2D. Most importantly, it provides an extensive outlook on the challenges and opportunities that lie at the crossroads of these three mutually entangled emerging technologies.
New modified 2 × 2 and 3 × 3 series-fed patch antenna arrays with beam-steering capability are designed and fabricated for 28-GHz millimeter-wave applications. In the designs, the patches are connected to each other continuously and in a symmetric 2-D format using high-impedance microstrip lines. In the first design, a 3-D beam-scanning range of ±25° and good radiation and impedance characteristics are attained using only one phase shifter. In the second, a new mechanism is introduced to reduce the number of feed ports and the related phase shifters (from the default number 2N to the reduced number N + 1 in the serial feed, here N = 3), and thereby the cost, complexity, and size of the design. Here, good scanning performance over a range of ±20°, an acceptable sidelobe level, and a gain of 15.6 dB are obtained. These features allow the use of additional integrated circuits to improve the gain and performance. A comparison with the conventional, unmodified array is carried out. The measured and simulated results and discussions are presented.
In this paper, we propose a novel energy-aware adaptive sectorisation strategy, where the base stations are able to adapt themselves to the temporal traffic variation by switching off some sectors and changing the beam-width of the remaining sectors. An event-based user traffic model is established according to a Markov-Modulated Poisson Process (MMPP). Adaptation is performed while taking into account the target Quality of Service (QoS), in terms of blocking probability. In addition, the coverage requirement is also considered. This work targets future cellular systems, in particular LTE systems. The results show that energy consumption can be reduced by at least 21% using the proposed adaptive sectorisation strategy.
Power consumption in Information and Communication Technology (ICT) accounts for 10% of the total energy consumed in industrial countries, and according to the latest measurements this share has been increasing rapidly in recent years. In the literature, a variety of new schemes have been proposed to save energy in operational communication networks. In this paper, we propose a novel optimization algorithm for the network virtualization environment that puts the maximum number of physical links to sleep during off-peak hours, while still guaranteeing connectivity and off-peak bandwidth availability for the parallel virtual networks supported on top. Simulation results based on the GÉANT network topology show that our algorithm is able to put a notable number of physical links to sleep during off-peak hours while still satisfying the bandwidth demands requested by ongoing traffic sessions in the virtual networks. © 2012 IEEE.
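The off-peak link-sleeping idea can be illustrated with a minimal greedy sketch. The function names, the simple aggregate-capacity check, and the lowest-capacity-first ordering are illustrative assumptions, not the paper's optimization algorithm:

```python
from collections import defaultdict

def is_connected(nodes, links):
    """BFS connectivity check over the set of awake links."""
    if not nodes:
        return True
    adj = defaultdict(set)
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(adj[n] - seen)
    return seen == set(nodes)

def sleep_links(nodes, links, capacity, offpeak_demand):
    """Greedily put physical links to sleep while keeping the substrate
    connected and its total awake capacity above the off-peak demand."""
    awake = set(links)
    # Try the lowest-capacity links first: sleeping them sacrifices least.
    for link in sorted(links, key=lambda l: capacity[l]):
        trial = awake - {link}
        cap_ok = sum(capacity[l] for l in trial) >= offpeak_demand
        if cap_ok and is_connected(nodes, trial):
            awake = trial
    return sorted(set(links) - awake)  # the links put to sleep
```

On a three-node ring with equal 10-unit links and an off-peak demand of 15, the greedy pass sleeps exactly one link, since removing a second would break either the capacity or the connectivity constraint.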
The parameters of the Physical (PHY) layer radio frame for 5th Generation (5G) mobile cellular systems are expected to be flexibly configured to cope with the diverse requirements of different scenarios and services. This paper presents a frame structure and design specifically targeting Internet of Things (IoT) provision in 5G wireless communication systems. We design a suitable radio numerology to support the typical characteristics, that is, massive connection density and small, bursty packet transmissions, under the constraint of low-cost and low-complexity operation of IoT devices. We also elaborate on the design of parameters for the Random Access Channel (RACH), enabling massive connection requests by IoT devices to support the required connection density. The proposed design is validated by link-level simulation results showing that the proposed numerology can cope with transceiver imperfections and channel impairments. Furthermore, results are also presented to show the impact of different guard band values on system performance using different subcarrier spacing sizes for data and random access channels, which demonstrate the effectiveness of the selected waveform and guard bandwidth. Finally, we present system-level simulation results that validate the proposed design under realistic cell deployments and inter-cell interference conditions.
This paper provides an efficient key management scheme for large scale personal networks (PN) and introduces the Certified PN Formation Protocol (CPFP) based on a personal public key infrastructure (personal PKI) concept and Elliptic Curve Cryptography (ECC) techniques.
Mobile communications are increasingly contributing to global energy consumption. The EARTH (Energy Aware Radio and neTworking tecHnologies) project tackles the important issue of reducing CO2 emissions by enhancing the energy efficiency of cellular mobile networks. EARTH takes a holistic approach to developing a new generation of energy-efficient products, components, deployment strategies and energy-aware network management solutions. In this paper, the holistic EARTH approach to energy-efficient mobile communication systems is introduced. Performance metrics are studied to assess the theoretical bounds of energy efficiency as well as the practically achievable limits. A vast potential for energy savings lies in the operation of radio base stations. In particular, base stations consume a considerable amount of the available power budget even when operating at low load. Energy-efficient radio resource management (RRM) strategies need to take into account slowly changing daily load patterns, as well as highly dynamic traffic fluctuations. Moreover, various deployment strategies are examined, focusing on their potential to reduce energy consumption whilst providing uncompromised coverage and user experience. This includes heterogeneous networks with a sophisticated mix of different cell sizes, which may be further enhanced by energy-efficient relaying and base station cooperation technologies. Finally, scenarios leveraging the capability of advanced terminals to operate on multiple radio access technologies (RAT) are discussed with respect to their energy savings potential. ©2010 IEEE.
The energy consumption of sensor nodes is a key factor affecting the lifetime of wireless sensor networks (WSNs). Prolonging the network lifetime requires not only energy-efficient operation but also an even dissipation of energy among sensor nodes. On the other hand, spatial and temporal variations in sensor activities create energy imbalance across the network. Therefore, routing algorithms should make an appropriate trade-off between energy efficiency and energy consumption balancing to extend the network lifetime. In this paper, we propose a Distributed Energy-aware Fuzzy Logic based routing algorithm (DEFL) that simultaneously addresses energy efficiency and energy balancing. Our design captures the network status through appropriate energy metrics and maps them into corresponding cost values for the shortest-path calculation. We adopt a fuzzy logic approach for the mapping in order to incorporate human reasoning. We compare the network lifetime performance of DEFL with other popular solutions, including MTE, MDR and FA. Simulation results demonstrate that the network lifetime achieved by DEFL exceeds the best of all tested solutions under various traffic load conditions. We further numerically compute the upper-bound performance and show that DEFL performs near the upper bound.
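The metric-to-cost mapping at the heart of such a scheme can be sketched as follows; the triangular membership functions, the rule weights, and the use of residual energy as the only metric are illustrative assumptions — DEFL's actual rule base and metrics are richer:

```python
import heapq

def fuzzy_cost(residual_energy):
    """Map a node's residual energy (0..1) to a routing cost via simple
    triangular fuzzy memberships (LOW/MED/HIGH) and weighted-centroid
    defuzzification. Memberships and weights are illustrative."""
    low = max(0.0, 1.0 - 2.0 * residual_energy)            # peaks at 0
    med = max(0.0, 1.0 - abs(residual_energy - 0.5) * 4)   # peaks at 0.5
    high = max(0.0, 2.0 * residual_energy - 1.0)           # peaks at 1
    total = low + med + high
    # LOW energy -> expensive (10), MED -> 5, HIGH -> cheap (1).
    return (10 * low + 5 * med + 1 * high) / total if total else 5.0

def route(links, energy, src, dst):
    """Dijkstra shortest path over link costs derived from the
    next hop's residual energy."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj.get(u, []):
            nd = d + fuzzy_cost(energy[v])
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```

With this mapping a nearly depleted relay (energy 0.1) costs ten times as much as a fresh one (energy 0.9), so traffic is steered away from it, which is the energy-balancing effect the abstract describes.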
5G definition and standardization projects are well underway, and governing characteristics and major challenges have been identified. A critical network element impacting the potential performance of 5G networks is the backhaul, which is expected to expand in length and breadth to cater to the exponential growth of small cells while offering high throughput in the order of Gbps and less than one-millisecond latency with high resilience and energy efficiency. Such performance may only be possible with direct optical fibre connections which are often not available countrywide and are cumbersome and expensive to deploy. On the other hand, a prime 5G characteristic is diversity, which describes the radio access network, the backhaul, and also the types of user applications and devices. Thus, we propose a novel, distributed, self-optimized, end-to-end user-cell-backhaul association scheme that intelligently associates users with potential cells based on corresponding dynamic radio and backhaul conditions while abiding by users' requirements. Radio cells broadcast multiple bias factors, each reflecting a dynamic performance indicator (DPI) of the end-to-end network performance such as capacity, latency, resilience, energy consumption, etc. A given user would employ these factors to derive a user-centric cell ranking that motivates it to select the cell with radio and backhaul performance that conforms to the user requirements. Reinforcement learning is used at the radio cell to optimize the bias factors for each DPI in a way that maximizes the system throughput while minimizing the gap between the users' achievable and required end-to-end quality of experience (QoE). Preliminary results show considerable improvement in users' QoE and cumulative system throughput when compared to state-of-the-art user-cell association schemes.
This paper surveys the literature relating to the application of machine learning to fault management in cellular networks from an operational perspective. We summarise the main issues as 5G networks evolve, and their implications for fault management. We describe the relevant machine learning techniques through to deep learning, and survey the progress which has been made in their application, based on the building blocks of a typical fault management system. We review recent work to develop the abilities of deep learning systems to explain and justify their recommendations to network operators. We discuss forthcoming changes in network architecture which are likely to impact fault management and offer a vision of how fault management systems can exploit deep learning in the future. We identify a series of research topics for further study in order to achieve this.
Autonomous monitoring of key performance indicators, which are obtained from measurement reports, is well established as a necessity for enabling self-organising networks. However, these reports are usually tagged with geographical location information which is obtained from positioning techniques and is therefore prone to errors. In this paper, we investigate the impact of position estimation errors on the cell coverage probability that can be estimated from autonomous coverage estimation (ACE). We derive novel and accurate expressions for the actual cell coverage probability of such a scheme while considering errors in user equipment (UE) location only, and errors in both UE and base station locations. We present generic expressions for a channel modelled with path loss and shadowing, and much simplified expressions for the path-loss-dominant channel model. Our results reveal that the ACE scheme will be suboptimal as long as there are errors in the reported geographical location information. Hence, appropriate coverage margins must be considered when utilising ACE.
In future heterogeneous cellular networks (HCN), cognitive radio (CR) combined with device-to-device (D2D) communication techniques can further enhance system spectral and energy efficiency. The unlicensed smart devices (SDs) are allowed to detect the available licensed spectrum and utilise the spectrum resources detected as not being used by the licensed users (LUs). In this work, we propose such a system and provide a comprehensive analysis of the effect of the selection of the SDs' frame structure on the energy efficiency, throughput and interference. Moreover, an uplink power control strategy is also considered, where the LUs and SDs adapt their transmit power based on the distance from their reference receivers. The optimal frame structure with power control is investigated under high-SNR and low-SNR network environments. The impact of power control and of the optimal sensing time and frame length on the achievable energy efficiency, throughput and interference is illustrated and analysed by simulation results. It is also shown that the optimal sensing time and frame length which maximize the energy efficiency of SDs strictly depend on the power control factor employed in the underlying network, such that the considered power control strategy may decrease the energy efficiency of SDs in the very low SNR regime.
It is well established that transmitting at full power is the most spectrally efficient power allocation strategy for point-to-point (P2P) multi-input multi-output (MIMO) systems; however, can this strategy be energy efficient as well? In this letter, we address the most energy-efficient power allocation policy for symmetric P2P MIMO systems by accurately approximating, in closed form, their optimal transmit power when a realistic MIMO power consumption model is considered. In most cases, being energy efficient implies a reduction in the transmit and overall consumed powers at the expense of a lower spectral efficiency.
In this paper, an ultra-wideband Dielectric Resonator Antenna (DRA) is proposed. The proposed antenna is based on an isosceles triangular DRA (TDRA), which is fed from the base side using a 50 Ω probe. For bandwidth enhancement and improvement of the radiation characteristics, a partially cylindrical hole is etched from the base side, which brings the probe feed closer to the center of the TDRA. The dielectric resonator (DR) is located over an extended conducting ground plane. This technique significantly enhances the antenna's bandwidth from 48.8% to 80% (5.29-12.35 GHz); the remaining problem is the radiation characteristics. The basis antenna exhibits negative gain over a wide portion of the band, from 7.5 GHz to 10.5 GHz, down to -13.8 dBi. The proposed technique improves the antenna gain to above 1.6 dBi across the whole bandwidth, with a peak gain of 7.2 dBi.
Energy efficiency (EE) is emerging as a key design criterion for both power-limited applications, e.g. mobile devices, and power-unlimited applications, e.g. cellular networks, while resource allocation is a well-known technique for improving the performance of a communication system. In this paper, we design a simple and optimal EE-based resource allocation method for the orthogonal multi-user channel by adapting the transmit power and rate to the channel conditions such that the energy-per-bit consumption is minimized. We present our EE framework, i.e. the EE metric and node power consumption model, and utilize it to formulate our EE-based optimization problem with and without constraints. In both cases, we derive explicit formulations of the optimal energy-per-bit consumption as well as the optimal power and rate for each user. Our results indicate that EE-based allocation can substantially reduce the consumed power and increase the EE in comparison with spectral-efficiency-based allocation.
Their inherent broadcasting capabilities over very large geographical areas make satellite systems one of the most effective vehicles for multicast service delivery. Recent advances in spotbeam antennas and high-power platforms further accentuate the suitability of satellite systems as multicasting tools. The focus of this article is reliable multicast service delivery via geostationary satellite systems. Starburst MFTP is a feedback-based multicast transport protocol that is distinct from other such protocols in that it defers the retransmission of lost data until the end of the transmission of the complete data product. In contrast to other multicast transport protocols, the MFTP retransmission strategy does not interrupt fresh data transmission with retransmissions of older segments. Thanks to this feature, receivers enjoying favourable channel conditions do not suffer from unnecessarily extended transfer delays due to those receivers that experience bad channel conditions. Existing research studies on MFTP's performance over satellite systems assume fixed-capacity satellite uplink channels dedicated to individual clients on the return link. Such fixed-assignment uplink access mechanisms are considered too wasteful a use of uplink resources for the sporadic and thin feedback traffic generated by MFTP clients. Indeed, such mechanisms may prematurely limit the scalability of MFTP as the multicast client population grows. In contrast, the reference satellite system considered in this article employs demand-assignment multiple access (DAMA) with contention-based request signalling on the uplink. DAMA MAC (Medium Access Control) protocols in satellite systems are well known in the literature for their improved resource utilisation and scalability features.
Moreover, DAMA forms the basis for the uplink access mechanisms in prominent satellite networks such as Inmarsat's BGAN (Broadband Global Area Network) and return link specifications such as ETSI DVB-RCS. However, in comparison with fixed-assignment uplink access mechanisms, DAMA protocols may introduce unpredictable delays for MFTP feedback messages on the return link. Collisions among capacity requests on the contention channel, temporary lack of capacity on the reservation channel, and random transmission errors on the uplink are the potential causes of such delays. This article presents the results of a system-level simulation analysis of MFTP over a DAMA GEO satellite system with contention-based request channels. Inmarsat's BGAN system was selected as the reference architecture for the analyses. The simulator implements the full interaction between the MFTP server and MFTP clients overlaid on top of the Inmarsat BGAN uplink access mechanism. The analyses aim to evaluate and optimise MFTP performance in the Inmarsat BGAN system in terms of transfer delay and system throughput as a function of available capacity, client population size, data product size, channel error characteristics, and MFTP protocol settings. Copyright © 2006 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
Enormous amounts of dynamic observation and measurement data are collected from sensors in Wireless Sensor Networks (WSNs) for Internet of Things (IoT) applications such as environmental monitoring. However, continuous transmission of the sensed data requires high energy consumption. Data transmission between sensor nodes and cluster heads (sink nodes) consumes much more energy than data sensing in WSNs. One way of reducing such energy consumption is to minimise the number of data transmissions. In this paper, we propose an Adaptive Method for Data Reduction (AM-DR). Our method is based on a convex combination of two decoupled Least-Mean-Square (LMS) windowed filters with differing sizes for estimating the next measured values at both the source and the sink node, such that sensor nodes have to transmit only those sensed values that deviate significantly (beyond a pre-defined threshold) from the predicted values. The experiments conducted on real-world data show that our approach achieves up to 95% communication reduction while retaining high accuracy (i.e., predicted values deviate by at most ±0.5 from the real data values).
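The dual-predictor idea — two windowed LMS filters mixed by a convex combination, with identical copies at source and sink so that only poorly predicted samples are transmitted — can be sketched as follows. The window sizes, step sizes, and the mixing update rule are illustrative assumptions, not AM-DR's exact parameters:

```python
class LMSPredictor:
    """Windowed LMS predictor of the next sample from the last w samples."""
    def __init__(self, w, mu=0.01):
        self.mu = mu
        self.weights = [0.0] * w
        self.hist = [0.0] * w

    def predict(self):
        return sum(wi * xi for wi, xi in zip(self.weights, self.hist))

    def update(self, actual):
        err = actual - self.predict()
        self.weights = [wi + self.mu * err * xi
                        for wi, xi in zip(self.weights, self.hist)]
        self.hist = self.hist[1:] + [actual]
        return err

class DualPredictor:
    """Convex combination of a short- and a long-window LMS filter."""
    def __init__(self):
        self.fast, self.slow = LMSPredictor(2), LMSPredictor(8)
        self.lam = 0.5  # mixing coefficient, adapted from relative errors

    def predict(self):
        return (self.lam * self.fast.predict()
                + (1 - self.lam) * self.slow.predict())

    def update(self, actual):
        e1 = abs(self.fast.update(actual))
        e2 = abs(self.slow.update(actual))
        # Shift weight toward whichever filter erred less.
        self.lam = min(1.0, max(0.0, self.lam + 0.05 * (e2 - e1)))

def transmit_filter(samples, threshold):
    """Run identical predictors at source and sink; the source transmits a
    sample only when the shared prediction deviates beyond the threshold."""
    src, sink = DualPredictor(), DualPredictor()
    sent = []
    for x in samples:
        pred = src.predict()
        if abs(x - pred) > threshold:
            sent.append(x)                       # transmit the real value
            src.update(x); sink.update(x)        # both see the real value
        else:
            src.update(pred); sink.update(pred)  # both use the prediction
    return sent
```

Because source and sink always feed their filters the same value (real when transmitted, predicted otherwise), their states never diverge, which is what allows suppressed samples to be reconstructed at the sink within the threshold.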
This paper investigates the impact of deploying Mobile Femtocells (MFemtocells) in LTE networks. We investigate the access delay, capacity, and feedback signalling overhead required for the implementation of opportunistic scheduling in LTE cellular networks. We particularly study the impact of deploying MFemtocell stations on the signalling overhead for opportunistic scheduling. Our system-level simulation results indicate that deploying MFemtocells can improve spectral efficiency by reducing the amount of feedback signalling. © 2011 IEEE.
This paper highlights limitations of the ECPC (Each Carrier Power Control) concept - originally proposed for OFDM-DS-CDMA - when extended to MC-CDMA (Multi-Carrier Code Division Multiple Access) systems: first, its impractical signaling overhead of 80%; second, its inability to be used as an uplink power control mechanism. We then propose BBPC (Band Based Power Control) as a practical alternative to ECPC for MC-CDMA systems. Unlike ECPC, which controls power on a per-carrier basis, BBPC assigns the same power level to a band of carriers (lying within the coherence bandwidth of the channel). It is shown that, with a nominal performance loss, BBPC reduces the signaling overhead to 2.5%, and by employing the control index estimator after de-spreading, it can be used as an uplink power control mechanism for MC-CDMA. We have used SIR (Signal to Interference Ratio) as the power control index, and BER and the standard deviation of the power control error as performance metrics.
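The band-based idea — one power value per group of carriers inside the coherence bandwidth, instead of one per carrier — can be sketched as follows; the per-band channel-inversion rule and all names are illustrative, not the paper's SIR-based mechanism:

```python
def band_powers(carrier_gains, carriers_per_band, target):
    """Band-Based Power Control sketch: carriers are grouped into bands
    assumed to lie within the channel coherence bandwidth, and every
    carrier in a band receives one common power level chosen to meet a
    target received level on the band's average gain."""
    powers = []
    for i in range(0, len(carrier_gains), carriers_per_band):
        band = carrier_gains[i:i + carriers_per_band]
        avg_gain = sum(band) / len(band)
        p = target / avg_gain             # channel inversion per band
        powers.extend([p] * len(band))    # same power across the band
    return powers
```

Signalling one value per band rather than per carrier is what shrinks the control overhead by roughly the band size, which is consistent with the 80% to 2.5% reduction reported above when each band spans many carriers.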
A Ka-band inset-fed microstrip patch linear antenna array is presented for fifth generation (5G) applications in different countries. The bandwidth is enhanced by stacking parasitic patches on top of each inset-fed patch. The array employs 16 elements in a new H-plane configuration. The radiating patches and their feed lines are arranged in an alternating out-of-phase 180-degree rotating sequence to decrease the mutual coupling and improve the radiation pattern symmetry. A 24.4% measured bandwidth (24.35 to 31.13 GHz) is achieved with -15 dB reflection coefficients and 20 dB mutual coupling between the elements. With uniform amplitude distribution, a maximum broadside gain of 19.88 dBi is achieved. Scanning the main beam to 49.5° from broadside achieved an 18.7 dBi gain with a -12.1 dB sidelobe level (SLL). These characteristics are in good agreement with the simulations, making the antenna a good candidate for 5G applications.
The performance of a next-generation OFDM/OFDMA-based Distributed Cellular Network (ODCN), where no cooperation-based interference management schemes are used, depends on four major factors: 1) the spectrum reuse factor, 2) the number of sectors per site, 3) the number of relay stations per site, and 4) the modulation and coding efficiency achievable through link adaptation. The combined effect of these factors on the overall performance of a Deployment Architecture (DA) has not been studied in a holistic manner. In this paper, we provide a framework to characterize the performance of various DAs by deriving two novel performance metrics for 1) spectral efficiency and 2) fairness among users. These metrics are designed to include the effect of all four contributing factors. We evaluate these metrics for a wide set of DAs through extensive system-level simulations. The results provide a comparison of various DAs for both cellular and relay-enhanced cellular systems in terms of the spectral efficiency and fairness they offer, and also provide an interesting insight into the tradeoff between the two performance metrics. Numerical results show that, in the interference-limited regime, the DAs with the highest spectral efficiency are not necessarily those that resort to full frequency reuse. In fact, a frequency reuse of 3 with 6 sectors per site is spectrally more efficient than full frequency reuse with 3 sectors. In the case of a relay-station-enhanced ODCN, a DA with full frequency reuse, six sectors and 3 relays per site is spectrally more efficient and can yield around 170% higher spectral efficiency compared to the counterpart DA without RSs.
In this paper, we extend a well-developed quantization scheme to a block-fading relay system using compress-and-forward and propose a new achievable-rate-based quantization scheme (ARBQS). A new signal combination scheme with lower complexity is also proposed accordingly. Based on the scalar quantizer obtained, a vector quantizer with a Trellis coded quantization (TCQ) scheme is provided. While many quantization schemes have concentrated on the minimization of quantization distortion, our simulation results indicate that the new scheme achieves better performance in both the AWGN and block-fading cases without distortion minimization, attaining higher compression efficiency and reduced complexity simultaneously.
The multiple access (MA) technique is a major building block of cellular systems. Through the MA technique, users can simultaneously access the physical medium and share the finite resources of the system, such as spectrum, time and power. Due to the rapid growth in demand for data applications in mobile communications, there has been extensive research to improve the efficiency of cellular systems. A significant part of this effort focuses on developing and optimizing MA techniques. As a result, many MA techniques have been proposed systematically over the years, and some of them have already been adopted in cellular system standards, such as Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal Frequency Division Multiple Access (OFDMA) and Code Division Multiple Access (CDMA). Many factors determine the efficiency of an MA technique, such as spectral efficiency, low-complexity implementation, as well as low envelope fluctuations. Broadly, MA techniques can be categorized into orthogonal and non-orthogonal MA. In orthogonal MA techniques, the signal dimension is partitioned and allocated exclusively to the users, and there is no Multiple Access Interference (MAI). In non-orthogonal MA techniques, all the users share the entire signal dimension, and MAI is present. Thus, for non-orthogonal transmission, a more complicated receiver is required to deal with the MAI compared to orthogonal transmission. Non-orthogonal MA is more practical in the uplink scenario because the base station can afford the Multiuser Detection (MUD) complexity. On the other hand, for the downlink, orthogonal MA is more suitable due to the limited processing power at the user equipment. Many non-orthogonal MA techniques have been overlooked due to their implementation complexity.
Evidently, the recent advancements in signal processing have opened up new possibilities for developing more sophisticated and efficient MA techniques, and more advanced MA techniques have been proposed lately. However, in order to adopt these new MA techniques in mobile communication systems, many challenges and opportunities need to be studied.
The paper presents a time-difference-of-arrival (TDOA) position estimation algorithm for indoor positioning in the presence of clock drift in a mobile terminal. A new Cramér-Rao bound is then derived as a benchmark for the algorithm. The simulation results show that an acceptable positioning accuracy can be achieved when at least five access points in wireless local area networks are involved in positioning. Moreover, when the clock drift or the TDOA is considerably large, the proposed algorithm outperforms an algorithm that does not consider the clock drift. © VDE VERLAG GMBH.
One major advantage of the cloud/centralized radio access network (C-RAN) is the ease of implementation of multicell coordination mechanisms to improve the system spectrum efficiency (SE). Theoretically, a larger number of cooperative cells leads to a higher SE; however, it may also cause significant delay due to the extra channel state information (CSI) feedback and joint-processing computational needs at the cloud data center, which is likely to result in performance degradation. In order to investigate the delay impact on the throughput gains, we divide the network into multiple clusters of cooperative small cells and formulate a throughput optimization problem. We model various delay factors and the sum-rate of the network as a function of the cluster size, treating it as the main optimization variable. For our analysis, we consider both the base stations' and the users' geometric locations as random variables for both linear and planar network deployments. The output SINR (signal-to-interference-plus-noise ratio) and ergodic sum-rate are derived based on a homogeneous Poisson point process (PPP) model. The sum-rate optimization problem in terms of the cluster size is formulated and solved. Simulation results show that the proposed analytical framework can be utilized to accurately evaluate the performance of practical cloud-based small cell networks employing clustered cooperation.
Future mobile communication systems will be designed to support a wide range of data rates with complex quality of service requirements. It is becoming more challenging to optimize radio resource management and maximise the system capacity whilst meeting the required quality of service from the user's point of view. Traditional schemes have approached this problem mainly by focusing on resources within a cell, to a large extent ignoring the effects of the multi-cell architecture. This paper addresses the influence of multi-cell interference on overall radio resource utilisation and proposes a novel approach, setting a new direction for future research on resource scheduling strategies in a multi-cell system. It proposes a concept called the Load Matrix (LM), which facilitates joint management of interference within and between cells for the allocation of radio resources. Simulation results show significant improvement in resource utilization and overall network performance. Using the LM strategy, the average cell throughput can be increased by as much as 30% compared to a benchmark algorithm. Results also show that maintaining cell interference within a margin instead of at a hard target can significantly improve resource utilization.
It has been pointed out that Slepian-Wolf (SW) coding is efficient for compressing data with side information available at the receiver. However, most papers assume that the compressed information is perfectly known to the receiver. In this paper, we consider the more practical assumption that the channel between the relay and the destination is not perfect and error protection needs to be implemented. Accordingly, a soft Slepian-Wolf decoding structure is proposed. The new structure not only supports soft Slepian-Wolf decoding within one level, but also allows soft information passing between different levels. We also consider the relationship between the codes for error protection and the codes for compression and propose a joint decoding and decompressing algorithm to further improve the performance.
In previous works the cognitive interference channel with unidirectional destination cooperation has been studied. In this model the cognitive receiver acts as a relay of the primary user's message and its operation is assumed to be strictly causal. In this paper we study the same channel model with a causal rather than a strictly causal relay, i.e. the relay's transmit symbol depends not only on its past but also on its current received symbol. We propose an outer bound for the discrete memoryless channel which is later used to compute an outer bound for the Gaussian channel. We also propose an achievable scheme based on instantaneous amplify-and-forward relaying that meets the outer bound in the very strong interference regime.
When dealing with a large number of devices, the existing indexing solutions for the discovery of IoT sources often fall short of providing adequate scalability. This is due to the high computational complexity and communication overhead required to create and maintain the indices of the IoT sources, particularly when their attributes are dynamic. This paper presents a novel approach for indexing distributed IoT sources and paves the way to design a data discovery service to search for and gain access to their data. The proposed method creates concise references to IoT sources by using Gaussian Mixture Models (GMM). Furthermore, a summary update mechanism is introduced to tackle changes in source availability and mitigate the overhead of updating the indices frequently. The proposed approach is benchmarked against a standard centralized indexing and discovery solution. The results show that the proposed solution reduces the communication overhead required for indexing by three orders of magnitude, while, depending on the IoT network architecture, it may slightly increase the discovery time.
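A much-simplified sketch of likelihood-based discovery over Gaussian summaries follows. The paper uses full multi-component GMMs; here each source is condensed to a single Gaussian, and all names are illustrative:

```python
from statistics import NormalDist, fmean, pstdev

def summarize(readings):
    """Condense a source's recent attribute readings into one Gaussian
    component (a one-component simplification of a GMM summary)."""
    return NormalDist(fmean(readings), pstdev(readings) or 1e-6)

def discover(summaries, query_value, top_k=2):
    """Rank sources by the likelihood of the queried attribute value
    under each source's compact summary, instead of contacting every
    source or shipping raw readings to a central index."""
    scored = sorted(summaries.items(),
                    key=lambda kv: kv[1].pdf(query_value),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]
```

Only a pair of parameters per source crosses the network when a summary is (re)published, which is the mechanism behind the communication-overhead reduction described above.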
We study the cognitive interference channel where an additional node (a relay) is present. In our model the relay's operation is causal rather than strictly causal, i.e., the relay's transmit symbol depends not only on its past but also on its current received symbol. We derive outer bounds for the discrete and Gaussian cases in very strong interference. A scheme for achievability based on instantaneous amplify-and-forward relaying is proposed for this model. The inner and outer bounds coincide for the special case of very strong interference.
Millimeter wave (mmWave) communication is a promising technology for future wireless networks because of its wide bandwidths that can achieve high data rates. However, high beam directionality at the transceiver is needed due to the large path loss at mmWave frequencies. Therefore, in this paper, we investigate the beam alignment and power allocation problem in a non-orthogonal multiple access (NOMA) mmWave system. Different from the traditional beam alignment problem, we consider the NOMA scheme during the beam alignment phase when two users are at the same or a close angular direction from the base station. Next, we formulate an optimization problem of joint beamwidth selection and power allocation to maximize the sum rate, where quality of service (QoS) and total power constraints are imposed. Since it is difficult to directly solve the formulated problem, we start by fixing the beamwidth. Next, we transform the power allocation optimization problem into a convex one, and a closed-form solution is derived. In addition, a one-dimensional search algorithm is used to find the optimal beamwidth. Finally, simulations are conducted to compare the performance of the proposed NOMA-based beam alignment and power allocation scheme with that of the conventional OMA scheme.
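The fix-the-beamwidth-then-allocate-power structure described above can be sketched with a textbook two-user downlink NOMA model rather than the paper's exact system; the antenna-gain and alignment-overhead models below are toy assumptions introduced only to make the 1-D search concrete.

```python
import math

def noma_sum_rate(P, g1, g2, r_min):
    """Closed-form two-user downlink NOMA power split (illustrative):
    give the weak user (gain g2) just enough power to meet its QoS rate
    r_min (bits/s/Hz); the rest goes to the strong user, who applies SIC."""
    # Weak user treats the strong user's signal as noise:
    #   r2 = log2(1 + p2*g2 / (p1*g2 + 1)) >= r_min  ->  solve for p2.
    c = 2 ** r_min - 1
    p2 = c * (P * g2 + 1) / (g2 * (1 + c))
    if p2 > P:
        return None  # QoS infeasible at this beamwidth
    p1 = P - p2
    r1 = math.log2(1 + p1 * g1)                      # strong user, after SIC
    r2 = math.log2(1 + p2 * g2 / (p1 * g2 + 1))      # weak user
    return r1 + r2

def best_beamwidth(P, d1, d2, r_min, widths):
    """1-D search over beamwidth: narrower beams give higher antenna gain
    but cost more alignment time (tau). Both models are toy assumptions."""
    best = (None, -1.0)
    for w in widths:
        gain = 2 * math.pi / w            # idealized sectored-antenna gain
        tau = 0.01 / w                    # alignment overhead grows as w shrinks
        g1, g2 = gain / d1 ** 2, gain / d2 ** 2
        r = noma_sum_rate(P, g1, g2, r_min)
        if r is not None:
            eff = (1 - tau) * r           # rate discounted by alignment time
            if eff > best[1]:
                best = (w, eff)
    return best
```

The inner allocation is closed-form, so the outer loop stays a cheap one-dimensional search, mirroring the solution structure the abstract describes.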
This paper presents an analysis of the performance of an ultra dense network (UDN) with and without cell cooperation from the perspective of network information theory. We propose a UDN performance metric called the Total Average Geometry Throughput, which is independent of the user distribution, scheduler, etc. This performance metric is analyzed in detail for a UDN with and without cooperation. The numerical results from the analysis show that, under the studied system model, the total average geometry throughput reaches its maximum when the inter-cell distance is around 6 to 8 meters, both with and without cell cooperation. Cell cooperation can significantly reduce inter-cell interference but not remove it completely. With cell cooperation and an optimum number of cooperating cells, the maximum performance gain can be achieved. Furthermore, the results also imply that there is an optimum aggregate transmission power when considering the energy cost per bit.
This paper proposes a novel carrier frequency offset (CFO) estimation method for generalized MC-CDMA systems in unknown frequency-selective channels utilizing hidden pilots. It is established that the CFO is identifiable in the frequency domain by employing cyclic statistics (CS) and linear regression (LR) algorithms. We show that the CS-based estimator is capable of mitigating the normalized CFO (NCFO) to a small error value. Then, the LR-based estimator can be employed to offer more accurate estimation by removing the residual quantization error left after the CS-based estimator.
In this paper, we investigate the hybrid precoding design for a joint multicast-unicast millimeter wave (mmWave) system, where simultaneous wireless information and power transfer is considered at the receivers. The subarray-based sparse radio frequency chain structure is considered at the base station (BS). We then formulate a joint hybrid analog/digital precoding and power splitting ratio optimization problem to maximize the energy efficiency of the system, subject to the maximum transmit power at the BS and the minimum harvested energy at the receivers. Due to the difficulty of solving the formulated problem, we first design a codebook-based analog precoding approach, so that only the digital precoding and power splitting ratio need to be jointly optimized. Next, we equivalently transform the fractional objective function of the optimization problem into a subtractive-form one and propose a two-loop iterative algorithm to solve it. For the outer loop, the classic bisection iterative algorithm is applied. For the inner loop, we transform the formulated problem into a convex one by successive convex approximation techniques, which is solved by a proposed iterative algorithm. Finally, simulation results are provided to show the performance of the proposed algorithm.
This article presents a multilayer mobility management scheme for All-IP networks where local mobility movements (micro-mobility) are handled separately from global movements (macro-mobility). Furthermore, a hybrid scheme is proposed to handle macro-mobility (Mobile IP for non-real-time services and SIP for real-time services). The interworking between micro-mobility and macro-mobility is implemented at an entity called the enhanced mobility gateway. Both qualitative and quantitative results have demonstrated that the performance of the proposed mobility management scheme is better than that of existing schemes. Furthermore, a context transfer solution for AAA is proposed to enhance the multilayer mobility management scheme by avoiding the additional delay introduced by AAA security procedures.
To flexibly support diverse communication requirements (e.g., throughput, latency, massive connectivity, etc.) for next generation wireless communications, one viable solution is to divide the system bandwidth into several service subbands, each for a different type of service. In such a multi-service (MS) system, each service has its optimal frame structure while the services are isolated by subband filtering. In this paper, a framework for the multi-service (MS) system is established based on subband filtered multi-carrier (SFMC) modulation. We consider both single-rate (SR) and multi-rate (MR) signal processing as two different MS-SFMC implementations, each with different performance and computational complexity. By comparison, the SR system outperforms the MR system in terms of performance, while the MR system has significantly lower computational complexity than the SR system. Numerical results show the effectiveness of our analysis and the proposed systems. The proposed SR and MR MS-SFMC systems provide guidelines for next generation wireless system frame structure optimization and algorithm design.
Wireless mesh networks with a delay-throughput tradeoff are considered a practical solution for providing community broadband Internet access services. An important aspect of network design lies in the capability to simultaneously support multiple independent mesh connections at the intermediate mobile stations. The intermediate mobile stations act as routers, combining network coding with packet forwarding, a scenario usually known as multiple coding unicasts. The problem of efficient network design for such applications based on multipath network coding with delay control on packet servicing is considered. The simulated solution involves a joint consideration of wireless medium access control (MAC) and network-layer multipath selection. Rather than considering general wireless mesh networks, the focus here is on a relatively small-scale mesh network with multiple sources and multiple sinks, suitable for multihop wireless backhaul applications within the WiMAX standard. © 2012 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering.
This paper investigates a full-duplex wireless-powered two-way communication network, where two hybrid access points (HAPs) and a number of amplify-and-forward (AF) relays all operate in full-duplex mode. We use time switching (TS) and static power splitting (SPS) schemes with two-way full-duplex wireless-powered networks as a benchmark. Then new time-division-duplexing static power splitting (TDD SPS) and full-duplex static power splitting (FDSPS) schemes, as well as a simple relay selection strategy, are proposed to improve the system performance. For TS, SPS and FDSPS, the best relay harvests energy from the RF signal received from the HAPs and uses the harvested energy to transmit to each HAP at the same frequency and time; therefore only partial self-interference (SI) cancellation needs to be considered in the FDSPS case. For the proposed TDD SPS, the best relay harvests energy from the HAP and from its own self-interference. We then derive closed-form expressions for the throughput and outage probability for delay-limited transmissions over Rayleigh fading channels. Simulation results are presented to evaluate the effectiveness of the proposed schemes for different key system parameters, such as the time allocation, power splitting ratio and residual SI.
A statistical model is derived for the equivalent signal-to-noise ratio of the Source-to-Relay-to-Destination (S-R-D) link for Amplify-and-Forward (AF) relaying systems subject to block Rayleigh fading. The probability density function and the cumulative distribution function of the S-R-D link SNR involve modified Bessel functions of the second kind. Using fractional-calculus mathematics, a novel approach is introduced to rewrite these Bessel functions (and the statistical model of the S-R-D link SNR) in series form using simple elementary functions. Moreover, a statistical characterization of the total receive SNR at the destination, corresponding to the S-R-D and S-D link SNRs, is provided for a more general relaying scenario in which the destination receives signals from both the relay and the source and processes them using maximum ratio combining (MRC). Using the novel statistical model for the total receive SNR at the destination, accurate and simple analytical expressions for the outage probability, the bit error probability, and the ergodic capacity are obtained. The analytical results presented in this paper provide a theoretical framework to analyze the performance of AF cooperative systems with an MRC receiver.
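The per-realization quantity being characterized statistically above has a well-known exact form for a variable-gain AF relay; the short sketch below evaluates it together with the MRC-combined total receive SNR (a standard textbook identity, not the paper's series expansion).

```python
def af_equivalent_snr(snr_sr, snr_rd):
    """Exact equivalent SNR of the S-R-D link for a variable-gain AF relay:
    g_eq = g1*g2 / (g1 + g2 + 1)  (standard result; upper-bounded by
    min(g1, g2), the SNR of the weaker hop)."""
    return snr_sr * snr_rd / (snr_sr + snr_rd + 1)

def mrc_total_snr(snr_sd, snr_sr, snr_rd):
    """MRC at the destination adds the direct-link SNR and the equivalent
    relayed-link SNR, giving the total receive SNR analyzed in the paper."""
    return snr_sd + af_equivalent_snr(snr_sr, snr_rd)
```

Averaging outage, BER and capacity over the fading distributions of these SNRs is exactly where the Bessel-function statistics above enter.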
Exploiting path diversity to enhance communication reliability is a key desired property of the Internet. While the existing routing architecture is reluctant to adopt changes, overlay routing has been proposed to circumvent the constraints of native routing by employing intermediary relays. However, selfish inter-domain relay placement may violate local routing policies at intermediary relays and thus affect their economic costs and performance. With the recent advance of the concept of network virtualization, it is envisioned that virtual networks should be provisioned in cooperation with infrastructure providers in a holistic view without compromising their profits. In this paper, the problem of policy-aware virtual relay placement is first studied to investigate the feasibility of provisioning policy-compliant multipath routing via virtual relays for inter-domain communication reliability. By evaluation on a real domain-level Internet topology, it is demonstrated that policy-compliant virtual relaying can achieve a similar protection gain against single link failures compared to its selfish counterpart. It is also shown that the presented heuristic placement strategies perform well in approaching the optimal solution.
Utilizing holography theory, a bidirectional wideband leaky-wave antenna in the millimetre wave (mmW) band is presented. The antenna comprises a printed pattern of continuous metallic strips on an Alumina 99.5% sheet, and a surface wave launcher (SWL) to produce the initial reference waves on the substrate. To achieve a bidirectional radiation pattern, the fundamental TE mode is excited by applying a Vivaldi antenna as the SWL. The proposed holographic leaky-wave antenna (HLWA) is fabricated and tested, and the measured results align with the simulated ones. The antenna has a 22.6% fractional bandwidth with respect to the central frequency of 30 GHz. The interference pattern is designed to generate a bidirectional radiation pattern tilted 15 degrees backward with respect to the normal of the hologram sheet. The frequency scanning property of the designed HLWA is also investigated.
In this work, we provide the first attempt to evaluate the error performance of Rate-Splitting (RS) based transmission strategies with constellation-constrained coding/modulation. The considered scenario is an overloaded multigroup multicast, where RS can mitigate the inter-group interference and thus achieve a better max-min fair group rate than conventional transmission strategies. We bridge the RS-based rate optimization with modulation-coding scheme selection, and implement them in a developed transceiver framework with either a linear or a non-linear receiver, the latter equipped with a generalized sphere decoder. Simulation results of the coded bit error rate demonstrate that, while the conventional strategies suffer from an error floor in the considered scenario, the RS-based strategy delivers superior performance even with low-complexity receiver techniques. The proposed analysis, transceiver framework and evaluation methodology provide a generic baseline solution to validate the effectiveness of RS-based system design in practice. Index Terms—Rate-splitting, overloaded system, multigroup multicast, rank-deficient, generalized sphere decoder, coded bit error rate.
Nowadays, the system architecture of the fifth generation (5G) cellular system is attracting increasing interest. To reach the ambitious 5G targets, a dense base station (BS) deployment paradigm is being considered. In this case, the conventional always-on service approach may not be suitable due to the linear energy/density relationship when the BSs are always kept on. This suggests dynamic on/off BS operation to reduce energy consumption. However, this approach may create coverage holes, and the BS activation delay, in terms of hardware transition latency and software reloading, could result in service disruption. To tackle these issues, we propose a predictive BS activation scheme under the control/data separation architecture (CDSA). The proposed scheme exploits user context information, network parameters, BS sleep depth and measurement databases to send timely predictive activation requests before the connection is switched to the sleeping BS. An analytical model is developed and closed-form expressions are provided for the predictive activation criteria. Analytical and simulation results show that the proposed scheme achieves high BS activation accuracy with low errors w.r.t. the optimum activation time.
We investigate the capacity of an Intelligent Quadrifilar Helix Antenna (IQHA) based multiple-input multiple-output (MIMO) communication system. We show that the IQHA-based MIMO system offers larger capacity than a MIMO system without IQHA. At the same time, it can reduce the number of RF chains following the antenna, thus reducing the total cost. Two sub-optimal algorithms are proposed to adjust the weights of the IQHA to maximize the capacity.
This paper presents a machine learning (ML) based model to predict the diffraction loss around the human body. In practice, it is not feasible to measure the diffraction loss for all possible body rotation angles, builds and line-of-sight (LoS) elevation angles. A diffraction loss variation prediction model based on a non-parametric learning technique called the Gaussian process (GP) is introduced. The analysis shows that 86% correlation and a normalised mean square error (NMSE) of 0.3 on the test data are achieved using only 40% of the measured data. This allows a 60% reduction in the required measurements to obtain a well-fitted ML loss prediction model. It also confirms the model's generalizability to non-measured rotation angles.
In the research community, a new radio access network architecture with a logical separation between the control plane (CP) and data plane (DP) has been proposed for future cellular systems. It aims to overcome limitations of the conventional architecture by providing high-data-rate services under the umbrella of a coverage layer in a dual connection mode. This configuration could provide significant savings in signalling overhead. In particular, mobility robustness with minimal handover (HO) signalling is considered one of the most promising benefits of this architecture. However, DP mobility remains an issue that needs to be investigated. We consider predictive DP HO management as a solution that could minimise the out-of-band signalling related to the HO procedure. Thus we propose a mobility prediction scheme based on Markov chains. The developed model predicts the user's trajectory in terms of a HO sequence in order to minimise the interruption time and the associated signalling when the HO is triggered. Depending on the prediction accuracy, numerical results show that the predictive HO management strategy could significantly reduce the signalling cost compared with the conventional non-predictive mechanism.
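A first-order Markov-chain HO predictor in the spirit described above can be sketched in a few lines: learn empirical transition counts from observed HO sequences and predict the most likely next cell. The class and method names are illustrative, not the paper's.

```python
from collections import defaultdict, Counter

class HandoverPredictor:
    """First-order Markov chain over handover (HO) sequences: learns
    P(next cell | current cell) from history; illustrative sketch only."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, ho_sequence):
        # Count observed cell-to-cell transitions.
        for cur, nxt in zip(ho_sequence, ho_sequence[1:]):
            self.transitions[cur][nxt] += 1

    def predict(self, current_cell):
        counts = self.transitions.get(current_cell)
        if not counts:
            return None  # no history: fall back to non-predictive HO
        return counts.most_common(1)[0][0]
```

A confident prediction lets the network prepare the target cell in advance, which is how interruption time and signalling are reduced.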
In a cognitive radio network, secondary (unlicensed) users (SUs) are allowed to utilize the licensed spectrum when it is not used by the primary (licensed) users (PUs). Because of the dynamic nature of the cognitive radio network, the activities of SUs, such as "how long to sense" and "how long to transmit", significantly affect both the service quality of the cognitive radio network and the protection of PUs. In this work, we formulate and analyze the spectrum utilization efficiency problem in the cognitive radio network with various periodic frame structures of the SU, consisting of sensing and data transmission slots. Energy detection is considered as the spectrum sensing algorithm. To achieve higher spectrum utilization efficiency, the optimal sensing and data transmission lengths are investigated and found numerically. Simulation results are presented to verify our analysis and to evaluate the interference to the PU, which should be kept to a tolerable level. Index Terms—Cognitive radio network; spectrum utilization efficiency; spectrum sensing; energy detection; frame structure.
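The two ingredients above, energy detection and the sensing/transmission trade-off, can be illustrated with a short sketch; the threshold and the utilization model are simplified assumptions, not the paper's formulation.

```python
import random
import statistics

def energy_detect(samples, threshold):
    """Energy detection: declare the PU present when the average sample
    energy exceeds a threshold (illustrative sketch)."""
    return statistics.fmean(s * s for s in samples) > threshold

def utilization(frame_len, sense_len, detect_prob_idle):
    """Fraction of the SU frame usable for data: longer sensing improves
    detection reliability but shrinks the transmission slot, which is the
    trade-off the optimal frame structure balances (toy model)."""
    return (1 - sense_len / frame_len) * detect_prob_idle
```

Sweeping `sense_len` against a detection-probability curve is the numerical search for the optimal frame structure described in the abstract.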
In this paper, we investigate the optimal inter-frequency small cell discovery (ISCD) periodicity for small cells deployed on a carrier frequency other than that of the serving macro cell. We consider that the small cell and user terminal (UT) positions are modeled according to a homogeneous Poisson Point Process (PPP). We utilize polynomial curve fitting to approximate the percentage of time the typical UT misses a small cell offloading opportunity, for a fixed small cell density and fixed UT speed. We then derive analytically the optimal ISCD periodicity that minimizes the average UT energy consumption (EC). Furthermore, we also derive the optimal ISCD periodicity that maximizes the average energy efficiency (EE), i.e. bit-per-joule capacity. Results show that the EC-optimal ISCD periodicity always exceeds the EE-optimal ISCD periodicity, except when the average ergodic rates in both tiers are equal, in which case the optimal ISCD periodicities also become equal.
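The EC-minimizing periodicity search can be illustrated with a toy cost model: frequent scans burn scan energy, infrequent scans increase the chance of missing an offloading opportunity. The linear miss-probability stand-in and all constants below are assumptions replacing the paper's fitted polynomial and PPP analysis.

```python
def optimal_iscd_period(periods, scan_energy, idle_power, miss_penalty):
    """Grid search for the ISCD periodicity T minimizing average power:
    scan_energy/T captures the scanning cost, miss_penalty * p_miss(T)
    the cost of staying on the macro cell. Toy model, illustrative only."""
    def avg_power(T):
        p_miss = min(1.0, 0.1 * T)   # stand-in for the fitted polynomial
        return scan_energy / T + idle_power + miss_penalty * p_miss
    return min(periods, key=avg_power)
```

The convex shape of `avg_power` (decaying scan term plus growing miss term) is what makes a unique optimal periodicity exist in the first place.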
This paper presents empirically-based large-scale propagation path loss models for small cell fifth generation (5G) cellular systems in the millimeter-wave bands, based on practical propagation channel measurements at 26 GHz, 32 GHz, and 39 GHz. To characterize path loss at these frequency bands for 5G small cell scenarios, extensive wideband and directional channel measurements have been performed on the campus of the University of Surrey. Close-in reference (CI) and 3GPP path loss models have been studied, and large-scale fading characteristics have been obtained and presented.
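The close-in (CI) reference-distance model studied above has the standard form PL(d) = FSPL(f, d0) + 10 n log10(d/d0), with the free-space path loss anchored at d0 = 1 m and a single fitted path loss exponent n. A small sketch:

```python
import math

def ci_path_loss_db(freq_ghz, dist_m, n, d0=1.0):
    """Close-in (CI) reference-distance path loss model in dB:
    free-space loss at the d0 anchor plus a 10*n*log10 distance term.
    The exponent n is the single parameter fitted from measurements."""
    c = 299_792_458.0  # speed of light, m/s
    fspl = 20 * math.log10(4 * math.pi * d0 * freq_ghz * 1e9 / c)
    return fspl + 10 * n * math.log10(dist_m / d0)
```

Fitting n per band (26, 32, 39 GHz) and per environment is exactly the kind of characterization the measurement campaign provides.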
In this paper, an orthogonal stochastic gradient descent (O-SGD) based learning approach is proposed to tackle the wireless channel over-training problem inherent in artificial neural network (ANN)-assisted MIMO signal detection. Our basic idea lies in the discovery and exploitation of the training-sample orthogonality between the current training epoch and past training epochs. Unlike the conventional SGD that updates the neural network simply based upon current training samples, O-SGD discovers the correlation between current training samples and historical training data, and then updates the neural network with those uncorrelated components. The network updating occurs only in those identified null subspaces. By such means, the neural network can understand and memorize uncorrelated components between different wireless channels, and thus is more robust to wireless channel variations. This hypothesis is confirmed through our extensive computer simulations as well as performance comparison with the conventional SGD approach.
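The null-space update idea above can be sketched with plain Gram-Schmidt orthogonalization; this is a simplified scalar-gradient version for illustration, not the paper's full ANN training loop, and all names are assumptions.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthogonal_component(x, basis):
    """Gram-Schmidt residual: the part of sample x uncorrelated with the
    (orthonormalized) span of past training samples."""
    r = list(x)
    for b in basis:
        c = dot(r, b)
        r = [ri - c * bi for ri, bi in zip(r, b)]
    return r

def osgd_step(weights, sample, grad, basis, lr=0.1, tol=1e-9):
    """Sketch of O-SGD: update only along the component of the current
    sample lying in the null space of past samples, then extend the stored
    orthonormal basis. 'grad' is a scalar loss gradient here."""
    r = orthogonal_component(sample, basis)
    norm = dot(r, r) ** 0.5
    if norm > tol:                         # sample carries new information
        u = [ri / norm for ri in r]
        basis.append(u)
        weights = [w - lr * grad * ui for w, ui in zip(weights, u)]
    return weights                         # unchanged if sample is in the span
```

A repeated (correlated) sample leaves the weights untouched, which is the mechanism behind the claimed robustness to channel variations.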
Future cellular systems demand higher throughput as an important requirement, along with smaller cell sizes, to characterize the performance of network services. This paper proposes a way to optimize multihop cellular network (MCN) deployment in LTE-A/Mobile WiMAX broadband wireless access systems. A simple way to optimize the MCN is to associate direct and multihop users based on maximum channel quality and allocate the resource blocks dynamically based on traffic load balancing as adjustment variables. The changing traffic demands require dynamic network reconfiguration to maintain proportional fairness in the achieved throughput. A self-optimizing network based on a genetic algorithm (GA) is designed to adaptively resize the cell coverage limit and dynamically allocate resources based on active user demands. A policy control scheme to control resource allocation between direct and multihop users can be either fixed resource allocation (FRA) or dynamic resource allocation (DRA).
This paper proposes a novel unipolar transceiver for visible light communication (VLC) using orthogonal waveforms. The main advantage of our proposed scheme over most existing unipolar schemes in the literature is that the polarity of the real-valued orthogonal frequency division multiplexing (OFDM) sample determines the pulse shape of the continuous-time signal; thus, the unipolar conversion is performed directly in the analog instead of the digital domain. Therefore, our proposed scheme does not require any direct current (DC) biasing or clipping, as is the case with existing schemes in the literature. The bit error rate (BER) performance of our proposed scheme is analytically derived and its accuracy is verified using Matlab simulations. Simulation results also substantiate the potential performance gains of our proposed scheme against state-of-the-art OFDM-based systems in VLC; they indicate that the absence of DC shift and clipping in our scheme supports more reliable communication and outperforms the asymmetrically clipped optical OFDM (ACO-OFDM), DC optical OFDM (DCO-OFDM) and unipolar OFDM (U-OFDM) schemes. For instance, our scheme outperforms ACO-OFDM by at least 3 dB (in terms of signal-to-noise ratio) at a target BER of 10
The Wireless Hybrid Enhanced Mobile Radio Estimators (WHERE) consortium researches radio positioning techniques to improve various aspects of communications systems. In order to provide the benefits of position information available to communications systems, hybrid data fusion (HDF) techniques estimate reliable position information. Within this paper, we first present the scenarios and radio technologies evaluated by the WHERE consortium for wireless positioning. We compare conventional HDF approaches with two novel approaches developed within the framework of WHERE. Yet, HDF may still provide insufficient localization accuracy and reliability. Hence, we will research and develop new cooperative positioning algorithms, which exploit the available communications links among mobile terminals of heterogeneous wireless networks, to further enhance the positioning accuracy and reliability.
Amplify-and-forward (AF) is one of the most popular and simple approaches for transmitting information over a cooperative multi-input multi-output (MIMO) relay channel. In cooperative communication, relays are employed for improving the coverage or enhancing the spectral efficiency, especially for cell-edge users. However, in a multi-cell context, the use of relays will also lead to an increase in the level of interference experienced by cell-edge users of neighboring cells. In this paper, two novel precoding schemes are proposed for mitigating this adverse effect of cooperative communication. They are designed by taking into account the effect of interference coming from neighboring cells, i.e. other-cell interference (OCI), in order to maximize the sum-rate of cell-edge users. Our novel OCI-aware precoding schemes are compared against non-OCI-aware techniques, and results show the large performance gain in terms of sum-rate that our schemes can achieve, especially for large numbers of users and/or antennas in the multi-cell system.
This work aims to handle the joint transmitter and noncoherent receiver optimization for multiuser single-input multiple-output (MU-SIMO) communications through unsupervised deep learning. It is shown that MU-SIMO can be modeled as a deep neural network with three essential layers, which include a partially-connected linear layer for joint multiuser waveform design at the transmitter side, and two nonlinear layers for the noncoherent signal detection. The proposed approach demonstrates remarkable MU-SIMO noncoherent communication performance in Rayleigh fading channels.
Wideband millimeter-wave (mmWave) directional propagation measurements were conducted in the 32 GHz and 39 GHz bands in outdoor line-of-sight (LoS) small cell scenarios. The measurements provide spatial and temporal statistics that will be useful for small-cell outdoor wireless networks in future mmWave bands. Measurements were performed at two outdoor environments and repeated for all polarization combinations. Measurement results show little spread in the angular and delay domains for the LoS scenario. Moreover, the root-mean-squared (RMS) delay spreads at different polarizations show small differences, which can be due to specific scatterers in the channel.
In this paper, a novel unsupervised deep learning approach is proposed to tackle the multiuser frequency synchronization problem inherent in orthogonal frequency-division multiple-access (OFDMA) uplink communications. The key idea lies in the use of a feed-forward deep neural network (FF-DNN) for multiuser interference (MUI) cancellation, taking advantage of its strong classification capability. Basically, the proposed FF-DNN consists of two essential functional layers. One is the carrier-frequency-offset (CFO) classification layer, responsible for identifying the users' CFO range, and the other is the MUI-cancellation layer, responsible for joint multiuser detection (MUD) and frequency synchronization. By such means, the proposed FF-DNN approach showcases remarkable MUI-cancellation performance without the need for multiuser CFO estimation. In addition, we also exhibit an interesting phenomenon that occurs at the CFO-classification stage, where the CFO-classification performance improves exponentially as the number of users increases. This is called the multiuser diversity gain at the CFO-classification stage, which is carefully studied in this paper.
In this paper, we investigate channel estimation for MC-CDMA in the presence of time and frequency synchronization errors. Channel estimation in MC-CDMA requires the transmission of pilot tones, based on which MMSE interpolation or FFT-based interpolation algorithms are applied. Most channel estimation methods in the literature assume perfect synchronization. However, this condition is not guaranteed in practice, and channel estimators are always expected to work as a fine synchronizer with some ability to compensate for synchronization errors. Multicarrier systems are very sensitive to synchronization errors. Uncorrected errors cause inter-carrier interference (ICI) and degrade the performance significantly. In this paper, we analyze the effect of synchronization errors on the performance of pilot-aided channel estimators, and propose low-complexity methods to compensate the residual timing and frequency offsets respectively. We estimate the timing offset in the frequency domain by a single-frequency estimation technique, and iteratively search for the frequency offset based on the interference power at a certain set of subcarriers. Simulation results show that our methods improve the performance of channel estimators considerably under imperfect synchronization conditions.
This paper presents details of wideband directional propagation measurements of millimetre-wave (mmWave) channels in the 26 GHz, 32 GHz, and 39 GHz frequency bands in a typical indoor office environment. More than 14400 power delay profiles (PDPs) were measured across the 26 GHz band, and over 9000 PDPs have been recorded for the 32 GHz and 39 GHz bands at each measurement point. A mmWave wideband channel sounder has been used, in which a signal analyzer and a vector signal generator were employed. Measurements have been conducted for both co- and cross-antenna polarization. The setup provided 2 GHz bandwidth; the mechanically steerable directional horn antenna with 8 degrees beamwidth provides 8 degrees of directional resolution over the azimuth for 32 GHz and 39 GHz, while the 26 GHz measurement setup provides an angular resolution of 5 degrees. The measurements provide the path loss, delay and spatial spread of the channel. Large-scale fading characteristics, RMS delay spread, RMS angular spread, and angular and delay dispersion are presented for the three mmWave bands for the line-of-sight (LoS) scenario.
This paper presents a novel method to estimate the frequency offset between a mobile phone and the infrastructure when the mobile phone initially attaches to the LTE network. The proposed scheme is based on PRACH (Physical Random Access Channel) preambles and can significantly reduce the complexity of preamble detection at the eNodeB side.
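A generic correlation-based frequency-offset estimator over a preamble built from two identical halves (a Moose-style estimator, not the paper's PRACH-specific scheme) can be sketched as follows; the phase of the half-to-half correlation rotates proportionally to the offset.

```python
import cmath
import math

def estimate_cfo(rx, half_len, sample_rate):
    """Frequency-offset estimate from a preamble with two identical halves:
    each sample in the second half equals the matching first-half sample
    rotated by exp(2j*pi*f*half_len/fs), so the correlation phase reveals f.
    Generic sketch; unambiguous only for |f| < fs / (2*half_len)."""
    corr = sum(rx[n + half_len] * rx[n].conjugate() for n in range(half_len))
    return cmath.phase(corr) * sample_rate / (2 * math.pi * half_len)
```

Because only one complex correlation is accumulated, such estimators are cheap, which is the same complexity motivation the abstract gives for its preamble-based detection.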
A clear understanding of mixed-numerology signal multiplexing and isolation in the physical layer is important to enable spectrum-efficient radio access network (RAN) slicing, where the available access resource is divided into slices to cater to services/users with an optimal individual design. In this paper, a RAN slicing framework is proposed and systematically analyzed from a physical layer perspective. According to the baseband and radio frequency (RF) configuration disparities among slices, we categorize four scenarios and elaborate on the numerology relationships of the slice configurations. By considering the most generic scenario, system models are established for both uplink and downlink transmissions. Besides, a low out-of-band emission (OoBE) waveform is implemented in the system for the sake of signal isolation and inter-service/slice-band-interference (ISBI) mitigation. We propose two theorems as the basis of algorithm design in the established system, which generalize the original circular convolution property of the discrete Fourier transform (DFT). Moreover, ISBI cancellation algorithms are proposed based on a collaborative detection scheme, where joint slice signal models are implemented. The framework proposed in this paper establishes a foundation to underpin the extremely diverse use cases in 5G implemented on a common infrastructure.
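The original circular convolution property of the DFT that the two theorems generalize, DFT(x ⊛ h) = DFT(x) · DFT(h) elementwise, can be verified numerically in a few lines (a textbook identity, shown here only for illustration):

```python
import cmath

def dft(x):
    """Direct O(N^2) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def circular_convolve(x, h):
    """Circular (modulo-N) convolution of two equal-length sequences."""
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]
```

Multiplying spectra instead of convolving samples is what makes per-slice frequency-domain processing tractable, hence the property's role in the algorithm design above.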
In this paper, a novel approach, namely real-complex hybrid modulation (RCHM), is proposed to scale up multiuser multiple-input multiple-output (MU-MIMO) detection, with particular concern for the use of equal or approximately equal numbers of service antennas and user terminals. By RCHM, we mean that user terminals transmit their data sequences with a mix of real and complex modulation symbols interleaved in the spatial and temporal domains. It is shown, through the system outage probability, that RCHM can combine the merits of real and complex modulations to achieve the best spatial diversity-multiplexing trade-off that minimizes the required transmit power for a given sum-rate. The signal pattern of RCHM is optimized with respect to the real-to-complex symbol ratio as well as the power allocation. It is also shown that RCHM equips the successive interference canceling MU-MIMO receiver with near-optimal performance and fast convergence in Rayleigh fading channels. This result is validated through our mathematical analysis of the average bit-error-rate as well as extensive computer simulations considering the case with single or multiple base-stations.
Beyond-3G and 4G mobile systems envision heterogeneous infrastructures comprising diverse wireless systems, e.g., 2G, 3G, DVB and WLAN, and various transmission approaches, e.g., “one-to-one” and “one-to-many”. In this context, a network selection (NS) problem emerges: determining the appropriate Access Network (AN), as users are reachable through several different ANs. This paper addresses the issue of provisioning “one-to-many” services over heterogeneous wireless networks in terms of how to choose the AN that satisfies the bandwidth requirement of services while maximizing the system profit obtained in the combined network. A heterogeneous network comprising the Multicast Broadcast Multimedia Service (MBMS) of the third-generation mobile terrestrial network and the digital video broadcasting transmission system for handheld terminals (DVB-H) is adopted in this study. Both networks cooperate and complement each other to improve resource usage and to support “one-to-many” services with their multicast and broadcast transmission capabilities. Based on this architecture, an algorithm framework is defined and proposed to solve the NS problem for “one-to-many” services. Six schemes based on the algorithm framework are then evaluated by simulation.
The paper addresses the TCP performance-enhancing proxy techniques broadly deployed in wireless networks. Drawing on available models for TCP latency, we describe an analytical model for the latency and buffer requirements of the split-TCP mechanism. Although the model's applicability is broad, we present and evaluate it in the context of geostationary satellite networks, where buffering requirements become most pronounced. Simulation results are compared with the analytical model estimates and show that the model captures the impact of the various parameters affecting the dynamics of the component connections traversing the terrestrial and satellite networks.
The Internet-of-Things (IoT) paradigm envisions billions of devices, all connected to the Internet, generating low-rate monitoring and measurement data to be delivered to application servers or end-users. Recently, the possibility of applying in-network data caching techniques to IoT traffic flows has been discussed in research forums. The main challenge, as opposed to the content typically cached at routers (e.g., multimedia files), is that IoT data are transient and therefore require different caching policies. In fact, emerging location-based services can also benefit from new caching techniques that are specifically designed for small transient data. This paper studies in-network caching of transient data at content routers, considering a key temporal data property: the data item lifetime. An analytical model that captures the trade-off between multihop communication costs and data item freshness is proposed. Simulation results demonstrate that caching transient data is a promising information-centric networking technique that can reduce the distance between content requesters and the location in the network where the content is fetched from. To the best of our knowledge, this is a pioneering research work aiming to systematically analyse the feasibility and benefit of using Internet routers to cache transient data generated by IoT applications.
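The lifetime-aware caching idea can be illustrated with a minimal sketch (all names and values here are hypothetical, not the paper's model): a content router stores each data item together with an expiry time derived from its lifetime, and a lookup counts as a hit only while the item is still fresh.

```python
class TransientCache:
    """Toy in-network cache for transient IoT data items.

    Each item carries a lifetime; expired items are evicted on lookup,
    so a cache hit always returns fresh data.
    """
    def __init__(self):
        self._store = {}  # name -> (value, expiry_time)

    def put(self, name, value, lifetime, now):
        self._store[name] = (value, now + lifetime)

    def get(self, name, now):
        entry = self._store.get(name)
        if entry is None:
            return None
        value, expiry = entry
        if now >= expiry:          # data item is no longer fresh
            del self._store[name]  # evict, forcing a fetch from the source
            return None
        return value

cache = TransientCache()
cache.put("sensor/42/temp", 21.5, lifetime=10.0, now=0.0)
hit = cache.get("sensor/42/temp", now=5.0)    # still fresh: served locally
miss = cache.get("sensor/42/temp", now=15.0)  # expired: must travel upstream
```

A hit here stands in for a short (few-hop) retrieval; a miss for the full multihop path to the data source, which is the cost trade-off the paper's analytical model captures.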
This paper investigates a generic downlink symbiotic radio (SR) system, where a Base Station (BS) establishes a direct (primary) link with a receiver having an integrated backscatter device (BD). In order to accurately measure the backscatter link, the backscattered signal packets are designed to have finite block length. As such, the backscatter link in this SR system employs finite block-length channel codes. According to different types of backscatter symbol period and transmission rate, we investigate the non-cooperative and cooperative SR (i.e., NSR and CSR) systems, and derive the average achievable rates of their direct and backscatter links, respectively. We formulate two optimization problems, i.e., transmit power minimization and energy efficiency maximization. Due to the non-convexity of these optimization problems, semidefinite programming (SDP) relaxation and successive convex approximation (SCA) are used to design the transmit beamforming vector. Moreover, a low-complexity transmit beamforming structure is constructed to reduce the computational complexity of the SDP-relaxed solution. Finally, simulation results are presented to validate the proposed schemes.
In this article, we consider the joint subcarrier and power allocation problem for the uplink of an orthogonal frequency division multiple access system, with the objective of weighted sum-rate maximization. Since the resource allocation problem is not convex due to the discrete nature of subcarrier allocation, the complexity of finding the optimal solution is extremely high. We use the optimality conditions for this problem to propose a suboptimal allocation algorithm. A simplified implementation of the proposed algorithm is provided, which significantly reduces the algorithm's complexity. Numerical results show that the presented algorithm outperforms existing algorithms and achieves performance very close to the optimal solution.
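The discrete subcarrier assignment step can be sketched with a common greedy heuristic (this is an illustrative baseline under assumed toy parameters, not the paper's optimality-condition-based algorithm): with power split equally, each subcarrier is simply given to the user with the largest weighted rate on it.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 3, 16                      # users and subcarriers (toy sizes)
w = np.array([1.0, 0.6, 0.4])     # user weights in the sum-rate objective
g = rng.exponential(1.0, (K, N))  # channel power gains (Rayleigh fading)
p = 1.0                           # equal power per subcarrier (assumption)

# Weighted rate of each (user, subcarrier) pair, then greedy assignment:
# each subcarrier goes to whichever user earns the most weighted rate on it.
rates = w[:, None] * np.log2(1.0 + p * g)
assign = rates.argmax(axis=0)                       # user index per subcarrier
weighted_sum_rate = rates[assign, np.arange(N)].sum()
```

Per-subcarrier greedy assignment is optimal for a fixed equal-power split, but jointly optimizing power across subcarriers (as the paper does) can do strictly better.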
Until recently, link adaptation and resource allocation for communication systems relied extensively on spectral efficiency as an optimization criterion. With the emergence of energy efficiency (EE) as a key system design criterion, resource allocation based on EE is becoming of great interest. In this paper, we propose an optimal EE-based resource allocation method for the scalar broadcast channel (BC-S). We introduce our EE framework, which includes an EE metric as well as a realistic power consumption model for the base station, and utilize this framework to formulate our EE-based optimization problem subject to power and fairness constraints. We then prove the convexity of this problem and compare our EE-based resource allocation method against two other methods, i.e., one based on sum-rate and one based on fairness optimization. Results indicate that our method provides a large EE improvement in comparison with the two other methods by significantly reducing the total consumed power. Moreover, they show that near-optimal EE and average fairness can be simultaneously achieved over the BC-S channel. © 2012 IEEE.
A new way to model a CDMA system employing relaying is proposed in this paper. This method makes it possible to directly compare the performance with and without relaying. The outage probability, which represents the ability of the users to reach the base station, is chosen as the comparison criterion. The model is based on the single-mode WCDMA FDD air interface with a two-hop relay. When relaying is applied, the simulation results show that, even with the single-mode FDD, the uplink capacity is significantly improved, by 82%. A new relay node selection strategy is also proposed, and the results show how important it is to choose the relay node appropriately. Finally, different relaying scenarios are simulated to show when it is, or is not, beneficial to apply relaying.
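The outage comparison can be sketched with a Monte Carlo simulation under simple assumed fading statistics (Rayleigh fading, illustrative mean SNRs; this is not the paper's WCDMA system-level simulator): a two-hop decode-and-forward link is in outage when its weaker hop falls below the SNR threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
threshold = 1.0                          # SNR needed to reach the base station

# Rayleigh fading -> exponentially distributed SNR. Each relay hop is
# shorter than the direct path, so we assume a higher mean SNR (4x, toy value).
snr_direct = rng.exponential(1.0, n)
snr_hop1 = rng.exponential(4.0, n)
snr_hop2 = rng.exponential(4.0, n)

# Decode-and-forward: end-to-end SNR is limited by the weaker hop.
snr_relay = np.minimum(snr_hop1, snr_hop2)

p_out_direct = np.mean(snr_direct < threshold)
p_out_relay = np.mean(snr_relay < threshold)
```

Under these assumptions the relayed path has a lower outage probability than the direct one, mirroring the qualitative capacity gain reported in the paper.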
High Speed Downlink Packet Access (HSDPA) is the front-line technology within the 3rd Generation Partnership Project (3GPP) and represents the mid-term evolution of the standard. This paper presents simple equalizer structures based on the Minimum Mean Square Error criterion that are suitable for Adaptive Modulation and Coding (AMC), one of the key features of HSDPA. The equalizer structures have been shown to provide significant gain over the Rake receiver in terms of HSDPA throughput under AMC, by enabling the use of higher CQI (Channel Quality Indicator) indices whilst remaining stable against the changing input signal statistics caused by AMC. The LMMSE equalizer has been found to roughly double the HSDPA throughput in a variety of radio channels with a relatively small increase in complexity. © 2009 IEEE.
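The block LMMSE equalizer can be illustrated with a small numerical sketch (toy BPSK symbols over an assumed three-tap channel, not the HSDPA chip-level setup): the receiver inverts the inter-symbol interference via the regularized filter s_hat = H^T (H H^T + sigma^2 I)^-1 r.

```python
import numpy as np

rng = np.random.default_rng(2)
h = np.array([1.0, 0.9, 0.4])          # toy multipath channel taps (assumed)
L, N = len(h), 64
s = rng.choice([-1.0, 1.0], N)         # BPSK symbols
noise_var = 0.01

# Received block: channel convolution plus AWGN.
r = np.convolve(s, h)[:N] + np.sqrt(noise_var) * rng.standard_normal(N)

# Build the (lower-triangular banded) channel convolution matrix H.
H = np.zeros((N, N))
for i in range(N):
    for j in range(max(0, i - L + 1), i + 1):
        H[i, j] = h[i - j]

# Linear MMSE estimate: s_hat = H^T (H H^T + sigma^2 I)^-1 r.
W = np.linalg.solve(H @ H.T + noise_var * np.eye(N), H)
s_hat = W.T @ r

errors = int(np.sum(np.sign(s_hat) != s))          # after equalization
errors_raw = int(np.sum(np.sign(r) != s))          # slicing r directly
```

Slicing the received signal directly (a crude stand-in for a matched-filter/Rake front end on this ISI channel) leaves many symbol errors; the LMMSE filter removes most of them, which is what enables the higher CQI indices reported in the paper.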
This paper studies adaptive power allocation among sub-carriers in MC-CDMA. Due to the intrinsic nature of MC-CDMA, carrier-based power allocation schemes cause MAI (Multiple Access Interference) enhancement and hence fail at higher system loads. We propose a Band Based Dynamic Link Adaptation (BBDLA) scheme that preserves orthogonality among users by spreading each user's signal only over a band of N adjacent sub-carriers (N < Nsc) lying within the coherence bandwidth (Bc) of the channel. Hence, it allows band-based power allocation without causing any MAI. However, with only N orthogonal users supported on a particular band, BBDLA essentially proposes a hybrid of FDMA with MC-CDMA, where bands and transmit powers are optimally assigned to users by the base station in accordance with their channel state. Optimal band allocation for BBDLA is found to be computationally intractable, hence a sub-optimal heuristic approach is proposed with equal power distribution among all the bands assigned to each user. The effect of Bc on the choice of N is studied, and BBDLA with a suitably chosen N is shown to outperform other published carrier-based power allocation schemes while maintaining almost single-user BER performance up to 62% of full system loading.
It is well known that the values of the IEEE 802.11 MAC parameters directly affect the utilization of the channel capacity and the link-layer throughput, as well as higher-layer performance. This paper first studies the throughput of an ad hoc network under various 802.11 MAC parameters by developing a 3-dimensional Markov chain. Based on this model, it is mathematically proved that the current values of the 802.11 parameters result in dramatic throughput degradation. The optimum values of the 802.11 parameters that lead to the maximum 802.11 MAC throughput are then proposed.
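The intuition behind tuning the MAC parameters can be shown with a much simpler slotted-contention model than the paper's 3-dimensional Markov chain (an illustrative assumption): if each of n stations transmits in a slot with probability tau, a slot carries a successful frame only when exactly one station transmits, and sweeping tau reveals an optimum near 1/n.

```python
import numpy as np

n = 10                                    # contending stations (toy value)
tau = np.linspace(1e-4, 0.5, 5000)        # per-slot transmission probability

# Probability that exactly one of n stations transmits in a slot.
# Too-small tau wastes idle slots; too-large tau wastes slots on collisions.
p_success = n * tau * (1 - tau) ** (n - 1)

tau_star = tau[np.argmax(p_success)]      # analytically, the optimum is 1/n
```

In 802.11 terms, tau is controlled indirectly through parameters such as the contention window, which is why the standard's fixed defaults can sit far from the throughput-optimal operating point when n changes.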
Information-centric networking (ICN) is an emerging networking paradigm that places content identifiers rather than host identifiers at the core of the mechanisms and protocols used to deliver content to end-users. Such a paradigm allows routers enhanced with content-awareness to play a direct role in the routing and resolution of content requests from users, without any knowledge of the specific locations of hosted content. However, to facilitate good network traffic engineering and satisfactory user QoS, content routers need to exchange advanced network knowledge to assist them with their resolution decisions. In order to maintain the location-independence tenet of ICNs, such knowledge (known as context information) needs to be independent of the locations of servers. To this end, we propose CAINE (Context-Aware Information-centric Network Ecosystem), which enables context-based operations to be intrinsically supported by the underlying ICN routing and resolution functions. Our approach has been designed to maintain the location-independence philosophy of ICNs by associating context information directly with content rather than with physical entities such as servers and network elements in the content ecosystem, while ensuring scalability. Through simulation, we show that, based on such location-independent context information, CAINE is able to facilitate traffic engineering in the network while not posing a significant control signalling burden on it.
This paper presents measurement results and analysis for outdoor wireless propagation channels at 26 GHz over a 2 GHz bandwidth for two receiver antenna polarization modes. The angular and wideband properties of directional and virtually omni-directional channels, such as the angular spread, root-mean-square delay spread and coherence bandwidth, are analyzed. The results indicate that reflections can make a significant contribution in some realistic scenarios, increasing the angular and delay spreads and reducing the coherence bandwidth of the channel. The analysis in this paper also shows that using a directional transmission can result in an almost frequency-flat fading channel over the measured 2 GHz bandwidth, which consequently has a major impact on system design choices such as beamforming and transmission numerology.
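The link between delay spread and coherence bandwidth analyzed above can be sketched numerically (the power delay profile below uses made-up values, not the measured 26 GHz data): the RMS delay spread is the power-weighted standard deviation of the path delays, and a common rule of thumb relates it inversely to the coherence bandwidth.

```python
import numpy as np

# Toy power delay profile: path delays (ns) and linear powers (assumed values).
tau = np.array([0.0, 20.0, 45.0, 80.0])
p = np.array([1.0, 0.4, 0.15, 0.05])

def rms_delay_spread(tau, p):
    """Power-weighted RMS spread of a power delay profile."""
    p = p / p.sum()
    mean = np.sum(p * tau)
    return np.sqrt(np.sum(p * (tau - mean) ** 2))

sigma_tau = rms_delay_spread(tau, p)                  # in ns
# Rule of thumb for 50% frequency correlation: Bc ~ 1 / (2*pi*sigma_tau).
bc_mhz = 1.0 / (2 * np.pi * sigma_tau * 1e-9) / 1e6
```

Suppressing the later reflections (e.g., via a directional beam) shrinks sigma_tau and widens Bc, which is exactly why the paper observes near frequency-flat fading under directional transmission.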
This article presents a comprehensive survey of the literature on the self-interference management schemes required to achieve single-frequency full duplex communication in wireless networks. A single-frequency full duplex system, often referred to as an in-band full duplex (FD) system, has emerged as an interesting solution for next-generation mobile networks, where the scarcity of available radio spectrum is an important issue. Although studies on the mitigation of self-interference have been documented in the literature, this is the first holistic attempt to survey not just the various techniques available for handling the self-interference that arises when a full duplex device is enabled, but also the other system impairments that significantly affect self-interference management, in both terrestrial and satellite communication systems. The survey provides a taxonomy of self-interference management schemes and shows, by means of comparisons, the strengths and limitations of the various schemes. It also quantifies the amount of self-interference cancellation required for different access schemes from the 1st generation to the candidate 5th generation of mobile cellular systems. Importantly, the survey summarises the lessons learnt and identifies and presents open research questions and key research areas for the future. This paper is intended to be a guide and take-off point for further work on self-interference management in order to achieve full duplex transmission in mobile networks, including the heterogeneous cellular networks that will undeniably form part of future wireless systems.
A generic cooperative MIMO BICM system is described. Achievable rates are computed based on the extended equivalent binary-input channel model of the original BICM system. Full decode-and-forward is assumed at the relay node. Two types of two-phase transmission/reception protocols are employed to establish orthogonal transmission/reception at the relay node. Achievable rate results are provided for different combinations of modulation orders and numbers of antennas used at the source and relay nodes. The quantitative results provided in this paper can serve as a guide on when to engage cooperative transmission and how to choose proper constellations and puncturing ratios for practical BICM-coded systems. A comparison of the considered BICM system with other possible cooperative coded systems would also be valuable, but is omitted here due to lack of space.
To avoid unnecessarily using a massive number of base station antennas to support a large number of users in spatially multiplexed multi-user MIMO systems, optimal detection methods are required to demultiplex the mutually interfering information streams. Sphere decoding (SD) can achieve this, but its complexity and latency become impractical for large MIMO systems. Low-complexity detection solutions, such as linear detectors (e.g., MMSE) or likelihood ascendant search (LAS) approaches, have significantly lower latency requirements than SD, but their achievable throughput is far from optimal. This work presents the concept of Antipodal detection and decoding, which can deliver very high throughput with practical latency requirements, even in systems where the number of user antennas reaches the number of base station antennas. The Antipodal detector either produces a highly reliable vector solution or does not find a vector solution at all (i.e., it results in an erasure), skipping the heavy processing load related to finding vector solutions that have a very high likelihood of being erroneous. A belief-propagation-based decoder is then proposed that restores these erasures and further corrects the remaining erroneous vector solutions. We show that for 32×32, 64-QAM modulated systems, and for packet error rates below 10%, Antipodal detection and decoding requires 9 dB less transmitted power than systems employing soft MMSE or LAS detection and LDPC decoding with similar complexity requirements. For the same scenario, our Antipodal method achieves practical throughput gains of more than 50% compared to soft MMSE and soft LAS-based methods.
This paper investigates self-backhauling with dual antenna selection at multiple small cell base stations. Both half and full duplex transmissions at the small cell base station are considered. Depending on the instantaneous channel conditions, the full duplex transmission can have higher throughput than the half duplex transmission, but this is not always the case. Closed-form expressions for the average throughput are obtained and validated by simulation results. In all cases, the dual receive and transmit antenna selection significantly improves backhaul and data transmission, making it an attractive solution in practical systems.
This paper describes a distributed, cooperative and real-time rental protocol for DCA operations in a multi-system and multi-cell context for OFDMA systems. A credit-token-based rental protocol using auctioning is proposed in support of dynamic spectrum sharing between cells. The proposed scheme can be tuned adaptively as a function of the context by specifying the credit token usage in the radio etiquette. The application of the rental protocol is illustrated with an ascending-bid auction. The paper also describes two approaches for BS-BS communications in support of the rental protocol. Finally, it describes how the proposed mechanisms contribute to the current approaches followed in the IEEE 802.16h and IEEE 802.22 standards efforts addressing cognitive radio. © 2006 IEEE.
Frequent handovers (HOs) in dense small cell deployment scenarios could lead to a dramatic increase in signalling overhead. This suggests a paradigm shift towards a signalling-conscious cellular architecture with intelligent mobility management. In this direction, a futuristic radio access network with a logical separation between control and data planes has been proposed in the research community. It aims to overcome the limitations of the conventional architecture by providing high data rate services under the umbrella of a coverage layer in a dual connection mode. This approach enables signalling-efficient HO procedures, since the control plane remains unchanged when users move within the footprint of the same umbrella. Considering this configuration, we propose a core-network-efficient radio resource control (RRC) signalling scheme for active-state HO and develop an analytical framework to evaluate its signalling load as a function of network density, user mobility and session characteristics. In addition, we propose an intelligent HO prediction scheme with advance resource preparation in order to minimise the HO signalling latency. Numerical and simulation results show promising gains in terms of reduction in HO latency and signalling load compared with conventional approaches.
Spectrum sharing and the use of highly directional antennas in the mm-wave bands are considered among the key enablers for 5G networks. Conventional interference avoidance techniques like listen-before-talk (LBT) may not be efficient for such coexisting networks. In this paper, we address a coexistence mechanism by means of distributed beam scheduling with minimum cooperation between spectrum-sharing subsystems, without any direct data exchange between them. We extend a “Good Neighbor” (GN) principle, initially developed for decentralized spectrum allocation, to the distributed beam scheduling problem. To do so, we introduce relative performance targets, develop a GN beam scheduling algorithm, and demonstrate its efficiency in terms of the performance/complexity trade-off compared to the conventional selfish (SLF) and recently proposed distributed learning scheduling (DLS) solutions, by means of simulations in highly directional antenna mm-wave scenarios.
Architecture Description Languages enable the formalization of the architecture of systems and the execution of preliminary analyses on them, aiming at the identification and resolution of design problems in the early stages of development. Such problems can be incompatibilities and mismatches in the connections between system components and in the format and type of information exchanged between them. Architecture Description Languages were initially developed to validate the correctness of software architectures; however, their applicability has been extended to cover many diverse areas during the past few years. In this paper, we aim to show how Architecture Description Languages can be applied to, and be a useful tool for, validating the correctness of architectures and configurations of future internet networking environments. We do so by using a recently proposed architectural approach and a recently proposed deployment approach, implemented by means of network virtualization, as case studies.
It is well established that transmitting at full power is the most spectrally efficient power allocation strategy for point-to-point (P2P) multi-input multi-output (MIMO) systems; however, can this strategy be energy efficient as well? In this letter, we derive the most energy-efficient power allocation policy for symmetric P2P MIMO systems by accurately approximating, in closed form, their optimal transmit power when a realistic MIMO power consumption model is considered. In most cases, being energy efficient implies a reduction in the transmit and overall consumed powers at the expense of a lower spectral efficiency.
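Why the EE-optimal power sits below full power can be seen with a simple numerical sketch (illustrative single-antenna model with assumed circuit power and amplifier efficiency, not the letter's closed-form MIMO result): energy efficiency is rate divided by total consumed power, and once the static circuit power is amortized, further power increases buy rate only logarithmically.

```python
import numpy as np

g = 10.0        # channel-gain-to-noise ratio (assumed)
p_max = 10.0    # full transmit power, W (assumed)
Pc = 1.0        # static circuit power, W (assumed)
eta = 0.35      # power amplifier efficiency (assumed)

p = np.linspace(1e-3, p_max, 10_000)
rate = np.log2(1.0 + g * p)          # spectral efficiency, bit/s/Hz
ee = rate / (p / eta + Pc)           # normalized bits per joule

p_star = p[np.argmax(ee)]            # EE-optimal transmit power
```

The EE curve is quasiconcave: its maximizer p_star lies well below p_max, trading some spectral efficiency for a large reduction in consumed power, as the abstract states.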
We investigate the use of fixed-point methods for predicting the performance of multiple TCP flows sharing geostationary satellite links. The problem formulation is general in that it can address both error-free and error-prone links and proxy mechanisms such as split-TCP connections, and can account for asymmetry and different satellite network configurations. We apply the method in the specific context of bandwidth-on-demand (BoD) satellite links. The analytical approximations show good agreement with simulation results, although they tend to be optimistic when the link is not saturated. The main constraint on the method's applicability is the limited availability of analytical models for the MAC-induced packet delay under non-homogeneous load and prioritization mechanisms.
Most of the wireless systems such as the long term evolution (LTE) adopt a pilot symbol-aided channel estimation approach for data detection purposes. In this technique, some of the transmission resources are allocated to common pilot signals which constitute a significant overhead in current standards. This can be traced to the worst-case design approach adopted in these systems where the pilot spacing is chosen based on extreme condition assumptions. This suggests extending the set of the parameters that can be adaptively adjusted to include the pilot density. In this paper, we propose an adaptive pilot pattern scheme that depends on estimating the channel correlation. A new system architecture with a logical separation between control and data planes is considered and orthogonal frequency division multiplexing (OFDM) is chosen as the access technique. Simulation results show that the proposed scheme can provide a significant saving of the LTE pilot overhead with a marginal performance penalty.
In this paper, the capacity of OFDM/OQAM with isotropic orthogonal transform algorithm (IOTA) pulse shaping is evaluated through information-theoretic analysis. In conventional OFDM systems, the insertion of a cyclic prefix (CP) decreases the system's spectral efficiency. As an alternative to OFDM, filter-bank-based multicarrier systems adopt proper pulse shaping with good time and frequency localisation properties to avoid interference and maintain orthogonality among sub-carriers in the real field without the use of a CP. We evaluate the spectral efficiency of OFDM/OQAM systems with IOTA pulse shaping in comparison with conventional OFDM/QAM systems, and our analytical model is further extended in order to gain insights into the effect of utilizing the intrinsic interference on the performance of the system. Furthermore, the spectral efficiency of OFDM/OQAM systems is analyzed when the effect of inter-symbol and inter-carrier interference is considered.
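The CP overhead argument can be quantified with an idealized sketch (LTE-like toy numbers for the FFT size and CP length; intrinsic interference and other impairments analyzed in the paper are ignored here): removing the CP scales the achievable spectral efficiency by (N + Ncp) / N.

```python
import numpy as np

snr_db = np.array([0.0, 10.0, 20.0])
snr = 10 ** (snr_db / 10)
N, Ncp = 1024, 72               # FFT size and CP samples (assumed, LTE-like)

# CP-OFDM spends Ncp of every N + Ncp samples on redundancy.
se_ofdm = (N / (N + Ncp)) * np.log2(1 + snr)
# An idealized CP-free OFDM/OQAM system keeps the full symbol rate.
se_oqam = np.log2(1 + snr)

gain_pct = 100 * (se_oqam / se_ofdm - 1)   # = 100 * Ncp / N at every SNR
```

Under this idealization the gain is a constant rate factor (about 7% here), independent of SNR; the paper's analysis refines this picture by accounting for intrinsic, inter-symbol and inter-carrier interference.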
Energy efficiency has become increasingly important in wireless communications, with significant environmental and financial benefits. This paper studies the achievable capacity region of a single-carrier uplink channel consisting of two transmitters and a single receiver, and uses average energy efficiency contours to find the optimal rate pair for four different targets: maximum energy efficiency; a trade-off between maximum energy efficiency and rate fairness; achieving an energy efficiency target with maximum sum-rate; and achieving an energy efficiency target with fairness. In addition to the transmit power, circuit power is also accounted for, with the maximum transmit power constrained to a fixed value. Simulation results demonstrate the achievability of the optimal energy-efficient rate pair within the capacity region, and illustrate the trade-off between energy efficiency, fairness and maximum sum-rate.