The Internet of Things (IoT) is no longer just a theoretical topic; it has become a reality that affects everyday life in many aspects. Users and devices can now connect to the Internet anywhere, at any time, and interact with each other in revolutionary ways. Mobile Crowdsensing (MCS) is a collaborative demonstration of this interaction. However, MCS introduces a series of challenges in the areas of power management, privacy preservation, and data quality. These may result in a compromised user experience, limited user acceptance, and diminished potential for the field. This study addresses these problems by examining the impact IoT communication protocols have on modern mobile crowdsensing systems. To do so, an end-to-end crowdsensing system is designed to evaluate the most popular IoT protocols. Simulation and off-the-shelf-device-based experiment runs provide insight into the performance of these protocols. Based on the findings, a new sensing approach is introduced, aiming to improve the system's robustness and minimise its energy requirements. Finally, a user-preference-aware crowdsensing algorithm is proposed that determines the most efficient communication protocol based on user input and experiment parameters.
Node clustering has been widely studied in recent years for Wireless Sensor Networks (WSN) as a technique to form a hierarchical structure and prolong network lifetime by reducing the number of packet transmissions. Cluster Heads (CH) are elected in a distributed way among sensors but are often highly overloaded; therefore, re-clustering operations should be performed to share the resource-intensive CH role. Existing protocols involve periodic network-wide re-clustering operations that are performed simultaneously, which requires global time synchronisation. To address this issue, some recent studies have proposed asynchronous node clustering for networks with direct links from CHs to the data sink. However, large-scale WSNs require multihop packet delivery to the sink, since long-range transmissions are costly for sensor nodes. In this paper, we present an asynchronous node clustering protocol designed for multihop WSNs, considering dynamic conditions such as residual node energy levels and unbalanced data traffic loads caused by packet forwarding. Simulation results demonstrate that it is possible to achieve similar levels of lifetime extension by re-clustering a multihop WSN via independently made decisions at CHs, without the time synchronisation required by existing synchronous protocols.
Hot spots in a wireless sensor network emerge as locations under heavy traffic load. Nodes in such areas quickly deplete energy resources, leading to disruption in network services. This problem is common for data collection scenarios in which Cluster Heads (CH) have a heavy burden of gathering and relaying information. The relay load on CHs especially intensifies as the distance to the sink decreases. To balance the traffic load and the energy consumption in the network, the CH role should be rotated among all nodes and the cluster sizes should be carefully determined at different parts of the network. This paper proposes a distributed clustering algorithm, Energy-efficient Clustering (EC), that determines suitable cluster sizes depending on the hop distance to the data sink, while achieving approximate equalization of node lifetimes and reduced energy consumption levels. We additionally propose a simple energy-efficient multihop data collection protocol to evaluate the effectiveness of EC and calculate the end-to-end energy consumption of this protocol; yet EC is suitable for any data collection protocol that focuses on energy conservation. Performance results demonstrate that EC extends network lifetime and achieves energy equalization more effectively than two well-known clustering algorithms, HEED and UCR.
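The hop-distance-dependent cluster sizing described above can be sketched as follows. This is a minimal illustration of the general idea only; the function shape, bounds, and parameter names are assumptions, not EC's actual formula. Clusters near the sink are kept small so their CHs spend less energy on intra-cluster collection, leaving headroom for the heavier relay load they carry.

```python
def cluster_radius(hop_distance, r_min=1.0, r_max=3.0, max_hops=10):
    """Illustrative rule: cluster radius grows with hop distance to the sink.

    Nodes close to the sink relay the most traffic, so they head small
    clusters; far-away nodes head large ones. (Hypothetical parameters.)
    """
    frac = min(hop_distance, max_hops) / max_hops
    return r_min + (r_max - r_min) * frac

# CHs one hop from the sink form the smallest clusters:
print(cluster_radius(1))   # smallest radius, near r_min
print(cluster_radius(10))  # largest radius, r_max
```

Any monotonically increasing mapping from hop distance to cluster size would produce the same qualitative load-balancing effect; the paper's evaluation determines the suitable sizes.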
This paper presents a task allocation-oriented framework to enable efficient in-network processing and cost-effective multi-hop resource sharing for dynamic multi-hop multimedia wireless sensor networks with low node mobility, e.g., pedestrian speeds. The proposed system incorporates a fast task reallocation algorithm to quickly recover from possible network service disruptions, such as node or link failures. An evolutionary self-learning mechanism based on a genetic algorithm continuously adapts the system parameters in order to meet the desired application delay requirements, while also achieving a sufficiently long network lifetime. Since the algorithm runtime incurs considerable time delay while updating task assignments, we introduce an adaptive window size to limit the delay periods and ensure an up-to-date solution based on node mobility patterns and device processing capabilities. To the best of our knowledge, this is the first study that yields multi-objective task allocation in a mobile multi-hop wireless environment under dynamic conditions. Simulations are performed in various settings, and the results show considerable performance improvement in extending network lifetime compared to heuristic mechanisms. Furthermore, the proposed framework provides a noticeable reduction in the frequency of missed application deadlines.
The IEEE 802.15.4 protocol is widely adopted as the MAC sub-layer standard for wireless sensor networks, with two available modes: beacon-enabled and non-beacon-enabled. The non-beacon-enabled mode is simpler and does not require time synchronisation; however, it lacks an explicit energy saving mechanism that is crucial for its deployment on energy-constrained sensors. This paper proposes a distributed sleep mechanism for non-beacon-enabled IEEE 802.15.4 networks which provides energy savings to energy-limited nodes. The proposed mechanism introduces a sleep state that follows each successful packet transmission. Besides energy savings, the mechanism produces a traffic shaping effect that reduces the overall contention in the network, effectively improving packet delivery ratio. Based on traffic arrival rate and the level of network contention, a node can adjust its sleep period to achieve the highest packet delivery ratio. Performance results obtained by ns-3 simulations validate these improvements as compared to the IEEE 802.15.4 standard.
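The sleep-period adjustment idea can be sketched as below, under stated assumptions: the function name and the linear adaptation rule are illustrative, not the paper's mechanism. After each successful transmission a node sleeps; heavier observed contention lengthens the sleep (shaping traffic), while the packet inter-arrival time bounds it so the transmit queue does not build up.

```python
def next_sleep_period(base_sleep, arrival_rate, contention_level, k=0.5):
    """Illustrative rule: scale the post-transmission sleep with contention.

    base_sleep       -- nominal sleep duration in seconds (assumed parameter)
    arrival_rate     -- packets generated per second at this node
    contention_level -- observed contention metric, e.g. mean backoff count
    k                -- sensitivity of sleep length to contention (assumed)
    """
    inter_arrival = 1.0 / arrival_rate
    sleep = base_sleep * (1.0 + k * contention_level)
    # Never sleep past the next expected packet arrival.
    return min(sleep, inter_arrival)
```

The key design point is that sleeping after success is distributed and needs no synchronisation: each node decides locally from quantities it can measure itself.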
The random access (RA) mechanism of Long Term Evolution (LTE) networks is prone to congestion when a large number of devices attempt RA simultaneously, due to the limited set of preambles. If each RA attempt is made by transmitting multiple consecutive preambles (codewords) picked from a subset of preambles, as proposed in earlier work, collision probability can be significantly reduced. Selection of an optimal preamble set size can maximise RA success probability in the presence of a trade-off between codeword ambiguity and code collision probability, depending on load conditions. In light of this finding, this paper provides an adaptive algorithm, called Multipreamble RA, to dynamically determine the preamble set size under different load conditions, using only the minimum necessary uplink resources. This provides high RA success probability, and makes it possible to isolate different network service classes by separating the whole preamble set into subsets, each associated with a different service class; a technique that cannot be applied effectively in LTE due to increased collision probability. This motivates the idea that preamble allocation could be implemented as a virtual network function, called vPreamble, as part of a radio access network (RAN) slice. The parameters of a vPreamble instance can be configured and modified according to the load conditions of the service class it is associated with.
Software-Defined Networking (SDN) is a promising paradigm of computer networks, offering a programmable and centralised network architecture. However, although such a technology supports the ability to dynamically handle network traffic based on real-time and flexible traffic control, SDN-based networks can be vulnerable to dynamic changes of flow control rules, which cause transmission disruption and packet loss in SDN hardware switches. This problem can be critical because the interruption and packet loss in SDN switches can bring additional performance degradation for SDN-controlled traffic flows in the data plane. In this paper, we propose a novel robust flow control mechanism, referred to as Priority-based Flow Control (PFC), for dynamic but disruption-free flow management when it is necessary to change flow control rules on the fly. PFC minimizes the complexity of the flow modification process in SDN switches by temporarily adapting the priority of flow rules, in order to substantially reduce the time spent on control-plane processing at run-time. Measurement results show that PFC is able to successfully prevent transmission disruption and packet loss events caused by traffic path changes, thus offering dynamic and lossless traffic control for SDN switches.
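The priority idea behind PFC can be illustrated with a toy flow-table model (the classes and method names below are a hypothetical sketch, not a real controller API). Deleting an active rule before installing its replacement opens a window in which packets miss the table and are dropped; instead, the replacement is first installed at a higher priority, so matching flips atomically, and only then is the old rule removed.

```python
class Rule:
    def __init__(self, match, actions, priority):
        self.match, self.actions, self.priority = match, actions, priority

class FlowTable:
    """Toy switch model: the highest-priority matching rule wins."""
    def __init__(self):
        self.rules = []

    def install(self, match, actions, priority):
        rule = Rule(match, actions, priority)
        self.rules.append(rule)
        return rule

    def remove(self, rule):
        self.rules.remove(rule)

    def lookup(self, pkt):
        hits = [r for r in self.rules if r.match == pkt]
        return max(hits, key=lambda r: r.priority) if hits else None

def update_flow(table, old_rule, new_actions):
    # Install the replacement one priority level above the old rule:
    # traffic switches to the new actions the moment it is installed.
    new_rule = table.install(old_rule.match, new_actions,
                             old_rule.priority + 1)
    # Removing the old rule is now safe; no packet ever misses the table.
    table.remove(old_rule)
    return new_rule
```

Because a matching rule is present at every instant of the update, the disruption window of a naive delete-then-add sequence disappears.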
Establishing wireless networks in urban areas that can provide ubiquitous Internet access to end-users is a central part of the efforts towards defining the Internet of the future. In recent years, Wireless Mesh Network (WMN) backbone infrastructures have been proposed as a cost-effective technology to provide city-wide Internet access. Studies that experimentally evaluate the performance of city-wide mesh network deployments provide essential information on the various challenges of building them. In this survey, we focus in particular on such studies and provide brief conclusions on the problems, benefits, and future research directions of city-wide WMNs.
The parameters of the Physical (PHY) layer radio frame for 5th Generation (5G) mobile cellular systems are expected to be flexibly configured to cope with the diverse requirements of different scenarios and services. This paper presents a frame structure and design specifically targeting Internet of Things (IoT) provision in 5G wireless communication systems. We design a suitable radio numerology to support the typical characteristics, that is, massive connection density and small, bursty packet transmissions, under the constraint of low-cost and low-complexity operation of IoT devices. We also elaborate on the design of parameters for the Random Access Channel (RACH), enabling massive connection requests by IoT devices to support the required connection density. The proposed design is validated by link-level simulation results showing that the proposed numerology can cope with transceiver imperfections and channel impairments. Furthermore, results are presented on the impact of different guard band values on system performance using different subcarrier spacing sizes for data and random access channels, which show the effectiveness of the selected waveform and guard bandwidth. Finally, we present system-level simulation results that validate the proposed design under realistic cell deployments and inter-cell interference conditions.
Energy consumption of sensor nodes is a key factor affecting the lifetime of wireless sensor networks (WSNs). Prolonging network lifetime not only requires energy-efficient operation, but also even dissipation of energy among sensor nodes. On the other hand, spatial and temporal variations in sensor activities create energy imbalance across the network. Therefore, routing algorithms should make an appropriate trade-off between energy efficiency and energy consumption balancing to extend the network lifetime. In this paper, we propose a Distributed Energy-aware Fuzzy Logic based routing algorithm (DEFL) that simultaneously addresses energy efficiency and energy balancing. Our design captures network status through appropriate energy metrics and maps them into corresponding cost values for the shortest path calculation. We adopt a fuzzy logic approach for this mapping to incorporate human reasoning. We compare the network lifetime performance of DEFL with other popular solutions, including MTE, MDR, and FA. Simulation results demonstrate that the network lifetime achieved by DEFL exceeds the best of all tested solutions under various traffic load conditions. We further numerically compute the upper-bound performance and show that DEFL performs near the upper bound.
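The metric-to-cost mapping can be sketched with simple fuzzy-style rules. The membership functions and the rule base below are illustrative assumptions, not DEFL's actual design: a node with low residual energy or a high drain rate receives a high forwarding cost, steering shortest-path routing around it.

```python
def low(x):
    """Membership of a metric (normalised to [0, 1]) in the 'low' set."""
    return max(0.0, 1.0 - 2.0 * x)

def high(x):
    """Membership of a metric (normalised to [0, 1]) in the 'high' set."""
    return max(0.0, 2.0 * x - 1.0)

def link_cost(residual_energy, drain_rate):
    """Illustrative rule base: penalise nodes that are depleted (energy
    efficiency) or heavily used (energy balancing); defuzzify to a crisp
    cost in [1, 10] for shortest-path calculation."""
    penalty = max(low(residual_energy), high(drain_rate))
    return 1.0 + 9.0 * penalty
```

A fresh, idle node gets the minimum cost of 1, while a depleted or overworked node approaches the maximum of 10, so Dijkstra-style routing naturally trades path length against node health.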
Network coding has been proposed as a technique that can potentially increase the transport capacity of a wireless network via mixing data packets at intermediate routers. However, most previous studies either assume a fixed transmission rate or do not consider the impact of using diverse rates on the network coding gain. Since in many cases network coding implicitly relies on overhearing, the choice of the transmission rate has a big impact on the achievable gains. The use of higher rates works in favor of increasing the native throughput. However, it may in many cases work against effective overhearing. In other words, there is a tension between the achievable network coding gain and the inherent rate gain possible on a link. In this paper, our goal is to drive the network toward achieving the best tradeoff between these two contradictory effects. We design a distributed framework that: 1) facilitates the choice of the best rate on each link while considering the need for overhearing; and 2) dictates the choice of which decoding recipient will acknowledge the reception of an encoded packet. We demonstrate that both of these features contribute significantly toward gains in throughput. We extensively simulate our framework in a variety of topological settings. We also fully implement it on real hardware and demonstrate its applicability and performance gains via proof-of-concept experiments on our wireless testbed. We show that our framework yields throughput gains of up to 390% as compared to what is achieved in a rate-unaware network coding framework.
Data collection is a fundamental task of Wireless Sensor Networks (WSN) to support a variety of applications, such as remote monitoring and emergency response, where collected information is relayed to an infrastructure network via packet gateways for processing and decision making. In large-scale monitoring scenarios, data packets need to be relayed over multi-hop paths to the gateways, and sensors are often randomly deployed, causing local node density differences. As a result, imbalance in the data traffic load on the gateways is likely to occur. Furthermore, due to dynamic network conditions and differences in sensor data generation rates, congestion on some data paths is also often experienced. Numerous studies have focused on the problem of in-network traffic load balancing, while a few works have aimed at equalizing the loads on gateways. However, there is a potential trade-off between these two problems. In this paper, the dual objective of gateway and in-network load balancing is addressed and the RALB (Reactive and Adaptive Load Balancing) algorithm is presented. RALB is proposed as a generic solution for multi-hop networks and mesh topologies, especially in large-scale remote monitoring scenarios, to balance traffic loads.
In this letter, we analyse the trade-off between collision probability and codeword ambiguity when devices transmit a sequence of preambles as a codeword, instead of a single preamble, to reduce the collision probability during random access to a mobile network. We point out that the network may not have sufficient resources to allocate to every possible codeword, and if it does, this results in low utilisation of the allocated uplink resources. We derive the optimal preamble set size that maximises the probability of success in a single attempt, for a given number of devices and uplink resources.
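The trade-off can be explored numerically with a deliberately simplified model; this is not the letter's exact analysis, and the success model is an assumption. Each of N devices picks one of the m**L codewords uniformly at random; a device succeeds if its codeword is unique and the network can grant it one of R available uplink resources. A larger preamble set size m lowers collision probability, but spreads the R grants over more possible codewords.

```python
def success_prob(m, L, N, R):
    """Simplified single-attempt success probability (assumed model).

    m -- preamble set size; L -- codeword length (preambles per attempt)
    N -- number of contending devices; R -- available uplink resources
    """
    codewords = m ** L
    p_unique = (1.0 - 1.0 / codewords) ** (N - 1)   # no other device collides
    p_granted = min(1.0, R / codewords)             # a grant covers our codeword
    return p_unique * p_granted

def best_preamble_set_size(L, N, R, m_max=64):
    """Exhaustive search for the m that maximises the success probability."""
    return max(range(1, m_max + 1), key=lambda m: success_prob(m, L, N, R))
```

Plotting `success_prob` over m for fixed L, N, and R exhibits the interior optimum described above: too few preambles cause collisions, too many waste the limited uplink grants.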
The Internet-of-Things (IoT) paradigm envisions billions of devices all connected to the Internet, generating low-rate monitoring and measurement data to be delivered to application servers or end-users. Recently, the possibility of applying in-network data caching techniques to IoT traffic flows has been discussed in research forums. The main challenge, as opposed to typically cached content at routers (e.g. multimedia files), is that IoT data are transient and therefore require different caching policies. In fact, the emerging location-based services can also benefit from new caching techniques that are specifically designed for small transient data. This paper studies in-network caching of transient data at content routers, considering a key temporal data property: data item lifetime. An analytical model that captures the trade-off between multihop communication costs and data item freshness is proposed. Simulation results demonstrate that caching transient data is a promising information-centric networking technique that can reduce the distance between content requesters and the location in the network where the content is fetched from. To the best of our knowledge, this is a pioneering research work aiming to systematically analyse the feasibility and benefit of using Internet routers to cache transient data generated by IoT applications.
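A freshness-aware cache decision of the kind modelled above can be sketched as follows. The threshold rule is an illustrative assumption, not the paper's analytical model: a cached IoT item is served only while its remaining lifetime still makes it cheaper, in hop terms, than fetching a fresh copy from the source.

```python
import time

def serve_from_cache(cached_at, lifetime, hops_to_cache, hops_to_source,
                     now=None):
    """Illustrative rule: trade residual freshness against hop savings.

    cached_at      -- timestamp when the item entered the router's cache
    lifetime       -- validity period of the data item, in seconds
    hops_to_cache  -- distance from requester to the caching router
    hops_to_source -- distance from requester to the original producer
    """
    now = time.time() if now is None else now
    age = now - cached_at
    if age >= lifetime:
        return False                     # item expired: must refetch
    residual = 1.0 - age / lifetime      # remaining freshness in (0, 1]
    # Serve from cache only while the hop savings outweigh the staleness.
    return hops_to_cache < residual * hops_to_source
```

As the item ages, the rule progressively demands a larger hop saving to justify serving stale data, capturing the same cost-versus-freshness tension the analytical model formalises.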