Chang Ge, Ning Wang, Ioannis Selinis, Joe Cahill, Mark Kavanagh, Konstantinos Liolis, Christos Politis, Jose Nunes, Barry Evans, Yogaratnam Rahulan, Nivedita Nouvel, Mael Boutin, Jeremy Desmauts, Fabrice Arnal, Simon Watts, Georgia Poziopoulou QoE-Assured Live Streaming via Satellite Backhaul in 5G Networks, In: IEEE Transactions on Broadcasting
Satellite communication has recently been included as one of the key enabling technologies for 5G backhauling, especially for the delivery of bandwidth-demanding enhanced mobile broadband (eMBB) applications. In this paper, we present a 5G-oriented network architecture, investigated in the EU 5GPPP Phase-2 SaT5G project, that is based on satellite communications and multi-access edge computing (MEC) to support eMBB applications. We specifically focus on using the proposed architecture to assure the Quality of Experience (QoE) of HTTP-based live streaming users by leveraging satellite links, where the main strategy is to realise transient holding and localization of HTTP-based (e.g., MPEG-DASH or HTTP Live Streaming) video segments at the 5G mobile edge while taking into account the characteristics of the satellite backhaul link. For the first time in the literature, we carried out experiments and systematically evaluated the performance of live 4K video streaming over a 5G core network supported by a live geostationary satellite backhaul, validating the architecture's capability of assuring live streaming users' QoE under challenging satellite network scenarios.
The launch of the StarLink Project has recently stimulated a new wave of research on integrating Low Earth Orbit (LEO) satellite networks with the terrestrial Internet infrastructure. In this context, one distinct technical challenge to be tackled is the frequent topology change caused by the constellation behaviour of LEO satellites. Frequent change of the peering IP connection between the space and terrestrial Autonomous Systems (ASes) inevitably disrupts Border Gateway Protocol (BGP) routing stability at the network boundaries, which can be further propagated into the internal routing infrastructures within ASes. To tackle this problem, we introduce the Geosynchronous Network Grid Addressing (GNGA) scheme, which decouples IP addresses from physical network elements such as a LEO satellite. Specifically, according to the density of LEO satellites on the orbits, the IP addresses are allocated to a number of stationary "grids" in the sky and dynamically bound to the interfaces of the specific satellites moving into the grids over time. Such a scheme allows a static peering connection between a terrestrial BGP speaker and a fixed external BGP (e-BGP) peer in space, and hence is able to circumvent the exposure of routing disruptions to the legacy terrestrial ASes. This work-in-progress specifically addresses a number of fundamental technical issues pertaining to the design of the GNGA scheme.
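The core of the grid-addressing idea can be pictured with a short sketch. The grid geometry, cell size and address pool below are illustrative assumptions, not the scheme's actual parameters: the sky is tiled into stationary cells, each cell owns a fixed IP address, and whichever satellite currently occupies a cell answers on that address, so the terrestrial e-BGP speaker always sees the same neighbour.

```python
import ipaddress

# Illustrative sketch (not the paper's exact scheme): the sky is divided into
# stationary grid cells; each cell owns a fixed IP address, and the satellite
# currently occupying the cell is bound to that address.

GRID_ROWS, GRID_COLS = 18, 36            # 10-degree latitude x 10-degree longitude cells
BASE = ipaddress.ip_address("10.0.0.0")  # hypothetical address pool for the grid

def grid_cell(lat_deg: float, lon_deg: float) -> int:
    """Map a sub-satellite point to a stationary grid-cell index."""
    row = min(int((lat_deg + 90) // 10), GRID_ROWS - 1)
    col = min(int((lon_deg + 180) // 10), GRID_COLS - 1)
    return row * GRID_COLS + col

def grid_address(lat_deg: float, lon_deg: float) -> ipaddress.IPv4Address:
    """The address a satellite inherits while it occupies the cell."""
    return BASE + grid_cell(lat_deg, lon_deg)

# Two satellites passing through the same cell inherit the same peering
# address, so the handover is invisible to the terrestrial BGP speaker.
assert grid_address(51.5, -0.1) == grid_address(52.0, -3.0)
```

The point of the sketch is only the decoupling: the address is a function of a fixed region of sky, not of any particular satellite interface.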
An initial analysis of 5G has shown that it is a radical departure from the generational trend: in particular, headline rates and capacities that are 10x and 100x greater than the improvements attained with previous, more evolutionary, upgrades. Achieving these metrics will require extreme densification of the network, given the spectrum that is available for 5G. A compelling case is made that this densification will cause costs to balloon.
To assess these costs, a techno-economic analysis of the 5G eMBB (enhanced Mobile BroadBand) scenario in dense urban areas has been carried out by radio capacity modelling of probable 5G technologies within a 1 km² grid representing central London. Networks of different densities were modelled at 700 MHz (macro network), 3.5 GHz (micro network) and 24-27.5 GHz (hot spots), together with 802.11ac access points. Using published data on network costs, various deployment options have been evaluated for capacity, headline rate and CAPEX/OPEX.
It has been shown that reaching headline rates of 64-100 Mbps everywhere is possible with a number of different technology options. Massive increases in capacity (in excess of 100 Gbps/km²), however, can only realistically be achieved with millimetre-wave (outdoor) and internal base stations. The cost of deploying such capacity, however, will be several times that of LTE: we estimate a 4 to 5 times increase in costs for a 100 Mbps everywhere network that has a 100x capacity increase over existing LTE networks.
One possible way of reducing the costs of 5G and increasing capacity is to place femto or distributed base stations within buildings: we have demonstrated 3 Tbps/km² of capacity with 5,800 femto cells per km² for a neutral-hosted solution. However, there is a substantial up-front cost to utilizing internal base stations: fibre backhaul and internal fibre need to be installed. This initial cost is identified as a significant barrier.
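The back-of-envelope arithmetic behind such area-capacity figures is straightforward; the sketch below uses purely illustrative numbers (not the study's actual inputs) to show how a dense indoor femto layer reaches the Tbps/km² range while a sparse macro layer does not.

```python
# Area capacity model (all numbers are illustrative assumptions):
#   capacity per km^2 = sites per km^2 x bandwidth x spectral efficiency

def area_capacity_bps_per_km2(sites_per_km2, bandwidth_hz, spec_eff_bps_hz):
    return sites_per_km2 * bandwidth_hz * spec_eff_bps_hz

# Hypothetical macro layer at 3.5 GHz: 10 sites/km^2, 100 MHz, 5 bit/s/Hz
macro = area_capacity_bps_per_km2(10, 100e6, 5)      # 5 Gbps/km^2
# Hypothetical indoor femto layer: 5,800 cells/km^2, 100 MHz, 5 bit/s/Hz
femto = area_capacity_bps_per_km2(5800, 100e6, 5)    # 2.9 Tbps/km^2

assert macro == 5e9
assert femto == 2.9e12
```

With these assumed parameters the femto layer delivers several hundred times the macro layer's area capacity, which is why densification dominates the capacity results even as it drives the fibre install cost.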
Web-based content is a dominant application type in mobile networks, but accessing such content suffers from poor downloading latency. In modern mobile networks, accelerating web content downloading faces three distinctive challenges. First, web content has entered a rich-media era, with an explosion in content size and an evolution of content structure that not only require increased network resources but also incur noticeable computation latency. Second, unavoidable network uncertainties such as RTT variation and random loss aggravate the degraded downloading time, even though networks already offer augmented resources such as high bandwidth and low packet loss and latency. Third, newly standardised protocols such as HTTP/2 and QUIC are expected to provide optimised resource utilisation, but the existing understanding of how such protocols behave on web content is still superficial. Recognising these intertwined technical aspects, we examined three web downloading scenarios, established how these aspects qualitatively affect downloading time, and then proposed optimisation intelligence accordingly. First, we focused on the fixed single connection number of HTTP/2, which cannot adapt to varying content sizes and network conditions. By clarifying the numerical relationship between content size, network condition and connection number, we proposed a context-aware mobile edge hint framework. In this framework, a mobile edge hint server collects, offline, the metadata of popular webpages as well as the network condition, and provides online hints of this information upon receiving a user request. The user can then execute a novel algorithm to select an optimal connection number, informed of the specific network condition and content characteristics through the edge hint. Both numerical and testbed-based results validate that this framework brings a noticeable acceleration of webpage downloading.
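The intuition behind content- and network-aware connection selection can be illustrated with a toy model (this is an illustrative sketch, not the paper's algorithm): splitting a page over n connections multiplies the aggregate initial congestion window, so a large page finishes slow start in fewer RTTs, while every extra connection adds setup and contention overhead.

```python
import math

# Toy model: time = handshake RTT + slow-start rounds x RTT + per-connection
# overhead. IW follows the common initial-window default of 10 segments
# (RFC 6928); the overhead constant is an illustrative assumption.

MSS = 1460   # bytes per segment
IW = 10      # initial congestion window per connection, in segments

def download_time(content_bytes, rtt_s, n_conns, per_conn_overhead_s=0.01):
    segments = math.ceil(content_bytes / MSS)
    # slow start: aggregate window n*IW doubles each RTT until all data is sent
    sent, rounds, window = 0, 0, n_conns * IW
    while sent < segments:
        sent += window
        window *= 2
        rounds += 1
    return rtt_s * (1 + rounds) + (n_conns - 1) * per_conn_overhead_s

def pick_connections(content_bytes, rtt_s, max_conns=8):
    return min(range(1, max_conns + 1),
               key=lambda n: download_time(content_bytes, rtt_s, n))

# Larger content on the same path justifies more parallel connections.
assert pick_connections(5_000_000, 0.2) > pick_connections(20_000, 0.2)
```

Even this crude model reproduces the qualitative result the abstract relies on: the best connection count depends jointly on content size and network condition, so no fixed number is optimal.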
Second, we turned our attention to the computation latency caused by unavoidable computation tasks during webpage downloading. We seek a transport-layer approach, since pure application-layer approaches are recognised to have practicality and security limitations. To this end, a non-URL-based mobile edge computing framework is proposed to serve a novel transport-layer initial window (IW) selection algorithm at the client side. This framework is validated to deliver remarkable performance improvement when computation latency occupies less than 50% of total downloading time. Third, we investigated QUIC's performance on web content, especially in the presence of network uncertainties. Evaluation results obtained on real mobile networks reveal that the different congestion control algorithms plugged into QUIC can exhibit distinctive shortcomings under network fluctuations. We then proposed an mQUIC scheme which performs a customised state and congestion window synchronisation algorithm based on multiple coordinated connections. We conducted extensive evaluations of mQUIC, and the results substantiated that faster and more robust downloading can be achieved by mQUIC compared to plain QUIC-enabled content.
Fast reroute (FRR) techniques have been designed and standardised in recent years for supporting sub-50-millisecond failure recovery in operational ISP networks. On the other hand, if the provisioning of FRR protection paths does not take into account traffic engineering (TE) requirements, customer traffic may still get disrupted due to post-failure traffic congestion. Such a situation could be more severe in operational networks with highly dynamic traffic patterns. In this paper we propose a distributed technique that enables adaptive control of FRR protection paths against dynamic traffic conditions, resulting in self-optimisation in addition to the self-healing capability. Our approach is based on the Loop-free Alternates (LFA) mechanism that allows non-deterministic provisioning of protection paths. The idea is for repairing routers to periodically re-compute LFA alternative next-hops using a lightweight algorithm for achieving and maintaining optimised post-failure traffic distribution in dynamic network environments. Our experiments based on a real operational network topology and traffic traces across 24 hours have shown that such an approach is able to significantly enhance relevant network performance compared to both TE-agnostic and static TE-aware FRR solutions.
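The non-determinism the scheme exploits comes from the LFA loop-free condition (RFC 5286): a neighbour N of router S is a valid alternate next-hop towards destination D iff dist(N, D) < dist(N, S) + dist(S, D), i.e. N's shortest path to D does not loop back through S. Several neighbours can satisfy it at once, and TE-awareness then amounts to choosing among them. The sketch below checks the condition on a toy topology; the TE-based selection among valid alternates is the paper's contribution and is not reproduced here.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src over a weighted adjacency dict."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def loop_free_alternates(graph, s, d):
    """Neighbours of s satisfying dist(N,D) < dist(N,S) + dist(S,D)."""
    dist_from = {n: dijkstra(graph, n) for n in graph}
    return [n for n in graph[s]
            if dist_from[n][d] < dist_from[n][s] + dist_from[s][d]]

# Toy topology: A is a loop-free alternate for S->D; B is not, because
# B's shortest path to D runs back through S. (D, the primary next hop,
# trivially satisfies the condition.)
graph = {
    "S": {"A": 1, "B": 1, "D": 1},
    "A": {"S": 1, "D": 1},
    "B": {"S": 1, "D": 3},
    "D": {"S": 1, "A": 1, "B": 3},
}
assert set(loop_free_alternates(graph, "S", "D")) == {"A", "D"}
```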
With the fast development of the Internet, the size of the Forwarding Information Base (FIB) maintained at backbone routers is experiencing exponential growth, making the storage and lookup of FIBs a severe challenge. One effective way to address this challenge is FIB compression, and various solutions have been proposed in the literature. The main shortcoming of FIB compression is the overhead of updating the compressed FIB when routing update messages arrive. Only when the worst-case update time of a FIB compression algorithm is tightly bounded can packet loss incurred by FIB compression operations during updates be completely avoided. However, no prior FIB compression algorithm achieves a tightly bounded worst-case update time, and hence a mature solution that completely avoids packet loss has yet to be identified. To address this issue, we propose the Unite and Split (US) compression algorithm, which enables fast updates with a controlled worst-case update time. Further, we use the US algorithm to improve the performance of a number of classic software and hardware lookup algorithms. Simulation results show that the average update speed of the US algorithm is slightly faster than that of the uncompressed binary trie, whereas prior compression algorithms inevitably degrade update performance severely. After applying the US algorithm, the evaluated lookup algorithms exhibit significantly smaller on-chip memory consumption with little additional update overhead.
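A minimal illustration of the basic compression step such algorithms build on (this is the generic sibling-merge idea, not the US algorithm itself): two sibling prefixes that share a next hop can be united into their shorter parent prefix, shrinking the FIB without changing any forwarding decision.

```python
# Illustrative sibling-merge FIB compression (not the US algorithm).
# A FIB entry is keyed by (prefix_bits, prefix_length) -> next_hop.

def unite_siblings(fib):
    """Repeatedly merge sibling prefixes that share a next hop."""
    changed = True
    while changed:
        changed = False
        for (bits, length), nh in list(fib.items()):
            if length == 0:
                continue
            sibling = (bits ^ 1, length)           # flip the last prefix bit
            if fib.get(sibling) == nh:
                del fib[(bits, length)]
                del fib[sibling]
                fib[(bits >> 1, length - 1)] = nh  # unite into the parent
                changed = True
                break
    return fib

fib = {
    (0b10, 2): "A",   # 10/2 -> A
    (0b11, 2): "A",   # 11/2 -> A  (same next hop: unite into 1/1)
    (0b01, 2): "B",
}
assert unite_siblings(fib) == {(0b1, 1): "A", (0b01, 2): "B"}
```

The update-cost problem the abstract highlights is visible even here: a single route change can split a united entry back apart and cascade, which is why bounding the worst-case update time is the hard part.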
The vehicular cloud is a promising new paradigm in which vehicular networking and mobile cloud computing are elaborately integrated to enhance the quality of vehicular information services. Pseudonyms are a resource that vehicles use to protect their location privacy, and they should be efficiently utilized to secure vehicular clouds. However, few existing pseudonym system architectures take flexibility and efficiency into consideration, leading to potential threats to location privacy. In this paper, we exploit software-defined networking technology to significantly extend the flexibility and programmability of pseudonym management in vehicular clouds. We propose a software-defined pseudonym system in which distributed pseudonym pools are promptly scheduled and elastically managed in a hierarchical manner. In order to decrease the system overhead due to the cost of inter-pool communications, we leverage two-sided matching theory to formulate and solve the pseudonym resource scheduling problem. We conducted extensive simulations based on a real map of San Francisco. Numerical results indicate that the proposed software-defined pseudonym system significantly improves pseudonym resource utilization and, meanwhile, effectively enhances vehicles' location privacy by raising their entropy.
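Two-sided matching can be sketched with the classic deferred-acceptance (Gale-Shapley) procedure; the region and pool names and preference lists below are hypothetical, and the paper's actual formulation may differ: regions short of pseudonyms "propose" to pools, and each pool provisionally keeps its most preferred proposer.

```python
# Deferred acceptance (Gale-Shapley) on hypothetical preference lists.

def gale_shapley(region_prefs, pool_prefs):
    # rank[pool][region] = position of region in pool's preference list
    rank = {p: {r: i for i, r in enumerate(prefs)}
            for p, prefs in pool_prefs.items()}
    free = list(region_prefs)        # regions not yet matched
    next_idx = {r: 0 for r in region_prefs}
    match = {}                       # pool -> region
    while free:
        r = free.pop()
        p = region_prefs[r][next_idx[r]]   # r's next most-preferred pool
        next_idx[r] += 1
        if p not in match:
            match[p] = r                   # pool was free: accept
        elif rank[p][r] < rank[p][match[p]]:
            free.append(match[p])          # pool trades up; old region re-enters
            match[p] = r
        else:
            free.append(r)                 # rejected; r proposes elsewhere
    return match

region_prefs = {"downtown": ["pool1", "pool2"], "suburb": ["pool1", "pool2"]}
pool_prefs = {"pool1": ["downtown", "suburb"], "pool2": ["downtown", "suburb"]}
assert gale_shapley(region_prefs, pool_prefs) == {"pool1": "downtown",
                                                  "pool2": "suburb"}
```

The resulting matching is stable: no region-pool pair would both prefer each other over their assigned partners, which is the property that keeps inter-pool renegotiation (and its communication overhead) from recurring.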
Spectrum sensing is one of the key technologies for realizing dynamic spectrum access in cognitive radio (CR). In this paper, a novel database-augmented spectrum sensing algorithm is proposed for secondary access to the TV White Space (TVWS) spectrum. The proposed algorithm builds on an existing geo-location database approach for detecting incumbents such as Digital Terrestrial Television (DTT) and Programme Making and Special Events (PMSE) users, but combines it with spectrum sensing to further improve the protection of these primary users (PUs). A closed-form expression for the secondary users' (SUs) spectral efficiency under opportunistic access to TVWS is also derived. By implementing a previously developed power-control-based geo-location database and an adaptive spectrum sensing algorithm, the proposed database-augmented sensing algorithm demonstrates better spectrum efficiency for SUs, and better protection for incumbent PUs, than the existing stand-alone geo-location database model. Furthermore, we analyze the effect of unregistered PMSE users on the reliable use of the channel by SUs.
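The sensing primitive that augments the database can be sketched as classic energy detection: declare the channel busy when the average energy of the received samples exceeds a noise-calibrated threshold. The threshold factor and signal model below are illustrative assumptions, not the paper's parameters.

```python
import random, math

def energy_detect(samples, noise_power, threshold_factor=1.5):
    """True if measured energy suggests an incumbent is transmitting."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold_factor * noise_power

# Synthetic experiment: pure noise vs. noise plus a strong incumbent signal.
random.seed(42)
N, noise_power = 4000, 1.0
noise = [random.gauss(0, math.sqrt(noise_power)) for _ in range(N)]
signal = [n + 2.0 for n in noise]   # constant "incumbent" component on top

assert energy_detect(noise, noise_power) is False    # channel idle
assert energy_detect(signal, noise_power) is True    # incumbent detected
```

In a hybrid scheme, a detection like this tightens protection in exactly the cases a database alone misses, such as unregistered PMSE transmitters.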
With the recent development of Device-to-Device (D2D) communication technologies, mobile devices will no longer be treated as pure "terminals"; they can become an integral part of the network in specific application scenarios. In this paper, we introduce a novel scheme that uses D2D communications to enable data relay services in partial Not-Spots, where a client without local network access may require data relay by other devices. Depending on the specific social application scenarios that can leverage D2D technology, we consider tailored algorithms for achieving optimised data relay service performance on top of our proposed network-coordinated communication framework. The approach is to exploit the network's knowledge of its local user mobility patterns in order to identify the best helper devices to participate in data relay operations. The framework also comes with our proposed helper selection optimization algorithm based on the reactive predictability of individual users. According to our simulation analysis based on both theoretical mobility models and real human mobility data traces, the proposed scheme is able to flexibly support different service requirements in specific social application scenarios.
Traffic Engineering (TE) involves network configuration in order to achieve optimal IP network performance. The existing literature considers intra- and inter-AS (Autonomous System) TE independently. However, if these two aspects are considered separately, the overall network performance may not be truly optimized, due to the interaction between intra- and inter-AS TE: a good solution for inter-AS TE may not be good for intra-AS TE. To remedy this situation, we propose a joint optimization of intra- and inter-AS TE that improves overall network performance by simultaneously finding the best egress points for inter-AS traffic and the best routing scheme for intra-AS traffic. Three strategies are presented to attack the problem: sequential, nested and integrated optimization. Our evaluation shows that, in comparison to sequential and nested optimization, integrated optimization can significantly improve overall network performance, accommodating approximately 30%-60% more traffic demand.
Holographic-type Communication (HTC) has been widely deemed an emerging type of augmented reality (AR) media that offers Internet users deeply immersive experiences. In contrast to traditional video content transmission, the characteristics and network requirements of HTC have been much less studied in the literature. Owing to the high bandwidth requirements and various limitations of today's HTC platforms, large-scale HTC streaming has never been systematically attempted and comprehensively evaluated until now. In this paper, we introduce a novel HTC-based teleportation platform leveraging cloud-based remote production functions, supported by newly proposed adaptive frame buffering and end-to-end signalling techniques against network uncertainties, which for the first time is able to provide assured user experiences at the public Internet scale. Through real-life experiments based on strategically deployed cloud sites for remote production functions, we demonstrate the feasibility of supporting assured user performance for such applications at the global Internet scale.
Konstantinos Liolis, Alexander Geurtz, Ray Sperber, Detlef Schulz, Simon Watts, Georgia Poziopoulou, Barry Evans, Ning Wang, Oriol Vidal, Boris Tiomela Jou, Michael Fitch, Salva Diaz Sendra, Pouria Sayyad Khodashenas, Nicolas Chuberre (2019) Use cases and scenarios of 5G integrated satellite‐terrestrial networks for enhanced mobile broadband: The SaT5G approach, In: International Journal of Satellite Communications and Networking 37(2), pp. 91-112
This paper presents initial results available from the European Commission Horizon 2020 5G Public Private Partnership Phase 2 project "SaT5G" (Satellite and Terrestrial Network for 5G). After describing the concept, objectives, challenges, and research pillars addressed by the SaT5G project, this paper elaborates on the selected use cases and scenarios for satellite communications positioning in the 5G usage scenario of enhanced mobile broadband.
Wireless sensor networks usually have a massive number of randomly deployed sensor nodes that perform sensing and transmit data to a base station, which can cause sensor redundancy and data duplication. Sensor scheduling is a solution for reducing the enormous data load by selecting certain potential sensors to perform the tasks, while the quality of connectivity and coverage is still assured. This paper proposes a sensor scheduling method, called 4-Sqr, which uses a virtual square partition composed of consecutive square cells. Based on their coordinates within the monitored area, sensors determine their own position on the virtual partition and are divided into groups of target areas according to their geographical locations; they are then ready for the node selection phase. In order to distribute energy consumption equally, the sensors with the highest residual energy within the same group usually have more chance of being active than the others. Compared with other existing methods, the proposed method excels in many respects, such as the quality of connected coverage, the chance of being selected and the network lifetime.
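The two steps the partition-based idea rests on can be sketched briefly; the cell size, coordinates and energy values below are made up for illustration and are not the 4-Sqr parameters: (1) each sensor derives its cell in a virtual square partition from its own coordinates, and (2) within each cell, the sensor with the highest residual energy is selected to stay active.

```python
CELL = 10.0   # side length of one virtual square cell (illustrative)

def cell_of(x, y):
    """A sensor computes its own cell from its coordinates alone."""
    return (int(x // CELL), int(y // CELL))

def select_active(sensors):
    """sensors: list of (sensor_id, x, y, residual_energy).
    Keep, per cell, the sensor with the highest residual energy."""
    best = {}
    for sid, x, y, energy in sensors:
        c = cell_of(x, y)
        if c not in best or energy > best[c][1]:
            best[c] = (sid, energy)
    return {c: sid for c, (sid, _) in best.items()}

sensors = [("s1", 2, 3, 0.9), ("s2", 4, 8, 0.4),   # both in cell (0, 0)
           ("s3", 14, 2, 0.7)]                      # cell (1, 0)
assert select_active(sensors) == {(0, 0): "s1", (1, 0): "s3"}
```

Because active duty rotates to whichever node currently holds the most energy, consumption is spread across the cell's members over successive rounds.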
In-network content caching has recently emerged in the context of Information-Centric Networking (ICN), which allows content objects to be cached at the content router side. In this paper, we specifically focus on in-network caching of Peer-to-Peer (P2P) content objects for improving both service and operation efficiencies. We propose an intelligent in-network caching scheme of P2P content chunks, aiming to reduce P2P-based content traffic load and also to achieve improved content distribution performance. Towards this end, the proposed holistic decision-making logic takes into account context information on the P2P characteristics such as chunk availability. In addition, we also analyse the benefit of coordination between neighbouring content routers when making caching decisions in order to avoid duplicated P2P chunk caching nearby. An analytical modelling framework is developed to quantitatively evaluate the efficiency of the proposed in-network caching scheme.
Network virtualization has been recognized as a promising solution to enable the rapid deployment of customized services by building multiple Virtual Networks (VNs) on a shared substrate network. While various VN embedding schemes have been proposed to allocate substrate resources to each VN request, little work has been done on backup mechanisms for substrate network failures. In a virtualized infrastructure, a single substrate failure will affect all the VNs sharing that resource, yet provisioning a dedicated backup network for each VN is not efficient in terms of substrate resource utilization. In this paper, we investigate the problem of shared backup network provisioning for VN embedding and propose two schemes: shared on-demand and shared pre-allocation backup. Simulation experiments show that both proposed schemes make better use of substrate resources than the dedicated backup scheme without sharing, while each has its own advantages.
MP Howarth, P Flegkas, G Pavlou, N Wang, P Trimintzios, D Griffin, J Griem, M Boucadair, P Morand, AH Asgari, P Georgatsos (2005) Provisioning for interdomain quality of service: the MESCAL approach, In: IEEE Communications Magazine 43(6), pp. 129-137
This article presents an architecture for supporting interdomain QoS across the multi-provider global Internet. While most research to date has focused on supporting QoS within a single administrative domain, mature solutions are not yet available for the provision of QoS across multiple domains administered by different organizations. The architecture described in this article encompasses the full set of functions required in the management (service and resource), control and data planes for the provision of end-to-end QoS-based IP connectivity services. We use the concept of QoS classes and show how these can be cascaded using service level specifications (SLSs) agreed between BGP peer domains to construct a defined end-to-end QoS. We illustrate the architecture by describing a typical operational scenario.
Handling traffic dynamics in order to avoid network congestion and subsequent service disruptions is one of the key tasks performed by contemporary network management systems. Given the simple but rigid routing and forwarding functionalities in IP-based environments, efficient resource management and control solutions against dynamic traffic conditions are still to be obtained. In this article, we introduce AMPLE, an efficient traffic engineering and management system that performs adaptive traffic control using multiple virtualized routing topologies. The proposed system consists of two complementary components: offline link weight optimization, which takes the physical network topology as input and aims to produce maximum routing path diversity across multiple virtual routing topologies for long-term operation through optimized link weight settings; and adaptive traffic control, which performs intelligent traffic splitting across individual routing topologies in reaction to monitored network dynamics at short timescales. According to our evaluation with real network topologies and traffic traces, the proposed system is able to cope almost optimally with unpredicted traffic dynamics and, as such, constitutes a new proposal for achieving better quality of service and overall network performance in IP networks.
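The adaptive control step can be pictured as splitting each flow's demand across the paths offered by the virtual topologies so as to keep the most loaded link as empty as possible. The greedy sketch below is illustrative (it is not AMPLE's optimiser, and the topology and capacities are made up): demand is assigned in small quanta to whichever available path currently has the lowest bottleneck utilisation.

```python
# Greedy traffic splitting across paths from multiple routing topologies.

def split_demand(demand, paths, capacity, quantum=1.0):
    """paths: list of link-name lists; capacity: link -> capacity."""
    load = {l: 0.0 for l in capacity}
    split = {i: 0.0 for i in range(len(paths))}
    remaining = demand
    while remaining > 1e-9:
        q = min(quantum, remaining)
        # pick the path whose worst link would stay least utilised
        i = min(range(len(paths)),
                key=lambda i: max((load[l] + q) / capacity[l]
                                  for l in paths[i]))
        for l in paths[i]:
            load[l] += q
        split[i] += q
        remaining -= q
    return split, max(load[l] / capacity[l] for l in load)

# Two disjoint paths (as two routing topologies would offer), equal capacities:
paths = [["a-b", "b-d"], ["a-c", "c-d"]]
capacity = {"a-b": 10, "b-d": 10, "a-c": 10, "c-d": 10}
split, max_util = split_demand(10.0, paths, capacity)
assert split == {0: 5.0, 1: 5.0}
assert max_util == 0.5
```

With symmetric paths the greedy rule balances the demand evenly, halving the maximum link utilisation relative to single-path routing; the value of the offline stage is precisely to make such disjoint paths available.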
Energy consumption in ISP backbone networks has been rapidly increasing with the advent of increasingly bandwidth-hungry applications. Network resource optimization through sleeping reconfiguration and rate adaptation has been proposed for reducing energy consumption when traffic demands are at low levels. It has been observed that many operational backbone networks exhibit regular diurnal traffic patterns, which offers the opportunity to apply simple time-driven link sleeping reconfigurations for energy-saving purposes. In this work, an efficient optimization scheme called Time-driven Link Sleeping (TLS) is proposed for practical energy management, which produces an optimized combination of the reduced network topology and a unified off-peak configuration duration for daily operations. Such a scheme significantly eases the operational complexity of energy saving at the ISP side, without resorting to complicated online network adaptations. The GÉANT network and its real traffic matrices were used to evaluate the proposed TLS scheme. Simulation results show that up to 28.3% energy savings can be achieved during off-peak operation without network performance deterioration. In addition, considering the potential risk of traffic congestion caused by unexpected network failures on the reduced topology during off-peak time, we further propose a robust TLS scheme with Single Link Failure Protection (TLS-SLFP), which aims to achieve an optimized trade-off between network robustness and energy efficiency.
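The arithmetic behind a time-driven link-sleeping saving is simple to sketch; the link counts and off-peak window below are illustrative assumptions, not GÉANT's figures (the abstract's 28.3% refers to savings during the off-peak period itself).

```python
# Daily link-energy saving from sleeping a subset of links during a fixed
# off-peak window, assuming each link draws equal constant power when awake.

def daily_energy_saving(total_links, slept_links, offpeak_hours):
    return (slept_links / total_links) * (offpeak_hours / 24.0)

# e.g. sleeping half of 60 links for an 8-hour off-peak window saves
# one sixth (~16.7%) of the day's total link energy.
saving = daily_energy_saving(60, 30, 8)
assert abs(saving - 1 / 6) < 1e-9
```

The single unified off-peak duration is what keeps this operationally simple: one reduced topology and one switch-over time per day, rather than continuous online reconfiguration.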
This paper further develops the architecture and design elements of a resource management and signalling system to support the construction and maintenance of mid-to-long-term hybrid multicast trees for multimedia distribution services, in a QoS-guaranteed way, over multiple IP domains. The system, called E-cast, is composed of an overlay part in the inter-domain and possibly IP-level multicast in the intra-domain. Each E-cast tree is associated with a given QoS class and is composed of unicast pipes established through Service Level Specification negotiations between the domain managers. The paper continues previous work by proposing an inter-domain signalling system to support the multicast management and control operations, and then defining the resource management for tree construction and adjustment procedures in order to assure the required static and dynamic properties of the tree.
It has been envisaged that in future 5G networks user devices will become an integral part of the network by participating in the transmission of mobile content traffic, typically through Device-to-Device (D2D) technologies. In this context, we promote the concept of Mobility as a Service (MaaS), where a content-aware mobile network edge is equipped with the necessary knowledge of device mobility in order to distribute popular mobile content items to interested clients via a small number of helper devices. Towards this end, we present a device-level Information-Centric Networking (ICN) architecture that is able to perform intelligent content distribution operations according to context information on user mobility and content characteristics. Based on this platform, we further introduce device-level online content caching and offline helper selection algorithms to optimise overall system efficiency. In particular, this paper sheds distinct light on the importance of user mobility data analytics, based on which helper selection can lead to overall system optimality. Using representative user mobility models, we conducted realistic simulation experiments and modelling, which demonstrate efficiency in terms of both network traffic offloading gains and user-oriented performance improvements. In addition, we show how the framework can be flexibly configured to meet specific delay tolerance constraints according to specific context policies.
IP Fast ReRoute (FRR) mechanisms have been proposed to achieve fast failover in support of Quality of Service (QoS) assurance. However, these mechanisms do not consider network performance after affected traffic is rerouted onto repair paths. As a result, QoS deterioration may still occur due to post-failure traffic congestion in the network, nullifying the effectiveness of IP FRR. In this paper, taking IP tunneling as the underlying IP FRR mechanism, we propose an efficient algorithm to judiciously select tunnel endpoints such that network performance is optimized after the repair paths are activated for rerouting. According to simulation results using real operational network topologies and traffic matrices, the algorithm achieves significant improvement in post-failure load balancing compared to traditional IGP re-convergence and plain tunnel endpoint selection without such consideration.
In order to meet the requirements of emerging demanding services, network resource management functionality that is decentralized, flexible and adaptive to traffic and network dynamics is of paramount importance. In this paper we describe the main mechanisms of DACoRM, a new intra-domain adaptive resource management approach for IP networks. Based on the path diversity provided by multi-topology routing, our approach controls the distribution of traffic load in the network in an adaptive manner through periodic re-configurations that use real-time monitoring information. The re-configuration actions are decided in a coordinated fashion between a set of source nodes that form an in-network overlay. We evaluate the overall performance of our approach using realistic network topologies. Results show that near-optimal network performance in terms of resource utilization can be achieved in a scalable manner.
Since P2P applications account for a dominant share of Internet traffic, how to efficiently manage P2P traffic has become increasingly important. It has recently been proposed that underlying network information can be shared between ISPs and P2P service providers in order to achieve efficient resource utilization, with locality-based peer selection being a specific example. Based on such collaboration, we propose a proportional traffic-exchange localization scheme for making efficient use of network resources. Our approach employs locality information in order to regulate the volume of traffic exchanged between peers according to their physical distance. The key objective of our approach is to further reduce both intra- and inter-autonomous system (AS) traffic compared with basic locality-based peer selection solutions. Our simulation-based results show that this approach is not only able to reduce a significant amount of inter-AS P2P traffic, but also to balance network utilization better than existing approaches.
WK Chai, N Wang, I Psaras, G Pavlou, C Wang, GG de Blas, FJ Ramon-Salguero, L Liang, S Spirou, A Beben, E Hadjioannou (2011) Curling: Content-ubiquitous resolution and delivery infrastructure for next-generation services, In: IEEE Communications Magazine 49(3), pp. 112-120
CURLING, a Content-Ubiquitous Resolution and Delivery Infrastructure for Next Generation Services, aims to enable a future content-centric Internet that will overcome current intrinsic constraints by efficiently diffusing media content at massive scale. It entails a holistic approach, supporting content manipulation capabilities that encompass the entire content life cycle, from content publication to content resolution and, finally, to content delivery. CURLING provides both content providers and customers with high flexibility in expressing their location preferences when publishing and requesting content, respectively, thanks to the proposed scoping and filtering functions. Content manipulation operations can be driven by a variety of factors, including business relationships between ISPs, local ISP policies, and specific content provider and customer preferences. Content resolution is also natively coupled with optimized content routing techniques that enable efficient unicast and multicast-based content delivery across the global Internet.
Receiving great interest from the research community, Delay Tolerant Networks (DTNs) are a type of Next Generation Network (NGN) proposed to bridge communication in challenged environments. In this paper, the message replication probability is sprayed proportionally for efficient routing, mainly under sparse scenarios. This methodology differs from spray-based algorithms that use message copy tickets to control replication. Our heuristic algorithm aims to overcome the scalability limitations of spray-based algorithms, since determining the initial value of the copy tickets requires assuming either that the number of nodes is known in advance, or that the underlying mobility model follows the Random WayPoint (RWP) characteristic. Specifically, with the assistance of geographic information to estimate the movement range of the destination, the routing decision is based on the encounter angle between pairwise nodes, and is dynamically switched between two designed routing phases, named geographic replication and replication probability spray. Furthermore, messages are transmitted under prioritization, with consideration of redundancy pruning. Simulation results show that our heuristic algorithm outperforms other well-known algorithms in terms of delivery ratio, transmission overhead, average latency and buffer occupancy time. © 2012 IEEE.
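The encounter-angle test used in the geographic-replication phase can be illustrated with a small sketch: replicate a message to an encountered node only if that node's heading points roughly towards the destination's estimated position. The function names, data layout and threshold below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def should_forward(node_pos, node_heading_rad, dest_estimate,
                   max_angle_rad=math.pi / 4):
    """Replicate to an encountered node only if its heading points
    towards the destination's estimated region, i.e. the encounter
    angle is below a threshold (threshold value is illustrative)."""
    dx = dest_estimate[0] - node_pos[0]
    dy = dest_estimate[1] - node_pos[1]
    to_dest = math.atan2(dy, dx)
    # smallest signed angular difference, wrapped into [-pi, pi]
    diff = abs((node_heading_rad - to_dest + math.pi) % (2 * math.pi) - math.pi)
    return diff <= max_angle_rad
```

A node heading due east towards a destination due east passes the test; a node heading in the opposite direction does not.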
Cooperation between peer-to-peer (P2P) overlays and underlying networks has been proposed as an effective approach to improving the efficiency of both the applications and the underlying networks. However, fundamental characteristics with respect to ISP business relationships and inter-ISP routing information have not been sufficiently investigated in the context of collaborative ISP-P2P paradigms in multi-domain environments. In this paper, we focus on such issues and develop an analytical modelling framework for analysing optimized inter-domain peer selection schemes that take ISP policies into account, with the main purpose of mitigating cross-ISP traffic and enhancing the service quality of end users. In addition, we introduce an advanced hybrid scheme for peer selection based on the proposed analytical framework, in accordance with practical network scenarios wherein cooperative and non-cooperative behaviours coexist. Numerical results show that the proposed scheme incorporating ISP policies is able to achieve desirable network efficiency as well as good service quality for P2P users. Our analytical modelling framework can be used as a guide for analysing and evaluating future network-aware P2P peer selection paradigms in general multi-domain scenarios.
HTTP-based live streaming has become increasingly popular in recent years, and more users have started generating 4K live streams from their devices (e.g., mobile phones) through social-media service providers like Facebook or YouTube. If the audience is located far from a live stream source across the global Internet, TCP throughput becomes substantially suboptimal due to slow-start and congestion control mechanisms. This is especially the case when the end-to-end content delivery path involves radio access network (RAN) at the last mile. As a result, the data rate perceived by a mobile receiver may not meet the high requirement of 4K video streams, which causes deteriorated Quality-of-Experience (QoE). In this paper, we propose a scheme named Edge-based Transient Holding of Live sEgment (ETHLE), which addresses the issue above by performing context-aware transient holding of video segments at the mobile edge with virtualized content caching capability. Through holding the minimum number of live video segments at the mobile edge cache in a context-aware manner, the ETHLE scheme is able to achieve seamless 4K live streaming experiences across the global Internet by eliminating buffering and substantially reducing initial startup delay and live stream latency. It has been deployed as a virtual network function at an LTE-A network, and its performance has been evaluated using real live stream sources that are distributed around the world. The significance of this paper is that by leveraging virtualized caching resources at the mobile edge, we address the conventional transport-layer bottleneck and enable QoE-assured Internet-wide live streaming services with high data rate requirements.
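The context-aware holding decision above can be sketched as a worst-case deficit calculation over recent backhaul throughput samples: hold just enough segments at the edge to absorb the largest cumulative shortfall between the stream bitrate and the measured delivery rate. This is a simplified sketch under assumed names and model, not the ETHLE algorithm itself.

```python
import math

def segments_to_hold(seg_duration_s, bitrate_bps, throughput_samples_bps):
    """Estimate the minimum number of live segments to hold at the edge
    so playback does not stall, given recent end-to-end throughput
    samples (one per segment interval). Illustrative model only."""
    seg_bits = seg_duration_s * bitrate_bps
    deficit, worst = 0.0, 0.0
    for t in throughput_samples_bps:
        # bits we fall behind (or catch up) over one segment interval;
        # the deficit never goes negative (we cannot "bank" the future)
        deficit = max(0.0, deficit + seg_bits - t * seg_duration_s)
        worst = max(worst, deficit)
    return math.ceil(worst / seg_bits)
```

With throughput consistently at or above the bitrate no holding is needed; a transient dip translates into one or more held segments.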
Energy consumption has already become a major challenge to the current Internet. Most existing research aims at lowering energy consumption under certain fixed performance constraints. Since trade-offs exist between network performance and energy saving, Internet Service Providers (ISPs) may desire to achieve different Traffic Engineering (TE) goals corresponding to changing requirements. The major contributions of this paper are twofold: 1) we present an OSPF-based routing mechanism, Routing On Demand (ROD), that considers both performance and energy saving, and 2) we theoretically prove that a set of link weights always exists for each trade-off variant of the TE objective, under which solutions (i.e., routes) derived from ROD can be converted into shortest paths and realized through OSPF. Extensive evaluation results show that ROD can achieve various trade-offs between energy saving and performance in terms of Maximum Link Utilization, while maintaining better packet delay than that of energy-agnostic TE. © 2012 IFIP International Federation for Information Processing.
Internet video streaming applications have been demanding more bandwidth and higher video quality, especially with the advent of Virtual Reality (VR) and Augmented Reality (AR) applications. While adaptive streaming protocols like MPEG-DASH (Dynamic Adaptive Streaming over HTTP) allow video quality to be flexibly adapted, e.g., degraded when mobile network conditions deteriorate, this is not an option if the application itself requires guaranteed 4K quality at all times. On the other hand, conventional end-to-end TCP has been struggling to support 4K video delivery across long-distance Internet paths containing both fixed and mobile network segments with heterogeneous characteristics. In this paper, we present a novel and practically-feasible system architecture named MVP (Mobile edge Virtualization with adaptive Prefetching), which enables content providers to embed their content intelligence as a virtual network function (VNF) into the mobile network operator’s (MNO) infrastructure edge. Based on this architecture, we present a context-aware adaptive video prefetching scheme in order to achieve QoE-assured 4K video on demand (VoD) delivery across the global Internet. Through experiments based on a real LTE-A network infrastructure, we demonstrate that our proposed scheme is able to achieve QoE-assured 4K VoD streaming, especially when the video source is located remotely in the public Internet, in which case none of the state-of-the-art solutions is able to support such an objective at global Internet scale.
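The adaptive prefetching idea can be sketched as a window-sizing rule: when the remote source-to-edge leg is the bottleneck, the edge VNF fetches several segments ahead so it can still serve the client at full 4K quality. The formula and names below are illustrative assumptions, not the paper's exact scheme.

```python
import math

def prefetch_window(seg_bitrate_bps, source_to_edge_bps, max_window=8):
    """How many segments the edge should prefetch ahead of playback.
    If the source leg keeps up with the bitrate, plain keep-up fetching
    suffices; otherwise prefetch proportionally to the throughput gap
    (capped). Illustrative heuristic only."""
    if source_to_edge_bps >= seg_bitrate_bps:
        return 1  # no bottleneck on the remote leg
    return min(max_window, math.ceil(seg_bitrate_bps / source_to_edge_bps))
```

A 15 Mb/s stream fed over a 5 Mb/s remote path would, under this rule, be prefetched three segments ahead.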
D Griffin, J Spencer, J Griem, M Boucadair, P Morand, Michael Howarth, Ning Wang, G Pavlou, A Asgari, P Georgatsos (2007) Interdomain routing through QoS-class planes, In: IEEE Communications Magazine 45(2) pp. 88-95
This article presents an approach to delivering qualitative end-to-end quality of service (QoS) guarantees across the multiprovider Internet. We propose that bilateral agreements between a number of autonomous systems (ASs) result in the establishment of QoS-class planes that potentially extend across the global Internet. The deployment of a QoS-enhanced Border Gateway Protocol (BGP) with different QoS-based route selection policies in each of the planes allows a range of interdomain QoS capabilities to coexist on the same network infrastructure. The article presents simulation results showing the benefits of the approach and discusses aspects of the performance of QoS-enhanced BGP.
Node clustering has been widely studied in recent years for Wireless Sensor Networks (WSN) as a technique to form a hierarchical structure and prolong network lifetime by reducing the number of packet transmissions. Cluster Heads (CH) are elected in a distributed way among sensors, but are often highly overloaded, and therefore re-clustering operations should be performed to share the resource-intensive CH role. Existing protocols involve periodic network-wide re-clustering operations that are simultaneously performed, which requires global time synchronisation. To address this issue, some recent studies have proposed asynchronous node clustering for networks with direct links from CHs to the data sink. However, for large-scale WSNs, multihop packet delivery to the sink is required since long-range transmissions are costly for sensor nodes. In this paper, we present an asynchronous node clustering protocol designed for multihop WSNs, considering dynamic conditions such as residual node energy levels and unbalanced data traffic loads caused by packet forwarding. Simulation results demonstrate that it is possible to achieve similar levels of lifetime extension by re-clustering a multihop WSN via independently made decisions at CHs, without a need for the time synchronisation required by existing synchronous protocols.
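The per-cluster CH re-election can be sketched as a local scoring decision that weighs residual energy against forwarding load, made independently in each cluster without global synchronisation. The scoring form and field names are hypothetical illustrations, not the protocol's exact metric.

```python
def elect_cluster_head(nodes):
    """Pick the next cluster head among candidate nodes: favour high
    residual energy and penalise nodes already burdened with multihop
    relay traffic. Hypothetical score, for illustration only."""
    def score(n):
        return n["energy_j"] / (1.0 + n["relay_load"])
    return max(nodes, key=score)["id"]
```

Because each cluster evaluates only its own members, re-clustering decisions can be made asynchronously, as the abstract describes.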
In order to minimize the downloading time of short-lived applications such as web browsing, web applications and short video clips, the recently standardized HTTP/2 adopts stream multiplexing over one single TCP connection. However, aggregating all content objects within one single connection suffers from the Head-of-Line blocking issue. QUIC, by eliminating this issue on the basis of UDP, is expected to further reduce content downloading time. However, in mobile network environments, the single-connection strategy still leads to degraded and highly variable completion times, due to the hindrance of congestion window growth caused by the common but uncertain fluctuations in round-trip time and random loss events at the air interface. To retain a resilient congestion window against such network fluctuations, we propose an intelligent connection management scheme based on QUIC which not only adaptively employs multiple connections but also conducts tailored state and congestion window synchronization between these parallel connections upon the detection of network fluctuation events. According to performance evaluation results obtained from an LTE-A/Wi-Fi testing network, the proposed multiple-QUIC scheme can effectively overcome the limitations of different congestion control algorithms (e.g., the loss-based New Reno/CUBIC and the rate-based BBR), achieving substantial improvement in both median (up to 59.1%) and 95th-percentile (up to 72.3%) completion time. The significance of this work lies in achieving highly robust short-lived content downloading performance against various uncertainties in network conditions, and with different congestion control schemes.
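The "adaptively employs multiple connections" step can be sketched as choosing a connection count from observed RTT variability: the more the path fluctuates, the more parallel QUIC connections are opened to keep aggregate window growth resilient. The thresholds and the coefficient-of-variation trigger are illustrative assumptions, not values from the paper.

```python
import statistics

def target_connections(rtt_samples_ms, base=1, max_conns=4):
    """Choose how many parallel QUIC connections to run, based on the
    coefficient of variation of recent RTT samples. Thresholds are
    illustrative, not tuned values from the evaluation."""
    if len(rtt_samples_ms) < 2:
        return base
    cv = statistics.stdev(rtt_samples_ms) / statistics.mean(rtt_samples_ms)
    if cv > 0.5:        # heavy fluctuation: spread risk widely
        return max_conns
    if cv > 0.2:        # moderate fluctuation
        return 2
    return base         # stable path: a single connection suffices
```

A stable path keeps one connection; a path whose RTT swings widely is served by the maximum number of parallel connections.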
The Internet, the de facto platform for large-scale content distribution, suffers from two issues that limit its manageability, efficiency and evolution: (1) The IP-based Internet is host-centric and agnostic to the content being delivered and (2) the tight coupling of the control and data planes restricts its manageability, and subsequently the possibility to create dynamic alternative paths for efficient content delivery. Here we present the CURLING system that leverages the emerging Information-Centric Networking paradigm for enabling cost-efficient Internet-scale content delivery by exploiting multicasting and in-network caching. Following the software-defined networking concept that decouples the control and data planes, CURLING adopts an inter-domain hop-by-hop content resolution mechanism that allows network operators to dynamically enforce/change their network policies in locating content sources and optimizing content delivery paths. Content publishers and consumers may also control content access according to their preferences. Based on both analytical modelling and simulations using real domain-level Internet subtopologies, we demonstrate how CURLING supports efficient Internet-scale content delivery without the necessity for radical changes to the current Internet.
In this paper we introduce a new scheme to achieve fast failure recovery in IP multicast based content delivery, which is based on efficient extensions to the Not-via fast reroute (FRR) technique. The design of such an approach takes into account distinct characteristics of IP multicast routing, namely receiver-initiated and state-based, and it offers comprehensive protections against both simple and complex network failures. We also specify in the paper moderate extensions to the standard PIM-SM routing protocol in order to equip individual repairing routers with necessary knowledge for dynamically binding protected multicast trees with pre-established Not-via tunnels that are able to automatically bypass failed network components. Our simulation experiments based on both real and synthetically generated topologies indicate promising scalability performance in the proposed multicast FRR approach. © 2010 IEEE.
The current Internet has been founded on the architectural premise of a simple network service used to interconnect relatively intelligent end systems. While this simplicity allowed it to reach an impressive scale, the predictive manner in which ISP networks are currently planned and configured through external management systems and the uniform treatment of all traffic are hampering its use as a unifying multi-service network. The future Internet will need to be more intelligent and adaptive, optimizing continuously the use of its resources and recovering from transient problems, faults and attacks without any impact on the demanding services and applications running over it. This article describes an architecture that allows intelligence to be introduced within the network to support sophisticated self-management functionality in a coordinated and controllable manner. The presented approach, based on intelligent substrates, can potentially make the Internet more adaptable, agile, sustainable, and dependable given the requirements of emerging services with highly demanding traffic and rapidly changing locations. We discuss how the proposed framework can be applied to three representative emerging scenarios: dynamic traffic engineering (load balancing across multiple paths); energy efficiency in ISP network infrastructures; and cache management in content-centric networks.
The high volume of energy consumption has become a great concern to the Internet community because of the high energy waste on redundant network devices. One promising scheme for energy saving is to reconfigure network elements into sleep mode when traffic demand is low. However, due to the nature of today's traditional IP routing protocols, network reconfiguration is generally deemed harmful because of routing table re-convergence. To put network elements such as links to sleep without disrupting traffic, we propose a novel online scheme called the designate-to-sleep algorithm, which aims to remove network links without causing traffic disruption during energy-saving periods. Given the diurnal nature of traffic, traffic surges may occur in the network because of the reduced capacity. We therefore propose a complementary scheme called the dynamic wake-up algorithm, which intelligently wakes up the minimum number of sleeping links needed to handle such dynamics. This is contrary to the usual paradigm of either reverting to the full topology and sacrificing energy savings, or employing on-the-fly link weight manipulation. Using the real topologies of the GEANT and Abilene networks respectively, we show that the proposed schemes can save a substantial amount of energy without affecting network performance.
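The designate-to-sleep idea can be sketched at the capacity level as a greedy selection: consider links in order of increasing utilisation and put one to sleep only while the remaining capacity can still carry the total demand below a utilisation ceiling. This is an illustrative simplification that ignores the per-path routing details the actual algorithm must handle.

```python
def links_to_sleep(links, total_demand, util_threshold=0.8):
    """Greedily pick links to put to sleep, lowest-utilisation first,
    as long as the remaining aggregate capacity can carry the demand
    under the utilisation threshold. Capacity-level sketch only."""
    active = sorted(links, key=lambda l: l["util"])
    asleep = []
    for link in list(active):
        remaining = sum(l["cap"] for l in active if l is not link)
        if remaining * util_threshold >= total_demand:
            active.remove(link)
            asleep.append(link["id"])
    return asleep
```

The complementary wake-up step would simply reverse this order, re-activating the fewest sleeping links needed when demand rises.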
In this paper, we present a Mobile Edge Computing (MEC) scheme for enabling network edge-assisted video adaptation based on MPEG-DASH (Dynamic Adaptive Streaming over HTTP). In contrast to the traditional over-the-top (OTT) adaptation performed by DASH clients, the MEC server at the mobile network edge can capture radio access network (RAN) conditions through its intrinsic Radio Network Information Service (RNIS) function, and use the knowledge to provide guidance to clients so that they can perform more intelligent video adaptation. In order to support such MEC-assisted DASH video adaptation, the MEC server needs to locally cache the most popular content segments at the qualities that can be supported by the current network throughput. Towards this end, we introduce a two-dimensional user Quality-of-Experience (QoE)-driven algorithm for making caching/replacement decisions based on both content context (e.g., segment popularity) and network context (e.g., RAN downlink throughput). We conducted experiments by deploying a prototype MEC server at a real LTE-A based network testbed. The results show that our QoE-driven algorithm is able to achieve significant improvement in user QoE over two benchmark schemes.
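The two-dimensional caching decision can be sketched as a utility combining the content context and the network context: a segment is only worth its cache slot if it is popular and its quality level is deliverable at the current RAN throughput. The scoring form below is an assumed illustration, not the paper's exact algorithm.

```python
def cache_score(popularity, seg_bitrate_bps, ran_throughput_bps):
    """Two-dimensional caching utility: popularity (content context)
    gated by sustainability at current RAN downlink throughput
    (network context). Illustrative form only."""
    sustainable = 1.0 if seg_bitrate_bps <= ran_throughput_bps else 0.0
    return popularity * sustainable

def pick_victim(cache, ran_throughput_bps):
    """On replacement, evict the cached segment with the lowest utility."""
    return min(cache, key=lambda s: cache_score(
        s["popularity"], s["bitrate"], ran_throughput_bps))["id"]
```

Note how a popular high-bitrate segment becomes the eviction victim once the RAN can no longer sustain its quality level.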
Given that the vast majority of Internet interactions relate to content access and delivery, recent research has pointed to a potential paradigm shift from the current host-centric Internet model to an information-centric one. In information-centric networks, named content is accessed directly, with the best content copy delivered to the requesting user given content caching within the network. Here, we present an Internet-scale mediation approach for content access and delivery that supports content and network mediation. Content characteristics, server load, and network distance are taken into account in order to locate the best content copy and optimize network utilization while maximizing the user quality of experience. The content mediation infrastructure is provided by Internet service providers in a cooperative fashion, with both decoupled/two-phase and coupled/one-phase modes of operation. We present in detail the coupled mode of operation which is used for popular content and follows a domain-level hop-by-hop content resolution approach to optimally identify the best content copy. We also discuss key aspects of our content mediation approach, including incremental deployment issues and scalability. While presenting our approach, we also take the opportunity to explain key information-centric networking concepts.
Due to the explosive growth of mobile data traffic, it has become common practice for Mobile Network Operators (MNOs, also known as operators or carriers) to utilize cellular and WiFi resources simultaneously through mobile data offloading. However, existing offloading technologies are mainly established between operators and third-party WiFi resources, which cannot reflect users' dynamic traffic demands. Therefore, MNOs have to design an effective incentive framework that encourages users to reveal their valuations of resources. In this paper, we propose a novel bid-based Heterogeneous Resources Allocation (HRA) framework. It enables operators to efficiently utilize both cellular and operator-owned WiFi resources simultaneously, while users' decision cost is strictly controlled. Through auction-based mechanisms, it achieves dynamic offloading with awareness of users' valuations, and the operator-domain offloading effectively avoids the anarchy brought about by users' selfishness and lack of information. More specifically, two mechanisms, HRA-Profit and HRA-Utility, are proposed to achieve the maximal profit and social utility, respectively. In addition, based on the Stochastic Multi-Armed Bandit model, the newly proposed HRA-UCB-Profit and HRA-UCB-Utility mechanisms are able to obtain near-optimal profit and social utility under incomplete user context information. All mechanisms have been proven to be truthful and to satisfy individual rationality, while the achieved profit of our mechanism is within a bounded difference from the optimal profit. Trace-based simulations and evaluations demonstrate that HRA-Profit and HRA-Utility increase profit and social utility by up to 40% and 47%, respectively, compared with benchmarks, while the cellular utilization rate is kept at a favorable level; HRA-UCB-Profit and HRA-UCB-Utility keep pseudo-regret ratios under 20%.
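The bandit machinery that HRA-UCB-Profit and HRA-UCB-Utility build on is the standard UCB1 index: pick the candidate allocation with the highest mean reward plus an exploration bonus. The sketch below shows plain UCB1 arm selection, not the full auction logic of the paper.

```python
import math

def ucb1_select(counts, rewards, t):
    """Standard UCB1: return the index of the arm maximising
    mean reward + sqrt(2 ln t / n). Unplayed arms are tried first."""
    best, best_idx = -1.0, 0
    for i, (n, r) in enumerate(zip(counts, rewards)):
        if n == 0:
            return i  # play every arm once before exploiting
        index = r / n + math.sqrt(2 * math.log(t) / n)
        if index > best:
            best, best_idx = index, i
    return best_idx
```

With symmetric exploration bonuses, the arm with the higher empirical mean wins; an unplayed arm always takes priority.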
Femtocell is becoming a promising solution to face the explosive growth of mobile broadband usage in cellular networks. While each femtocell only covers a small area, a massive deployment is expected in the near future, forming networked femtocells. An immediate challenge is to provide seamless mobility support for networked femtocells with minimal support from mobile core networks. In this paper, we propose efficient local mobility management schemes for networked femtocells based on X2 traffic forwarding under the 3GPP Long Term Evolution Advanced (LTE-A) framework. Instead of implementing the path switch operation at the core network entity for each handover, a local traffic forwarding chain is constructed to use the existing Internet backhaul and the local path between the local anchor femtocell and the target femtocell for ongoing session communications. Both analytical studies and simulation experiments are conducted to evaluate the proposed schemes and compare them with the original 3GPP scheme. The results indicate that the proposed schemes can significantly reduce the signaling cost and relieve the processing burden of mobile core networks, at a reasonable distributed cost for local traffic forwarding. In addition, the proposed schemes can enable fast session recovery to adapt to the self-deployment nature of the femtocells.
Locality-based peer selection paradigms have been proposed recently based on cooperation between peer-to-peer (P2P) service providers, Internet Service Providers (ISPs) and end users in order to achieve efficient resource utilization by P2P traffic. Based on this cooperation between different stakeholders, we introduce a more advanced paradigm with adaptive peer selection that takes into account traffic dynamics in the operational network. Specifically, peers associated with low path utilization as measured by the ISP are selected in order to reduce the probability of network congestion. This approach not only improves real-time P2P service assurance but also optimizes the overall use of network resources. Our simulations based on the GEANT network topology and real traffic traces show that the proposed adaptive peer selection scheme achieves significant improvement in utilizing bandwidth resources as compared to static locality-based approaches.
N Wang, D Griffin, J Spencer, J Griem, JR Sanchez, M Boucadair, E Mykoniati, B Quoitin, MP Howarth, G Pavlou (2007) A framework for lightweight QoS provisioning: Network planes and parallel Internets, In: 2007 10th IFIP/IEEE International Symposium on Integrated Network Management (IM 2007), Vols 1 and 2, pp. 797-800
Emerging Peer-to-Peer (P2P) technologies have enabled various types of content to be efficiently distributed over the Internet. Most P2P systems adopt selfish peer selection schemes in the application layer that in some sense optimize the user quality of experience. On the network side, traffic engineering (TE) is deployed by ISPs in order to achieve overall efficient network resource utilization. These TE operations are typically performed without distinguishing between P2P flows and other types of traffic. Due to inconsistent or even conflicting objectives from the perspectives of the P2P overlay and network-level TE, the interactions between the two, and their impact on the performance of each, are likely to be non-optimal, and have not yet been investigated in detail. In this paper we study such non-cooperative interactions by modeling best-reply dynamics, in which the P2P overlay and network-level TE optimize their own strategies based on the decision of the other player in the previous round. According to our simulation results based on data from the ABILENE network, P2P overlays exhibit strong resilience to adverse TE operations in maintaining end-to-end performance at the application layer. In addition, we show that network-level TE may suffer from performance deterioration caused by greedy peer (re-)selection behavior in reacting to previous TE adjustments.
As a scalable paradigm for content distribution at Internet-wide scale, Peer-to-Peer (P2P) technologies have enabled a variety of networked services, such as distributed file-sharing and live video streaming. Most existing P2P systems employ nonintelligent peer selection algorithms for content swarming which greedily consume Internet bandwidth resources. As a result, Internet service providers (ISPs) need some efficient solutions for managing P2P traffic within their own networks. A common practice today is to block or shape P2P traffic in order to conserve bandwidth resources for carrying standard traffic from which revenue can be generated. In this paper, instead of looking at simple time-driven blocking/limiting approaches, we investigate how such types of limiting behaviors can be more gracefully performed by the ISP by taking into account the dynamics of both P2P traffic and of standard Internet traffic. Specifically, our approach is to adaptively limit excessive P2P traffic on critical network links that are prone to congestion, based on periodical link load/utilization measurements by the ISP. The ultimate objective is to guarantee non-P2P service capability while trying to accommodate as much P2P traffic as possible based on the available bandwidth resources. This approach can be regarded as a complementary solution to the recently proposed collaboration-based P2P paradigms such as P4P. Simulation results show that our approach not only eliminates performance degradation of non-P2P services that are caused by overwhelming P2P traffic, but also accommodates P2P traffic efficiently in both existing and future collaboration-based P2P network scenarios.
Optimizing servers’ power consumption in content distribution infrastructures has attracted increasing research effort. The technical challenge is the trade-off between server power consumption and content service capability on both the server and the network side. This paper proposes and evaluates a novel approach that optimizes content servers’ power consumption in large-scale content distribution platforms spanning multiple ISP domains. Specifically, our approach strategically puts servers into sleep mode without violating the load capacities of virtual content delivery links and active servers in the infrastructure. Such a problem can be formulated as a nonlinear programming model. The efficiency of our approach is evaluated in a content distribution topology covering two real interconnected domains. The simulations have shown that our approach is capable of reducing servers’ power consumption by up to 62.2%, while keeping actual service performance within an acceptable range.
The design of an efficient charging management system for on-the-move Electric Vehicles (EVs) has become an emerging research problem in future connected-vehicle applications, given EVs’ mobility uncertainties. The major technical challenges involve the decision-making intelligence for the selection of Charging Stations (CSs), as well as the corresponding communication infrastructure for the necessary information dissemination between the power grid and mobile EVs. In this article, we propose a holistic solution that aims to substantially improve end users’ driving experiences (e.g., minimizing EVs’ waiting time for charging during their journeys) and charging efficiency at the power grid side. In particular, the CS-selection decision on where to charge is made by individual EVs, for privacy and scalability benefits. The communication framework is based on a mobile Publish/Subscribe (P/S) paradigm to efficiently disseminate CS condition information to EVs on the move. In order to circumvent the rigidity of relying on stationary Road Side Units (RSUs) for information dissemination, we promote the concept of Mobility as a Service (MaaS) by exploiting the mobility of public transportation vehicles (e.g., buses) to bridge the information flow to EVs through opportunistic encounters. We analyze various factors affecting the possibility for EVs to access CS information via opportunistic Vehicle-to-Vehicle (V2V) communications, and demonstrate the advantage of introducing buses as mobile intermediaries for information dissemination, based on a common EV charging management system under the Helsinki city scenario. We further study the feasibility and benefit of enabling EVs to send the charging reservations used by the CS-selection logic via opportunistically encountered buses as well. Results show that this advanced management system improves performance at both the CS and EV sides.
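The EV-side CS-selection logic can be sketched as a cost minimisation over the station condition reports received via the publish/subscribe channel: pick the station with the lowest expected travel-plus-queueing time. The cost model and field names are illustrative assumptions, not the paper's exact decision logic.

```python
def select_charging_station(ev_pos, stations, speed_mps):
    """EV-side selection sketch: minimise expected travel time to the
    station plus its reported queueing delay. Straight-line distance
    and a constant speed are simplifying assumptions."""
    def expected_wait(cs):
        dist = ((cs["pos"][0] - ev_pos[0]) ** 2 +
                (cs["pos"][1] - ev_pos[1]) ** 2) ** 0.5
        return dist / speed_mps + cs["queue_s"]
    return min(stations, key=expected_wait)["id"]
```

A nearby but heavily queued station can lose to a farther idle one, which is exactly the trade-off the disseminated CS condition information enables.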
In today’s BGP routing architecture, traffic delivery is in general based on single path selection paradigms. The lack of path diversity hinders the support for resilience, traffic engineering and QoS provisioning across the Internet. Some recently proposed multi-plane extensions to BGP offer a promising mechanism to enable diverse inter-domain routes towards destination prefixes. Based on these enhanced BGP protocols, we propose in this paper a novel technique to enable controlled fast egress router switching for handling network failures. In order to minimize the disruptions to real-time services caused by the failures, backup egress routers can be immediately activated through locally remarking affected traffic towards alternative routing planes without waiting for IGP routing re-convergence. According to our evaluation results, the proposed multi-plane based egress router selection algorithm is able to provide both high path diversity and balanced load distribution across inter-domain links with a small number of planes.
Current practices for managing resources in fixed networks rely on off-line approaches, which can be sub-optimal in the face of changing or unpredicted traffic demand. To cope with the limitations of these off-line configurations new traffic engineering (TE) schemes that can adapt to network and traffic dynamics are required. In this paper, we propose an intra-domain dynamic TE system for IP networks. Our approach uses multi-topology routing as the underlying routing protocol to provide path diversity and supports adaptive resource management operations that dynamically adjust the volume of traffic sent across each topology. Re-configuration actions are performed in a coordinated fashion based on an in-network overlay of network entities without relying on a centralized management system. We analyze the performance of our approach using a realistic network topology, and our results show that the proposed scheme can achieve near-optimal network performance in terms of resource utilization in a responsive manner.
With the increasing importance of the Internet for delivering personal and business applications, the slow re-convergence after network failure of existing routing protocols becomes a significant problem. This is especially true for real time multimedia services where service disruption cannot be generally tolerated. In order to ensure fast network failure recovery, IP Fast Reroute (FRR) can be adopted to immediately reroute affected customer traffic from the default path onto a backup path when link failure occurs, thus avoiding slow Interior Gateway Protocol (IGP) re-convergence. We notice that IGP link weight setting plays an important role in influencing the protection coverage performance in intra-domain link failures. Therefore in this paper we present an IGP link weight optimization scheme for backup path provisioning, which works on top of a multi-plane enabled routing platform. The scheme aims to optimize the path diversity among multiple routing planes. Due to the large search space of possible intra-domain link weights, in this paper we adopted a global search method based on a Genetic Algorithm to optimize the IGP link weights. Evaluation results show that in most cases a set of optimal link weights can be found which ensures that there are no more critical shared links among all the diverse paths on each routing plane. As a result, backup paths can be always available in case of single link failures.
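The link-weight search described above can be sketched as a generic genetic-algorithm loop over integer IGP weights, minimising a caller-supplied penalty such as the number of shared links among the per-plane diverse paths. This is a minimal GA skeleton under assumed parameters, not the paper's tuned operators or fitness function.

```python
import random

def optimise_weights(n_links, fitness, pop=20, gens=30, seed=1):
    """Minimal GA over integer link weights in 1..64. `fitness` returns
    a penalty to minimise (e.g. count of critical shared links among
    diverse backup paths). Generic skeleton, illustrative only."""
    rng = random.Random(seed)
    def individual():
        return [rng.randint(1, 64) for _ in range(n_links)]
    population = [individual() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]           # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_links)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # point mutation
                child[rng.randrange(n_links)] = rng.randint(1, 64)
            children.append(child)
        population = parents + children            # elitist: parents kept
    return min(population, key=fitness)
```

Because parents are always retained, the best penalty found never worsens across generations; only the fitness function needs to encode the path-diversity objective.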
In a content delivery network (CDN), the energy cost is dominated by its geographically distributed data centers (DCs). Generally within a DC, the energy consumption is dominated by its server infrastructure and cooling system, with each contributing approximately half. However, existing research has addressed energy efficiency on these two sides separately. In this paper, we jointly optimize the energy consumption of both server infrastructures and cooling systems in a holistic manner. This objective is achieved through two strategies: 1) putting idle servers to sleep within individual DCs; and 2) shutting down idle DCs entirely during off-peak hours. Based on these strategies, we develop a heuristic algorithm which concentrates user request resolution to fewer DCs, so that some DCs may become completely idle and hence have the opportunity to be shut down to reduce their cooling energy consumption. Meanwhile, QoS constraints are respected in the algorithm to assure service availability and end-to-end delay. Through simulations under realistic scenarios, our algorithm is able to achieve an energy-saving gain of up to 62.1% over an existing CDN energy-saving scheme. This result is shown to be near-optimal against our theoretically derived lower bound on energy-saving performance.
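An illustrative sketch of the request-consolidation idea above: user demands are packed onto as few data centers as possible (with a capacity check standing in for the paper's full QoS constraints), and any DC left with no load becomes a candidate for full shutdown, saving its cooling energy too. The names and values are our assumptions for illustration:

```python
# Greedy consolidation of user demands onto the fewest data centers.

def consolidate(demands, capacities):
    """Pack demands into DCs, largest demands and largest DCs first.
    Returns per-DC load; zero-load DCs can be shut down entirely."""
    order = sorted(range(len(capacities)), key=lambda i: -capacities[i])
    load = [0.0] * len(capacities)
    for d in sorted(demands, reverse=True):
        for i in order:                      # first DC with room, biggest first
            if load[i] + d <= capacities[i]:
                load[i] += d
                break
        else:
            raise ValueError("demand exceeds total capacity")
    return load

load = consolidate(demands=[30, 20, 10, 5], capacities=[60, 50, 40])
idle = [i for i, l in enumerate(load) if l == 0]
print(load, "shut down DCs:", idle)
```

The greedy first-fit-decreasing ordering is a common bin-packing heuristic; the paper's algorithm additionally respects availability and delay constraints when choosing which DCs to empty.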
With the increased complexity of today's webpages, computation latency incurred by webpage processing during downloading has become a newly identified factor that can substantially affect user experience in mobile networks. To tackle this issue, we propose a simple but effective transport-layer optimization technique that relies on context information disseminated from the mobile edge computing (MEC) server to the user devices where the algorithm is actually executed. The key novelty is the mobile edge's knowledge of webpage content characteristics, which can be used to increase downloading throughput for user QoE enhancement. Our experiment results based on a real LTE-A test-bed show that, when the proportion of computation latency varies between 20% and 50% (typical for today's webpages), the downloading throughput can be improved by up to 34.5%, with downloading time reduced by up to 25.1%.
A mobile ad hoc network (MANET) is a self-configuring, infrastructure-less network. Taking advantage of this spontaneous and infrastructure-less behaviour, a MANET can be integrated with a satellite network to provide world-wide communication for emergency and disaster-relief services, and can also be integrated with a cellular network for mobile data offloading. Different integrated system architectures, protocols and mechanisms are designed to achieve these different purposes. For emergency services, ubiquitous and robust communications are of paramount importance; for mobile data offloading services, the emphasis is on the amount of offloaded data and the limited storage and energy of mobile devices. It is important to study the commonalities and distinctions between the architecture and service considerations of the two integrated systems to guide further research. In this paper, we study common issues and distinctions between the two systems in terms of routing protocols, QoS provisioning, energy efficiency, privacy protection and resource management. Future research can benefit from exploiting the similarity of the two systems and addressing the relevant issues.
This paper introduces a new scheme called Green MPLS Fast ReRoute (GMFRR) for enabling energy-aware traffic engineering. The scheme intelligently exploits backup label switched paths, originally used for failure protection, in order to achieve energy saving during the normal failure-free operation period. GMFRR works in an online and distributed fashion where each router periodically monitors its local traffic condition and cooperatively determines how to efficiently reroute traffic onto the backup paths in order to exploit opportunities for power saving through link sleeping in the primary paths. According to our performance evaluations based on the academic network GEANT and its traffic matrices, GMFRR is able to achieve significant power saving gains, which are within 15% of the theoretical upper bound.
Due to dynamic wireless network conditions and heterogeneous mobile web content complexities, web-based content services in mobile network environments often suffer from long loading times. The new HTTP/2.0 protocol adopts one single TCP connection, but recent research reveals that in real mobile environments, web downloading over a single connection experiences long idle times and low bandwidth utilization, in particular under dynamic network conditions and varying web page characteristics. In this paper, by leveraging the Mobile Edge Computing (MEC) technique, we present the Mobile Edge Hint (MEH) framework to enhance mobile web downloading performance. Specifically, the mobile edge collects and caches the meta-data of frequently visited web pages and keeps monitoring the network conditions. Upon receiving requests for these popular webpages, the MEC server hints back to HTTP/2.0 clients the optimized number of TCP connections that should be established for downloading the content. From test results on a real LTE testbed equipped with MEH, we observed up to 34.5% time reduction, and in the median case the improvement is 20.5%, compared to the plain over-the-top (OTT) HTTP/2.0 protocol.
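A hypothetical sketch of the edge-side "hint" computation described above: from cached page meta-data and monitored link conditions, the edge suggests how many parallel TCP connections an HTTP/2.0 client should open. The formula below is an illustrative assumption based on the bandwidth-delay product, not the paper's actual policy:

```python
# Toy MEC hint: suggest a TCP connection count from page size and link state.

def hint_connections(page_bytes, rtt_s, bandwidth_bps, init_cwnd_bytes=14600,
                     max_conns=6):
    """More connections help when a single flow would idle (small objects,
    large bandwidth-delay product); one connection suffices otherwise."""
    bdp = bandwidth_bps / 8 * rtt_s        # bytes in flight needed to fill pipe
    if page_bytes >= bdp:                  # one flow can keep the pipe busy
        return 1
    # open enough flows so their combined transfers approach the BDP
    n = max(1, round(bdp / max(page_bytes, init_cwnd_bytes)))
    return min(n, max_conns)

print(hint_connections(page_bytes=2_000_000, rtt_s=0.05, bandwidth_bps=50e6))
print(hint_connections(page_bytes=50_000, rtt_s=0.08, bandwidth_bps=100e6))
```

In practice the hint would also reflect the page's object-count characteristics that the edge has cached.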
Backbone network energy efficiency has recently become a primary concern for Internet Service Providers and regulators. The common solutions for energy conservation in such an environment include sleep mode reconfigurations and rate adaptation at network devices when the traffic volume is low. It has been observed that many ISP networks exhibit regular traffic dynamicity patterns which can be exploited for practical time-driven link sleeping configurations. In this work, we propose a joint optimization algorithm to compute the reduced network topology and its actual configuration duration during daily operations. The main idea is first to intelligently remove network links using a greedy heuristic, without causing network congestion during off-peak time. Following that, a robust algorithm is applied to determine the window size of the configuration duration of the reduced topology, making sure that a unified configuration with optimized energy efficiency performance can be enforced at exactly the same time period on a daily basis. Our algorithm was evaluated on a Point-of-Presence representation of the GÉANT network and its real traffic matrices. According to our simulation results, the reduced network topology obtained is able to achieve 18.6% energy reduction during that period without causing significant network performance deterioration. The contribution of this work is a practical but efficient approach for energy savings in ISP networks, which can be directly deployed on legacy routing platforms without requiring any protocol extension.
Chang Ge, Ning Wang, Ioannis Selinis, Joe Cahill, Mark Kavanagh, Konstantinos Liolis, Christos Politis, Jose Nunes, Barry Evans, Yogaratnam Rahulan, Nivedita Nouvel, Mael Boutin, Jeremy Desmauts, Fabrice Arnal, Simon Watts, Georgia Poziopoulou (2019) QoE-Assured Live Streaming via Satellite Backhaul in 5G Networks, In: IEEE Transactions on Broadcasting 65(2) pp. 381-391
Institute of Electrical and Electronics Engineers (IEEE)
Satellite communication has recently been included as one of the key enabling technologies for 5G backhauling, especially for the delivery of bandwidth-demanding enhanced mobile broadband (eMBB) applications in 5G. In this paper, we present a 5G-oriented network architecture that is based on satellite communications and multi-access edge computing to support eMBB applications, which is investigated in the EU 5GPPP Phase-2 Satellite and Terrestrial Network for 5G (SaT5G) project. We specifically focus on using the proposed architecture to assure quality-of-experience (QoE) of HTTP-based live streaming users by leveraging satellite links, where the main strategy is to realize transient holding and localization of HTTP-based (e.g., MPEG-DASH or HTTP Live Streaming) video segments at the 5G mobile edge while taking into account the characteristics of the satellite backhaul link. For the very first time in the literature, we carried out experiments and systematically evaluated the performance of live 4K video streaming over a 5G core network supported by a live geostationary satellite backhaul, which validates its capability of assuring live streaming users' QoE under challenging satellite network scenarios.
The integration of MANET and satellite networks is a natural evolution in providing local and remote connectivity. The features of this integrated network, such as requiring no fixed infrastructure, ease of deployment and global ubiquitous communication, account for its popularity. However, the unpredictable mobility of nodes, the lack of central coordination and the limited available resources pose networking challenges. A large body of studies exists in the literature, yet some issues are still worth tackling, such as gateway selection mechanisms, satellite link management and resource management. As a basic step of internetworking, the issue of gateway selection is studied specifically, and a corresponding optimization scheme for achieving load balancing is described.
MP Howarth, M Boucadair, P Flegkas, N Wang, G Pavlou, P Morand, T Coadic, D Griffin, A Asgari, P Georgatsos (2006) End-to-end quality of service provisioning through inter-provider traffic engineering, In: Computer Communications 29(6) pp. 683-702
ELSEVIER SCIENCE BV
The random access (RA) mechanism of Long Term Evolution (LTE) networks is prone to congestion when a large number of devices attempt RA simultaneously, due to the limited set of preambles. If each RA attempt is made by transmitting multiple consecutive preambles (codewords) picked from a subset of preambles, as proposed in previous work, the collision probability can be significantly reduced. Selecting an optimal preamble set size can maximise the RA success probability in the presence of a trade-off between codeword ambiguity and code collision probability, depending on load conditions. In light of this finding, this paper provides an adaptive algorithm, called Multipreamble RA, to dynamically determine the preamble set size under different load conditions, using only the minimum necessary uplink resources. This provides high RA success probability and makes it possible to isolate different network service classes by separating the whole preamble set into subsets, each associated with a different service class; a technique that cannot be applied effectively in LTE due to increased collision probability. This motivates the idea that preamble allocation could be implemented as a virtual network function, called vPreamble, as part of a radio access network (RAN) slice. The parameters of a vPreamble instance can be configured and modified according to the load conditions of the service class it is associated with.
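The trade-off the adaptive scheme navigates can be illustrated with a birthday-style collision model: splitting the preamble pool into a subset of size k and sending L consecutive preambles yields k**L codewords, and the edge can pick the smallest k keeping the collision probability below a target. This model and its parameters are our assumptions for illustration, not the paper's exact analysis:

```python
# Toy adaptive preamble-set sizing under a birthday-style collision model.

def collision_prob(n_devices, codewords):
    """P(at least two devices pick the same codeword)."""
    p_free = 1.0
    for i in range(n_devices):
        p_free *= 1 - i / codewords
    return 1 - p_free

def pick_set_size(n_devices, n_preambles=64, code_len=3, target=0.1):
    """Smallest preamble-subset size k whose k**code_len codewords keep
    the collision probability at or below the target."""
    for k in range(2, n_preambles + 1):
        if collision_prob(n_devices, k ** code_len) <= target:
            return k
    return n_preambles

print(pick_set_size(n_devices=10))    # light load: a small subset suffices
print(pick_set_size(n_devices=200))   # heavy load: a larger subset is needed
```

The same calculation would let separate service classes each receive a subset sized for their own load.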
Software-Defined Networking (SDN) is a promising paradigm of computer networks, offering a programmable and centralised network architecture. However, although such a technology supports the ability to dynamically handle network traffic based on real-time and flexible traffic control, SDN-based networks can be vulnerable to dynamic change of flow control rules, which causes transmission disruption and packet loss in SDN hardware switches. This problem can be critical because the interruption and packet loss in SDN switches can bring additional performance degradation for SDN-controlled traffic flows in the data plane. In this paper, we propose a novel robust flow control mechanism referred to as Priority-based Flow Control (PFC) for dynamic but disruption-free flow management when it is necessary to change flow control rules on the fly. PFC minimizes the complexity of flow modification process in SDN switches by temporarily adapting the priority of flow rules in order to substantially reduce the time spent on control-plane processing during run-time. Measurement results show that PFC is able to successfully prevent transmission disruption and packet loss events caused by traffic path changes, thus offering dynamic and lossless traffic control for SDN switches.
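A minimal, hypothetical sketch of the make-before-break idea behind PFC: instead of deleting an old flow rule and then installing its replacement (leaving a window with no matching rule), the new rule is first installed at a higher priority and only then is the old rule removed. The switch model below is a toy abstraction, not an SDN controller API:

```python
# Toy flow table illustrating priority-based, disruption-free rule updates.

class ToySwitch:
    def __init__(self):
        self.rules = []                            # (priority, match, action)

    def install(self, priority, match, action):
        self.rules.append((priority, match, action))

    def remove(self, match, action):
        self.rules = [r for r in self.rules if (r[1], r[2]) != (match, action)]

    def lookup(self, pkt):
        hits = [r for r in self.rules if r[1] == pkt]
        return max(hits)[2] if hits else None      # highest priority wins

def pfc_update(sw, match, old_action, new_action, old_prio):
    sw.install(old_prio + 1, match, new_action)    # 1) add new rule above old
    sw.remove(match, old_action)                   # 2) retire old rule: no gap

sw = ToySwitch()
sw.install(10, "10.0.0.0/24", "port1")
pfc_update(sw, "10.0.0.0/24", "port1", "port2", old_prio=10)
print(sw.lookup("10.0.0.0/24"))
```

At every instant of the update some rule matches the flow, so no packet falls through to a table miss during the transition.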
The research in this letter focuses on geographic routing in Delay/Disruption Tolerant Networks (DTNs) with sparse network density. We explore the Delegation Forwarding (DF) approach to overcome the limitation of the geometric metric, which requires a mobile node to be moving towards the destination, and propose Delegation Geographic Routing (DGR). We also handle the local maximum problem of DGR by considering nodal mobility and message lifetime. Analysis and evaluation results show that DGR overcomes the limitation of the algorithm based on the given geometric metric. By overcoming the limited routing decision and handling the local maximum problem, DGR reliably delivers messages before their expiration lifetime, while the use of DF contributes to DGR's efficiency in terms of a low overhead ratio.
The recently proposed Application Layer Traffic Optimization (ALTO) framework has opened up a new dimension for Internet traffic management that is complementary to the traditional application-agnostic traffic engineering (AATE) solutions currently employed by ISPs. In this paper, we investigate how ALTO-assisted Peer-to-Peer (P2P) traffic management functions interact with the underlying AATE operations, given that different application-layer policies may exist in the P2P overlay. By considering specific P2P peer selection behaviors on top of a traffic-engineered ISP network, we conduct a performance analysis of how the respective application-layer and network-layer performance is influenced by different policies on the P2P side. Our empirical study offers significant insight for the future design and analysis of cross-layer network engineering approaches that involve multiple autonomous optimization entities with both consistent and non-consistent policies.
Largely motivated by the proliferation of content-centric applications in the Internet, information-centric networking (ICN) has attracted the attention of the research community. By tailoring network operations around named information objects instead of end hosts, ICN yields a series of desirable features, such as the spatiotemporal decoupling of communicating entities and the support of in-network caching. In this article, we advocate the introduction of such ICN features in a new, rapidly transforming communication domain: the smart grid. With the rapid introduction of multiple new actors, such as distributed (renewable) energy resources and electric vehicles, smart grids present a new networking landscape where a diverse set of multi-party machine-to-machine applications is required to enhance the observability of the power grid, often in real time and on top of a diverse set of communication infrastructures. Presenting a generic architectural framework, we show how ICN can address the emerging smart grid communication challenges. Based on real power grid topologies from a power distribution network in the Netherlands, we further employ simulations both to demonstrate the feasibility of an ICN solution for the support of real-time smart grid applications and to quantify the performance benefits brought by ICN against the current host-centric paradigm. Specifically, we show how ICN can support real-time state estimation in the medium-voltage power grid, where high volumes of synchrophasor measurement data from distributed vantage points must be delivered within a very stringent end-to-end delay constraint, while swiftly overcoming potential power grid component failures.
Backup paths are usually pre-installed by network operators to protect against single link failures in backbone networks that use multi-protocol label switching. This paper introduces a new scheme called Green Backup Paths (GBP) that intelligently exploits these existing backup paths to perform energy-aware traffic engineering without adversely impacting the primary role of these backup paths of preventing traffic loss upon single link failures. This is in sharp contrast to most existing schemes that tackle energy efficiency and link failure protection separately, resulting in substantially high operational costs. GBP works in an online and distributed fashion, where each router periodically monitors its local traffic conditions and cooperatively determines how to reroute traffic so that the highest number of physical links can go to sleep for energy saving. Furthermore, our approach maintains quality-of-service by restricting the use of long backup paths for failure protection only, and therefore, GBP avoids substantially increased packet delays. GBP was evaluated on the point-of-presence representation of two publicly available network topologies, namely, GÉANT and Abilene, and their real traffic matrices. GBP was able to achieve significant energy saving gains, which are always within 15% of the theoretical upper bound.
N Wang, G Pavlou, M Boucadair (2011) Preface
Node provisioning in wireless sensor networks is typically of very high density, which causes data duplication. Sensor duty-cycling is therefore a significant process to reduce the data load and prolong network lifetime, whereby certain sensors are selected to be active while others are pushed into sleep mode. However, quality of service in terms of network connectivity and sensing coverage must be guaranteed. This paper proposes a sensor selection method that guarantees connected coverage by using hexagonal tessellation as a virtual partition consisting of many hexagonal cells across the network. The six equilateral triangles in each hexagonal cell are target areas in which k sensors are selected to operate. The performance of the method is evaluated in terms of quality of connected coverage, number of active nodes, efficient coverage area and chance of node selection.
Data collection is a fundamental yet challenging task of Wireless Sensor Networks (WSNs) to support a variety of applications, due to their inherent distinguishing characteristics, such as limited energy supply, self-organizing deployment and application-specific QoS requirements. Mobile sink and virtual MIMO (vMIMO) techniques can be jointly applied to make data collection both time-efficient and energy-efficient. In this paper, we aim to minimize the overall data collection latency, including both sink moving time and sensor data uploading time. We formulate the problem and propose a multihop weighted revenue (MWR) algorithm to approximate the optimal solution. To achieve a trade-off between full utilization of concurrent vMIMO uploading and the shortest moving tour of the mobile sink, the proposed algorithm combines the amount of concurrently uploaded data, the number of neighbours, and the length of the sink's moving tour in one metric for polling point selection. The simulation results show that MWR effectively reduces total data collection latency in different network scenarios with lower overall network energy consumption.
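An illustrative sketch of the combined-metric idea behind MWR: candidate polling points are scored by how much concurrent upload they enable versus how far they pull the mobile sink's tour, and points are picked greedily until all sensors are covered. The weights and scoring formula are our assumptions for illustration, not the paper's actual metric:

```python
# Greedy polling-point selection with a revenue-per-travel score.
import math

def score(point, covered, last_stop, alpha=1.0):
    new = set(point["sensors"]) - covered           # sensors newly served here
    data = sum(s[1] for s in new)                   # bytes uploaded concurrently
    detour = math.dist(last_stop, point["pos"]) or 1e-9
    return (data + alpha * len(new)) / detour       # revenue per unit of travel

def select_polling_points(points, all_sensors, start=(0, 0)):
    covered, tour, pos = set(), [], start
    while covered < all_sensors:
        best = max(points, key=lambda p: score(p, covered, pos))
        tour.append(best["pos"])
        covered |= set(best["sensors"])
        pos = best["pos"]
    return tour

sensors = {("s1", 100), ("s2", 80), ("s3", 120), ("s4", 60)}
points = [
    {"pos": (1, 0), "sensors": [("s1", 100), ("s2", 80)]},
    {"pos": (5, 5), "sensors": [("s3", 120)]},
    {"pos": (5, 6), "sensors": [("s3", 120), ("s4", 60)]},
]
tour = select_polling_points(points, sensors)
print(tour)
```

The score rewards points that serve many sensors with much data while penalizing long detours, mirroring the trade-off between concurrent uploading and tour length.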
We present an efficient multi-plane based fast network failure recovery scheme which can be realized using the recently proposed multi-path enabled BGP platforms. We mainly focus on the recovery scheme that takes into account BGP routing disruption avoidance at network boundaries, which can be caused by intra-AS failures due to the hot potato routing effect. On top of this scheme, an intelligent IP crank-back operation is also introduced for further enhancement of network protection capability against failures. Our simulations based on both real operational network topologies and synthetically generated ones suggest that, through our proposed optimized backup egress point selection algorithm, as few as two routing planes are able to achieve high degree of path diversity for fast recovery in any single link failure scenario.
Reducing energy consumption in the telecom industry has become a major research challenge for the Internet community. Towards this end, numerous research works have been carried out to mitigate the growth of energy consumption through intelligent network control mechanisms. This paper proposes a novel approach to achieving energy efficiency in ISP backbone networks according to dynamic traffic conditions. The main objective is to force as many links as possible to go to sleep during off-peak time, while in the event of a traffic volume increase, only the minimum number of sleeping links should be woken up to handle this dynamicity, in a way that creates minimal or no traffic disruption. Based on our simulations with the GEANT and Abilene network topologies and their respective traffic traces, up to 47% and 44% energy gains can be achieved without any obstruction to network performance. We also show that the activation of a small number of sleeping links is sufficient to cope with any traffic surge, instead of reverting to the full topology or sacrificing energy savings as seen in some research proposals.
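A simplified sketch of the off-peak link-sleeping idea above: links are considered in increasing order of carried load, and a link is put to sleep only if the remaining topology stays connected (a stand-in for the paper's full no-congestion check). The topology and loads are illustrative:

```python
# Greedy link sleeping with a union-find connectivity check.

def connected(nodes, edges):
    """True if the given edges connect all nodes (union-find)."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(n) for n in nodes}) == 1

def sleep_links(nodes, links):
    """links: {(u, v): load}. Returns the set of links put to sleep."""
    active = dict(links)
    asleep = set()
    for link in sorted(links, key=links.get):       # least loaded first
        remaining = [e for e in active if e != link]
        if connected(nodes, remaining):             # never partition the net
            del active[link]
            asleep.add(link)
    return asleep

nodes = {"a", "b", "c", "d"}
links = {("a", "b"): 5, ("b", "c"): 7, ("c", "d"): 4,
         ("d", "a"): 6, ("a", "c"): 1}
print(sleep_links(nodes, links))
```

On a traffic surge, the reverse operation would wake only as many of the slept links as the new load requires, rather than restoring the full topology.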
With the popularity of information and content items that can be cached within ISP networks, developing high-quality and efficient content distribution approaches has become an important task in future Internet architecture design. As one of the main techniques of content distribution, the in-network caching mechanism has attracted attention from both academia and industry. However, a general evaluation model of in-network caching is seldom discussed, and the trade-off between economic cost and the deployment of in-network caching remains largely unclear, especially for heterogeneous applications. In this paper, we take a first yet important step towards the design of a better evaluation model, based on the Application Adaptation CapaciTy (2ACT) of the architecture, to quantify this trade-off. Based on our evaluation model, we further clarify the deployment requirements for the in-network caching mechanism. Based on our findings, ISPs and users can make their own choices according to their application scenarios.
Power consumption in Information and Communication Technology (ICT) accounts for 10% of the total energy consumed in industrial countries, and according to the latest measurements this share has been increasing rapidly in recent years. In the literature, a variety of new schemes have been proposed to save energy in operational communication networks. In this paper, we propose a novel optimization algorithm for network virtualization environments that puts the maximum number of physical links to sleep during off-peak hours, while still guaranteeing connectivity and off-peak bandwidth availability for the parallel virtual networks running on top. Simulation results based on the GÉANT network topology show that our algorithm is able to put a notable number of physical links to sleep during off-peak hours while still satisfying the bandwidth demands of ongoing traffic sessions in the virtual networks.
Ning Wang, Nivedita Nouvel, Chang Ge, Barry Evans, Yogaratnam Rahulan, Mael Boutin, Jeremy Desmauts, Konstantinos Liolis, Christos Politis, Simon Watts, Georgia Poziopoulou (2018) Satellite Support for Enhanced Mobile Broadband Content Delivery in 5G, In: 2018 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB) pp. 1-6
Institute of Electrical and Electronics Engineers (IEEE)
Satellite communication has recently been included as one of the enabling technologies for 5G backhauling, in particular for the delivery of bandwidth-demanding enhanced mobile broadband (eMBB) application data in 5G. In this paper we introduce a 5G-oriented network architecture empowered by satellite communications for supporting emerging mobile video delivery, which is investigated in the EU 5GPPP Phase 2 SaT5G project. Two complementary use cases are introduced: (1) the use of satellite links to support offline multicasting and caching of popular video content at the 5G mobile edge, and (2) real-time prefetching of DASH (Dynamic Adaptive Streaming over HTTP) video segments by the 5G mobile edge through satellite links. In both cases, the objective is to localize content objects close to consumers in order to achieve assured Quality of Experience (QoE) in 5G content applications. In the latter case, in order to circumvent the large end-to-end propagation delay of satellite links, testbed-based experiments have been carried out to identify specific prefetching policies to be enforced by the Multi-access Edge Computing (MEC) server for minimizing user-perceived disruption during content consumption sessions.
How to reduce power consumption within individual data centers has attracted major research efforts in the past decade, as their energy bills contribute significantly to overall operating costs. In recent years, increasing research effort has also been devoted to the design of practical power-saving techniques in content delivery networks (CDNs), which involve thousands of globally distributed data centers with content server clusters. In this paper, we present a comprehensive survey of existing research works aiming to save power in data centers and content delivery networks, which share a high degree of commonality in several aspects. We first highlight the necessity of saving power in these two types of networks, followed by the identification of four major power-saving strategies that have been widely exploited in the literature. Furthermore, we present a high-level overview of the literature by categorizing existing approaches with respect to their scopes and research directions. These schemes are then analyzed with respect to their strategies, advantages and limitations. In the end, we summarize several key aspects that are considered crucial in effective power-saving schemes. We also highlight a number of envisaged open research directions in the relevant areas that are of significance and hence require further elaboration.
Konstantinos Liolis, Alexander Geurtz, Ray Sperber, Detlef Schulz, Simon Watts, Georgia Poziopoulou, Barry Evans, Ning Wang, Oriol Vidal, Boris Tiomela Jou, Michael Fitch, Salva Sendra Diaz, Pouria Sayyad Khodashenas, Nicolas Chuberre (2018) Satellite use cases and scenarios for 5G eMBB, In: Satellite Communications in the 5G Era pp. 25-60
The Institution of Engineering and Technology
This chapter presents initial results available from the European Commission H2020 5G PPP Phase 2 project SaT5G (Satellite and Terrestrial Network for 5G). It specifically elaborates on the selected use cases and scenarios for satellite communications (SatCom) positioning in the 5G usage scenario of eMBB (enhanced mobile broadband), which appears to be the most commercially attractive for SatCom. After a short introduction to the satellite role in the 5G ecosystem and the SaT5G project, the chapter addresses the selected satellite use cases for eMBB by presenting their relevance to the key research pillars (RPs), to 5G PPP key performance indicators (KPIs), to the 3rd Generation Partnership Project (3GPP) SA1 New Services and Markets Technology Enablers (SMARTER) use case families, and to key 5G market verticals, along with a market size assessment. The chapter then provides a qualitative high-level description of multiple scenarios associated with each of the four selected satellite use cases for eMBB. Useful conclusions are drawn at the end of the chapter.
Quality of service in terms of network connectivity and sensing coverage is important in wireless sensor networks, and in sensor scheduling in particular it must be controlled to meet the required quality. In this paper, we present novel methods for connected-coverage optimization in sensor scheduling using a virtual hexagon partition composed of hexagonal cells. We first investigate the optimum number of active sensors needed to fully cover an individual hexagonal cell. Based on this best case, a sensor selection method called the three-symmetrical-area method (3-Sym) is proposed. Furthermore, we optimize the coverage efficiency by reducing the overlapping coverage degree incurred by the 3-Sym method, using the symmetrical area optimization method, which considers coverage redundancy within a particular area, namely the sensor's territory. The simulation results show that we achieve not only complete connected coverage over the entire monitored area with a near-ideal number of active sensors, but also the minimum overlapping coverage degree in each scheduling round.
In this paper, we design and evaluate the geographic-based spray-and-relay (GSaR) routing scheme in delay/disruption-tolerant networks. To the best of our knowledge, GSaR is the first spray-based geographic routing scheme that uses historical geographic information for making routing decisions. Here, the term spray means that only a limited number of message copies are allowed for replication in the network. By estimating a movement range of the destination via the historical geographic information, GSaR expedites messages being sprayed towards this range, while preventing them from being sprayed away from it and postponing their spraying out of it. The combination of these mechanisms is intended to spray the limited number of message copies towards this range quickly and efficiently, and to spray them within the range effectively, so as to reduce delivery delay and increase delivery ratio. Furthermore, GSaR exploits delegation forwarding to enhance the reliability of the routing decision and to handle the local maximum problem, which are considered key challenges for applying geographic routing schemes in sparse networks. We evaluate GSaR under three city scenarios abstracted from the real world, with other routing schemes for comparison. Results show that GSaR is reliable for delivering messages before the expiration deadline and efficient in achieving a low routing overhead ratio. Further observation indicates that GSaR is also efficient in terms of low and fair energy consumption over the nodes in the network.
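A compact sketch of the delegation forwarding (DF) rule that GSaR builds on: a message carries the best utility it has seen so far, and a custodian only replicates to an encountered node whose utility towards the destination beats that threshold (raising it in the process). The utilities below are toy numbers; in GSaR they would come from historical geographic information:

```python
# Delegation forwarding under a spray budget: replicate only to nodes that
# raise the message's best-utility-seen threshold.

class Message:
    def __init__(self, copies):
        self.copies = copies       # spray budget: replicas still allowed
        self.threshold = 0.0       # best utility seen so far

def on_encounter(msg, peer_utility):
    """DF decision at a contact; returns True if a copy is handed over."""
    if peer_utility > msg.threshold and msg.copies > 0:
        msg.threshold = peer_utility   # raise the bar for future relays
        msg.copies -= 1
        return True
    return False

msg = Message(copies=2)
decisions = [on_encounter(msg, u) for u in [0.2, 0.1, 0.5, 0.4, 0.9]]
print(decisions, "remaining copies:", msg.copies)
```

Because each replication raises the threshold, the number of hand-overs grows only slowly with the number of contacts, which is what keeps the overhead ratio low.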
Data collection is a fundamental task of Wireless Sensor Networks (WSNs) to support a variety of applications, such as remote monitoring and emergency response, where collected information is relayed to an infrastructure network via packet gateways for processing and decision making. In large-scale monitoring scenarios, data packets need to be relayed over multi-hop paths to the gateways, and sensors are often randomly deployed, causing local node density differences. As a result, imbalance in the data traffic load on the gateways is likely to occur. Furthermore, due to dynamic network conditions and differences in sensor data generation rates, congestion on some data paths is also often experienced. Numerous studies have focused on the problem of in-network traffic load balancing, while a few works have aimed at equalizing the loads on gateways; however, there is a potential trade-off between these two problems. In this paper, the dual objective of gateway and in-network load balancing is addressed and the RALB (Reactive and Adaptive Load Balancing) algorithm is presented. RALB is proposed as a generic solution for multihop networks and mesh topologies, especially in large-scale remote monitoring scenarios, to balance traffic loads.
The energy consumption of backbone networks has become a primary concern for network operators and regulators due to the pervasive deployment of wired backbone networks to meet the requirements of bandwidth-hungry applications. While traditional optimization of IGP link weights has been used in IP-based load-balancing operations, in this paper we introduce a novel link weight setting algorithm, the Green Load-balancing Algorithm (GLA), which is able to jointly optimize both energy efficiency and load-balancing in backbone networks. Such a scheme can be directly applied on top of existing link sleeping techniques in order to achieve substantially improved energy saving gains. The contribution is a practical solution that opens a new dimension of energy-efficiency optimization without sacrificing traditional traffic engineering performance in plain IP routing environments. In order to evaluate the efficiency of the proposed optimization scheme without losing generality, we applied it to a set of recently proposed but diverse algorithms for link sleeping operations in the literature. Evaluation results based on the European academic network topology, GÉANT, and its real traffic matrices show that GLA can achieve significantly improved energy efficiency compared to the original standalone algorithms, while also maintaining near-optimal load-balancing performance.
This paper addresses delay/disruption-tolerant networking routing under a highly dynamic scenario, envisioned for communication in vehicular sensor networks (VSNs) suffering from intermittent connectivity. Here, we focus on the design of a high-level routing framework, rather than on dedicated encounter prediction. Based on an analyzed utility metric to predict nodal encounters, our proposed routing framework considers the following three cases. First, messages are efficiently replicated to a better qualified candidate node, based on the analyzed utility metric related to the destination. Second, messages are conditionally replicated if a node with a better utility metric has not been met. Third, in the worst case, messages are probabilistically replicated if information relating to the destination is unavailable. With this framework in mind, we propose two routing schemes covering two major technique branches in the literature, namely: 1) encounter-based replication routing and 2) encounter-based spraying routing. Results under a scenario applicable to VSNs show that, in addition to achieving a high delivery ratio for reliability, our schemes are more efficient in terms of a lower overhead ratio. Our core investigation indicates that, apart from what information to use for encounter prediction, how to deliver messages based on the given utility metric is also important.
M Boucadair, A Levis, D Griffin, N Wang, MP Howarth, G Pavlou, E Mykoniati, P Georgatsos, B Quoitin, JR Sanchez, ML Garcia-Osma (2007) A framework for end-to-end service differentiation: Network planes and parallel internets, In: IEEE Communications Magazine, 45(9), pp. 134-143
WK Chai, N Wang, KV Katsaros, G Kamel, G Pavlou, S Melis, M Hoefling, B Vieira, P Romano, S Sarri, TT Tesfay, B Yang, F Heimgaertner, M Pignati, M Paolone, M Menth, E Poll, M Mampaey, HHI Bontius, C Develder (2015) An Information-Centric Communication Infrastructure for Real-Time State Estimation of Active Distribution Networks, In: IEEE Transactions on Smart Grid, 6(4), pp. 2134-2146
As the Internet has grown in size and diversity of applications, the next generation is designed to accommodate flows that span multiple domains with quality-of-service guarantees, in particular bandwidth. In that context, a problem emerges when destinations of inter-domain traffic are reachable through multiple egress routers: selecting different egress routers for traffic flows can have diverse effects on network resource utilization. In this paper, we address the critical provisioning issue of how to select an egress router that satisfies the customer's end-to-end bandwidth requirement while minimizing the total bandwidth consumption in the network.
Exploiting path diversity to enhance communication reliability is a key desired property of the Internet. While the existing routing architecture is slow to adopt changes, overlay routing has been proposed to circumvent the constraints of native routing by employing intermediary relays. However, selfish inter-domain relay placement may violate local routing policies at the intermediary relays and thus affect their economic costs and performance. With the recent advance of the concept of network virtualization, it is envisioned that virtual networks should be provisioned in cooperation with infrastructure providers in a holistic view without compromising their profits. In this paper, the problem of policy-aware virtual relay placement is studied for the first time, to investigate the feasibility of provisioning policy-compliant multipath routing via virtual relays for inter-domain communication reliability. By evaluation on a real domain-level Internet topology, it is demonstrated that policy-compliant virtual relaying can achieve a protection gain against single link failures similar to that of its selfish counterpart. It is also shown that the presented heuristic placement strategies perform well in approaching the optimal solution.
In this letter, we analyse the trade-off between collision probability and code-ambiguity, when devices transmit a sequence of preambles as a codeword, instead of a single preamble, to reduce collision probability during random access to a mobile network. We point out that the network may not have sufficient resources to allocate to every possible codeword, and if it does, then this results in low utilisation of allocated uplink resources. We derive the optimal preamble set size that maximises the probability of success in a single attempt, for a given number of devices and uplink resources.
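A toy model of this trade-off (our own simplification, not the letter's derivation) treats each of the n devices as drawing one of m**length codewords uniformly at random; the single-attempt success probability, and a brute-force search for the smallest preamble set size meeting a target, can then be sketched as:

```python
def single_attempt_success(n_devices, m, length):
    """P(no other device picks the same codeword), assuming each of the
    n_devices draws one of m**length codewords uniformly at random."""
    codewords = m ** length
    return (1 - 1 / codewords) ** (n_devices - 1)

def smallest_preamble_set(n_devices, length, target, m_max=64):
    """Smallest preamble set size m whose success probability meets target.
    Larger m reduces collisions, but the number of codewords the network must
    be prepared to grant resources for grows as m**length."""
    for m in range(2, m_max + 1):
        if single_attempt_success(n_devices, m, length) >= target:
            return m
    return None
```

The letter's actual optimisation additionally accounts for the finite uplink resources and code-ambiguity; this sketch only exposes the collision side of the trade-off.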
Routing in delay/disruption-tolerant networks (DTNs) operates without the assumption of contemporaneous end-to-end connectivity for relaying messages. Geographic routing is an alternative approach that uses real-time geographic information instead of network topology information. However, when the destination is mobile, its real-time geographic information is often unavailable due to the sparse network density in DTNs. To overcome this problem using historical geographic information, we propose the converge-and-diverge (CaD) scheme, which combines two routing phases depending on proximity to the movement range estimated for the destination. The key insight is to promote message replication that converges to the edge of this range and then diverges across the entire range, to achieve fast delivery given limited message lifetimes. Furthermore, the concept of delegation replication (DR) is explored to overcome the limitations of routing decisions and the local maximum problem. Evaluation results under the Helsinki city scenario show that CaD improves delivery ratio, average delivery latency, and overhead ratio. Since geographic routing in DTNs has received little attention, beyond the design of CaD our novelty also lies in exploring DR to overcome the limitations of routing decisions and the local maximum problem, in addition to enhancing efficiency, as DR originally intended.
The energy consumption of backbone networks has risen exponentially during the past decade with the advent of various bandwidth-hungry applications. To address this serious issue, network operators are keen to identify new energy-saving techniques to green their networks. Up to this point, the optimization of IGP link weights has only been used for load-balancing operations in IP-based networks. In this paper, we introduce a novel link weight setting algorithm, the Green Load-balancing Algorithm (GLA), which is able to jointly optimize both energy efficiency and load-balancing in backbone networks without any modification to the underlying network protocols. The distinct advantage of GLA is that it can be directly applied on top of existing link-sleeping based Energy-aware Traffic Engineering (ETE) schemes in order to achieve substantially improved energy saving gains, while at the same time maintaining traditional traffic engineering objectives. In order to evaluate the performance of GLA without losing generality, we applied the scheme to a number of recently proposed but diverse ETE schemes based on link sleeping operations. Evaluation results based on the European academic network topology GÉANT and its real traffic matrices show that GLA is able to achieve significantly improved energy efficiency compared to the original standalone algorithms, while also achieving near-optimal load-balancing performance. In addition, we further consider end-to-end traffic delay requirements, since the optimization of link weights for load-balancing and energy savings may introduce substantially increased traffic delay after link sleeping. To address this issue, we modified the existing ETE schemes to improve their end-to-end traffic delay performance. The evaluation of the modified ETE schemes together with GLA shows that it is still possible to save a significant amount of energy while achieving substantial load-balancing within a given traffic delay constraint.
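The joint objective can be illustrated with a minimal local-search sketch (an assumption-laden toy, not the GLA algorithm itself): score a link-weight setting by the number of loaded links (fewer loaded links means more can sleep) plus the maximum link utilisation, and hill-climb on single-weight perturbations:

```python
import heapq, random

def dijkstra(adj, weights, src):
    # adj: {node: [neighbours]}, weights: {(u, v): w}; returns predecessor map
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + weights[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return prev

def score(adj, weights, demands, capacity):
    """Joint objective: number of loaded (non-sleepable) links plus the
    maximum link utilisation; lower is better on both counts."""
    load = {e: 0.0 for e in weights}
    for src, dst, bw in demands:
        prev = dijkstra(adj, weights, src)
        v = dst
        while v != src:                 # walk the shortest path backwards
            u = prev[v]
            load[(u, v)] += bw
            v = u
    active = sum(1 for e in load if load[e] > 0)
    return active + max(load[e] / capacity for e in load)

def green_local_search(adj, weights, demands, capacity, iters=200, seed=0):
    """Hill-climb: perturb one link weight at a time, keep improvements."""
    rng = random.Random(seed)
    best, best_s = dict(weights), score(adj, weights, demands, capacity)
    for _ in range(iters):
        cand = dict(best)
        cand[rng.choice(list(cand))] = rng.randint(1, 10)
        s = score(adj, cand, demands, capacity)
        if s < best_s:
            best, best_s = cand, s
    return best, best_s
```

The real GLA operates on ECMP-aware routing and real traffic matrices; this sketch only demonstrates how a single scalar objective can couple energy (active link count) with load-balancing (maximum utilisation).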
With the advent of Network Function Virtualization (NFV) techniques, a subset of the Internet traffic will be treated by a chain of virtual network functions (VNFs) during its journey, while the rest of the background traffic will still be carried by traditional routing protocols. Under such a multi-service network environment, we consider the co-existence of heterogeneous traffic control mechanisms: flexible, dynamic service function chaining (SFC) traffic control for the former, and static, plain IP routing for the latter, with the two types of traffic sharing common network resources. Depending on the traffic patterns of the background traffic, which is statically routed through the traditional IP routing platform, we aim to perform dynamic service function chaining for the foreground traffic requiring VNF treatment, so that both the end-to-end SFC performance and the overall network resource utilization can be optimized. Towards this end, we propose a deep reinforcement learning based scheme to enable intelligent SFC routing decision-making under dynamic network conditions. The proposed scheme is ready to be deployed on both hybrid SDN/IP platforms and future advanced IP environments. Based on the real GÉANT network topology and its one-week traffic traces, our experiments show that the proposed scheme significantly improves on the traditional routing paradigm and achieves close-to-optimal performance rapidly, while satisfying the end-to-end SFC requirements.
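As a deliberately simplified stand-in for the paper's deep reinforcement learning agent, the decision loop can be sketched as an epsilon-greedy bandit choosing among candidate SFC paths, with reward equal to the negative bottleneck utilisation; the class, the toy environment and all parameter values are assumptions for illustration:

```python
import random

class PathBandit:
    """Epsilon-greedy selection among candidate SFC paths: a deliberately
    simplified stand-in for a deep RL agent, with no state representation."""
    def __init__(self, n_paths, epsilon=0.1, seed=0):
        self.q = [0.0] * n_paths          # running reward estimate per path
        self.n = [0] * n_paths
        self.eps = epsilon
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.eps:   # explore occasionally
            return self.rng.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda i: self.q[i])

    def update(self, path, bottleneck_util):
        # reward: the emptier the bottleneck link, the better
        reward = -bottleneck_util
        self.n[path] += 1
        self.q[path] += (reward - self.q[path]) / self.n[path]

# training loop against a toy environment: path 1 is consistently least loaded
agent = PathBandit(n_paths=3)
utils = [0.9, 0.3, 0.7]                    # stationary bottleneck utilisations
for _ in range(500):
    p = agent.select()
    agent.update(p, utils[p])
best_path = max(range(3), key=lambda i: agent.q[i])
```

The actual scheme conditions on network state (traffic patterns of the background traffic), which a bandit cannot; the sketch only conveys the explore/exploit loop and the utilisation-based reward signal.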
Taking advantage of its spontaneous and infrastructure-less behaviour, a mobile ad hoc network (MANET) can be integrated with various networks to extend communication for different types of network services. In such an integrated system, the design of the gateway is vital for interconnecting the different networks and aggregating data. In integrated networks with multiple gateways, proper gateway selection guarantees desirable QoS and optimises network resource utilization. However, efficient gateway selection remains challenging in integrated MANET systems with distributed terminals and limited network resources. In this paper, we examine the gateway selection problem from different aspects, including information discovery behaviour, selection criteria and the decision-making entity. The benefits and drawbacks of each method are illustrated and compared. Based on this discussion, points of consideration are highlighted for future studies.
The Internet-of-Things (IoT) paradigm envisions billions of devices all connected to the Internet, generating low-rate monitoring and measurement data to be delivered to application servers or end-users. Recently, the possibility of applying in-network data caching techniques to IoT traffic flows has been discussed in research forums. The main challenge, compared with the content typically cached at routers such as multimedia files, is that IoT data are transient and therefore require different caching policies. In fact, emerging location-based services can also benefit from new caching techniques specifically designed for small transient data. This paper studies in-network caching of transient data at content routers, considering a key temporal data property: the data item lifetime. An analytical model that captures the trade-off between multi-hop communication costs and data item freshness is proposed. Simulation results demonstrate that caching transient data is a promising information-centric networking technique that can reduce the distance between content requesters and the location in the network from which the content is fetched. To the best of our knowledge, this is a pioneering research work aiming to systematically analyse the feasibility and benefit of using Internet routers to cache transient data generated by IoT applications.
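A minimal sketch of such a lifetime-aware policy (our illustration, not the paper's analytical model) caches an item only if its remaining lifetime clears a threshold, and serves it only while it is still fresh; the class name and the `min_lifetime` threshold are assumptions:

```python
import time

class TransientCache:
    """Lifetime-aware cache for transient IoT data items: an item is served
    only while fresh, and is stored only if its lifetime is long enough to
    amortise the multi-hop fetch cost it would save."""
    def __init__(self, min_lifetime=1.0, clock=time.monotonic):
        self.store = {}                    # name -> (value, expiry time)
        self.min_lifetime = min_lifetime
        self.clock = clock

    def put(self, name, value, lifetime):
        if lifetime >= self.min_lifetime:  # not worth caching otherwise
            self.store[name] = (value, self.clock() + lifetime)

    def get(self, name):
        entry = self.store.get(name)
        if entry is None:
            return None
        value, expiry = entry
        if self.clock() >= expiry:         # stale: evict and report a miss
            del self.store[name]
            return None
        return value
```

The injectable `clock` makes the freshness check testable; a real content router would also weigh expected request rate against the hop distance to the producer when setting the admission threshold.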
Multi-Protocol Label Switching (MPLS) has been considered a promising solution for achieving end-to-end QoS guarantees in Differentiated Services (DiffServ) domains. Based on the Service Level Specification (SLS) between customers and the ISP, a traffic forecast mechanism is able to predict traffic demands between ingress-egress routers, and hence bandwidth-guaranteed LSPs can be set up accordingly through the DiffServ domain. In this paper, we address the problem of computing multiple LSPs with heterogeneous bandwidth requirements while optimizing the overall network link cost. We first prove that finding a set of feasible bandwidth-constrained LSPs is NP-complete, and then propose an efficient heuristic with global network resource coordination over individual traffic aggregates. By simulation we show that the proposed coordinated path selection (CPS) scheme achieves lower overall LSP cost and lower bandwidth consumption compared with existing bandwidth-constrained routing algorithms.
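A plain greedy baseline for this NP-complete placement problem (not the paper's CPS heuristic) routes the largest requests first, each along the cheapest path whose links all have sufficient residual capacity:

```python
import heapq

def cheapest_feasible_path(edges, src, dst, bw):
    """Dijkstra restricted to links with residual capacity >= bw.
    edges: {(u, v): [cost, residual]} (directed)."""
    adj = {}
    for (u, v), (cost, cap) in edges.items():
        if cap >= bw:
            adj.setdefault(u, []).append((v, cost))
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(pq, (d + c, v))
    if dst not in dist:
        return None                       # no feasible LSP for this request
    path, v = [dst], dst
    while v != src:
        v = prev[v]
        path.append(v)
    return path[::-1]

def greedy_lsp_setup(edges, requests):
    """Place the largest requests first; reserve bandwidth along each path."""
    paths = {}
    for i, (src, dst, bw) in sorted(enumerate(requests),
                                    key=lambda x: -x[1][2]):
        p = cheapest_feasible_path(edges, src, dst, bw)
        if p is not None:
            for u, v in zip(p, p[1:]):
                edges[(u, v)][1] -= bw    # reserve residual capacity
        paths[i] = p
    return paths
```

Unlike such a sequential greedy, the paper's heuristic coordinates across traffic aggregates globally; the baseline is useful mainly as the comparison point such schemes improve upon.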
The continuous growth in the volume of Internet traffic, including VoIP, IPTV and user-generated content, requires improved routing mechanisms that satisfy the requirements of both the Internet Service Providers (ISPs) that manage the network and the end-users that are the sources and sinks of data. The objectives of these two players differ: ISPs are typically interested in ensuring optimised network utilisation and high throughput, whereas end-users might require a low-delay or a high-bandwidth path. In this paper, we present our UAESR (Utilisation-Aware Edge Selected Routing) algorithm, which aims to satisfy both players' demands concurrently by selecting paths that are a good compromise between the two players' objectives. We demonstrate by simulation that this algorithm allows both actors to achieve their goals. The results support our argument that our cooperative approach achieves effective network resource engineering while offering routing flexibility and good quality of service to end-users.
This paper presents a holistic peer-selection scheme for multi-domain environments, aiming to mitigate Peer-to-Peer (P2P) traffic volumes over expensive inter-domain links while maintaining desirable perceived service quality for P2P users. The mechanism combines traditional locality-aware peer selection with consideration of ISP business relationships. By balancing the two peering strategies, it effectively alleviates the risk of congestion on critical interconnecting links, which under purely cooperative peering schemes would arise from P2P traffic concentrating on fewer inter-ISP links. According to our analytical modelling, the proposed hybrid approach achieves better performance for P2P users while retaining network efficiency comparable to that of the cooperative peer-selection strategy. Our modelling-based analysis offers incentives for performing peer selection in multi-domain environments where non-cooperative and cooperative networks coexist.
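The balance between locality and business relationships can be illustrated by a simple peer-scoring sketch; the weighting scheme, field names and link-cost values are all assumptions for illustration, not the paper's analytical model:

```python
def rank_peers(peers, alpha=0.5):
    """Rank candidate peers by a blend of locality and inter-ISP link cost.
    peers: list of dicts with 'id', 'latency_ms', and 'link' in
    {'internal', 'peering', 'transit'}; a lower score is preferred.
    alpha weights locality (latency) against the business cost of the link."""
    link_cost = {"internal": 0.0, "peering": 0.5, "transit": 1.0}

    def score(p):
        locality = p["latency_ms"] / 100.0   # crude latency normalisation
        return alpha * locality + (1 - alpha) * link_cost[p["link"]]

    return sorted(peers, key=score)
```

With alpha near 1 this degenerates to pure locality-aware selection, and with alpha near 0 to pure business-relationship-aware selection; the hybrid scheme in the paper corresponds to operating between those extremes.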
Information-centric networking (ICN) is an emerging networking paradigm that places content identifiers rather than host identifiers at the core of the mechanisms and protocols used to deliver content to end-users. Such a paradigm allows routers enhanced with content-awareness to play a direct role in the routing and resolution of content requests from users, without any knowledge of the specific locations of hosted content. However, to facilitate good network traffic engineering and satisfactory user QoS, content routers need to exchange advanced network knowledge to assist them with their resolution decisions. In order to maintain the location-independence tenet of ICNs, such knowledge (known as context information) needs to be independent of the locations of servers. To this end, we propose CAINE, a Context-Aware Information-centric Network Ecosystem, which enables context-based operations to be intrinsically supported by the underlying ICN routing and resolution functions. Our approach has been designed to maintain the location-independence philosophy of ICNs by associating context information directly with content rather than with physical entities such as servers and network elements in the content ecosystem, while ensuring scalability. Through simulation, we show that, based on such location-independent context information, CAINE is able to facilitate traffic engineering in the network without posing a significant control-signalling burden on it.
With the advent of various emerging network services in recent years, the current best-effort based Internet infrastructure has increasingly struggled to provide comprehensive support for these applications. Despite the QoS (Quality of Service) frameworks proposed in the 1990s, such as Integrated Services (IntServ) and Differentiated Services (DiffServ), large-scale deployments have not been seen across the global Internet to date, and this slow progress has significantly hindered the development of the relevant services. In addition, network resilience to failures has become another major concern for today's ISPs (Internet Service Providers), as QoS assurance to end-users may be severely impacted by the various failures that are common in operational networks today.
Energy-aware traffic engineering (ETE) has been gaining increasing research attention due to the cost-reduction benefits it can offer to network operators and for environmental reasons. While numerous approaches exist that attempt to provide energy reduction by intelligently manipulating network devices and their configurations, most of them suffer from one fundamental shortcoming: even minor adaptations to a given IP network topology configuration lead to temporary service disruptions incurred by routing re-convergence, which makes these schemes less appealing to network operators. The more frequently the IP topology reconfigurations take place in order to optimize network performance against dynamic traffic demands, the more frequently service disruptions occur to end users. Motivated by the essential requirement for network operators to enable seamless service assurance, we put forward a framework for disruption-free ETE, which leverages selective link sleeping and wake-up operations in a disruption-free manner. The framework maximizes the opportunities for disruption-free reconfigurations based on intelligent IGP link weight settings, assisted by a dynamic scheme that optimizes the reconfigurations in response to changing traffic conditions. As our simulation-based evaluation shows, the framework is capable of achieving significant energy saving gains while at the same time ensuring robustness in terms of disruption avoidance and resilience to congestion.
X Yang, Zhili Sun, Y Miao, N Wang, S Kang, Y Wang, Y Yang (2016) Performance Optimisation for DSDV in VANETs, In: Proceedings of the 17th UKSim-AMSS International Conference on Modelling and Simulation (UKSim), 2015, pp. 514-519
In recent years, Mobile Ad hoc Networks (MANETs) have attracted great interest all over the world for their high mobility and flexibility, while also posing some of the greatest challenges in wireless communications. As a special type of MANET, Vehicular Ad hoc Networks (VANETs) are considerably important in Next-Generation Networking (NGN). Unlike typical MANETs, VANETs are much more challenging due to the high velocity of nodes, which means classic MANET routing protocols cannot operate efficiently in such scenarios. This paper evaluates the performance of two routing protocols, DSDV and AODV, in various realistic scenarios, and proposes a DSDV optimization approach to improve DSDV's performance in VANETs.
Small cells are becoming a promising solution for providing enhanced coverage and increasing system capacity in a large-scale small cell network. In such a network, the large number of small cells may cause mobility signalling overload on the core network (CN) due to frequent handovers, which impacts users' Quality of Experience (QoE). This is one of the major challenges in dense small cell networks, and this thesis addresses the task of designing an effective signalling architecture for them. First, in order to reduce the signalling overhead incurred by path switching operations in the small cell network, a new mobility control function, termed the Small Cell Controller (SCC), is introduced into the existing base station (BS) on the Radio Access Network (RAN) side. Based on this signalling architecture, a clustering optimisation algorithm is proposed to select the optimal SCC in a high-user-density environment. Specifically, the algorithm is designed to select multiple optimal SCCs, to cope with the growing number of small cells in large-scale environments. Finally, a scalable architecture for handling control-plane failures in heterogeneous networks is proposed, in which the SCC scheme controls and manages the affected small cells in a clustered fashion during the macro-cell fail-over period. In particular, the proposed SCC scheme can be flexibly configured into a hybrid scenario. By reducing the number of direct S1 connections to the CN, the number of S1 bearers on the CN (for better scalability) and the signalling load on the CN, the proposed RAN signalling architecture is a viable and preferable option for dense small cell networks. The proposed signalling architecture is evaluated through realistic simulation studies.
This research investigates Denial of Service (DoS) attacks targeting the Internet's application-layer protocols, namely the Session Initiation Protocol (SIP) and SPDY, the proposed second version of the Hypertext Transfer Protocol (HTTP 2.0). The attack detection methodology is based on Statistical Process Control (SPC) techniques and monitoring charts, namely Cumulative Summation (CUSUM) and Exponentially Weighted Moving Average (EWMA). These techniques tackle different possible flooding attacks, typically by monitoring the incoming messages: the system senses sudden changes, detects abnormal traffic increases and triggers an alarm on a DoS attack. Scenarios were designed for SIP to simulate normal and attack traffic behaviour; some scenarios had a large ratio of non-acknowledged requests, another simulated a slight increase in this ratio, and one scenario used traffic imported from another SIP-related study. In addition, the thesis discusses the results of DoS attacks targeting the SPDY protocol: one scenario involves a large increase in the total number of requests sent by a user towards a SPDY proxy, and another a slight increase. SPC was tested on all of these scenarios and showed significant results in detecting the attacks, whether large sudden floods or slight low-rate DoS floods, the latter being very difficult and sometimes impossible to detect. SPC was also used to reduce false attack alarms, which are equally difficult to deal with. These techniques were applied in two approaches. In the first, offline implementation, the statistical values of the whole observation sequence, the mean and the standard deviation, are computed and then applied to the equations.
In the second, online implementation, the statistical values are updated on receiving each new observation before immediately applying the SPC equations; no other research has discussed such an approach. The first approach represents a system with previous knowledge and experience of the ongoing traffic, which reduces the overhead of recomputing the mean and standard deviation every time a new observation is added to the sequence. The second approach represents a system that starts with no knowledge, or one that was reset after detecting an attack. Finally, a framework is suggested to effectively employ the previous contributions in detecting traffic floods. Key words: DoS, SIP, SPDY, HTTP, SPC, CUSUM, EWMA, traffic behaviour.
Over the last two decades, the world has witnessed a vast increase in smartphone usage, with mobile devices becoming an integral part of our daily routine. This has created security issues and led to an increased dependency on smartphones, as well as to criminal activities and illegal practices committed by or via them. This increase in crime has made it necessary for digital forensics experts to develop reliable tools to help extract data from these devices. Current mobile forensics work is fragmented: although attempts have been made in recent years to develop conceptual frameworks for mobile devices, no common framework has been adopted to date that meets the needs of the ever-changing and expanding world of mobile devices. A comprehensive survey of mobile forensics frameworks in this research revealed that current frameworks tend to target specific operating systems, respond to specific issues, or use complicated steps that are difficult for users to follow; some are also based on desktop and non-mobile device models. A tools analysis was also carried out following NIST guidelines, which specify the areas in which each tool should be tested and how the tests should be conducted. The results of the tools analysis were not encouraging, and it was quite surprising that many challenges that existed at the advent of mobile devices remain unsolved. Without a generalized Process Based Framework for Mobile Forensics (PBFMF) providing the appropriate guidelines, steps and procedures to follow during the digital forensic phases, extracting data appropriately from smartphones is not as simple as it might appear, even with the most popular tools.
Based on the research and analysis in this thesis, it was clear that there is a need for a set of effective methods to ensure that information extracted and examined from mobile phones is not tampered with, is accepted by a court of law, and can be relied upon as an undisputed means of proving that something has or has not taken place. A new PBFMF that is platform-independent, open-architecture, extensible and capable of integrating newer mobile device technologies is presented in this thesis. It formulates a better understanding of the barriers to using forensics tools effectively and appropriately. Key words: Process Based Framework, Mobile Forensics Tools, Digital Forensics, Operating Systems, Smart Phones.
Reducing energy consumption in the telecom industry has become a major research challenge for the Internet community due to the high level of energy wasted on redundant network devices. In search of a paradigm shift, recent research efforts have focused on time-driven sleep-mode reconfiguration of network elements during periods of low traffic demand. However, due to the routing re-convergence issue of today's traditional IP routing protocols, frequent network reconfigurations are generally deemed harmful. Furthermore, diurnal traffic behaviours are unpredictable and can lead to network congestion as a result of the reduced network resources. This thesis presents novel event-driven green backbone routing schemes for network management which are capable of saving energy in fixed IP networks (under both regular and non-regular traffic matrices) without inhibiting performance. First, a Link Wake-up Optimisation Technique (LiWOT) is proposed for energy-saving periods when the pruned topology is applied. The key novelty is that LiWOT selects the minimum number of router line cards to wake up when network congestion is detected, contrary to the norm of reverting to the full network topology or performing on-the-fly network reconfigurations in the case of even a minor traffic surge, thereby sacrificing energy savings. In order to mitigate the effect of routing re-convergence, LiWOT prioritises waking up non-disruptive sleeping links. This scheme was further extended to a fully disruption-free scheme: the second proposed scheme, Green Link Weight Disruption-Free Energy-aware Traffic Engineering, limits its wake-up operations to non-disruptive links only. To maximise energy savings, the number of such links is maximised in an offline manner.
Using a genetic-algorithm-based approach, a new link weight optimisation scheme is proposed; this forms the basis of the second research contribution. Finally, a completely dynamic link sleeping reconfiguration (DLSR) scheme for green traffic engineering is proposed. The scheme coordinates sleep and wake-up operations dynamically, based on the current traffic. The key contribution is that, unlike the previous schemes, DLSR does not rely on historical traffic conditions and can enhance energy savings by putting woken-up links back into sleep mode during periods of low traffic. The performance of the three schemes was evaluated using publicly accessible traffic traces of the GÉANT and Abilene networks over a period of one week, and the results show a substantial amount of energy saving.
Peer-to-peer (P2P) content-sharing applications account for a significant fraction of Internet traffic volumes, and this fraction is expected to increase. In P2P systems, data is distributed to a large population of end client peers from end source peers, without the need for large investments in server deployment. The costs of content distribution are thus shared among end users and Internet service providers (ISPs). Consequently, negative impacts, increased inter-ISP traffic in particular, have become critical issues that need to be mitigated, because the most popular P2P protocols are not designed to be aware of network topology. On the other hand, a substantial access burden can arise on the source peers' side due to their limited uploading bandwidth capacities compared to the massive data demand. The top-level technical objectives of this work are thus to 1) achieve optimised usage of network resources by reducing P2P content traffic and, at the same time, 2) provide enhanced network support to P2P applications. Specifically, we address the above issues in two main ways. First, in order to reduce the P2P traffic load, especially over costly inter-ISP links, we propose an advanced hybrid scheme for peer selection across multiple domains, combining cooperation and non-cooperation among adjacent ISPs. An analytical modelling framework is developed for analysing inter-domain peer-selection schemes with respect to ISP business policies; it can serve as a guide for analysing and evaluating future network-aware P2P peer-selection paradigms in general multi-domain scenarios. Second, to improve service quality for P2P users in terms of content access delay and transmission delay, we propose an intelligent in-network caching scheme enabled by Information-Centric Networking (ICN), together with a simple analytical modelling framework to quantitatively evaluate the efficiency of the proposed in-network caching policy.
We further design an ICN-driven protocol for efficient P2P content delivery with in-network caching support. Bloom Filter (BF) techniques are adopted to save cache space and to reduce communication overhead. A P2P-like content delivery simulator with in-network caching functionality is built, with which extensive simulation experiments are conducted to validate the analytical results and to further demonstrate the efficiency of the proposed caching scheme.
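To illustrate why Bloom filters suit this role, the sketch below shows a minimal Bloom filter that summarises the set of cached content names in a fixed-size bit array: membership tests may yield false positives but never false negatives, so a router can advertise its cache compactly. The sizes and hashing scheme are illustrative assumptions, not the parameters used in the thesis.

```python
import hashlib

class BloomFilter:
    """Compact, probabilistic summary of cached content names.
    False positives are possible; false negatives are not."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k          # m bits, k hash functions
        self.bits = bytearray(m)

    def _positions(self, name):
        # Derive k positions by salting SHA-256 with the hash index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{name}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, name):
        for pos in self._positions(name):
            self.bits[pos] = 1

    def __contains__(self, name):
        return all(self.bits[pos] for pos in self._positions(name))
```

Exchanging the `m`-bit array instead of the full list of cached names is what yields the communication-overhead saving the abstract refers to.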
Recent developments in information technology have greatly helped to accelerate and facilitate banking services and operations in general. In spite of this accelerated development in the banking sector, the risk of attacks against electronic banking systems is evident. This is manifested in many harmful activities such as unauthorised money transfers, disclosure of client information and denial of online banking services, as well as various threats linked with online banking at different stages, especially the online authentication of the client. This thesis applies cloud computing to the banking system from technological and economic perspectives, and considers the possible benefits that a cloud computing provider offers. The definitions and functions of enterprise architecture, both for cloud computing and for the financial sector, are discussed, and then the new architecture model that I developed by merging the cloud and e-banking architectures is thoroughly explained. This study presents a novel tamper-proof USB device, sustained by an operating system dedicated to serving the bank's clients. The device is realised by embedding the bank application in the tamper-proof USB and creating an isolation layer on the client's PC when the client plugs it in. The modified operating system platform is based on the Puppy Linux operating system. It has the capability to multiplex physical resources at the granularity of an entire operating system while providing isolation between different operating systems. The tamper-proof device is supported by four authentication measures: a unique tamper-proof ID, a user account, a password and a fingerprint, together with a client-side Secure Sockets Layer connection.
Moreover, I designed two different channels: one to the cloud for authentication and for transferring an encrypted session key, while the other is used for communication between the client and the bank after re-authentication, accompanied by a one-time password and a fingerprint-image authentication parameter in addition to the session key. A simulation testbed is used to exercise the fundamental flow of the mechanism in sufficient detail: NetworkMiner parses libpcap files from a live packet capture of the network traffic between the cloud provider and the client; Foglight monitoring tools are used to observe the simulated server; Netwalk tools report the percentage of IP usage; and Kali Linux and Wireshark are used for penetration testing. Keywords: online transactions, security, tamper-proof devices, cloud computing, architecture model
Wireless Sensor Networks (WSNs) support a variety of data collection scenarios and have profound effects on both military and civil applications, such as environmental monitoring, traffic surveillance and tactical military monitoring. The design of efficient data collection algorithms is important yet challenging due to the distinctive characteristics of WSNs: (i) the large number of sensor nodes may cause severely unbalanced traffic through the network, owing to the concentration of data traffic towards the sinks and the intersection of multihop routes; (ii) sensor nodes are limited in power, computational capability and storage capacity, which requires careful resource management using energy-efficient schemes; (iii) WSNs are typically application-specific, and the design requirements change with different applications. This thesis presents the following three contributions to the literature on efficient data collection in WSNs. First, we proposed a unified solution for gateway and in-network traffic load balancing in multihop data collection scenarios. We combined multiple path metrics (path residual bandwidth, end-to-end delay and path reliability) and gateway conditions (gateway utilization) into a unified path quality metric. The strategy is to probabilistically choose alternative paths and adaptively modify the path switch probability based on the independent decisions made by the sensor nodes. Second, we formulated the problem of delay-aware, energy-efficient data collection with a mobile sink and the virtual multiple-input multiple-output (VMIMO) technique, and proposed a weighted revenue based algorithm to approximate the optimal solution. The aim is to fully utilise the VMIMO technique to minimise the network energy consumption while respecting a bounded sink moving time.
In order to explore the trade-off between overall network energy consumption and data collection latency, we combined the VMIMO utilization and the sink moving tour length into a weighted metric. Third, we established a minimization model for the total data collection latency in multihop data collection scenarios with bounded hop distance and limited buffer storage. To approximate the optimal solution, we developed a multihop weighted revenue algorithm. The strategy is to jointly consider data uploading time and sink moving time to optimize the total data collection time. To increase the time saving from concurrent data uploading, we balanced the number of associated nodes among the compatible sensors.
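The first contribution's unified path quality metric can be sketched as a weighted combination of the four named quantities, with a probabilistic switch rule. The normalisation, weights and switch-probability formula below are illustrative assumptions, not the thesis' actual formulation.

```python
import random

def path_quality(residual_bw, delay, reliability, gw_util,
                 weights=(0.3, 0.3, 0.2, 0.2)):
    """Combine normalised metrics (each in [0, 1]) into one score in [0, 1].
    Higher is better; delay and gateway utilisation act as penalties."""
    w_bw, w_delay, w_rel, w_gw = weights
    return (w_bw * residual_bw + w_delay * (1.0 - delay)
            + w_rel * reliability + w_gw * (1.0 - gw_util))

def maybe_switch(current_q, alternative_q, rng=random.random):
    """Probabilistic path switch: the worse the current path is relative
    to the alternative, the likelier the node switches to it."""
    if alternative_q <= current_q:
        return False
    p = (alternative_q - current_q) / alternative_q
    return rng() < p
```

Because each node evaluates `maybe_switch` independently, only a fraction of nodes move at once, which avoids the oscillation that a deterministic "always take the best path" rule would cause.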
The Internet has changed substantially from a limited communication tool to a fully interactive information sharing environment. Quality-of-Service (QoS) oblivious applications such as messaging and email have lost their dominance to time-critical and bandwidth-intensive multimedia services such as Voice over Internet Protocol (VoIP), video conferencing and Video on Demand (VoD). This considerable change in QoS demand places a heavy burden on the Internet design, including its underlying network protocols. Solutions such as multipath routing have therefore been proposed to improve data delivery performance and capacity by spreading the distribution of traffic using the network's inherent path diversity. Although these network-oriented techniques are useful for Internet Service Providers (ISPs) to engineer their resources effectively, they do not necessarily satisfy the requirements of end-users. The reason is that the ISPs' exclusive control in determining the data paths prevents end-users from reacting to QoS degradation caused by congestion on a path, even after they have noticed such deterioration. On the other hand, granting full source routing capabilities to end-users has its own disadvantages. Firstly, end-users would require regularly updated knowledge of the network topology and its traffic, which is not scalable and would impose a large overhead on the network. Secondly, the computed source routes may violate the ISP's traffic engineering policies or may cause network congestion. To address the problems of the network-controlled and source-controlled routing paradigms, this thesis considers a middle, cooperative approach between ISPs and users, which provides a modest amount of control for the end-user to select a path from a limited set of options, rather than being obliged, as in the current Internet, to follow a single pre-determined path.
The path candidates are computed by the ISP based on its performance objectives (such as balanced link utilisation) and presented to the end-user. By restricting the extent of end-user control in the intra-domain path selection process to a few policy-compliant path options, the ISP's traffic engineering considerations are not compromised, and the objectives of both communicating parties are fulfilled at the same time. Based on this principle, a cooperative edge-selected routing algorithm is presented to demonstrate the viability of the approach and its potential to reach win-win solutions for both parties (ISPs and end-users). The algorithm's performance is further validated with mathematical analysis. Then, a more scalable version is proposed to increase efficiency and decrease the memory and processing overhead. Finally, the performance and robustness of the algorithm in the face of network traffic changes is further improved with a genetic algorithm.
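The division of labour described above, the ISP enumerating a few candidate paths and the end-user picking among them by its own criterion, can be sketched as follows. This is an illustrative toy, not the thesis algorithm: the candidate set here is simply the k cheapest loop-free paths under the ISP's cost metric.

```python
import heapq
import itertools

def k_shortest_paths(graph, src, dst, k=3):
    """ISP side: enumerate up to k cheapest loop-free paths by best-first
    search. graph: dict node -> {neighbour: cost}. Returns (cost, path) pairs
    in non-decreasing cost order."""
    counter = itertools.count()           # tie-breaker so the heap never compares lists
    heap = [(0, next(counter), [src])]
    found = []
    while heap and len(found) < k:
        cost, _, path = heapq.heappop(heap)
        node = path[-1]
        if node == dst:
            found.append((cost, path))
            continue
        for nbr, c in graph.get(node, {}).items():
            if nbr not in path:           # keep paths loop-free
                heapq.heappush(heap, (cost + c, next(counter), path + [nbr]))
    return found

def user_pick(candidates, user_cost):
    """End-user side: choose among the ISP-supplied candidates using the
    user's own metric (e.g., observed latency)."""
    return min(candidates, key=lambda cp: user_cost(cp[1]))[1]
```

Since every candidate was generated by the ISP, whatever the user picks remains policy-compliant, which is the essence of the win-win argument in the abstract.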
A content delivery network (CDN) typically consists of geographically-distributed data centers (DCs), which are deployed in proximity to the end-users who request Internet content. Content copies are stored at the DCs and are delivered to end-users in a localized manner to improve service availability and end-to-end latency. On one hand, CDNs have improved the QoS experienced by end-users. On the other hand, the rapid increase in Internet traffic volume has caused the global DC industry's energy usage to skyrocket. Therefore, our focus in this thesis is to realize energy awareness in CDN management while assuring end-to-end QoS. First, we surveyed the literature on energy-aware DC and CDN management schemes. We highlighted the significance of dynamically provisioning server and network resources in DCs in order to reduce DC energy usage. We also recognized that, in order to achieve optimal CDN energy saving, energy optimization should be performed both within each DC and among the multiple DCs of a CDN. Second, we proposed a theoretical framework that minimizes server power consumption in cross-domain CDNs. Here the term "server" refers to any co-located entity that can handle user requests, e.g., a server cluster or a DC. Our strategy was to put a subset of servers into sleep mode during off-peak hours to save energy. To avoid QoS deterioration caused by the reduced live server resources, we enforced utilization constraints on servers and network links respectively to prevent them from being overloaded. Third, we designed an energy-aware CDN management system. The strategy was not only to put a subset of servers within each DC to sleep, but also to put entire DCs to sleep during off-peak hours through load unbalancing among DCs. We showed how the proposed system can be integrated within a typical modern CDN architecture.
We also developed a heuristic algorithm that allows CDN operators to quickly make decisions on server and DC sleeping, as well as energy-aware request resolution. QoS was assured through constraints on server response time and end-to-end delay. Fourth, we built an optimization model that minimizes the overall energy consumption of CDN DCs, including their servers and cooling systems, and derived a lower bound on its optimal objective. By comparing against this lower bound, we showed that the energy-saving gain of our earlier heuristic algorithm is guaranteed to be near-optimal. We also quantitatively studied the trade-off between CDN energy saving and QoS performance in terms of end-to-end delay and server response time.
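The flavour of such a sleeping heuristic can be conveyed with a minimal sketch: greedily put servers to sleep while the remaining awake capacity still serves the offered demand below a utilization cap. The largest-first ordering (assuming power roughly scales with capacity) and the threshold value are assumptions for illustration only, not the thesis' heuristic.

```python
def greedy_sleep(capacities, demand, max_util=0.7):
    """capacities: dict server -> serving capacity; demand: total offered load.
    Greedily sleep servers (largest capacity first) as long as the servers
    left awake can absorb the demand below the utilization cap.
    Returns the set of servers put to sleep."""
    awake_capacity = sum(capacities.values())
    asleep = set()
    for server, cap in sorted(capacities.items(), key=lambda kv: kv[1], reverse=True):
        # Sleep this server only if the rest stay under the utilization cap,
        # which is the QoS-protecting constraint from the abstract.
        if demand <= max_util * (awake_capacity - cap):
            asleep.add(server)
            awake_capacity -= cap
    return asleep
```

The utilization cap plays the same role as the constraints in the framework above: it prevents the energy-saving action from overloading the servers that remain live.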
It has been envisaged that in future 5G networks user devices will become an integral part of the network by participating in the transmission of mobile content traffic, typically through device-to-device (D2D) technologies. In this context, we promote the concept of Mobility as a Service (MaaS), where the mobile network edge is equipped with the necessary knowledge of device mobility in order to meet specific service requirements of clients via a small number of helper devices. In this thesis, we propose MaaS-based frameworks to address clients' requirements regarding content offloading and connectivity relaying services via a network-assisted D2D communication framework. To address content traffic offloading, we present a device-level Information-Centric Networking (ICN) architecture that is able to perform intelligent content distribution operations according to the necessary context information on user mobility and content characteristics. Based on this architecture, we further introduce device-level online content caching and offline helper selection algorithms in order to optimise overall system efficiency. In particular, this work sheds distinct light on the importance of the user mobility data analytics on which helper selection is based, and which can lead to overall system optimality. Based on representative user mobility models, we conducted realistic simulation experiments and modelling, which demonstrated the framework's efficiency in terms of both network traffic offloading gains and user-oriented performance improvements. In addition, we show how the framework can be flexibly configured to meet specific delay-tolerance constraints according to specific context policies. With regard to the connectivity relaying service, we introduce a novel scheme that uses D2D communications to enable data relay services in partial not-spots, where a client without local network access may require data relay by other devices.
Depending on the specific social application scenario, this work introduces tailored algorithms to achieve optimised data relay service performance. The approach is to exploit the network's knowledge of its local user mobility patterns to identify the best helper devices for participating in data relay operations. This framework is further supported by our proposed helper selection optimisation algorithm, based on the prediction of individual user mobility. According to our simulation analysis, based on both theoretical mobility models and real human mobility traces, the proposed scheme is able to flexibly support different service requirements in specific social application scenarios.
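A minimal sketch of prediction-driven helper selection is shown below: rank candidate helpers by their predicted probability of meeting the client within the delay budget, and estimate the resulting delivery probability. The independence assumption and the top-k rule are illustrative simplifications, not the thesis' optimisation algorithm.

```python
def select_helpers(contact_prob, k=2):
    """contact_prob: dict helper -> predicted probability that the helper
    meets the client within the delay budget (from mobility prediction).
    Pick the k most promising helpers."""
    ranked = sorted(contact_prob, key=contact_prob.get, reverse=True)
    return ranked[:k]

def delivery_prob(contact_prob, helpers):
    """Probability that at least one selected helper meets the client,
    assuming independent contact events."""
    miss = 1.0
    for h in helpers:
        miss *= 1.0 - contact_prob[h]
    return 1.0 - miss
```

Varying `k` against the achieved `delivery_prob` is one simple way to trade the number of recruited helpers against the service requirement of a given social application scenario.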