
Dr Hamed Alimohammadi
Publications
Open Radio Access Network (Open RAN) is a new paradigm that provides fundamental features for supporting next-generation mobile networks. Disaggregation, virtualisation, closed-loop data-driven control, and open interfaces bring flexibility and interoperability to network deployment. However, these features also create a new surface for security threats. In this paper, we introduce a Key Performance Indicator (KPI) poisoning attack on Near Real-Time control loops as a new form of threat that can significantly affect Open RAN functionality. This threat can arise from traffic spoofing on the E2 interface or from compromised E2 nodes. The role of KPIs is explored in the use cases of Near Real-Time control loops, and the potential impacts of the attack are analysed. An ML-based approach is proposed to detect poisoned KPI values before they are used in control loops. Emulations are conducted to generate KPI reports and inject anomalies into the values. A Long Short-Term Memory (LSTM) neural network model is used to detect the anomalies. The results show that injected values with larger amplification are easier to detect, and that using longer report sequences leads to better anomaly detection, with detection rates improving from 62% to 99%.
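As a rough illustration of the detection idea, the sketch below assumes PyTorch: a small LSTM forecasts the next KPI report from a window of previous reports, and a report whose residual exceeds a threshold is flagged as potentially poisoned. The KPI count, window handling, and threshold are assumptions, not the paper's exact setup.

```python
# Minimal sketch of LSTM-based KPI anomaly detection (hypothetical setup).
import torch
import torch.nn as nn

class KPIForecaster(nn.Module):
    def __init__(self, n_kpis=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_kpis, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_kpis)

    def forward(self, x):                 # x: (batch, window, n_kpis)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict the next KPI report

def is_poisoned(model, window, next_report, threshold=3.0):
    """Flag a report whose prediction residual exceeds the threshold."""
    with torch.no_grad():
        pred = model(window.unsqueeze(0)).squeeze(0)
    return (pred - next_report).abs().max().item() > threshold
```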
This paper presents a detailed framework for detecting anomalies and tracking service usage within the O-RAN architecture, focusing on preventing DDoS attacks caused by unauthorized or compromised User Equipment (UE). The system consists of three main parts: the dApp, xApp-U, and xApp-S. Each part plays a role in identifying suspicious activities and monitoring UE service usage across both the RAN and the near-Real-Time (RT) RIC, ensuring effective threat detection and response. We implemented and evaluated various Machine Learning (ML) algorithms, comparing them on key metrics such as accuracy, precision, recall, False Positive Rate (FPR), and training and testing time. Our analysis shows that different ML algorithms perform better depending on the system's needs, and choosing the right one requires balancing accuracy, delay, and false positives. The dApp operates in the RAN, where decisions must be made quickly with minimal delay, while xApp-U, working in the near-RT RIC, benefits from having more data and achieves better accuracy in detecting anomalies. Finally, xApp-S focuses on tracking service usage to identify patterns that contribute to suspicious behavior. This multi-layered approach allows for flexible and accurate security measures suited to the specific needs of each part of the system.
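The metric comparison described above can be reproduced in outline with scikit-learn; the classifiers, placeholder features, and labels below are illustrative assumptions, not the paper's dataset or model set.

```python
# Illustrative comparison of candidate classifiers on the metrics named above.
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = np.random.rand(1000, 10), np.random.randint(0, 2, 1000)  # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("RF", RandomForestClassifier()),
                  ("DT", DecisionTreeClassifier()),
                  ("kNN", KNeighborsClassifier())]:
    t0 = time.perf_counter(); clf.fit(X_tr, y_tr); train_t = time.perf_counter() - t0
    t0 = time.perf_counter(); pred = clf.predict(X_te); test_t = time.perf_counter() - t0
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    fpr = fp / (fp + tn)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} rec={recall_score(y_te, pred):.3f} "
          f"FPR={fpr:.3f} train={train_t:.3f}s test={test_t:.3f}s")
```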
This paper explores a rule-based mediation approach to optimize the balance between coverage and capacity Key Performance Indicators (KPIs) in Self-Organizing Networks (SONs). Traditional methods often provide mitigation solutions to resolve conflicts between these KPIs. In contrast, this study introduces a mediation optimization technique that dynamically adjusts electrical antenna tilt in response to changing user densities. Through simulations, the proposed approach demonstrates significant improvements in network performance, effectively reducing the coverage and capacity losses typically observed when one KPI is prioritized over the other. This dynamic adjustment method offers a more balanced solution for optimizing SON performance.
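A minimal sketch of the mediation rule, assuming hypothetical density thresholds, tilt limits, and step size: down-tilting favours capacity under high user density, up-tilting favours coverage under low density.

```python
# Toy rule-based tilt mediation; all numeric values are illustrative.
def mediate_tilt(current_tilt_deg, user_density, low=50, high=200,
                 min_tilt=0.0, max_tilt=10.0, step=0.5):
    """Down-tilt under high density (capacity), up-tilt under low
    density (coverage); otherwise hold the current compromise."""
    if user_density > high:
        return min(current_tilt_deg + step, max_tilt)
    if user_density < low:
        return max(current_tilt_deg - step, min_tilt)
    return current_tilt_deg
```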
The Open Radio Access Network (Open RAN) architecture introduces flexibility, interoperability, and high performance through its open interfaces, disaggregated and virtualized components, and intelligent controllers. However, the open interfaces and the disaggregation of base stations leave only the Open Radio Unit (O-RU) physically deployed in the field, making it more vulnerable to malicious attacks. This paper addresses signaling storm attacks and introduces a new sub-use case within the signaling storm use case of the O-RAN Alliance standards by exploring novel attack triggers. Specifically, we examine the compromise of O-RUs and their power sockets, which can lead to a surge in handovers and re-registration procedures. Additionally, we leverage Open RAN's intelligence capabilities to detect these signaling storm attacks. Seven machine learning algorithms have been evaluated based on their detection rate, accuracy, and inference time. Results indicate that the Bidirectional Long Short-Term Memory (BiDLSTM) model outperforms the others, achieving a detection rate of 88.24% and an accuracy of 96.15%.
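For illustration, a BiDLSTM classifier of the kind evaluated here can be sketched in PyTorch as below; the feature count, sequence length, and layer sizes are assumptions.

```python
# Sketch of a BiDLSTM signaling storm detector (hypothetical dimensions).
import torch
import torch.nn as nn

class StormDetector(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.bilstm = nn.LSTM(n_features, hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)    # storm / no-storm logit

    def forward(self, x):                       # x: (batch, time, n_features)
        out, _ = self.bilstm(x)
        return self.head(out[:, -1])            # classify from the last step

model = StormDetector()
logit = model(torch.randn(2, 20, 8))            # two 20-step signalling traces
prob = torch.sigmoid(logit)
```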
Open Radio Access Network (Open RAN) has revolutionized future communications by introducing open interfaces and intelligent network management. Network slicing enables the creation of multiple virtual networks on a single physical infrastructure, providing tailored services for performance, security, and latency. Efficient RAN slice resource allocation requires accurate prediction of slice loads from the collected reports. However, the open interfaces brought by Open RAN have also introduced new security challenges: malicious attackers could modify the data exchanged between E2 nodes and the Near Real-Time RIC, misleading the model into poor performance. To counter this attack, we propose a novel contrastive learning design that uses data augmentation to make the model robust to feature distortion. The contrastive learning model learns the correlation between original data and distorted data. Moreover, the proposed contrastive learning generalizes better than conventional supervised learning, making it suitable for dynamic environments and adaptable to various noise levels. The design includes both supervised and unsupervised contrastive learning (SCL and UCL). SCL achieves 87.1% out-of-distribution network slice classification accuracy and UCL achieves 86.6%, compared with 82.6% for a conventional MLP. Meanwhile, the proposed method requires only 8.4% of the training computation of the conventional MLP.
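As a hedged sketch of the contrastive idea (not the paper's exact design), the snippet below pairs each slice-load report with a noise-distorted view and pulls the pair together in embedding space using an NT-Xent-style unsupervised loss; the encoder, noise model, and temperature are illustrative.

```python
# Unsupervised contrastive sketch: original vs. distorted view of each report.
import torch
import torch.nn.functional as F

def augment(x, noise_std=0.1):
    return x + noise_std * torch.randn_like(x)   # feature-distortion view

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over a batch of (original, distorted) embedding pairs."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))            # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)         # positive = the paired view

encoder = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 8))
x = torch.randn(64, 16)                          # placeholder slice-load reports
loss = nt_xent(encoder(x), encoder(augment(x)))
```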
One of the most important tasks of a network device is packet classification, support for which has become particularly important with the continuing growth of the Internet. In packet classification, incoming packets are matched against a set of rules so that the rule corresponding to each packet is found and the relevant action(s) applied to the packet. With the emergence of software-defined networks, many-field rulesets have been introduced, creating further challenges in the area. One of the best packet classification methods for today's applications is to use hash tables, because of their fast update support and reasonable search speed. A well-known hash-table-based packet classification algorithm is Tuple Space Search (TSS). TSS extracts the pattern of non-wildcard bit positions of the rules to use as a hash key; rules with the same pattern are placed in the same cluster. Machine learning techniques, in turn, can help cluster the ruleset in many-field packet classification, given the growing number and variety of fields. In this paper, we cluster rules using neural gas networks. In this method, each cluster of rules has a specific pattern of bit positions that is non-wildcard in all the rules belonging to the cluster and is used as the hash key. The rules of each cluster are hashed into a hash table. During classification, the hash key for each cluster is extracted from the packet's header according to the non-wildcard pattern of that cluster; then, using a hashing function, the related rules are examined to find the highest-priority matching rule. The experimental results show a 93% average throughput improvement compared to TSS.
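The per-cluster lookup mechanism can be sketched as follows, with toy 8-bit rules; the neural gas training that derives each cluster's mask is omitted, and the masks and rules shown are made up for illustration.

```python
# Per-cluster masked-key lookup: each cluster's non-wildcard positions form the key.
def make_key(bits, mask):
    """Concatenate the bit positions that are non-wildcard for this cluster."""
    return "".join(b for b, m in zip(bits, mask) if m == "1")

clusters = {
    # mask -> hash table of {key: (priority, rule)}
    "11110000": {"1010": (2, "rule-A")},
    "00001111": {"0110": (1, "rule-B")},
}

def classify(header_bits):
    best = None
    for mask, table in clusters.items():          # probe every cluster
        hit = table.get(make_key(header_bits, mask))
        if hit and (best is None or hit[0] > best[0]):
            best = hit                            # keep the highest priority
    return best[1] if best else None

print(classify("10100110"))                       # matches both; rule-A wins
```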
Many-field packet classification is a challenging function of the devices in software-defined networking. In this paper, we propose a new algorithm that partitions a ruleset in a simple way based on non-wildcard portions of the rules, where a portion can be a field or a sub-field. The algorithm uses hash tables as its base data structure. In a partition, all members share a common non-wildcard portion, which is used as the hash key; thus, only a portion of the rules and headers is used for hashing. This simplifies applying hash tables to packet classification, which otherwise has to deal with ternary vectors. The proposed algorithm supports fast updating, a required feature for most of today's networks. Extensive simulations are conducted to evaluate the algorithm and compare it with well-known algorithms. Results show that the proposed algorithm achieves 196% higher throughput and 81% faster updates than Tuple Space Search, the base classification algorithm of OpenVSwitch.
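A minimal sketch of one partition, assuming the shared non-wildcard portion is a contiguous bit range: the portion bits serve as the hash key, and updates reduce to plain dictionary inserts and deletes (full matching of the remaining bits against bucket entries is omitted).

```python
# Toy partition keyed on a shared non-wildcard bit range (an assumption).
class Partition:
    def __init__(self, start, end):
        self.start, self.end = start, end        # the shared non-wildcard portion
        self.table = {}                          # portion bits -> list of rules

    def key(self, bits):
        return bits[self.start:self.end]

    def insert(self, rule_bits, rule):
        self.table.setdefault(self.key(rule_bits), []).append(rule)

    def remove(self, rule_bits, rule):
        self.table[self.key(rule_bits)].remove(rule)   # O(1) bucket access

    def lookup(self, header_bits):
        return self.table.get(self.key(header_bits), [])

p = Partition(0, 4)                              # members share bits 0..3 as non-wildcard
p.insert("1010****", "rule-A")                   # '*' outside the portion is fine
print(p.lookup("10101111"))                      # ['rule-A']
```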
In a mobile ad hoc network, wireless links break frequently because of node mobility. This leads to repeated route discoveries and, consequently, waste of network resources such as energy and bandwidth. Using a routing protocol that reduces link breakages results in lower overhead and fewer packet losses. In this paper, we propose a reliable multipath routing protocol that finds stable paths from a source node to a destination node. Our protocol verifies paths link by link as they are discovered: each link must satisfy a suitably chosen threshold, or it is excluded from the paths. As a result, only stable and reliable paths are constructed. Simulation results show that our protocol reduces routing overhead and energy consumption while improving packet delivery ratio and throughput, especially in high-mobility scenarios.
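Schematically, the link-by-link verification could look like the sketch below; the stability score and threshold value are placeholders for whatever link-quality metric the protocol measures.

```python
# Toy link-by-link path filtering; the stability metric is hypothetical.
def link_stable(link, threshold=0.7):
    """Accept a link whose estimated stability score meets the threshold."""
    return link["stability"] >= threshold

def path_usable(path, threshold=0.7):
    """A candidate path survives only if every link on it passes the check."""
    return all(link_stable(l, threshold) for l in path)

paths = [
    [{"stability": 0.9}, {"stability": 0.8}],    # stable path -> kept
    [{"stability": 0.9}, {"stability": 0.4}],    # weak link   -> discarded
]
stable_paths = [p for p in paths if path_usable(p)]
print(len(stable_paths))                          # 1
```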
Packet classification is one of the core functions of networking. With the advent of Software-Defined Networking, packet classification has become more challenging through the introduction of many-field rulesets. In this paper, we propose an algorithm, Clustering-Based Packet Classification (CBPC), which divides the ruleset into clusters using a new hybrid clustering method based on an innovative bit-level view. Rules that share more common wildcard and non-wildcard bit positions are placed in the same cluster. Each cluster uses the common non-wildcard positions of its rules to produce keys for the insertion and query stages of a hash table. This makes it possible to use hash tables without the difficulties of inserting and querying ternary vectors, because our algorithm reduces the problem to simple binary operations. In effect, we ignore a portion of the rules' information when producing keys for the hash tables, which avoids expanding wildcards to all possible values; although some information is lost, it is recovered by full matching at the hash table entries. We propose two versions of CBPC, online and offline. The online version supports updates, an important requirement for today's packet classification algorithms. The proposed algorithm is evaluated and compared with well-known and state-of-the-art algorithms through extensive simulations. The results show that Online-CBPC achieves 197% higher throughput and 64% faster updates than Tuple Space Search, the standard algorithm of OpenVSwitch, while using almost the same amount of memory.
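The bit-level grouping step can be illustrated with a greedy stand-in for the hybrid clustering (the similarity threshold and greedy assignment are assumptions): rules whose wildcard/non-wildcard masks agree in enough positions join the same cluster, and the cluster's mask keeps only the positions that are non-wildcard in all members.

```python
# Greedy bit-level grouping sketch; not the paper's hybrid clustering method.
def mask_of(rule):
    return "".join("0" if b == "*" else "1" for b in rule)

def cluster_rules(rules, min_common=6):
    clusters = []                                # each: {"mask": ..., "rules": [...]}
    for r in rules:
        m = mask_of(r)
        for c in clusters:
            common = sum(a == b for a, b in zip(m, c["mask"]))
            if common >= min_common:             # enough shared positions
                c["mask"] = "".join("1" if a == b == "1" else "0"
                                    for a, b in zip(m, c["mask"]))
                c["rules"].append(r)
                break
        else:
            clusters.append({"mask": m, "rules": [r]})
    return clusters

print(cluster_rules(["1010****", "1011****", "****0110"]))  # two clusters
```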