I am a mathematician (Maths, University of Patras) doing research in computer science (PhD in Theoretical Computer Science, University of Surrey). My work on Learning and Control in complex networks focuses on when and where to intervene in a network in order to steer it towards a desirable outcome. I am keen on mathematical methods (control theory, approximation methods) combined with computational methods such as Reinforcement Learning (rule-based, deep).
This video clip provides a quick overview of my recent research activity.
I have led several UK and EU funded research projects. Current projects include:
- the Real-Time Flow (RTF) project, funded by EIT Digital IVZW - this is a collaboration with Amey UK (lead), Ferrovial, Ci3 (Spain) and Emu Analytics on modelling and predicting passenger and train flows over transport networks
- the CoNTINuE (Capacity building in technology-driven innovation in healthcare) project, funded by GCRF, UKRI, which is an interdisciplinary project between Surrey Business School, Clinical and Experimental Medicine, and Computer Science
- AGELink (Automated GEneration of Linkages between delay events) which looks at minimising reactionary delay -- the knock-on effect on the rail network of a train being late. This is an EPSRC IAA project in collaboration with the Rail Delivery Group (RDG)
I am a member of the executive committee of the IEEE Technical Committee on Cloud Computing and of the IEEE Technical Committee on Industrial Informatics.
I serve on the Programme Committees of the annual conference on Complex Networks and of IEEE Service-Oriented Computing and Applications (IEEE SOCA), and was a co-chair of the RuleML+RR International Rule Challenge in 2019 and 2020.
I am an Associate Member of the Surrey Centre for Cyber Security (SCCS).
I am an IEEE Member (No. 41465193).
University roles and responsibilities
- Admissions Tutor (UG and PGT)
I am interested in devising the best way to act in problems modelled as complex networks or, more generally, as real-world situations represented by dynamical systems. The objective is to get a handle on how such systems may evolve over time. Coming from a formal methods and verification background, I approach this from an analytical and computational perspective. My research focuses on applying mathematical methods (e.g., control theory, approximation methods, stochastic optimisation) and computational techniques (rule-based machine learning, deep reinforcement learning) to the controllability of complex networks. This video will give you a quick overview.
This research is applicable to:
- Targeted therapeutics - Gene Regulatory Networks; designing a control policy to guide therapeutic interventions
- Transportation Systems - Rail and Road networks; modelling and predicting flows of trains, cars, people, goods in urban environments
- Cyber security - APTs in 5G networks and Location Proof Systems (LPS); resilience of networks to dynamic (re)configuration
Here are some slides on the Real-Time Flow project, and CCTool, presented at the London Node of EIT Digital (January 2020).
The Complex Control tool (CCTool) combines results from structural controllability theory, network analysis and machine learning / AI to identify the most influential nodes (drivers), and develop control policies to direct the network to a desired state. This is joint work with Alexandra Penn and Nicholas Elia, Matthew Karlsen, George Papagiannis, Vlad Georgiev and Angelos Christidis and it originates in the EPSRC ERIE project.
Machine Learning and Data Mining (COMM055) for Postgraduate Taught students on the MSc in Data Science programme in Semester 2 2019-20. Specifically, I teach Reinforcement Learning, and Rule-based Machine Learning.
Database Systems (COMM051) for Postgraduate Taught students on the MSc in Data Science and MSc in Information Security programmes in Semester 1 2019-20.
Software Engineering (COM1028) for First Year (Year 1) undergraduates, in CS and CIT, in Spring 2018-19.
Enterprise Systems Development (COM3011) for Final Year (Year 3) undergraduate students in Autumn 2016-17.
Lecture notes, exercises, and other related material on these courses can be found on SurreyLearn.
Modern software systems are becoming increasingly complex as they are expected to support a wide variety of functions. We need to create more software in a shorter time, without compromising its quality. In order to build such systems efficiently, a compositional approach is required. This entails some formal technique for analysing and reasoning about local component properties as well as properties of the composite. In this paper, we present a mathematical framework for the composition of software components at the level of semantic modelling. We describe a mathematical concept of a component and identify properties that ensure its potential behaviour can be captured. Based on that, we give a formal definition of composition and examine its effect on the individual components. We argue that properties of the individual components can, under certain conditions, be preserved in the composite. The proposed framework can be used to guide the composition of components, as it advocates formal reasoning about the composite before the actual composition takes place.
In this paper we present a prototype of a tool that demonstrates how existing limitations in ensuring an agent’s compliance to an argumentation-based dialogue protocol can be overcome. We also present the implementation of compliance enforcement components for a deliberation dialogue protocol, and an application that enables two human participants to engage in an efficiently moderated dialogue, where all inappropriate utterances attempted by an agent are blocked and prevented from inclusion within the dialogue.
A random Boolean network (RBN) may be controlled through the use of a learning classifier system (LCS) – an eXtended Classifier System (XCS) can evolve a rule set that directs an RBN from any state to a target state. However, the rules evolved may not be optimal, in terms of minimising the total cost of the paths used to direct the network from any state to a specified attractor. Here we uncover the optimal set of control rules via an exhaustive algorithm. The performance of an LCS (XCS) on the RBN control problem is assessed in light of the newly uncovered optimal rule set.
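To make the setting concrete, here is a minimal sketch of a synchronous RBN with rule-based interventions. The 3-node network, its update functions, and the condition-action rule format are invented for illustration; they are not taken from the paper:

```python
# Illustrative sketch (hypothetical 3-node RBN): nodes update synchronously,
# and a control rule may flip one node before each natural step.

def rbn_step(state, functions, inputs):
    """Synchronous update: node i's next value is functions[i] applied to the
    current values of its input nodes."""
    return tuple(functions[i](*(state[j] for j in inputs[i]))
                 for i in range(len(state)))

# A toy network: node 0 copies node 2, node 1 = AND(0, 2), node 2 = NOT(1).
inputs = [(2,), (0, 2), (1,)]
functions = [lambda a: a,
             lambda a, b: a and b,
             lambda a: 1 - a]

def run_with_interventions(state, rules, steps=10):
    """Apply a condition->action rule (flip one node) before each natural step."""
    for _ in range(steps):
        node = rules.get(state)        # control rule: which node to flip, if any
        if node is not None:
            s = list(state)
            s[node] = 1 - s[node]      # intervention: flip the chosen node
            state = tuple(s)
        state = rbn_step(state, functions, inputs)
    return state
```

An exhaustive search over such rule sets, weighting interventions against 'natural' steps, is what uncovers the cost-optimal controller against which the evolved XCS rules are benchmarked.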
The service choreography approach has been proposed for describing the global ordering constraints on the observable message exchanges between participant services in service oriented architectures. Recent work advocates the use of structured natural language, in the form of Semantics of Business Vocabulary and Rules (SBVR), for specifying and validating choreographies. This paper addresses the verification of choreographies - whether the local behaviours of the individual participants conform to the global protocol prescribed by the choreography. We describe how declarative specifications of service choreographies can be verified using a trace-based model, namely an adaptation of Shields’ vector languages. We also use the so-called blackboard rules, which draw upon the Bach coordination language, as a middleware that adds reactiveness to this declarative setting. Vector languages are to trace languages what matrices are to linear transformations; they afford a more concrete representation which has advantages when it comes to computation or manipulation.
We propose a set of optimization techniques for transforming a generic AI codebase so that it can be successfully deployed to a restricted serverless environment, without compromising capability or performance. These involve (1) slimming the libraries and frameworks (e.g., pytorch) used, down to pieces pertaining to the solution; (2) dynamically loading pre-trained AI/ML models into local temporary storage, during serverless function invocation; (3) using separate frameworks for training and inference, with ONNX model formatting; and, (4) performance-oriented tuning for data storage and lookup. The techniques are illustrated via worked examples that have been deployed live on geospatial data from the transportation domain. This draws upon a real-world case study in intelligent transportation looking at on-demand, real-time predictions of flows of train movements across the UK rail network. Evaluation of the proposed techniques shows the response time, for varying volumes of queries involving prediction, to remain almost constant (at 50 ms), even as the database scales up to 250M entries. The query response time is important in this context as the target is predicting train delays. It is even more important in a serverless environment due to the stringent constraints on serverless functions' runtime before timeout. The similarities of a serverless environment to other resource-constrained environments (e.g., IoT, telecoms) mean the techniques can be applied to a range of use cases.
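Technique (2), dynamically loading a model into the function's temporary storage, follows a familiar cold-start caching pattern. The sketch below is a hypothetical illustration (the function names and the `fetch` stand-in are assumptions, not the project's code):

```python
import os
import shutil

# Hypothetical sketch: cache a pre-trained model in the function's temporary
# storage on the first (cold) invocation; warm invocations reuse the cached
# in-memory object. `fetch` stands in for the real download / session-build
# step (e.g. constructing an ONNX Runtime inference session).

def make_model_loader(source, cache_dir, fetch):
    """Return a zero-argument loader that caches `source` under `cache_dir`."""
    cached = {"model": None}

    def load():
        if cached["model"] is None:                 # warm start skips all of this
            path = os.path.join(cache_dir, os.path.basename(source))
            if not os.path.exists(path):            # cold start: copy/fetch once
                shutil.copy(source, path)
            cached["model"] = fetch(path)
        return cached["model"]

    return load

# Usage: the expensive fetch/load work happens once, on the cold start.
import tempfile
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "model.bin")
with open(src, "w") as f:
    f.write("weights")

loads = []
loader = make_model_loader(src, tempfile.mkdtemp(),
                           lambda p: loads.append(p) or "loaded")
loader()
loader()
print(len(loads))  # -> 1
```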
The aim of this paper is to facilitate e-business transactions between small and medium enterprises (SMEs), in a way that respects their local autonomy, within a digital ecosystem. For this purpose, we distinguish transactions from services (and service providers) by considering virtual private transaction networks (VPTNs) and virtual service networks (VSNs). These two virtual levels are optimised individually and in respect to each other. The effect of one on the other, can supply us with stability, failure resistance and small-world characteristics on one hand and durability, consistency and sustainability on the other hand. The proposed network design has a dynamic topology that adapts itself to changes in business models and availability of SMEs, and reflects the highly dynamic nature of a digital ecosystem.
In this paper we present a model for coordinating distributed long-running and multi-service transactions in Digital Business EcoSystems. The model supports various forms of service composition, which are translated into a tuples-based behavioural description that allows reasoning about the required behaviour in terms of ordering, dependencies and alternative execution. The compensation mechanism guarantees consistency, including omitted results, without breaking local autonomy. The proposed model is considered at the deployment level of SOA, rather than the realisation level, and is targeted at business transactions between collaborating SMEs as it respects the loose coupling of the underlying services. © 2007 IEEE.
We present a compilation tool SBVR2Alloy which is used to automatically generate as well as validate service choreographies specified in structured natural language. The proposed approach builds on a model transformation between Semantics of Business Vocabulary and Rules (SBVR), an OMG standard for specifying business models in structured English, and the Alloy Analyzer which is a SAT based constraint solver. In this way, declarative specifications can be enacted via a standard constraint solver and verified for realisability and conformance.
In this paper we describe the application of a Deep Reinforcement Learning agent to the problem of controlling Gene Regulatory Networks (GRNs). The proposed approach is applied to Random Boolean Networks (RBNs), which have been used extensively as a computational model for GRNs. The ability to control GRNs is central to therapeutic interventions for diseases such as cancer: the aim is to learn to make interventions that direct the GRN from some initial state towards a desired attractor, allowing at most one intervention per time step. Our agent interacts directly with the environment (an RBN), without any knowledge of the underlying dynamics, structure or connectivity of the network. We have implemented a Deep Q Network with Double Q Learning that is trained by sampling experiences from the environment using Prioritized Experience Replay. We show that the proposed approach develops a policy that successfully learns to control RBNs significantly larger than in previous learning implementations. We also discuss why learning to control an RBN with zero knowledge of its underlying dynamics is important, and argue that the agent is encouraged to discover and perform control interventions that are optimal with regard to cost and number of interventions.
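The paper's agent is a Deep Q Network with Double Q Learning and Prioritized Experience Replay; as a much smaller stand-in, the model-free intervention loop can be illustrated with tabular Q-learning on an invented 2-bit state space (all states, actions, rewards and hyperparameters below are assumptions for illustration, not the paper's setup):

```python
import random

# Toy illustration of model-free control by intervention: the agent only sees
# (state, action, reward, next state) transitions, never the dynamics.
def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # States 0..3 encode a 2-bit "network"; the target attractor is state 3.
    # Actions: 0 = do nothing, 1 = flip bit 0, 2 = flip bit 1.
    q = {(s, a): 0.0 for s in range(4) for a in range(3)}

    def step(s, a):
        if a:                       # an intervention flips one bit
            s ^= 1 << (a - 1)
        return s, (1.0 if s == 3 else -0.1)   # small cost away from the target

    for _ in range(episodes):
        s = rng.randrange(4)
        for _ in range(10):
            a = (rng.randrange(3) if rng.random() < epsilon
                 else max(range(3), key=lambda x: q[(s, x)]))
            s2, r = step(s, a)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in range(3))
                                  - q[(s, a)])
            s = s2
    return q

q = q_learning()
# Greedy policy from state 1 (bits 01) should flip bit 1 to reach the target 3.
print(max(range(3), key=lambda a: q[(1, a)]))  # -> 2
```

The deep agent replaces the table with a neural approximation over full RBN states and biases its sampling of past transitions by surprise (Prioritized Experience Replay), but the interaction loop is the same.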
In this paper, we present a survey of deep learning approaches for cybersecurity intrusion detection, the datasets used, and a comparative study. Specifically, we provide a review of intrusion detection systems based on deep learning approaches. Since the dataset plays an important role in intrusion detection, we describe 35 well-known cyber datasets and classify them into seven categories: network traffic-based, electrical network-based, internet traffic-based, virtual private network-based, android apps-based, IoT traffic-based, and internet-connected devices-based datasets. We analyze seven deep learning models, including recurrent neural networks, deep neural networks, restricted Boltzmann machines, deep belief networks, convolutional neural networks, deep Boltzmann machines, and deep autoencoders. For each model, we study performance on binary and multiclass classification under two new real-traffic datasets, namely the CSE-CIC-IDS2018 dataset and the Bot-IoT dataset. In addition, we use the most important performance indicators, namely accuracy, false alarm rate, and detection rate, to evaluate the efficiency of several methods.
The Electric Vehicles (EVs) market has seen rapid growth recently despite anxiety about driving range. Recent proposals have explored charging EVs on the move, using dynamic wireless charging that enables power exchange between the vehicle and the grid while the vehicle is moving. Specifically, part of the literature focuses on the intelligent routing of EVs in need of charging. Inter-Vehicle Communications (IVC) play an integral role in the intelligent routing of EVs around a static charging station or for dynamic charging on the road network. However, IVC is vulnerable to a variety of cyber attacks such as spoofing. In this paper, a probabilistic cross-layer Intrusion Detection System (IDS), based on Machine Learning (ML) techniques, is introduced. The proposed IDS is capable of detecting spoofing attacks with more than 90% accuracy. The IDS uses a new metric, Position Verification using Relative Speed (PVRS), which appears to have a significant effect on classification results. PVRS compares the distance between two communicating nodes as observed by On-Board Units (OBUs) with their estimated distance, computed using the relative speed value derived from signals interchanged at the Physical (PHY) layer.
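One plausible formulation of a PVRS-style feature (the paper's exact definition may differ) is the discrepancy between the reported inter-vehicle distance and a physics-based estimate derived from relative speed:

```python
# Hypothetical sketch of a PVRS-style feature: compare the distance claimed at
# the application layer with a dead-reckoned estimate from PHY-layer relative
# speed. A large discrepancy is evidence of position spoofing.

def pvrs_feature(reported_distance, prev_distance, relative_speed, dt):
    """Discrepancy (metres) between the claimed and the estimated distance."""
    estimated = prev_distance + relative_speed * dt   # dead-reckoned distance
    return abs(reported_distance - estimated)

# An honest node's claim matches the estimate; a spoofed one does not:
honest = pvrs_feature(105.0, 100.0, 5.0, 1.0)    # claims 105 m, estimate 105 m
spoofed = pvrs_feature(300.0, 100.0, 5.0, 1.0)   # claims 300 m, estimate 105 m
print(honest, spoofed)  # -> 0.0 195.0
```

In the IDS, such a feature would be fed to the ML classifier alongside other cross-layer measurements rather than thresholded directly.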
We describe a formal approach to protocol design for dialogues between autonomous agents in a digital ecosystem that involve the exchange of arguments between the participants. We introduce a vector language-based representation of argumentation protocols, which captures the interplay between different agents' moves in a dialogue in a way that (a) determines the legal moves that are available to each participant, in each step, and (b) records the dialogue history. We use UML protocol state machines (PSMs) to model a negotiation dialogue protocol at both the individual participant level (autonomous agent viewpoint) and the dialogue level (overall interaction viewpoint). The underlying vector semantics is used to verify that a given dialogue was played out in compliance with the corresponding protocol.
We apply a learning classifier system, XCSI, to the task of providing personalised suggestions for passenger onward journeys. Learning classifier systems combine evolutionary computation with rule-based machine learning, altering a population of rules to achieve a goal through interaction with the environment. Here XCSI interacts with a simulated environment of passengers travelling around the London Underground network, subject to disruption. We show that XCSI successfully learns individual passenger preferences and can be used to suggest personalised adjustments to the onward journey in the event of disruption.
We describe a true-concurrent approach for managing dependencies between distributed and concurrent coordinator components of a long-running transaction. In previous work we have described how interactions specified in a scenario can be translated into a tuples-based behavioural description, namely vector languages. In this paper we show how reasoning against order-theoretic properties of such languages can reveal missing behaviours which are not explicitly described in the scenario but are still possible. Our approach supports the gradual refinement of scenarios of interaction into a complete set of behaviours that includes all desirable orderings of execution and prohibits emergent behaviour of the transaction. Crown Copyright © 2010.
This paper presents a true-concurrent approach to formalising integration of Small-to-Medium Enterprises (SMEs) with Web services. Our approach formalises common notions in service-oriented computing such as conversations (interactions between clients and web services), multi-party conversations (interactions between multiple web services) and coordination protocols, which are central in a transactional environment. In particular, we capture long-running transactions with recovery and compensation mechanisms for the underlying services in order to ensure that a transaction either commits or is successfully compensated for. © 2008 Springer-Verlag Berlin Heidelberg.
Random Boolean Networks (RBNs) are an arguably simple model which can be used to express rather complex behaviour, and have been applied in various domains. RBNs may be controlled using rule-based machine learning, specifically through the use of a learning classifier system (LCS) – an eXtended Classifier System (XCS) can evolve a set of condition-action rules that direct an RBN from any state to a target state (attractor). However, the rules evolved by XCS may not be optimal, in terms of minimising the total cost along the paths used to direct the network from any state to a specified attractor. In this paper, we present an algorithm for uncovering the optimal set of control rules for controlling random Boolean networks. We assign relative costs for interventions and ‘natural’ steps. We then compare the performance of this optimal rule calculator algorithm (ORC) and the XCS variant of learning classifier systems. We find that the rules evolved by XCS are not optimal in terms of total cost. The results provide a benchmark for future improvement.
Complexity theory has been used to study a wide range of systems in biology and nature, but also business and socio-technical systems. The ultimate objective is to develop the capability of steering a complex system towards a desired outcome. Recent developments in network controllability, which rework the problem of finding minimal control configurations, allow the use of the polynomial-time Hopcroft-Karp algorithm instead of exponential-time solutions. Subsequent approaches build on this result to determine the precise control nodes, or drivers, in each minimal control configuration. A browser-based analytical tool, CCTool, for identifying such drivers automatically in a complex network has been developed. One key characteristic of a complex system is that it continuously evolves, e.g., due to dynamic changes in the roles, states and behaviours of the entities involved. This means that in addition to determining driver nodes it is appropriate to consider an evolving topology of the underlying complex network, and to investigate the effect of removing nodes (and edges) on the corresponding minimal control configurations. The work presented here focuses on arriving at a classification of the nodes based on the effect their removal has on the controllability of the network.
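The driver-identification step can be sketched in a few lines. This is a hypothetical minimal example, not CCTool's implementation, and for brevity it uses Kuhn's augmenting-path algorithm rather than Hopcroft-Karp; the structural result is the same: nodes left unmatched by a maximum matching on the network's bipartite representation must be driven directly.

```python
# Sketch: driver nodes via maximum matching on the bipartite graph whose left
# and right copies are the network's nodes and whose edges are directed links.

def maximum_matching(nodes, edges):
    """Kuhn's augmenting-path maximum matching (Hopcroft-Karp is the faster
    equivalent used in practice). Returns a dict: matched node -> its matcher."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    match = {}  # right-copy node -> left-copy node

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    for u in nodes:
        augment(u, set())
    return match

def driver_nodes(nodes, edges):
    """Nodes with no matched inbound edge must receive a direct control input."""
    match = maximum_matching(nodes, edges)
    return [v for v in nodes if v not in match]

# A directed chain 1 -> 2 -> 3 needs a single driver, the head of the chain:
print(driver_nodes([1, 2, 3], [(1, 2), (2, 3)]))  # -> [1]
```

A star 1 -> 2, 1 -> 3 instead yields two drivers (the hub can steer only one leaf per matched edge), which is the kind of topology-dependence the node-removal classification above investigates.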
In this paper we are concerned with providing support for business activities in moving from value chains to value networks. We describe a fully distributed P2P architecture which reflects the dynamics of business processes that are not governed by a single organisation. The temporary virtual networks of long-term business transactions are used as the building block of the overall scale-free business network. The design is based on dynamically formed permanent clusters resulting in a topology that is highly resilient to failures (and attacks) and is capable of reconfiguring itself to adapt to changes in business models and respond to global failures of conceptual hubs. This fosters an environment where business communities can evolve to meet emerging business opportunities and achieve sustainable growth within a digital ecosystem.
Critical cascades are found in many self-organizing systems. Here, we examine critical cascades as a design paradigm for logic and learning under the linear threshold model (LTM), and simple biologically inspired variants of it as sources of computational power, learning efficiency, and robustness. First, we show that the LTM can compute logic, and with a small modification, universal Boolean logic, examining its stability and cascade frequency. We then frame it formally as a binary classifier and remark on implications for accuracy. Second, we examine the LTM as a statistical learning model, studying benefits of spatial constraints and criticality to efficiency. We also discuss implications for robustness in information encoding. Our experiments show that spatial constraints can greatly increase efficiency. Theoretical investigation and initial experimental results also indicate that criticality can result in a sudden increase in accuracy.
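The first claim, that the LTM can compute logic, can be illustrated with a single linear threshold unit; the weights and thresholds below are illustrative choices, not the paper's constructions:

```python
# Sketch: a linear threshold unit fires when the summed weighted input
# activity reaches its threshold. Monotone gates follow directly; the "small
# modification" needed for universality is illustrated here (as one option)
# by inhibitory, i.e. negative, weights.

def lt_unit(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = lambda a, b: lt_unit((a, b), (1, 1), 2)     # fires only if both fire
OR  = lambda a, b: lt_unit((a, b), (1, 1), 1)     # fires if either fires
NAND = lambda a, b: lt_unit((a, b), (-1, -1), -1) # non-monotone: universal

print([NAND(a, b) for a in (0, 1) for b in (0, 1)])  # -> [1, 1, 1, 0]
```

Since NAND is universal for Boolean logic, any circuit can in principle be assembled from such units, which is the starting point for the cascade-based computation studied in the paper.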
This paper presents early results of new research on how Digital Ecosystems can promote new modes of sustainable e-business practice for Small and Medium-Sized Enterprises (SMEs), using an open architecture for content sharing and Business-to-Business (B2B) interactions in the knowledge economy, within a framework of open standards. Current e-business practices and technologies do not always encourage openness; instead, they tend to promote established models of proprietary e-business development based on centralised network and service infrastructure. Governments can promote open development by supporting opportunities for new entry and by augmenting a market environment for the productive coexistence of large and small companies in the B2B e-commerce domain.
The brain is a highly reconfigurable machine capable of task-specific adaptations; it continually rewires itself towards a more optimal configuration for solving problems. We propose a novel strategic synthesis algorithm for feedforward networks that draws directly on the brain's behaviour when learning. The proposed approach analyses the network and ranks weights based on their magnitude. Unlike existing approaches that advocate random selection, we select highly performing nodes as starting points for new edges and exploit the Gaussian distribution over the weights to select corresponding endpoints. The strategy aims to produce only useful connections and results in a smaller residual network structure. The approach is complemented with pruning for further compression. We demonstrate the techniques on deep feedforward networks. The residual sub-networks formed by the synthesis approaches in this work form common sub-networks with similarities of up to ~90%. Using pruning as a complement to the strategic synthesis approach, we observe improvements in compression.
Probabilistic Boolean Networks (PBNs) were introduced as a computational model for the study of complex dynamical systems, such as Gene Regulatory Networks (GRNs). Controllability in this context is the process of making strategic interventions to the state of a network in order to drive it towards some other state that exhibits favourable biological properties. In this paper we study the ability of a Double Deep Q-Network with Prioritized Experience Replay in learning control strategies within a finite number of time steps that drive a PBN towards a target state, typically an attractor. The control method is model-free and does not require knowledge of the network's underlying dynamics, making it suitable for applications where inference of such dynamics is intractable. We present extensive experiment results on two synthetic PBNs and the PBN model constructed directly from gene-expression data of a study on metastatic-melanoma.
We present a Peer-to-Peer network design which aims to support business activities conducted through a network of collaborations that generate value in different, mutually beneficial, ways for the participating organisations. The temporary virtual networks formed by long-term business transactions that involve the execution of multiple services from different providers are used as the building block of the underlying scale-free business network. We show how these local interactions, which are not governed by a single organisation, give rise to a fully distributed P2P architecture that reflects the dynamics of business activities. The design is based on dynamically formed permanent clusters of nodes, the so-called Virtual Super Peers (VSPs), and this results in a topology that is highly resilient to certain types of failure (and attacks). Furthermore, the proposed P2P architecture is capable of reconfiguring itself to adapt to the usage that is being made of it and respond to global failures of conceptual hubs. This fosters an environment where business communities can evolve to meet emerging business opportunities and achieve sustainable growth within a digital ecosystem.
The distinct feature of this volume is its focus on mathematical models that identify the "core" concepts as first class modeling elements, and its providing of techniques for integrating and relating them.
We describe a translation of scenarios given in UML 2.0 sequence diagrams into a tuples-based behavioural model that considers multiple access points for a participating instance and exhibits true-concurrency. This is important in a component setting since different access points are connected to different instances, which have no knowledge of each other. Interactions specified in a scenario are modelled using tuples of sequences, one sequence for each access point. The proposed unfolding of the sequence diagram involves mapping each location (graphical position) onto the so-called component vectors. The various modes of interaction (sequential, alternative, concurrent) manifest themselves in the order structure of the resulting set of component vectors, which captures the dependencies between participating instances. In previous work, we have described how (sets of) vectors generate concurrent automata. The extension to our model with sequence diagrams in this paper provides a way to verify the diagram against the state-based model.
Rule-based machine learning focuses on learning or evolving a set of rules that represents the knowledge captured by the system. Due to its inherent complexity, a certain amount of fine tuning is required before it can be applied to a particular problem. However, there is limited information available to researchers when it comes to setting the corresponding run parameter values. In this paper, we investigate the run parameters of Learning Classifier Systems (LCSs) as applied to single-step problems. In particular, we study two LCS variants, XCS for reinforcement learning and UCS for supervised learning, and examine the effect that different parameter values have on enhancing the model prediction, increasing accuracy and reducing the resulting rule set size.
In this paper we describe a formal model for the distributed coordination of long-running transactions in a Digital Ecosystem for business, involving Small and Medium Enterprises (SMEs). The proposed non-interleaving model of interaction-based service composition allows for communication between internal activities of transactions. The formal semantics of the various modes of service composition are represented by standard XML schemas. The current implementation framework uses suitable asynchronous message-passing techniques and reflects the design decisions of the proposed model for distributed transactions in digital ecosystems.
The concept of a digital ecosystem (DE) has been used to explore scenarios in which multiple online services and resources can be accessed by users without there being a single point of control. In previous work we have described how the so-called transaction languages can express concurrent and distributed interactions between online services in a transactional environment. In this paper we outline how transaction languages capture the history of a long-running transaction and highlight the benefits of our true-concurrent approach in the context of DEs. This includes support for the recovery of a long-running transaction whenever some failure is encountered. We introduce an animation tool that has been developed to explore the behaviours of long-running transactions within our modelling environment. Further, we discuss how this work supports the declarative approach to the development of open distributed applications. © 2012 IEEE.
In this paper, we describe a true-concurrent hierarchical logic interpreted over concurrent automata. Concurrent automata constitute a special kind of asynchronous transition system (ATS) used for modelling the behaviour of components as understood in component-based software development. Here, a component-based system consists of several interacting components whereby each component manages calls to and from the component using ports to ensure encapsulation. Further, a component can be complex and made of several simpler interacting components. When a complex component receives a request through one of its ports, the port delegates the request to an internal component. Our logic allows us to describe the different views we can have on the system. For example, the overall component interactions, whether they occur sequentially, simultaneously or in parallel, and how each component internally manages the received requests (possibly expressed at different levels of detail). Using concurrent automata as an underlying formalism we guarantee that the expressiveness of the logic is preserved in the model. In future work, we plan to integrate our truly-concurrent approach into the Edinburgh Concurrency Workbench. © 2007 Elsevier B.V. All rights reserved.
Declarative technologies have made great strides in expressivity between SQL and SBVR. SBVR models are more expressive than SQL schemas, but not yet as immediately executable. In this paper, we complete the architecture of a system that can execute SBVR models. We do this by describing how SBVR rules can be transformed into SQL DML so that they can be automatically checked against the database using a standard SQL query. In particular, we describe a formalization of the basic structure of an SQL query which includes aggregate functions, arithmetic operations, grouping, and grouping on a condition. We do this while staying within a predicate calculus semantics which can be related to the standard SBVR-LF specification, equipping it with a concrete semantics for expressing business rules formally. Our approach to transforming SBVR rules into standard SQL queries is thus generic, and the resulting queries can be readily executed on a relational schema generated from the SBVR model.
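The core idea, a rule rendered as a query whose result set is the rule's violations, can be sketched as follows. This is a minimal illustration, not the paper's formalisation; the schema, table names, and rule are assumptions.

```python
import sqlite3

# Illustrative sketch (schema and rule are assumed, not taken from the paper):
# the structural rule "each order must have a customer" is rendered as an SQL
# query that returns the violating rows, so an empty result means the rule holds.
RULE_AS_SQL = """
SELECT o.id
FROM orders o
LEFT JOIN customers c ON o.customer_id = c.id
WHERE c.id IS NULL
"""

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
INSERT INTO customers VALUES (1);
INSERT INTO orders VALUES (10, 1), (11, 99);  -- order 11 violates the rule
""")

violations = [row[0] for row in conn.execute(RULE_AS_SQL)]
print(violations)  # [11]
```

Running the rule query after every update gives the automatic checking described above: any non-empty result identifies exactly which rows break the business rule.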
Concurrency control mechanisms such as turn-taking, locking, serialization, transactional locking mechanisms, and operational transformation try to provide data consistency when concurrent activities are permitted in a reactive system. Locks are typically used in transactional models to assure data consistency and integrity in a concurrent environment. In addition, recovery management is used to preserve atomicity and durability in transaction models. Unfortunately, conventional lock mechanisms severely (and intentionally) limit concurrency in a transactional environment. Such lock mechanisms also limit recovery capabilities. Finally, existing recovery mechanisms themselves impose a considerable overhead on concurrency. This paper describes a new transaction model that supports the release of early results inside and outside of a transaction, easing the severe limitations of conventional lock mechanisms while still guaranteeing consistency and recoverability of released resources (results). This is achieved through the use of a more flexible locking mechanism and two types of consistency graph, providing an integrated solution for transaction management, recovery management and concurrency control. We argue that these are necessary features for the management of long-term transactions within "digital ecosystems" of small to medium enterprises.
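The recoverability requirement behind early release can be illustrated with a small dependency-graph sketch. This is not the paper's exact model (which uses two consistency graphs); it only shows the underlying idea that a transaction reading an early-released result becomes dependent on its producer, so aborts must cascade.

```python
# A minimal sketch (illustrative, not the paper's exact model) of how a
# dependency graph keeps early result release recoverable: when T releases a
# value early and U reads it, U becomes dependent on T, and aborting T must
# transitively abort U as well.
class DependencyGraph:
    def __init__(self):
        # transaction -> set of transactions that read its early results
        self.dependents = {}

    def record_read(self, reader, writer):
        self.dependents.setdefault(writer, set()).add(reader)

    def abort(self, txn):
        """Return txn plus, transitively, every transaction that read from it."""
        aborted = set()
        stack = [txn]
        while stack:
            t = stack.pop()
            if t in aborted:
                continue
            aborted.add(t)
            stack.extend(self.dependents.get(t, ()))
        return aborted

g = DependencyGraph()
g.record_read("U", "T")      # U read a value T released early
g.record_read("V", "U")      # V read a value U released early
print(sorted(g.abort("T")))  # ['T', 'U', 'V']
```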
With REST becoming the dominant architectural paradigm for web services in distributed systems, more and more use cases are applied to it, including use cases that require transactional guarantees. We propose a RESTful transaction model that satisfies both the constraints of transactions and those of the REST architectural style. We then apply the isolation theorems to prove the robustness of its properties on a formal level.
With REST becoming a dominant architectural paradigm for web services in distributed systems, more and more use cases are applied to it, including use cases that require transactional guarantees. We believe that the loose coupling supported by RESTful transactions makes this currently our preferred interaction style for digital ecosystems (DEs). To further expand its value to DEs, we propose a RESTful transaction model that satisfies both the constraints of recoverable transactions and those of the REST architectural style. We then show the correctness and applicability of the model.
In this paper we explore the concept of "ecosystem" as a metaphor in the development of the digital economy. We argue that the modelling of social ecosystems as self-organising systems is also relevant to the study of digital ecosystems. Specifically, centralised control structures in digital ecosystems militate against the emergence of innovation and adaptive responses to pressures or shocks that may impact the ecosystem. We hope the paper will stimulate a more holistic approach to gaining empirical and theoretical understanding of digital ecosystems.
Deployed AI platforms typically ship with bulky system architectures which present bottlenecks and a high risk of failure. A serverless deployment can mitigate these factors and provide a cost-effective, automatically scalable (up or down) and elastic real-time on-demand AI solution. However, deploying high-complexity production workloads into serverless environments is far from trivial, e.g., due to factors such as minimal allowance for physical codebase size, low amounts of runtime memory, lack of GPU support and a maximum runtime before termination via timeout. In this paper we propose a set of optimization techniques and show how these transform a codebase which was previously incompatible with a serverless deployment into one that can be successfully deployed in a serverless environment, without compromising capability or performance. The techniques are illustrated via worked examples that have been deployed live on rail data, producing real-time predictions of train movements on the UK rail network. The similarities of a serverless environment to other resource-constrained environments (IoT, mobile) mean the techniques can be applied to a range of use cases.
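One generic optimisation in this spirit (an illustration only, not one of the paper's specific techniques) is to load heavy artefacts lazily and cache them at module level, so the cost is paid once per warm container rather than on every invocation. The handler shape and model contents below are assumptions.

```python
import functools
import time

@functools.lru_cache(maxsize=1)
def get_model():
    # Stand-in for deserialising a large model from object storage;
    # with lru_cache this cost is paid once per warm container.
    time.sleep(0.01)
    return {"weights": [0.5, 0.5]}

def handler(event, _context=None):
    model = get_model()  # cached after the first (cold) invocation
    x = event["x"]
    return sum(w * x for w in model["weights"])

print(handler({"x": 2.0}))  # 2.0 * (0.5 + 0.5) = 2.0
```

Keeping the cached state small and loading it on first use addresses two of the constraints named above at once: per-invocation latency and the tight runtime-memory budget.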
Steering a complex system towards a desired outcome is a challenging task. The lack of clarity on the system’s exact architecture and the often scarce scientific data upon which to base the operationalisation of the dynamic rules that underpin the interactions between participant entities are two contributing factors. We describe an analytical approach that builds on Fuzzy Cognitive Mapping (FCM) to address the latter and represent the system as a complex network. We apply results from network controllability to address the former and determine minimal control configurations - subsets of factors, or system levers, which comprise points for strategic intervention in steering the system. We have implemented the combination of these techniques in an analytical tool that runs in the browser, and generates all minimal control configurations of a complex network. We demonstrate our approach by reporting on our experience of working alongside industrial, local-government, and NGO stakeholders in the Humber region, UK. Our results are applied to the decision-making process involved in the transition of the region to a bio-based economy.
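One standard route from network controllability to such configurations is maximum matching on the directed network: nodes left unmatched by a maximum matching form a minimum driver-node set. The sketch below is a minimal pure-Python illustration of that computation, with an assumed toy network; it is not the browser tool itself.

```python
# Minimal sketch: minimum driver-node set via maximum matching
# (structural controllability). Nodes unmatched on the "incoming" side
# of the bipartite representation must be driven directly.
def driver_nodes(nodes, edges):
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)

    match = {}  # matched node -> the node that drives it

    def try_augment(u, seen):
        # Classic augmenting-path step of bipartite maximum matching.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or try_augment(match[v], seen):
                match[v] = u
                return True
        return False

    for u in nodes:
        try_augment(u, set())
    return [v for v in nodes if v not in match]

# A directed chain is controllable from its head alone...
print(driver_nodes([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))  # [1]
# ...whereas a star hub cannot drive all leaves at once, so three
# driver nodes are needed: driver_nodes([1,2,3,4], [(1,2),(1,3),(1,4)])
```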
One of the barriers to the adoption of Electric Vehicles (EVs) is the anxiety around their limited driving range. Recent proposals have explored charging EVs on the move, using dynamic wireless charging which enables power exchange between the vehicle and the grid while the vehicle is moving. In this article, we focus on the intelligent routing of EVs in need of charging so that they can make the most efficient use of the so-called Mobile Energy Disseminators (MEDs), which operate as mobile charging stations. We present a method for routing EVs around MEDs on the road network, which is based on constraint logic programming and optimization using a graph-based shortest path algorithm. The proposed method exploits Inter-Vehicle Communications (IVC) in order to eco-route electric vehicles. We argue that, by combining modern communications between vehicles with state-of-the-art energy-transfer technologies, the driving range of EVs can be extended without the need for larger batteries or overly costly infrastructure. We present extensive simulations in city conditions that show that the driving range, and consequently the overall travel time, of electric vehicles is improved by intelligent routing in the presence of MEDs.
In this paper we describe the application of a learning classifier system (LCS) variant known as the eXtended classifier system (XCS) to evolve a set of ‘control rules’ for a number of Boolean network instances. We show that (1) it is possible to take the system to an attractor, from any given state, by applying a set of ‘control rules’ consisting of ternary condition strings (i.e. each condition component in the rule has three possible states: 0, 1 or #) with associated bit-flip actions, and (2) that it is possible to discover such rules using an evolutionary approach via the application of a learning classifier system. The proposed approach builds on learning (reinforcement learning) and discovery (a genetic algorithm), and therefore the series of interventions for controlling the network is determined but not fixed. System control rules evolve in such a way that they mirror both the structure and dynamics of the system, without having ‘direct’ access to either.
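The rule representation described above can be sketched as follows. This shows only matching and action application for one rule on one Boolean state, not the full XCS machinery (populations, credit assignment, the genetic algorithm); the example rule is an assumption.

```python
# A control rule is (condition, action): the condition is a ternary string
# over {0, 1, #}, where # matches either value, and the action names which
# bit of the Boolean network state to flip.
def matches(condition, state):
    return all(c in ("#", s) for c, s in zip(condition, state))

def apply_rule(rule, state):
    condition, flip_bit = rule
    if not matches(condition, state):
        return state  # rule does not fire; state unchanged
    bits = list(state)
    bits[flip_bit] = "0" if bits[flip_bit] == "1" else "1"
    return "".join(bits)

rule = ("1#0", 2)               # if node 0 is on and node 2 is off, flip node 2
print(apply_rule(rule, "110"))  # "111"
print(apply_rule(rule, "011"))  # condition fails, so "011"
```

The # wildcard is what lets a compact evolved rule set generalise over many network states rather than memorising one intervention per state.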
We propose the use of structured natural language (English) in specifying service choreographies, focusing on the what rather than the how of the required coordination of participant services in realising a business application scenario. The declarative approach we propose uses the OMG standard Semantics of Business Vocabulary and Rules (SBVR) as a modelling language. The service choreography approach has been proposed for describing the global orderings of the invocations on interfaces of participant services. We therefore extend SBVR with a notion of time which can capture the coordination of the participant services, in terms of the observable message exchanges between them. The extension is done using existing modelling constructs in SBVR, and hence respects the standard specification. The idea is that users - domain specialists rather than implementation specialists - can verify the requested service composition by directly reading the structured English used by SBVR. At the same time, the SBVR model can be represented in formal logic so it can be parsed and executed by a machine.
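A temporal coordination rule of the kind described above can be checked mechanically against an observed message trace. The sketch below is illustrative; the rule, services, and message names are assumptions, and the structured-English rule "the Payment service must not receive 'charge' before the Order service sends 'confirm'" is reduced to a simple precedence check.

```python
def precedes(trace, first, second):
    """True if every occurrence of `second` in the trace is preceded by `first`."""
    seen_first = False
    for msg in trace:
        if msg == first:
            seen_first = True
        elif msg == second and not seen_first:
            return False
    return True

ok_trace = ["order.confirm", "payment.charge", "shipping.dispatch"]
bad_trace = ["payment.charge", "order.confirm"]
print(precedes(ok_trace, "order.confirm", "payment.charge"))   # True
print(precedes(bad_trace, "order.confirm", "payment.charge"))  # False
```

The point of the SBVR approach is that the domain specialist reads and verifies the structured-English form of such a rule, while an executable check like this one is derived from its logical representation.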