Professor Brijesh Dongol
Academic and research departments
Computer Science Research Centre, School of Computer Science and Electronic Engineering

About
Biography
I am a Director of the UK Research Institute on Verified Trustworthy Software Systems (VeTSS).
My research is on formal techniques and verification methods for concurrent and real-time systems. This includes concurrent objects, transactional memory and associated correctness conditions; weak memory models; algebraic techniques; and hybrid systems.
I am actively seeking good PhD students. Please contact me for further details.
Research

Research projects
EP/R032556/1 Verifiably Correct Transactional Memory

Multi-core computing architectures have become ubiquitous over the last decade. This has been driven by the demand for continual performance improvements to cope with the ever-increasing sophistication of applications, combined with physical limitations on chip designs, whereby speedup via higher clock speeds has become infeasible. The inherent parallelism that multi-core architectures entail offers great technical opportunities; however, exploiting these opportunities presents a number of technical challenges.
To ensure correctness, concurrent programs must be properly synchronised, but synchronisation invariably introduces sequential bottlenecks, causing performance to suffer. Fully exploiting the potential for concurrency requires optimisations to consider executions at low levels of abstraction, e.g., the underlying memory model, compiler optimisations, and cache-coherency protocols. The complexity of such considerations means that checking correctness with a high degree of confidence is extremely difficult. Concurrency bugs have specifically been attributed to disasters such as a power blackout in north-eastern USA, Nasdaq's botched IPO of Facebook shares, and the near failure of NASA's Mars Pathfinder mission. Other safety-critical errors have manifested from using low-level optimisations, e.g., the double-checked locking bug and the Java Parker bug.
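The double-checked locking pattern mentioned above can be sketched as follows. This is an illustrative Python sketch with a hypothetical `Config` class: in Python the global interpreter lock hides the hazard, but under weaker memory models (e.g., pre-2005 Java) the unsynchronised first read could observe a partially constructed object, which is precisely the double-checked locking bug.

```python
import threading

class Config:
    """Lazily initialised shared object (hypothetical example class)."""
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get(cls):
        # Fast path: unsynchronised first check avoids locking on every call.
        if cls._instance is None:
            with cls._lock:
                # Second check: another thread may have initialised the
                # instance while we were waiting for the lock.
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance
```

The subtlety is that the fast-path read happens with no synchronisation at all, so its correctness depends entirely on the guarantees of the underlying memory model.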
This project improves programmability of concurrent programs through the use of transactional memory (TM), which is a concurrency mechanism that makes low-level optimisations available to general application programmers. TM is an adaptation of transactions from databases. TM operations are highly concurrent (which improves efficiency), yet manage synchronisation on behalf of a programmer to provide an illusion of atomicity. Thus, by using TM, the focus of a programmer switches to what should be made atomic, rather than how atomicity should be guaranteed.
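The programming-model shift described above can be illustrated with a deliberately naive sketch: the programmer supplies only the block that must appear atomic, and the TM decides how to synchronise it. The `ToyTM` class and `atomically` interface below are hypothetical; a single global lock gives the atomicity illusion trivially, whereas a real TM would run non-conflicting transactions concurrently and abort on conflict.

```python
import threading

class ToyTM:
    """Minimal 'transactional memory' sketch: one global lock provides
    the atomicity illusion. Real TMs execute transactions speculatively
    and concurrently; this sketch only shows the programmer's view."""

    def __init__(self):
        self._lock = threading.Lock()
        self.store = {'a': 100, 'b': 0}

    def atomically(self, txn):
        # The programmer says WHAT is atomic (txn); the TM decides HOW.
        with self._lock:
            return txn(self.store)

tm = ToyTM()

def transfer(store):
    # This whole block appears atomic to other threads.
    store['a'] -= 10
    store['b'] += 10

tm.atomically(transfer)
```

Note that `transfer` contains no synchronisation code at all; that is the point of the abstraction.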
EP/R025134/1 RoboTest: Systematic Model-Based Testing and Simulation of Mobile Autonomous Robots

Mobile and autonomous robots have an increasingly important role in industry and the wider society; from driverless vehicles to home assistance, potential applications are numerous. The UK government identified robotics as a key technology that will lead us to future economic growth (tinyurl.com/q8bhcy7). They have recognised, however, that autonomous robots are complex and typically operate in ever-changing environments (tinyurl.com/o2u2ts7). How can we be confident that they perform useful functions, as required, but are safe? It is standard practice to use testing to check correctness and safety. The software-development practice for robotics typically includes testing within simulations, before robots are built, and then testing of the actual robots. Simulations have several benefits: we can test early, and test execution is cheaper and faster. For example, simulation does not require a robot to move physically. Testing with the real robots is, however, still needed, since we cannot be sure that a simulation captures all the important aspects of the hardware and environment.
EP/R019045/1 Verifiably correct concurrency abstractions

This is an overseas travel grant funded by the EPSRC.
Teaching
COM1026 Foundations of Computing
COM3028 Systems Verification
Publications
Non-volatile memory (NVM), also known as persistent memory, is an emerging paradigm for memory that preserves its contents even after power loss. NVM is widely expected to become ubiquitous, and hardware architectures are already providing support for NVM programming. This has stimulated interest in the design of novel concepts ensuring correctness of concurrent programming abstractions in the face of persistency and in the development of associated verification approaches. Software transactional memory (STM) is a key programming abstraction that supports concurrent access to shared state. In a fashion similar to linearizability as the correctness condition for concurrent data structures, there is an established notion of correctness for STMs known as opacity. We have recently proposed durable opacity as the natural extension of opacity to a setting with non-volatile memory. Together with this novel correctness condition, we designed a verification technique based on refinement. In this paper, we extend this work in two directions. First, we develop a durably opaque version of NOrec (no ownership records), an existing STM algorithm proven to be opaque. Second, we modularise our existing verification approach by separating the proof of durability of memory accesses from the proof of opacity. For NOrec, this allows us to re-use an existing opacity proof and complement it with a proof of the durability of accesses to shared state.
Jones’ rely-guarantee framework (originally developed to enable reasoning about partial correctness) has been extended in several works to additionally enable reasoning about deadlock freedom. However, these frameworks were originally developed for the strong memory model known as sequential consistency (SC). Under SC, all threads are assumed to read from the most recent write to each shared location, which is too strong for most modern multithreaded systems. In recent work, we have shown that rules for rely-guarantee can be adapted to cope with weaker causally consistent memory models (e.g., strong release-acquire), while preserving most of its core rules. This framework is modular and can be instantiated to different memory models. The only adaptation necessary is at the level of atomic statements (reads and writes), which require introduction of new assertions and associated proof rules that must be proved sound with respect to the operational semantics of the memory model at hand. In this paper, we show that it is possible to also further extend the framework to cope with deadlock freedom under causal consistency, taking inspiration from the aforementioned extensions to rely-guarantee reasoning for SC. We exemplify our technique on a short but challenging protocol for synchronising initialisation.
Remote direct memory access (RDMA) is a modern technology enabling networked machines to exchange information without involving the operating system of either side, and thus significantly speeding up data transfer in computer clusters. While RDMA is extensively used in practice and studied in various research papers, a formal underlying model specifying the allowed behaviours of concurrent RDMA programs running in modern multicore architectures is still missing. This paper aims to close this gap and provide semantic foundations of RDMA on x86-TSO machines. We propose three equivalent formal models, two operational models in different levels of abstraction and one declarative model, and prove that the three characterisations are equivalent. To gain confidence in the proposed semantics, the more concrete operational model has been reviewed by NVIDIA experts, a major vendor of RDMA systems, and we have empirically validated the declarative formalisation on various subtle litmus tests by extensive testing. We believe that this work is a necessary initial step for formally addressing RDMA-based systems by proposing language-level models, verifying their mapping to hardware, and developing reasoning techniques for concurrent RDMA programs.
These files contain the mechanised proof of an implementation of the RCU synchronisation method in a Treiber stack.
Quiescent consistency is a notion of correctness for a concurrent object that gives meaning to the object's behaviour in its quiescent states. This paper shows that the membership problem for quiescent consistency is NP-complete and that the correctness problem is decidable, but coNEXPTIME-complete. We consider restricted versions of quiescent consistency by assuming an upper limit on the number of events between two quiescent points. Here, we show that the membership problem is in PTIME, whereas correctness is PSPACE-complete. We also consider quiescent sequential consistency, which strengthens quiescent consistency with an additional sequential consistency condition. We show that the unrestricted versions of membership and correctness are NP-complete and undecidable, respectively. When placing a limit on the number of events between two quiescent points, membership is in PTIME, while correctness is PSPACE-complete. Given an upper limit on the number of processes for every run of the implementation, the membership problem is in PTIME.
A complex real-time system consists of components at multiple time abstractions with varying notions of granularity and precision. Existing hybrid frameworks only allow reasoning at a single granularity and at an absolute level of precision, which can be problematic because the models that are developed can become unimplementable. In this paper, we develop a framework that incorporates time bands so that the behaviour of each component may be specified at a time granularity that is appropriate for the component and its properties. We implement our controllers using teleo-reactive programs, which are high-level programs that are well-suited to controlling reactive systems in dynamic environments. We develop rely/guarantee-style reasoning rules and as an example, prove properties of a well-known mine-pump system.
Transactional memory (TM) manages thread synchronisation to provide an illusion of atomicity for arbitrary blocks of code. There are various implementations of TM, including hardware (HTM) and software (STM). HTMs provide high performance, but are inherently limited by hardware restrictions; STMs avoid these limitations but suffer from unpredictable performance. To solve these problems, hybrid TM (HyTM) algorithms have been introduced which provide reliable software fallback mechanisms for hardware transactions. The key safety property for TM is opacity, however a naive combination of an opaque STM and an opaque HTM does not necessarily result in an opaque HyTM. Therefore, HyTM algorithms must be specially designed to satisfy opacity. In this paper we introduce a modular method for verifying opacity of HyTM implementations. Our method provides conditions under which opacity proofs of HTM and STM components can be combined into a single proof of an overall hybrid algorithm. The proof method has been fully mechanised in Isabelle, and used to verify a novel hybrid version of a transactional mutex lock.
Software transactional memory (STM) provides programmers with a high-level programming abstraction for synchronization of parallel processes, allowing blocks of code that execute in an interleaved manner to be treated as an atomic block. This atomicity property is captured by a correctness criterion called opacity. Opacity relates histories of a sequential atomic specification with those of STM implementations. In this paper we prove opacity of a recently proposed STM implementation (a Transactional Mutex Lock) by Dalessandro et al. The proof is carried out within the interactive verifier KIV and proceeds via the construction of an intermediate level in between sequential specification and implementation, leveraging existing proof techniques for linearizability.
Correctness conditions like linearizability and opacity describe some form of atomicity imposed on concurrent objects. In this paper, we propose a correctness condition (called causal atomicity) for concurrent objects executing in a weak memory model, where the histories of the objects in question are partially ordered. We establish compositionality and abstraction results for causal atomicity and develop an associated refinement-based proof technique.
Linearisability is the de facto standard correctness condition for concurrent objects. Classical linearisability assumes that the effect of a method is captured entirely by the allowed sequences of calls and returns. This assumption is inadequate in the presence of relaxed memory models, where happens-before relations are also of importance. In this paper, we develop hb-linearisability for relaxed memory models by extending the classical notion with happens-before information. We consider two variants: Real time hb-linearisability, which adopts the classical view that time runs on a single global clock, and causal hb-linearisability, which eschews real-time and is appropriate for systems without a global clock. For both variants, we prove abstraction (so that programmers can reason about a client program using the sequential specification of an object rather than its more complex concurrent implementation) and composition (so that reasoning about independent objects can be conducted in isolation).
Linearizability is the standard correctness criterion for fine-grained, non-atomic concurrent algorithms, and a variety of methods for verifying linearizability have been developed. However, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we define linearizability on a weak memory model: the TSO (Total Store Order) memory model, which is implemented in the x86 multicore architecture. We also show how a simulation-based proof method can be adapted to verify linearizability for algorithms running on TSO architectures. We demonstrate our approach on a typical concurrent algorithm, spinlock, and prove it linearizable using our simulation-based approach. Previous approaches to proving linearizabilty on TSO architectures have required a modification to the algorithm's natural abstract specification. Our proof method is the first, to our knowledge, for proving correctness without the need for such modification.
Prolog program to support the paper "Reasoning about Goal-Directed Real-Time Teleo-Reactive Programs" (2013, The University of Queensland; Open Access, Creative Commons Attribution 3.0 International, CC BY 3.0).
Highlights: a method for data refinement over intervals is developed, together with a sound forward simulation rule; the framework enables reasoning about true concurrency between parallel processes; methods for decomposing forward simulation proofs are developed; the framework may be specialised to reason about both discrete and real-time systems; examples include a concurrent algorithm and a complex real-time pump system.

The majority of modern systems exhibit sophisticated concurrent behaviour, where several system components observe and modify the state with fine-grained atomicity. Many systems also exhibit truly concurrent behaviour, where multiple events may occur simultaneously. Data refinement, a correctness criterion to compare an abstract and a concrete implementation, normally admits interleaved models of execution only. In this paper, we present a method of data refinement using a framework that allows one to view a component's evolution over an interval of time, simplifying reasoning about true concurrency. By modifying the type of an interval, our theory may be specialised to cover data refinement of both discrete and real-time systems. We develop a sound interval-based forward simulation rule that enables decomposition of data refinement proofs, and apply this rule to verify data refinement for two examples: a simple concurrent program and a more in-depth real-time controller.
An algebraic approach to the design of program construction and verification tools is applied to separation logic. The control-flow level is modelled by power series with convolution as separating conjunction. A generic construction lifts resource monoids to assertion and predicate transformer quantales. The data domain is captured by concrete store-heap models. These are linked to the separation algebra by soundness proofs. Verification conditions and transformation or refinement laws are derived by equational reasoning within the predicate transformer quantale. This separation of concerns makes an implementation in the Isabelle/HOL proof assistant simple and highly automatic. The resulting tool is itself correct by construction; it is explained on three simple examples.
Correctness of concurrent objects is defined in terms of safety properties such as linearizability, sequential consistency, and quiescent consistency, and progress properties such as wait-, lock-, and obstruction-freedom. These properties, however, only refer to the behaviour of the object in isolation, which does not tell us what guarantees these correctness conditions on concurrent objects provide to their client programs. This paper investigates the links between safety and progress properties of concurrent objects and a form of trace refinement for client programs, called contextual trace refinement. In particular, we show that linearizability together with a minimal notion of progress are sufficient properties of concurrent objects to ensure contextual trace refinement, but sequential consistency and quiescent consistency are both too weak. Our reasoning is carried out in the action systems framework with procedure calls, which we extend to cope with non-atomic operations.
We develop an interval-based logic for reasoning about systems consisting of components specified using stream-processing functions, which map streams of inputs to streams of outputs. The construction is algebraic and builds on a theory of convolution from formal power series. Using these algebraic foundations, we uniformly (and systematically) define operators for time- and space-based (de)composition. We also show that Banach's fixed point theory can be incorporated into the framework, building on an existing theory of partially ordered monoids, which enables a feedback operator to be defined algebraically.
Quiescent consistency is a notion of correctness for a concurrent object that gives meaning to the object's behaviours in quiescent states, i.e., states in which none of the object's operations are being executed. The condition enables greater flexibility in object design by allowing more behaviours to be admitted, which in turn allows the algorithms implementing quiescent consistent objects to be more efficient (when executed in a multithreaded environment). Quiescent consistency of an implementation object is defined in terms of a corresponding abstract specification. This gives rise to two important verification questions: membership (checking whether a behaviour of the implementation is allowed by the specification) and correctness (checking whether all behaviours of the implementation are allowed by the specification). In this paper, we consider the membership and correctness conditions for quiescent consistency, as well as a restricted form that assumes an upper limit on the number of events between two quiescent states. We show that the membership problem for unrestricted quiescent consistency is NP-complete and that the correctness problem is decidable, coNEXPTIME-hard, and in EXPSPACE. For the restricted form, we show that membership is in PTIME, while correctness is PSPACE-complete.
The integration of transactions into hardware relaxed memory architectures is a topic of current research both in industry and academia. In this paper, we provide a general architectural framework for the introduction of transactions into models of relaxed memory in hardware, including the SC, TSO, ARMv8 and PPC models. Our framework incorporates flexible and expressive forms of transaction aborts and execution that have hitherto been in the realm of software transactional memory. In contrast to software transactional memory, we account for the characteristics of relaxed memory as a restricted form of distributed system, without a notion of global time. We prove abstraction theorems to demonstrate that the programmer API matches the intuitions and expectations about transactions.
Linearizability has become the standard correctness criterion for fine-grained non-atomic concurrent algorithms, however, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we study the correctness of concurrent algorithms on a weak memory model: the TSO (Total Store Order) memory model, which is commonly implemented by multicore architectures. Here, linearizability is often too strict, and hence, we prove a weaker criterion, quiescent consistency instead. Like linearizability, quiescent consistency is compositional making it an ideal correctness criterion in a component-based context. We demonstrate how to model a typical concurrent algorithm, seqlock, and prove it quiescent consistent using a simulation-based approach. Previous approaches to proving correctness on TSO architectures have been based on linearizabilty which makes it necessary to modify the algorithm’s high-level requirements. Our approach is the first, to our knowledge, for proving correctness without the need for such a modification.
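The seqlock algorithm mentioned above can be sketched in a few lines. This is a hypothetical single-process Python sketch of the classic sequence-lock idea, not the algorithm verified in the paper: the writer makes the version counter odd while a write is in progress, and a reader retries if it saw an odd counter or the counter changed during its read.

```python
class SeqLock:
    """Minimal sequence-lock sketch (illustrative, not thread-safe
    Python): writers bracket updates with counter increments; readers
    validate that no write overlapped their read."""

    def __init__(self):
        self.seq = 0          # even: stable; odd: write in progress
        self.data = (0, 0)    # a pair that must be read consistently

    def write(self, pair):
        self.seq += 1         # counter becomes odd: write in progress
        self.data = pair
        self.seq += 1         # counter becomes even: write complete

    def read(self):
        while True:
            s1 = self.seq
            if s1 % 2 == 1:   # a writer is active; retry
                continue
            snapshot = self.data
            if self.seq == s1:  # no write overlapped; snapshot is valid
                return snapshot

lock = SeqLock()
lock.write((1, 2))
```

Readers never block writers, which is what makes the algorithm fast but also what makes its correctness on weak memory models subtle.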
Linearisability has become the standard correctness criterion for concurrent data structures, ensuring that every history of invocations and responses of concurrent operations has a matching sequential history. Existing proofs of linearisability require one to identify so-called linearisation points within the operations under consideration, which are atomic statements whose execution causes the effect of an operation to be felt. However, identification of linearisation points is a non-trivial task, requiring a high degree of expertise. For sophisticated algorithms such as Heller et al's lazy set, it even is possible for an operation to be linearised by the concurrent execution of a statement outside the operation being verified. This paper proposes an alternative method for verifying linearisability that does not require identification of linearisation points. Instead, using an interval-based logic, we show that every behaviour of each concrete operation over any interval is a possible behaviour of a corresponding abstraction that executes with coarse-grained atomicity. This approach is applied to Heller et al's lazy set to show that verification of linearisability is possible without having to consider linearisation points within the program code.
Transactional memory (TM) is a mechanism that manages thread synchronisation on behalf of a programmer so that blocks of code execute with the illusion of atomicity. The main safety criterion for transactional memory is opacity, which defines conditions for serialising concurrent transactions. Verifying opacity is complex because one must not only consider the orderings between fine-grained (and hence concurrent) transactional operations, but also between the transactions themselves. This paper presents a sound and complete method for proving opacity by decomposing the proof into two parts, so that each form of concurrency can be dealt with separately. Thus, in our method, verification involves a simple proof of opacity of a coarse-grained abstraction, and a proof of linearizability, a better-understood correctness condition. The most difficult part of these verifications is dealing with the fine-grained synchronization mechanisms of a given implementation; in our method these aspects are isolated to the linearizability proof. Our result makes it possible to leverage the many sophisticated techniques for proving linearizability that have been developed in recent years. We use our method to prove opacity of two algorithms from the literature. Furthermore, we show that our method extends naturally to weak memory models by showing that both these algorithms are opaque under the TSO memory model, which is the memory model of the (widely deployed) x86 family of processors. All our proofs have been mechanised, either in the Isabelle theorem prover or the PAT model checker.
A notion of convolution is presented in the context of formal power series together with lifting constructions characterising algebras of such series, which usually are quantales. A number of examples underpin the universality of these constructions, the most prominent ones being separation logics, where convolution is separating conjunction in an assertion quantale; interval logics, where convolution is the chop operation; and stream interval functions, where convolution is proposed for analysing the trajectories of dynamical or real-time systems. A Hoare logic can be constructed in a generic fashion on the power-series quantale, which applies to each of these examples. In many cases, commutative notions of convolution have natural interpretations as concurrency operations.
Formal Methods (FM) radically improve the quality of the code artefacts they help to produce. They are simple, probably accessible to first-year undergraduate students and certainly to second-year students and beyond. Nevertheless, in many cases, they are not part of a general recommendation for course curricula, i.e., they are not taught — and yet they are valuable. One reason for this is that teaching “Formal Methods” is often confused with teaching logic and theory. This paper advocates what we call FM thinking: the application of ideas from Formal Methods applied in informal, lightweight, practical and accessible ways. And we will argue here that FM thinking should be part of the recommended curriculum for every Computer Science student. For even students who train only in that “thinking” will become much better programmers. But there will be others who, exposed to those ideas, will be ideally positioned to go further into the more theoretical background: why the techniques work; how they can be automated; and how new ones can be developed. Those students would follow subsequently a specialised, more theoretical stream, including topics such as semantics, logics, verification and proof-automation techniques.
Most approaches to verifying linearizability assume a sequentially consistent memory model, which is not always realised in practice. In this paper we study correctness on a weak memory model: the TSO (Total Store Order) memory model, which is implemented in x86 multicore architectures. Our central result is a proof method that simplifies proofs of linearizability on TSO. This is necessary since the use of local buffers in TSO adds considerably to the verification overhead on top of the already subtle linearizability proofs. The proof method involves constructing a coarse-grained abstraction as an intermediate layer between an abstract description and the concurrent algorithm. This allows the linearizability proof to be split into two smaller components, where the effect of the local buffers in TSO is dealt with at a higher level of abstraction than it would have been otherwise.
This paper describes the formalisation of Java thread synchronisation in an extended Owicki-Gries theory, which facilitates the proof of safety and progress properties of multi-threaded Java programs. Although we can use this formalisation to verify existing Java programs, our focus is on deriving them instead. The derivation process consists of two stages: design and transformation. In the design stage, we use the method of Feijen and van Gasteren to obtain a program that satisfies the given requirements. This solution will most likely make atomicity assumptions Java is unable to guarantee. In the transformation stage, we reduce the granularity of the statements and develop a solution that can be translated directly to a Java implementation.
The Total Store Order memory model is widely implemented by modern multicore architectures such as x86, where local buffers are used for optimisation, allowing limited forms of instruction reordering. The presence of buffers and hardware-controlled buffer flushes increases the level of non-determinism from the level specified by a program, complicating the already difficult task of concurrent programming. This paper presents a new notion of refinement for weak memory models, based on the observation that pending writes to a process' local variables may be treated as if the effect of the update has already occurred in shared memory. We develop an interval-based model with algebraic rules for various programming constructs. In this framework, several decomposition rules for our new notion of refinement are developed. We apply our approach to verify the spinlock algorithm from the literature.
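The effect of TSO store buffers described above can be made concrete with the classic store-buffering (SB) litmus test. The sketch below is a hypothetical hand-rolled simulation, not taken from the paper: each thread's write sits in its local buffer, and a thread reading another variable sees stale main memory, so both reads can return 0, an outcome impossible under sequential consistency.

```python
def tso_sb():
    """Simulate one TSO execution of the SB litmus test:
    thread 0: x := 1; r0 := y    thread 1: y := 1; r1 := x."""
    memory = {'x': 0, 'y': 0}
    buf0, buf1 = [], []            # per-thread FIFO store buffers

    def read(buf, var):
        # Store forwarding: a thread sees its own buffered writes first.
        for v, val in reversed(buf):
            if v == var:
                return val
        return memory[var]

    buf0.append(('x', 1))          # thread 0's write stays buffered
    buf1.append(('y', 1))          # thread 1's write stays buffered
    r0 = read(buf0, 'y')           # reads stale 0 from main memory
    r1 = read(buf1, 'x')           # reads stale 0 from main memory
    for v, val in buf0 + buf1:     # buffers flush only afterwards
        memory[v] = val
    return r0, r1, memory
```

Because the hardware, not the program, decides when buffers flush, reasoning must account for executions like this one in addition to all sequentially consistent interleavings.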
Correctness conditions for concurrent objects describe how atomicity of an abstract sequential object may be decomposed. Many different concurrent objects and proof methods for them have been developed. However, arguments about correctness are conducted with respect to an object in isolation. This is in contrast to real-world practice, where concurrent objects are often implemented as part of a programming language library (e.g., java.util.concurrent) and are instantiated within a client program. A natural question to ask, then, is: how does a correctness condition for a concurrent object ensure correctness of a client program that uses the concurrent object? This paper presents the main issues that surround this question and provides some answers by linking different correctness conditions with a form of trace refinement.
The verify-while-develop paradigm allows one to incrementally develop programs from their specifications using a series of calculations against the remaining proof obligations. This paper presents a derivation method for real-time systems with realistic constraints on their behaviour. We develop a high-level interval-based logic that provides flexibility in an implementation, yet allows algebraic reasoning over multiple granularities and sampling multiple sensors with delay. The semantics of an action system is given in terms of interval predicates and algebraic operators to unify the logics for an action system and its properties, which in turn simplifies the calculations and derivations.
Transactional Memory (TM) is a high-level programming abstraction for concurrency control that provides programmers with the illusion of atomically executing blocks of code, called transactions. TMs come in two categories, optimistic and pessimistic, where in the latter transactions never abort. While this simplifies the programming model, high-performing pessimistic TMs can be complex. In this paper, we present the first formal verification of a pessimistic software TM algorithm, namely, an algorithm proposed by Matveev and Shavit. The correctness criterion used is opacity, formalising the transactional atomicity guarantees. We prove that this pessimistic TM is a refinement of an intermediate opaque I/O-automaton, known as TMS2. To this end, we develop a rely-guarantee approach for reducing the complexity of the proof. Proofs are mechanised in the interactive prover Isabelle.
Lock-free algorithms are designed to improve the performance of concurrent programs by maximising the potential for processes to operate in parallel. Lock-free algorithms guarantee that within the system as a whole, some process will eventually complete its operation (as opposed to guaranteeing that all operations will eventually complete). Since lock-free algorithms potentially allow a high degree of interference between concurrent processes, and because their progress property is non-trivial, it is difficult to be assured of their correctness without a formal, machine-checked verification. In this paper we describe a method for proving the lock-free progress property. The approach is based on the construction of a well-founded ordering on the set of processes. The method is demonstrated using a well-known lock-free stack algorithm as an example, and we describe how the proof was checked using a theorem prover.
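The lock-free stack in question follows the well-known Treiber pattern: operations retry a compare-and-swap (CAS) on the top pointer until they succeed. A sequential Python sketch of that pattern (our illustration; names are ours):

```python
# Toy model of a Treiber-style lock-free stack. Only the CAS retry pattern is
# shown; a sequential model cannot exhibit the actual interference.
class Node:
    def __init__(self, val, nxt):
        self.val, self.nxt = val, nxt

class TreiberStack:
    def __init__(self):
        self.top = None

    def _cas_top(self, expected, new):
        # models an atomic compare-and-swap on the top pointer
        if self.top is expected:
            self.top = new
            return True
        return False

    def push(self, val):
        while True:                    # retry loop: lock-free, not wait-free
            old = self.top
            if self._cas_top(old, Node(val, old)):
                return

    def pop(self):
        while True:
            old = self.top
            if old is None:
                return None            # empty stack
            if self._cas_top(old, old.nxt):
                return old.val

s = TreiberStack()
s.push(1); s.push(2)
print(s.pop(), s.pop(), s.pop())  # 2 1 None
```

Any single CAS may fail forever for one process while others succeed, which is why the progress argument needs a well-founded ordering rather than a per-process termination measure.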
Software Transactional Memory (STM) is an extensively studied paradigm that provides an easy-to-use mechanism for thread safety and concurrency control. With the recent advent of byte-addressable persistent memory, a natural question to ask is whether STM systems can be adapted to support failure atomicity. In this paper, we answer this question by showing how STM can be easily integrated with Intel's Persistent Memory Development Kit (PMDK) transactional library (which we refer to as txPMDK) to obtain STM systems that are both concurrent and persistent. We demonstrate this approach using known STM systems, TML and NOrec, which when combined with txPMDK result in persistent STM systems, referred to as PMDK-TML and PMDK-NORec, respectively. However, it turns out that existing correctness criteria are insufficient for specifying the behaviour of txPMDK and our concurrent extensions. We therefore develop a new correctness criterion, dynamic durable opacity, that extends the previously defined notion of durable opacity with dynamic memory allocation. We provide a model of txPMDK, then show that this model satisfies dynamic durable opacity. Moreover, dynamic durable opacity supports concurrent transactions, thus we also use it to show correctness of both PMDK-TML and PMDK-NORec.
The paper "Owicki-Gries Reasoning for C11 RAR" introduces a new proof calculus for the C11 RAR memory model that allows Owicki-Gries proof rules for compound statements, including non-interference, to remain unchanged. The proof method features novel assertions specifying thread-specific views on the state of programs. This is combined with a set of Hoare logic rules that describe how these assertions are affected by atomic program steps. The artifact includes the Isabelle formalisation of the proof method introduced in the paper. It also contains the formalisation and proof of all case studies presented in the paper. All of the theorems are accompanied by their respective proofs.
This report extends Section 6 of the main paper by providing further details of the mechanisation effort.
Expression evaluation in programming languages is normally assumed to be deterministic; however, if an expression involves variables that are being modified by the environment of the process during its evaluation, the result of the evaluation can be non-deterministic. Two common scenarios in which this occurs are concurrent programs within which processes share variables and real-time programs that interact to monitor and/or control their environment. In these contexts, although any particular evaluation of an expression gives a single result, there is a range of possible values that could be returned depending on the relative timing between modification of a variable by the environment and its access within the expression evaluation. To compare the semantics of non-deterministic expression evaluation, one can use the set of possible values the expression evaluation could return. This paper formalizes three approaches to non-deterministic expression evaluation, highlights their commonalities and differences, shows the relationships between the approaches and explores conditions under which they coincide. Modal operators representing that a predicate holds for all possible evaluations and for some possible evaluation are associated with each of the evaluation approaches, and the properties and relationships between these operators are investigated. Furthermore, a link is made to a new notation used in reasoning about interference.
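The possible-values view of evaluation can be sketched in a few lines (our illustration of the general idea, not the paper's formal definitions): each access to a shared variable may independently observe any value the environment could have written by that point.

```python
# Sketch of possible-values semantics for expression evaluation under
# interference. Each access supplies the set of values it might observe.
from itertools import product

def possible_values(expr, accesses):
    """expr: function of one observed value per variable access;
       accesses: for each access, the set of values it might observe."""
    return {expr(*obs) for obs in product(*accesses)}

# x is changed from 0 to 1 by the environment while we evaluate x + x,
# and each of the two accesses samples x independently:
vals = possible_values(lambda a, b: a + b, [{0, 1}, {0, 1}])
print(sorted(vals))  # [0, 1, 2]

# An atomic (single-sample) evaluation admits only even results:
atomic = possible_values(lambda a: a + a, [{0, 1}])
print(sorted(atomic))  # [0, 2]
```

The gap between the two result sets, here the "impossible" value 1, is precisely what distinguishes the evaluation approaches the paper compares.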
This paper describes a logic of progress for concurrent programs. The logic is based on that of UNITY, molded to fit a sequential programming model. Integration of the two is achieved by using auxiliary variables in a systematic way that incorporates program counters into the program text. The rules for progress in UNITY are then modified to suit this new system. This modification is however subtle enough to allow the theory of Owicki and Gries to be used without change.
Software transactional memory (STM) provides programmers with a high-level programming abstraction for synchronization of parallel processes, allowing blocks of codes that execute in an interleaved manner to be treated as atomic blocks. This atomicity property is captured by a correctness criterion called opacity, which relates the behaviour of an STM implementation to those of a sequential atomic specification. In this paper, we prove opacity of a recently proposed STM implementation: the Transactional Mutex Lock (TML) by Dalessandro et al. For this, we employ two different methods: the first method directly shows all histories of TML to be opaque (proof by induction), using a linearizability proof of TML as an assistance; the second method shows TML to be a refinement of an existing intermediate specification called TMS2 which is known to be opaque (proof by simulation). Both proofs are carried out within interactive provers, the first with KIV and the second with both Isabelle and KIV. This allows us to compare not only the proof techniques in principle, but also their complexity in mechanization. It turns out that the second method, already leveraging an existing proof of opacity of TMS2, allows the proof to be decomposed into two independent proofs in a way that the linearizability proof does not.
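TML itself is pleasingly small: a single global counter serves as a versioned mutex, with an even value meaning no writer is active. The following sequential Python sketch is reconstructed from the algorithm's published description (the class and method names are ours):

```python
# Sequential model of the Transactional Mutex Lock (TML) of Dalessandro et al.
class Abort(Exception):
    """Raised when a transaction must abort and retry."""

class TML:
    def __init__(self):
        self.glb = 0            # global counter: even = no writer active
        self.mem = {}

    def begin(self):
        # a real implementation spins until glb is even; this model assumes it
        assert self.glb % 2 == 0
        return self.glb         # transaction-local snapshot 'loc'

    def read(self, loc, addr):
        v = self.mem.get(addr, 0)
        if self.glb != loc:     # some writer committed since we began
            raise Abort()
        return v

    def write(self, loc, addr, v):
        if loc % 2 == 0:        # first write: take the lock (CAS glb: loc -> loc+1)
            if self.glb != loc:
                raise Abort()
            self.glb = loc + 1
            loc += 1
        self.mem[addr] = v
        return loc

    def commit(self, loc):
        if loc % 2 == 1:        # writing transaction: release the lock
            self.glb = loc + 1

tm = TML()
loc = tm.begin()
loc = tm.write(loc, "x", 42)
tm.commit(loc)
assert tm.read(tm.begin(), "x") == 42
```

Even this small algorithm exhibits the essential proof obligations: reads must validate against the snapshot, and the parity of the counter encodes the write lock.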
Concurrent data structures like stacks, sets or queues need to be highly optimized to provide large degrees of parallelism with reduced contention. Linearizability, a key consistency condition for concurrent objects, sometimes limits the potential for optimization. Hence algorithm designers have started to build concurrent data structures that are not linearizable but only satisfy relaxed consistency requirements. In this paper, we study quiescent consistency as proposed by Shavit and Herlihy, which is one such relaxed condition. More precisely, we give the first formal definition of quiescent consistency, investigate its relationship with linearizability, and provide a proof technique for it based on (coupled) simulations. We demonstrate our proof technique by verifying quiescent consistency of a (non-linearizable) FIFO queue built using a diffraction tree.
Linearisability is a key correctness criterion for concurrent data structures, ensuring that each history of the concurrent object under consideration is consistent with respect to a history of the corresponding abstract data structure. Linearisability allows concurrent (i.e., overlapping) operation calls to take effect in any order, but requires the real-time order of nonoverlapping calls to be preserved. The sophisticated nature of concurrent objects means that linearisability is difficult to judge, and hence, over the years, numerous techniques for verifying linearisability have been developed using a variety of formal foundations such as data refinement, shape analysis, reduction, etc. However, because the underlying framework, nomenclature, and terminology for each method is different, it has become difficult for practitioners to evaluate the differences between each approach, and hence, evaluate the methodology most appropriate for verifying the data structure at hand. In this article, we compare the major methods for verifying linearisability, describe the main contribution of each method, and compare their advantages and limitations.
In this paper we develop an approach for deriving concurrent programs. At any stage in its derivation, a program consists of a combination of the code for its processes together with a set of enforced properties. The behaviour of such a combination consists of those behaviours of the code that satisfy the enforced properties. Because enforced properties are temporal formulae, they may be used to enforce both safety and progress properties of the program. While the code by itself is executable, when combined with enforced properties, the program is not yet in an executable form. A derivation starts from a program in which the desired properties of the code are expressed via enforced properties, and the goal is to derive a program with additional code but no enforced properties. We outline a trace-based theory which formalises the meaning of programs with enforced properties, and transformation rules that ensure each modified program is a refinement of the original.
The logic of Owicki and Gries is a well-known logic for verifying safety properties of concurrent programs. Using this logic, Feijen and van Gasteren describe a method for deriving concurrent programs based on safety. In this work, we explore derivation techniques of concurrent programs using progress-based reasoning. We use a framework that combines the safety logic of Owicki and Gries, and the progress logic of UNITY. Our contributions improve the applicability of our earlier techniques by reducing the calculational overhead in the formal proofs and derivations. To demonstrate the effectiveness of our techniques, a derivation of Dekker's mutual exclusion algorithm is presented. This derivation leads to the discovery of some new and simpler variants of this famous algorithm.
Action systems have been shown to be applicable for modelling and constructing systems in both discrete and hybrid domains. We present a novel semantics for action systems using a sampling logic that facilitates reasoning about the truly concurrent behaviour between an action system and its environment. By reasoning over the apparent states, the sampling logic allows one to determine whether a state predicate is definitely or possibly true over an interval. We present a semantics for action systems that takes into account the time needed to sample inputs and evaluate expressions (and hence guards). We develop a temporal logic based on the sampling logic that facilitates formalisation of safety, progress, timing and transient properties. Then, we incorporate this logic into the method of enforced properties, which facilitates stepwise refinement of action systems.
Lock-freedom is a property of concurrent programs which states that, from any state of the program, eventually some process will complete its operation. Lock-freedom is a weaker property than the usual expectation that eventually all processes will complete their operations. By weakening their completion guarantees, lock-free programs increase the potential for parallelism, and hence make more efficient use of multiprocessor architectures than lock-based algorithms. However, lock-free algorithms, and reasoning about them, are considerably more complex. In this paper we present a technique for proving that a program is lock-free. The technique is designed to be as general as possible and is guided by heuristics that simplify the proofs. We demonstrate our theory by proving lock-freedom of two non-trivial examples from the literature. The proofs have been machine-checked by the PVS theorem prover, and we have developed proof strategies to minimise user interaction.
Processor cores within modern multicore systems often communicate via shared memory and use (local) store buffers to improve performance. A penalty for this improvement is the loss of Sequential Consistency to weaker memory guarantees that increase the number of possible program behaviours, and hence, require a greater amount of programming effort. This paper formalises the effect of Total Store Order (TSO) memory — a weak memory model that allows a write followed by a read in the program order to be reordered during execution. Although the precise effects of TSO are well-known, a high-level formalisation of programs that execute under TSO has not been developed. We present an interval-based semantics for programs that execute under TSO memory and include methods for fine-grained expression evaluation, capturing the non-determinism of both concurrency and TSO-related reorderings.
The majority of modern systems exhibit sophisticated concurrent behaviour, where several system components modify and observe the system state with fine-grained atomicity. Many systems (e.g., multi-core processors, real-time controllers) also exhibit truly concurrent behaviour, where multiple events can occur simultaneously. This paper presents data refinement defined in terms of an interval-based framework, which includes high-level operators that capture non-deterministic expression evaluation. By modifying the type of an interval, our theory may be specialised to cover data refinement of both discrete and continuous systems. We present an interval-based encoding of forward simulation, then prove that our forward simulation rule is sound with respect to our data refinement definition. A number of rules for decomposing forward simulation proofs over both sequential and parallel composition are developed.
The teleo-reactive programming model is a high-level approach to developing real-time systems that supports hierarchical composition and durative actions. The model is different from frameworks such as action systems, timed automata and TLA+, and allows programs to be more compact and descriptive of their intended behaviour. Teleo-reactive programs are particularly useful for implementing controllers for autonomous agents that must react robustly to their dynamically changing environments. In this paper, we develop a real-time logic that is based on Duration Calculus and use this logic to formalise the semantics of teleo-reactive programs. We develop rely/guarantee rules that facilitate reasoning about a program and its environment in a compositional manner. We present several theorems for simplifying proofs of teleo-reactive programs and present a partially mechanised method for proving progress properties of goal-directed agents.
Correctness of concurrent objects is defined in terms of conditions that determine allowable relationships between histories of a concurrent object and those of the corresponding sequential object. Numerous correctness conditions have been proposed over the years, and more have been proposed recently as the algorithms implementing concurrent objects have been adapted to cope with multicore processors with relaxed memory architectures. We present a formal framework for defining correctness conditions for multicore architectures, covering both standard conditions for totally ordered memory and newer conditions for relaxed memory, which allows them to be expressed in a uniform manner, simplifying comparison. Our framework distinguishes between order and commitment properties, which in turn enables a hierarchy of correctness conditions to be established. We consider the Total Store Order (TSO) memory model in detail, formalise known conditions for TSO using our framework, and develop sequentially consistent variations of these. We present a work-stealing deque for TSO memory that is not linearizable, but is correct with respect to these new conditions. Using our framework, we identify a new non-blocking compositional condition, fence consistency, which lies between known conditions for TSO, and aims to capture the intention of a programmer-specified fence.
This book constitutes the refereed proceedings of the workshops which complemented the 23rd Symposium on Formal Methods, FM 2019, held in Porto, Portugal, in October 2019. This volume presents the papers that have been accepted for the following workshops: Third Workshop on Practical Formal Verification for Software Dependability, AFFORD 2019; 8th International Symposium From Data to Models and Back, DataMod 2019; First Formal Methods for Autonomous Systems Workshop, FMAS 2019; First Workshop on Formal Methods for Blockchains, FMBC 2019; 8th International Workshop on Formal Methods for Interactive Systems, FMIS 2019; First History of Formal Methods Workshop, HFM 2019; 8th International Workshop on Numerical and Symbolic Abstract Domains, NSAD 2019; 9th International Workshop on Open Community Approaches to Education, Research and Technology, OpenCERT 2019; 17th Overture Workshop, Overture 2019; 19th Refinement Workshop, Refine 2019; First International Workshop on Reversibility in Programming, Languages, and Automata, RPLA 2019; 10th International Workshop on Static Analysis and Systems Biology, SASB 2019; and the 10th Workshop on Tools for Automatic Program Analysis, TAPAS 2019.
Owicki-Gries reasoning for concurrent programs uses Hoare logic together with an interference freedom rule for concurrency. In this paper, we develop a new proof calculus for the C11 RAR memory model (a fragment of C11 with both relaxed and release-acquire accesses) that allows all Owicki-Gries proof rules for compound statements, including non-interference, to remain unchanged. Our proof method features novel assertions specifying thread-specific views on the state of programs. This is combined with a set of Hoare logic rules that describe how these assertions are affected by atomic program steps. We demonstrate the utility of our proof calculus by verifying a number of standard C11 litmus tests and Peterson’s algorithm adapted for C11. Our proof calculus and its application to program verification have been fully mechanised in the Isabelle theorem prover.
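The thread-view idea can be sketched with the message-passing litmus test (a simplification of our own, in the spirit of view-based C11 semantics; it is not the paper's model): each write carries the writer's view, and an acquiring read joins that view into the reader's, so later reads cannot observe stale values.

```python
# Toy "view-based" model of release-acquire message passing. A view maps each
# location to the index of the oldest write the thread is allowed to read.
writes = {"data": [(0, {})], "flag": [(0, {})]}   # loc -> [(value, attached view)]

def write_rel(view, loc, val):
    idx = len(writes[loc])
    writes[loc].append((val, dict(view, **{loc: idx})))
    view[loc] = idx                        # the writer sees its own write

def read_acq(view, loc):
    idx = view.get(loc, 0)
    # a full model lets the read pick any write >= idx; for determinism this
    # sketch always takes the newest one
    idx = max(idx, len(writes[loc]) - 1)
    val, wview = writes[loc][idx]
    for l, i in wview.items():             # acquire: join the writer's view
        view[l] = max(view.get(l, 0), i)
    return val

producer, consumer = {}, {}
write_rel(producer, "data", 42)
write_rel(producer, "flag", 1)             # release write publishes data

assert read_acq(consumer, "flag") == 1     # consumer synchronises here...
assert read_acq(consumer, "data") == 42    # ...so it must then see data == 42
```

The consumer's view after reading the flag rules out reading the initial value of data, which is exactly the kind of fact the paper's view assertions express.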
This chapter summarises a panel discussion on the topics of Regulation and Ethics in Software Engineering for Robotics at RoboSoft, chaired by Jon Timmis (University of Sunderland). The panel members were Ron Bell (Engineering Safety Consultants), Mark Lawford (McMaster University), Ibrahim Habli (University of York), and Pippa Moore (Civil Aviation Authority), whose views are summarised below. The chapter also integrates the issues raised in a talk on “Extending automotive certification processes to handle autonomous vehicles” by Zeyn Saigol (Connected Places Catapult), which provides a case study for the issues being discussed.
Verification techniques for C11 programs have advanced significantly in recent years with the development of operational semantics and associated logics for increasingly large fragments of C11. However, these semantics and logics have been developed in a restricted setting to avoid the thin-air-read problem. In this article, we propose an operational semantics that leverages an intra-thread partial order (called semantic dependencies) induced by a recently developed denotational event-structure-based semantics. We prove that our operational semantics is sound and complete with respect to the denotational semantics. We present an associated logic that generalises a recent Owicki–Gries framework for RC11 RAR (repaired C11) with relaxed and release-acquire accesses. We describe the mechanisation of the logic in the Isabelle/HOL theorem prover, which we use to prove correctness of a number of examples.
The topics covered in this book range from modeling and programming languages and environments, via approaches for design and verification, to issues of ethics and regulation. In terms of techniques, there are results on model-based engineering, product lines, mission specification, component-based development, simulation, testing, and proof. Applications range from manufacturing to service robots, to autonomous vehicles, and even robots that evolve in the real world. A final chapter summarizes issues on ethics and regulation based on discussions from a panel of experts. The origin of this book is a two-day event, entitled RoboSoft, that took place in November 2019, in London. Organized with the generous support of the Royal Academy of Engineering and the University of York, UK, RoboSoft brought together more than 100 scientists, engineers and practitioners from all over the world, representing 70 international institutions. The intended readership includes researchers and practitioners with all levels of experience interested in working in the area of robotics, and software engineering more generally. The chapters are all self-contained, include explanations of the core concepts, and finish with a discussion of directions for further work. Chapters 'Towards Autonomous Robot Evolution', 'Composition, Separation of Roles and Model-Driven Approaches as Enabler of a Robotics Software Ecosystem' and 'Verifiable Autonomy and Responsible Robotics' are available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
Deductive verification techniques for C11 programs have advanced significantly in recent years with the development of operational semantics and associated logics for increasingly large fragments of C11. However, these semantics and logics have been developed in a restricted setting to avoid the thin-air-read problem. In this paper, we propose an operational semantics that leverages an intra-thread partial order (called semantic dependencies) induced by a recently developed denotational event-structure-based semantics. We prove that our operational semantics is sound and complete with respect to the denotational semantics. We present an associated logic that generalises a recent Owicki-Gries framework for RC11 (repaired C11), and demonstrate the use of this logic over several example proofs.
Transactional memory (TM) is an intensively studied synchronisation paradigm with many proposed implementations in software and hardware, and combinations thereof. However, TM under relaxed memory, e.g., C11 (the 2011 C/C++ standard) is still poorly understood, lacking rigorous foundations that support verifiable implementations. This paper addresses this gap by developing TMS2-ra, a relaxed operational TM specification. We integrate TMS2-ra with RC11 (the repaired C11 memory model that disallows load-buffering) to provide a formal semantics for TM libraries and their clients. We develop a logic, TARO, for verifying client programs that use TMS2-ra for synchronisation. We also show how TMS2-ra can be implemented by a C11 library, TML-ra, that uses relaxed and release-acquire atomics, yet guarantees the synchronisation properties required by TMS2-ra. We benchmark TML-ra and show that it outperforms its sequentially consistent counterpart in the STAMP benchmarks. Finally, we use a simulation-based verification technique to prove correctness of TML-ra. Our entire development is supported by the Isabelle/HOL proof assistant.
Convolution is a ubiquitous operation in mathematics and computing. The Kripke semantics for substructural and interval logics motivates its study for quantale-valued functions relative to ternary relations. The resulting notion of relational convolution leads to generalised binary and unary modal operators for qualitative and quantitative models, and to more conventional variants, when ternary relations arise from identities over partial semigroups. Convolution-based semantics for fragments of categorial, linear and incidence (segment or interval) logics are provided as qualitative applications. Quantitative examples include algebras of durations and mean values in the duration calculus.
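The central construction can be written as follows (our transcription of the standard definition of relational convolution): for quantale-valued functions $f, g : X \to Q$ and a ternary relation $R \subseteq X \times X \times X$,

```latex
(f \ast g)(z) \;=\; \bigvee_{\,R(x,y,z)} f(x) \cdot g(y)
```

When $R$ arises from a partial semigroup, i.e. $R(x,y,z)$ holds iff $z = x \odot y$ is defined, this specialises to a supremum over the decompositions of $z$, recovering, for instance, the chop of an interval into two adjacent subintervals in interval logics.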
This book constitutes the refereed proceedings of the Third International Workshop and Tutorial, FMTea 2019, Held as Part of the Third World Congress on Formal Methods, FM 2019, Porto, Portugal, October 2019. The 14 full papers presented together with 3 abstract papers were carefully reviewed and selected from 22 submissions. The papers are organized in topical sections named: Tutorial lectures; Teaching Program Verification; Teaching Program Development; and Effective Teaching Techniques.
Non-volatile memory (NVM), aka persistent memory, is a new paradigm for memory that preserves its contents even after power loss. The expected ubiquity of NVM has stimulated interest in the design of novel concepts ensuring correctness of concurrent programming abstractions in the face of persistency. So far, this has led to the design of a number of persistent concurrent data structures, built to satisfy an associated notion of correctness: durable linearizability.
Software transactional memory (STM) is a software-enabled form of transactional memory, typically implemented as a language library, that provides fine-grained concurrency control on behalf of a programmer. STM algorithms have been recently adapted to cope with non-volatile memory (NVM), aka persistent memory, which is a new paradigm for memory that preserves its contents even after power loss. This paper presents a model checking approach to validating correctness of STM algorithms using FDR (a model checker for CSP specifications). Our proofs are based on operational transactional memory specifications that allow proofs of (durable) opacity, the main safety property for STMs under volatile and persistent memory, to be verified by refinement. Since FDR enables automatic proofs of refinement, we obtain an automatic technique for checking both opacity and durable opacity of bounded models of STM algorithms.
We define local transactional race freedom (LTRF), which provides a programmer model for software transactional memory. LTRF programs satisfy the SC-LTRF property, thus allowing the programmer to focus on sequential executions in which transactions execute atomically. Unlike previous results, SC-LTRF does not require global race freedom. We also provide a lower-level implementation model to reason about quiescence fences and validate numerous compiler optimizations.
The rise of persistent memory is disrupting computing to its core. Our work aims to help programmers navigate this brave new world by providing a program logic for reasoning about x86 code that uses low-level operations such as memory accesses and fences, as well as persistency primitives such as flushes. Our logic, Pierogi, benefits from a simple underlying operational semantics based on views, is able to handle optimised flush operations, and is mechanised in the Isabelle/HOL proof assistant. We detail the proof rules of Pierogi and prove them sound. We also show how Pierogi can be used to reason about a range of challenging single- and multi-threaded persistent programs.
Hyperproperties are correctness conditions for labelled transition systems that are more expressive than traditional trace properties, with particular relevance to security. Recently, Attiya and Enea studied a notion of strong observational refinement that preserves all hyperproperties. They analyse the correspondence between forward simulation and strong observational refinement in a setting with finite traces only. We study this correspondence in a setting with both finite and infinite traces. In particular, we show that forward simulation does not preserve hyperliveness properties in this setting. We extend the forward simulation proof obligation with a progress condition, and prove that this progressive forward simulation does imply strong observational refinement.
In this article, we propose an approach to program verification using an abstract characterisation of weak memory models. Our approach is based on a hierarchical axiom scheme that captures the observational properties of a memory model. In particular, we show that it is possible to prove correctness of a program with respect to a particular axiom scheme, and we show this proof to suffice for any memory model that satisfies the axioms. Our axiom scheme is developed using a characterisation of weakest liberal preconditions for weak memory. This characterisation naturally extends to Hoare logic and Owicki-Gries reasoning by lifting weakest liberal preconditions (defined over read/write events) to the level of programs. We study three memory models (SC, TSO, and RC11-RAR) as example instantiations of the axioms, then we demonstrate the applicability of our reasoning technique on a number of litmus tests. The majority of the proofs in this article are supported by mechanisation within Isabelle/HOL.
Correctness conditions like linearizability and opacity describe some form of atomicity imposed on concurrent objects. In this paper, we propose a correctness condition (called causal atomicity) for concurrent objects executing in a weak memory model, where the histories of the objects in question are partially ordered. We establish compositionality and abstraction results for causal atomicity and develop an associated refinement-based proof technique.
The properties of a blockchain such as immutability, provenance, and peer-executed smart contracts could bring a new level of security, trust, and transparency to e-learning. In this paper, we introduce our proof-of-concept blockchain-based e-learning platform developed to increase transparency in assessments and facilitate curriculum personalisation in a higher education context. Most notably, our platform could automate assessments and issue credentials. We designed it to be pedagogically neutral and content-neutral in order to showcase the benefits of a blockchain back-end to end users such as students and teaching staff. Our evaluation suggests that our platform could increase trust in online education providers, assessment procedures, education history and credentials.
Teleo-Reactive (TR) robotic agent programs comprise sequences of guarded action rules clustered into named parameterised procedures. Their ancestry goes back to the first cognitive robot, Shakey. Like Shakey, a TR programmed robotic agent has a deductive Belief Store comprising constantly changing predicate logic percept facts, together with knowledge facts and rules for querying the percepts. In this paper we introduce TR programming using a simple example expressed in the teleo-reactive programming language TeleoR, which is a syntactic extension of QuLog, a typed logic programming language used for the agent's Belief Store. We give a formal definition of the regression property that rules of TeleoR procedures should satisfy, and an informal operational semantics of the evaluation of a TeleoR procedure call. We then formally express key features of the evaluation in LTL. Finally we show how this LTL formalisation can be used to prove that a procedure's rules satisfy the regression property, by proving it holds for one rule of the example TeleoR program. The proof requires us to formally link a TeleoR agent's percept beliefs with sensed configurations of the external environment; to link the agent's robotic device action intentions with actual robot actions; and to specify the eventual physical effects of the robot's actions on the environment state.
This book constitutes the refereed proceedings of the 16th International Conference on Integrated Formal Methods, IFM 2020, held in Lugano, Switzerland, in November 2020. The 24 full papers and 2 short papers were carefully reviewed and selected from 63 submissions. The papers cover a broad spectrum of topics: Integrating Machine Learning and Formal Modelling; Modelling and Verification in B and Event-B; Program Analysis and Testing; Verification of Interactive Behaviour; Formal Verification; Static Analysis; Domain-Specific Approaches; and Algebraic Techniques.
Rely-guarantee (RG) is a highly influential compositional proof technique for concurrent programs, which was originally developed assuming a sequentially consistent shared memory. In this paper, we first generalize RG to make it parametric with respect to the underlying memory model by introducing an RG framework that is applicable to any model axiomatically characterized by Hoare triples. Second, we instantiate this framework for reasoning about concurrent programs under causally consistent memory, which is formulated using a recently proposed potential-based operational semantics, thereby providing the first reasoning technique for such semantics. The proposed program logic, which we call Piccolo, employs a novel assertion language allowing one to specify ordered sequences of states that each thread may reach. We employ Piccolo for multiple litmus tests, as well as for an adaptation of Peterson's algorithm for mutual exclusion to causally consistent memory.
Non-volatile memory (NVM), aka persistent memory, is a new paradigm for memory preserving its contents even after power loss. The expected ubiquity of NVM has stimulated interest in the design of persistent concurrent data structures, together with associated notions of correctness. In this paper, we present the first formal proof technique for durable linearizability, which is a correctness criterion that extends linearizability to handle crashes and recovery in the context of NVM. Our proofs are based on refinement of IO-automata representations of concurrent data structures. To this end, we develop a generic procedure for transforming any standard sequential data structure into a durable specification. Since the durable specification only exhibits durably linearizable behaviours, it serves as the abstract specification in our refinement proof. We exemplify our technique on a recently proposed persistent memory queue that builds on Michael and Scott’s lock-free queue.
This paper introduces a technique for modelling and verifying weak memory C11 programs in the Event-B framework. We build on a recently developed operational semantics for the RAR fragment of C11, which we use as a top-level abstraction. In our technique, a concrete C11 program can be modelled by refining this abstract model of the semantics. Program structures and individual operations are then introduced in the refined machine and can be checked and verified using available Event-B provers and model checkers. The paper also discusses how the ProB model checker can be used to validate the Event-B model of C11 programs. We applied our technique to the C11 implementation of Peterson's algorithm, where we discovered that the standard invariant used to characterise mutual exclusion is inadequate. We therefore propose and verify new invariants necessary for characterising mutual exclusion in a weak memory setting.
Security-critical applications often rely on memory isolation mechanisms to ensure the integrity of critical data (e.g., keys) and program instructions (e.g., those implementing an attestation protocol). Such mechanisms include the software-based security microvisor (SµV) and hardware-based designs (e.g., TrustLite or SMART). Here, we must guarantee that none of the assembly-level instructions corresponding to a program violates the imposed memory access restrictions. We demonstrate our approach on two architectures (SµV and TrustLite) on which remote attestation protocols are implemented. We extend an approach based on the Binary Analysis Platform (BAP) to generate compiled assembly for a given C program, which is translated to an assembly intermediate language (BIL) and ultimately to Isabelle/HOL theories. In our extension, we develop an adversary model and define conformance predicates imposed by an architecture. We generate a set of programs covering all possible cases in which an assembly-level instruction attempts to violate at least one of the conformance predicates. This shows that the memory access restrictions of both SµV and TrustLite are dynamically maintained. Moreover, we introduce conformance predicates for assembly-level instructions that can change the control flow, which improve TrustLite's memory protection unit.
Modern processors such as ARMv8 and RISC-V allow executions in which independent instructions within a process may be reordered. To cope with such phenomena, so-called promising semantics have been developed, which permit threads to read values that have not yet been written. Each promise is a speculative update that is later validated (fulfilled) by an actual write. Promising semantics are operational, providing a pathway for developing proof calculi. In this paper, we develop an incorrectness-style logic, resulting in a framework for reasoning about state reachability. Like incorrectness logic, our assertions are underapproximating, since the set of all valid promises is not known at the start of execution. Our logic uses event structures as assertions to compactly represent the ordering among events such as promised and fulfilled writes. We prove soundness and completeness of our proof calculus and demonstrate its applicability by proving reachability properties of standard weak memory litmus tests.
Cylindric algebras have been developed as an algebraisation of equational first order logic. We adapt them to cylindric Kleene lattices and their variants and present relational and relational fault models for these. This allows us to encode frames and local variable blocks, and to derive Morgan's refinement calculus as well as an algebraic Hoare logic for while programs with assignment laws. Our approach thus opens the door for algebraic calculations with program and logical variables instead of domain-specific reasoning over concrete models of the program store. A refinement proof for a small program is presented as an example.
Transactional memory (TM) is an intensively studied synchronisation paradigm with many proposed implementations in software and hardware, and combinations thereof. However, TM under relaxed memory, e.g., C11 (the 2011 C/C++ standard) is still poorly understood, lacking rigorous foundations that support verifiable implementations. This paper addresses this gap by developing TMS2-RA, a relaxed operational TM specification. We integrate TMS2-RA with RC11 (the repaired C11 memory model that disallows load-buffering) to provide a formal semantics for TM libraries and their clients. We develop a logic, TARO, for verifying client programs that use TMS2-RA for synchronisation. We also show how TMS2-RA can be implemented by a C11 library, TML-RA, that uses relaxed and release-acquire atomics, yet guarantees the synchronisation properties required by TMS2-RA. We benchmark TML-RA and show that it outperforms its sequentially consistent counterpart in the STAMP benchmarks. Finally, we use a simulation-based verification technique to prove correctness of TML-RA. Our entire development is supported by the Isabelle/HOL proof assistant.
Weak memory presents a new challenge for program verification and has resulted in the development of a variety of specialised logics. For C11-style memory models, our previous work has shown that it is possible to extend Hoare logic and Owicki-Gries reasoning to verify correctness of weak memory programs. The technique introduces a set of high-level assertions over C11 states together with a set of basic Hoare-style axioms over atomic weak memory statements (e.g. reads/writes), but retains all other standard proof obligations for compound statements. This paper takes this line of work further by introducing the first deductive verification environment in Isabelle/HOL for C11-like weak memory programs. This verification environment is built on Nipkow and Nieto's encoding of Owicki-Gries in the Isabelle theorem prover. We exemplify our techniques over several litmus tests from the literature and two non-trivial examples: Peterson's algorithm and a read-copy-update algorithm adapted for C11. For the examples we consider, the proof outlines can be automatically discharged using the existing Isabelle tactics developed by Nipkow and Nieto. The benefit here is that programs can be written using a familiar pseudocode syntax with assertions embedded directly into the program.
In today's world, critical infrastructure is often controlled by computing systems. This introduces new risks for cyber attacks, which can compromise the security and disrupt the functionality of these systems. It is therefore necessary to build such systems with strong guarantees of resiliency against cyber attacks. One way to achieve this level of assurance is formal verification, which provides proofs of system compliance with desired cyber security properties. The use of Formal Methods (FM) in aspects of cyber security and safety-critical systems is reviewed in this article. We split FM into three main classes: theorem proving, model checking, and lightweight FM. To allow the different uses of FM to be compared, we define a common set of terms. We further develop categories based on the type of computing system FM are applied to. Solutions in each class and category are presented, discussed, compared, and summarised. We describe historical highlights and developments and present a state-of-the-art review in the area of FM in cyber security. This review is presented from the point of view of FM practitioners and researchers, commenting on the trends in each of the classes and categories. This is achieved by considering all types of FM, several types of security and safety-critical systems, and by structuring the taxonomy accordingly. The article hence provides a comprehensive overview of FM and techniques available to system designers of security-critical systems, simplifying the process of choosing the right tool for the task. The article concludes by summarising the discussion of the review, focusing on best practices, challenges, general future trends, and directions of research within this field.
Non-volatile memory (NVM) technologies offer DRAM-like speeds with the added benefit of failure resilience. However, developing concurrent programs for NVM can be challenging since programmers must consider both inter-thread synchronisation and durability aspects at the same time. To alleviate this, libraries such as FliT have been developed to manage transformations to durability, allowing a linearizable concurrent object to be converted into a durably linearizable one by replacing the reads/writes to memory by calls to corresponding operations of the FliT library. However, a formal proof of correctness for FliT is missing, and standard proof techniques for durable linearizability are challenging to apply, since FliT itself is not durably linearizable. In this paper, we study the problem of proving correctness of transformations to durability. First, we develop an abstract persistency library (called PLib) that operationally characterises transformations to durability. We prove soundness of PLib via a forward simulation coupled with a prophecy variable used as an oracle about future behaviour. Second, we show correctness of the library FliT by proving that FliT refines PLib under the realistic PTSO memory model, i.e., the persistent version of the TSO memory model implemented by Intel architectures. The proof of refinement between FliT and PLib has been mechanised within the theorem prover KIV. Taken together, these proofs guarantee that FliT is also sound with respect to transformations to durability.
This paper explores the use of ownership as a key auxiliary variable within the Owicki-Gries framework. We explore this idea in the context of a ring buffer, developed by Amazon, which is a partial library that only provides methods for reserving (acquiring) and releasing addresses within the buffer. A client is required to implement additional functionality (including an additional shared queue) to enable full synchronisation between a writer and a reader. We verify correctness of the Amazon ring buffer (ARB) and show that the ARB satisfies both safety and progress. Our proofs are fully mechanised within the Isabelle/HOL proof assistant.
Swarm attestation protocols extend remote attestation by allowing a verifier to efficiently measure the integrity of software code running on a collection of heterogeneous devices across a network. Many swarm attestation protocols have been proposed for a variety of system configurations. However, these protocols are currently missing explicit specifications of the properties guaranteed by the protocol and formal proofs of correctness. In this paper, we address this gap in the context of list swarm attestation protocols, a category of swarm attestation protocols that allow a verifier to identify the set of healthy provers in a swarm. We describe the security requirements of swarm attestation protocols. We focus our work on the SIMPLE+ protocol, which we model and verify using the Tamarin prover. Our proofs enable us to identify two variations of SIMPLE+: (1) we remove one of the keys used by SIMPLE+ without compromising security, and (2) we develop a more robust design that increases the resilience of the swarm to device compromise. Using Tamarin, we demonstrate that both modifications preserve the desired security properties.
Non-volatile memory (NVM), also known as persistent memory, is an emerging paradigm for memory that preserves its contents even after power loss. NVM is widely expected to become ubiquitous, and hardware architectures are already providing support for NVM programming. This has stimulated interest in the design of novel concepts ensuring correctness of concurrent programming abstractions in the face of persistency and in the development of associated verification approaches. Software transactional memory (STM) is a key programming abstraction that supports concurrent access to shared state. In a fashion similar to linearizability as the correctness condition for concurrent data structures, there is an established notion of correctness for STMs known as opacity. We have recently proposed durable opacity as the natural extension of opacity to a setting with non-volatile memory. Together with this novel correctness condition, we designed a verification technique based on refinement. In this paper, we extend this work in two directions. First, we develop a durably opaque version of NOrec (no ownership records), an existing STM algorithm proven to be opaque. Second, we modularise our existing verification approach by separating the proof of durability of memory accesses from the proof of opacity. For NOrec, this allows us to re-use an existing opacity proof and complement it with a proof of the durability of accesses to shared state.
Read-Copy Update (RCU) is a key lock-free synchronisation mechanism that is used extensively in the Linux kernel. One use of RCU is safe memory reclamation in languages such as C/C++ that do not support garbage collection. Correctness of RCU is, however, difficult to verify, even when assuming sequentially consistent (SC) memory. In this paper, we develop and verify an RCU implementation under RC11 (a restricted version of C11 weak memory model, which includes relaxed and release-acquire accesses), increasing the verification challenge. Our proof technique is based on a notion of ownership, which we use to systematically track each thread’s read/write capabilities to each memory location. In our proof, we extend a recent Owicki-Gries logic for RC11, which we combine with our ownership model to show correctness. All our proofs have been mechanised in the Isabelle/HOL theorem prover.
This paper develops an operational semantics for a release-acquire fragment of the C11 memory model with relaxed accesses. We show that the semantics is both sound and complete with respect to the axiomatic model of Batty et al. The semantics relies on a per-thread notion of observability, which allows one to reason about a weak memory C11 program in program order. On top of this, we develop a proof calculus for invariant-based reasoning, which we use to verify the release-acquire version of Peterson’s mutual exclusion algorithm.
Deductive verification of concurrent programs under weak memory has thus far been limited to simple programs over a monolithic state space. For scalability, we also require modular techniques with verifiable library abstractions. This paper addresses this challenge in the context of RC11 RAR, a subset of the C11 memory model that admits relaxed and release-acquire accesses, but disallows so-called load-buffering cycles. We develop a simple framework for specifying abstract objects that precisely characterises the observability guarantees of abstract method calls. We show how this framework can be integrated with an operational semantics that enables verification of client programs that execute abstract method calls from a library they use. Finally, we show how implementations of such abstractions in RC11 RAR can be verified by developing a (contextual) refinement framework for abstract objects. Our framework, including the operational semantics, verification technique for client-library programs, and simulation between abstract libraries and their implementations, has been mechanised in Isabelle/HOL.
Deductive verification of concurrent programs under weak memory has thus far been limited to simple programs over a monolithic state space. For scalability, we also require modular techniques with verifiable library abstractions. We address this challenge in the context of RC11 RAR, a subset of the C11 memory model that admits relaxed and release-acquire accesses, but disallows so-called load-buffering cycles. We develop a simple framework for specifying abstract objects that precisely characterises the observability guarantees of abstract method calls. Our framework is integrated with an operational semantics that enables verification of client programs that execute abstract method calls from a library they use. We implement such abstractions in RC11 RAR by developing a (contextual) refinement framework for abstract objects. Our framework has been mechanised in Isabelle/HOL.
In the interleaving model of concurrency, where events are totally ordered, linearizability is compositional: the composition of two linearizable objects is guaranteed to be linearizable. However, linearizability is not compositional when events are only partially ordered, as in the weak-memory models that describe multicore memory systems. In this paper, we present a generalisation of linearizability for concurrent objects implemented in weak-memory models. We abstract from the details of specific memory models by defining our condition using Lamport's execution structures. We apply our condition to the C11 memory model, providing a correctness condition for C11 objects. We develop a proof method for verifying objects implemented in C11 and related models. Our method is an adaptation of simulation-based methods, but in contrast to other such methods, it does not require that the implementation totally orders its events. We apply our proof technique and show correctness of the Treiber stack that blocks on empty, annotated with C11 release-acquire synchronisation.
Secure speculation is an information flow security hyperproperty that prevents transient execution attacks such as Spectre, Meltdown and Foreshadow. Generic compiler mitigations for secure speculation are known to be insufficient for eliminating vulnerabilities. Moreover, these mitigation techniques often overprescribe speculative fences, causing the performance of the programs to suffer. Recently Cheang et al. have developed an operational semantics of program execution capable of characterising speculative executions as well as a new class of information flow hyperproperties named TPOD that ensure secure speculation. This paper presents a framework for verifying TPOD using the Isabelle/HOL proof assistant by encoding the operational semantics of Cheang et al. We provide translation tools for automatically generating the required Isabelle/HOL theory templates from a C-like program syntax, which speeds up verification. Our framework is capable of proving the existence of vulnerabilities and correctness of secure speculation. We exemplify our framework by proving the existence of secure speculation bugs in 15 victim functions for the MSVC compiler as well as correctness of some proposed fixes.
Deductive verification techniques for C11 programs have advanced significantly in recent years with the development of operational semantics and associated logics for increasingly large fragments of C11. However, these semantics and logics have been developed in a restricted setting to avoid the thin-air-read problem. In this paper, we propose an operational semantics that leverages an intra-thread partial order (called semantic dependencies) induced by a recently developed denotational event-structure-based semantics. We prove that our operational semantics is sound and complete with respect to the denotational semantics. We present an associated logic that generalises a recent Owicki-Gries framework for RC11 (repaired C11), and demonstrate the use of this logic over several example proofs.
Additional publications
Book Chapters
[1] J. Derrick, G. Smith, L. Groves, and B. Dongol. A proof method for linearizability on TSO architectures. In Provably Correct Systems, NASA Monographs in Systems and Software Engineering, pages 61--91. Springer, 2017.
Journal Publications
[2] B. Dongol, R. Jagadeesan, and J. Riely. Transactions in relaxed memory architectures. PACMPL, 2(POPL):18:1--18:29, 2018.
[3] B. Dongol and R. M. Hierons. Decidability and complexity for quiescent consistency and its variations. Information and Computation, 2017. In press.
[4] J. Derrick, S. Doherty, B. Dongol, G. Schellhorn, O. Travkin, and H. Wehrheim. Mechanized proofs of opacity: A comparison of two techniques. Formal Aspects of Computing, 2017. Accepted 21 July, 2017 (Mechanisations available here: https://swt.informatik.uni-augsburg.de/swt/projects/Opacity-TML.html).
[5] B. Dongol, V. B. F. Gomes, I. J. Hayes, and G. Struth. Partial semigroups and convolution algebras. Archive of Formal Proofs, 2017, 2017.
[6] B. Dongol, I. J. Hayes, and G. Struth. Convolution as a unifying concept: Applications in separation logic, interval calculi, and concurrency. ACM Trans. Comput. Log., 17(3):15:1--15:25, 2016.
[7] B. Dongol and J. Derrick. Verifying linearisability: A comparative survey. ACM Comput. Surv., 48(2):19, 2015.
[8] B. Dongol and J. Derrick. Interval-based data refinement: A uniform approach to true concurrency in discrete and real-time systems. Sci. Comput. Program., 111:214--247, 2015.
[9] B. Dongol, I. J. Hayes, and P. J. Robinson. Reasoning about goal-directed real-time teleo-reactive programs. Formal Asp. Comput., 26(3):563--589, 2014.
[10] B. Dongol, I. J. Hayes, and J. Derrick. Deriving real-time action systems with multiple time bands using algebraic reasoning. Sci. Comput. Program., 85:137--165, 2014.
[11] I. J. Hayes, A. Burns, B. Dongol, and C. B. Jones. Comparing degrees of non-determinism in expression evaluation. Comput. J., 56(6):741--755, 2013.
[12] B. Dongol and I. J. Hayes. Deriving real-time action systems in a sampling logic. Sci. Comput. Program., 78(11):2047--2063, 2013.
[13] R. Colvin and B. Dongol. A general technique for proving lock-freedom. Sci. Comput. Program., 74(3):143--165, 2009.
[14] B. Dongol and A. J. Mooij. Streamlining progress-based derivations of concurrent programs. Formal Asp. Comput., 20(2):141--160, 2008.
[15] B. Dongol and D. Goldson. Extending the theory of Owicki and Gries with a logic of progress. Logical Methods in Computer Science, 2(1), 2006.
Refereed Conference Publications
[16] S. Doherty, B. Dongol, H. Wehrheim, and J. Derrick. Brief announcement: Generalising concurrent correctness to weak memory. In DISC, 2018. To appear.
[17] S. Doherty, B. Dongol, H. Wehrheim, and J. Derrick. Making linearizability compositional for partially ordered executions. In iFM, Lecture Notes in Computer Science. Springer, 2018. To appear.
[18] B. Dongol, R. Jagadeesan, J. Riely, and A. Armstrong. On abstraction and compositionality for weak-memory linearisability. In VMCAI, volume 10747 of Lecture Notes in Computer Science, pages 183--204. Springer, 2018.
[19] A. Armstrong and B. Dongol. Modularising opacity verification for hybrid transactional memory. In FORTE, volume 10321 of Lecture Notes in Computer Science, pages 33--49. Springer, 2017.
[20] A. Armstrong, B. Dongol, and S. Doherty. Proving opacity via linearizability: A sound and complete method. In FORTE, volume 10321 of Lecture Notes in Computer Science, pages 50--66. Springer, 2017. (Previous version: https://arxiv.org/abs/1610.01004).
[21] S. Doherty, B. Dongol, J. Derrick, G. Schellhorn, and H. Wehrheim. Proving opacity of a pessimistic STM. In OPODIS, volume 70 of LIPIcs, pages 35:1--35:17. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2016.
[22] B. Dongol. An interval logic for stream-processing functions: A convolution-based construction. In FTSCS, volume 694 of Communications in Computer and Information Science, pages 20--35, 2016.
[23] B. Dongol and L. Groves. Contextual trace refinement for concurrent objects: Safety and progress. In ICFEM, volume 10009 of Lecture Notes in Computer Science, pages 261--278, 2016.
[24] B. Dongol and R. M. Hierons. Decidability and complexity for quiescent consistency. In LICS, pages 116--125. ACM, 2016.
[25] B. Dongol and L. Groves. Towards linking correctness conditions for concurrent objects and contextual trace refinement. In Refine@FM, volume 209 of EPTCS, pages 107--111, 2015.
[26] J. Derrick, B. Dongol, G. Schellhorn, O. Travkin, and H. Wehrheim. Verifying opacity of a transactional mutex lock. In N. Bjørner and F. D. de Boer, editors, FM, volume 9109 of LNCS, pages 161--177. Springer, 2015.
[27] B. Dongol, V. B. F. Gomes, and G. Struth. A program construction and verification tool for separation logic. In R. Hinze and J. Voigtländer, editors, MPC, volume 9129 of LNCS, pages 137--158. Springer, 2015.
[28] B. Dongol, J. Derrick, L. Groves, and G. Smith. Defining correctness conditions for concurrent objects in multicore architectures. In J. T. Boyland, editor, ECOOP, volume 37 of LIPIcs, pages 470--494. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2015.
[29] J. Derrick, G. Smith, L. Groves, and B. Dongol. Using coarse-grained abstractions to verify linearizability on TSO architectures. In E. Yahav, editor, HVC, volume 8855 of LNCS, pages 1--16. Springer, 2014.
[30] G. Smith, J. Derrick, and B. Dongol. Admit your weakness: Verifying correctness on TSO architectures. In I. Lanese and E. Madelaine, editors, FACS, volume 8997 of LNCS, pages 364--383. Springer, 2014.
[31] J. Derrick, G. Smith, and B. Dongol. Verifying linearizability on TSO architectures. In E. Albert and E. Sekerinski, editors, iFM, volume 8739 of LNCS, pages 341--356. Springer, 2014.
[32] B. Dongol, J. Derrick, and G. Smith. Reasoning algebraically about refinement on TSO architectures. In G. Ciobanu and D. Méry, editors, ICTAC, volume 8687 of LNCS, pages 151--168. Springer, 2014.
[33] J. Derrick, B. Dongol, G. Schellhorn, B. Tofan, O. Travkin, and H. Wehrheim. Quiescent consistency: Defining and verifying relaxed linearizability. In C. B. Jones, P. Pihlajasaari, and J. Sun, editors, FM, volume 8442 of LNCS, pages 200--214. Springer, 2014.
[34] B. Dongol and J. Derrick. Simplifying proofs of linearisability using layers of abstraction. ECEASST, 66, 2013.
[35] B. Dongol, O. Travkin, J. Derrick, and H. Wehrheim. A high-level semantics for program execution under total store order memory. In Z. Liu, J. Woodcock, and H. Zhu, editors, ICTAC, volume 8049 of LNCS, pages 177--194. Springer, 2013.
[36] B. Dongol and J. Derrick. Data refinement for true concurrency. In J. Derrick, E. A. Boiten, and S. Reeves, editors, Refine, volume 115 of EPTCS, pages 15--35, 2013.
[37] B. Dongol, J. Derrick, and I. J. Hayes. Fractional permissions and non-deterministic evaluators in interval temporal logic. ECEASST, 53, 2012.
[38] B. Dongol, I. J. Hayes, L. Meinicke, and K. Solin. Towards an algebra for real-time programs. In W. Kahl and T. G. Griffin, editors, RAMICS, volume 7560 of LNCS, pages 50--65. Springer, 2012.
[39] B. Dongol and I. J. Hayes. Deriving real-time action systems controllers from multiscale system specifications. In J. Gibbons and P. Nogueira, editors, MPC, volume 7342 of LNCS, pages 102--131. Springer, 2012.
[40] B. Dongol and I. J. Hayes. Rely/guarantee reasoning for teleo-reactive programs over multiple time bands. In J. Derrick, S. Gnesi, D. Latella, and H. Treharne, editors, IFM, volume 7321 of LNCS, pages 39--53. Springer, 2012.
[41] B. Dongol and I. J. Hayes. Approximating idealised real-time specifications using time bands. ECEASST, 46:1--16, 2012. 11th International Workshop on Automated Verification of Critical Systems.
[42] B. Dongol and I. J. Hayes. Compositional action system derivation using enforced properties. In C. Bolduc, J. Desharnais, and B. Ktari, editors, MPC, volume 6120 of LNCS, pages 119--139. Springer, 2010.
[43] B. Dongol and I. J. Hayes. Enforcing safety and progress properties: An approach to concurrent program derivation. In ASWEC, pages 3--12. IEEE Computer Society, 2009.
[44] R. Colvin and B. Dongol. Verifying lock-freedom using well-founded orders. In C. B. Jones, Z. Liu, and J. Woodcock, editors, ICTAC, volume 4711 of LNCS, pages 124--138. Springer, 2007.
[45] B. Dongol. Formalising progress properties of non-blocking programs. In Z. Liu and J. He, editors, ICFEM, volume 4260 of LNCS, pages 284--303. Springer, 2006.
[46] B. Dongol. Towards simpler proofs of lock-freedom. In Preliminary Proceedings of the 1st Asian Working Conference on Verified Software, pages 136--146, 2006.
[47] B. Dongol. Derivation of Java monitors. In ASWEC, pages 211--220. IEEE Computer Society, 2006.
[48] B. Dongol and A. J. Mooij. Progress in deriving concurrent programs: Emphasizing the role of stable guards. In T. Uustalu, editor, MPC, volume 4014 of LNCS, pages 140--161. Springer, 2006.
[49] D. Goldson and B. Dongol. Concurrent program design in the extended theory of Owicki and Gries. In M. D. Atkinson and F. K. H. A. Dehne, editors, CATS, volume 41 of CRPIT, pages 41--50. Australian Computer Society, 2005.
Other
PhD Thesis
[50] B. Dongol. Progress-based verification and derivation of concurrent programs. PhD thesis, School of Information Technology and Electrical Engineering, The University of Queensland, Qld, Australia, October 2009.
Technical reports (not elsewhere published)
[51] B. Dongol, I. J. Hayes, and G. Struth. Relational Convolution, Generalised Modalities and Incidence Algebras. ArXiv e-prints, February 2017.
[52] B. Dongol and J. Derrick. Proving linearisability via coarse-grained abstraction. CoRR, abs/1212.5116, 2012.
[53] B. Dongol and I. J. Hayes. Reasoning about teleo-reactive programs under parallel composition. Technical Report SSE-2011-01, Division of Systems and Software Engineering Research, School of Information Technology and Electrical Engineering, The University of Queensland, QLD 4072, Australia, April 2011.
[54] B. Dongol and I. J. Hayes. Trace semantics for the Owicki-Gries theory integrated with the progress logic from UNITY. Technical Report SSE-2007-02, Division of Systems and Software Engineering Research, School of Information Technology and Electrical Engineering, The University of Queensland, QLD 4072, Australia, April 2007.