Dr Solofomampionona Fortunat Rajaona
Academic and research departments
Computer Science Research Centre, Surrey Centre for Cyber Security
About
Biography
I joined the Department of Computer Science as a Postdoctoral Researcher in February 2020, collaborating with Prof. Ioana Boureanu on the EPSRC-funded AUTOPASS project (2020–2023) and the NCSC-funded ESKMARALD project (2023–2024). Since 2025, I have been working as a Research Assistant with Prof. Helen Treharne, Head of the School of Electrical Engineering and Computer Science.
I completed my undergraduate studies in Mathematics at the University of Antananarivo, Madagascar. I was then selected to pursue a Postgraduate Diploma in Mathematical Sciences at the African Institute for Mathematical Sciences, AIMS South Africa. Continuing my academic journey, I earned a Master's degree in Computer Science at AIMS, working under the supervision of Prof. Jeff Sanders. I later extended my master's research into a Ph.D. in Mathematics, which I completed at the University of Stellenbosch in 2016. My doctoral work focused on reasoning about knowledge and privacy using program algebra. In 2016, I joined RWTH Aachen University as a visiting researcher, collaborating with the team led by Prof. Joost-Pieter Katoen. Between 2018 and 2019, prior to joining the University of Surrey, I served as a lecturer in Mathematics and Programming at the École Normale Supérieure (ENS), University of Antananarivo.
Research
Research interests
- Design and Analysis of Secure Systems
- Logics and Formal Analysis of Programs
- Reasoning about Agents' Knowledge in AI Systems
Research Projects
AUTOPASS Project (led by Prof. Ioana Boureanu, EPSRC-funded, 2020–2022)
I developed a framework for reasoning about and analysing privacy in cryptographic protocols. This included:
- Designing a protocol model and a non-classical logic language for specifying privacy properties.
- Building Phoebe, a model-checking tool that implements our theoretical framework.
- Conducting analyses of benchmark privacy-preserving protocols.
In 2024, we received an Amazon Research Award to further enhance Phoebe. I co-authored several conference papers, notably the Phoebe paper presented at CSF 2024, and our epistemic logic and AI reasoning work, which earned the Best Paper Award at FM 2023.
ESKMARALD Project (led by Prof. Ioana Boureanu, NCSC-funded, 2023–2024)
I investigated the security and privacy of the 3GPP AKMA protocol standard. Our work involved:
- Formalising security and privacy requirements for AKMA.
- Using formal verification tools such as Tamarin and Squirrel Prover to uncover privacy flaws.
- Proposing a privacy-enhanced version of AKMA.
This research resulted in two publications: Privacy analysis of AKMA at IEEE Euro S&P 2025 and Post-compromise security of AKMA at ACM ASIACCS 2025.
Future of Digital Power of Attorney (led by Prof. Helen Treharne, 2025–present)
Our current project explores the digitalisation of power-of-attorney schemes. We study:
- Risks of fraud introduced by digitalisation.
- Opportunities to mitigate existing frauds and abuses in traditional systems.
- Usability challenges for vulnerable individuals, such as elderly people, who require third-party digital assistance.
Publications
We give a general-purpose programming language in which programs can reason about their own knowledge. To specify what these intelligent programs know, we define a “program epistemic” logic, akin to a dynamic epistemic logic for programs. Our logic properties are complex, including programs introspecting into future state of affairs, i.e., reasoning now about facts that hold only after they and other threads will execute. To model aspects anchored in privacy, our logic is interpreted over partial observability of variables, thus capturing that each thread can “see” only a part of the global space of variables. We verify program-epistemic properties on such AI-centred programs. To this end, we give a sound translation of the validity of our program-epistemic logic into first-order validity, using a new weakest-precondition semantics and a book-keeping of variable assignment. We implement our translation and fully automate our verification method for well-established examples using SMT solvers.
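The core idea above — an agent with partial observability "knows" a postcondition iff it holds after the program runs from every initial state consistent with what the agent sees — can be illustrated with a toy sketch. This is a hypothetical, brute-force illustration over boolean variables, not the paper's actual weakest-precondition translation or SMT-based tool; all names (`knows_after`, `prog`) are invented for this example.

```python
from itertools import product

def states(variables):
    """All boolean valuations over the given variable names."""
    return [dict(zip(variables, bits))
            for bits in product([False, True], repeat=len(variables))]

def knows_after(program, post, observed_vars, observation, variables):
    """True iff an agent seeing only `observed_vars` knows `post` holds after
    `program` runs, i.e. `post` holds from EVERY initial state the agent
    cannot distinguish from its observation."""
    for s in states(variables):
        if all(s[v] == observation[v] for v in observed_vars):
            if not post(program(dict(s))):
                return False
    return True

# Toy program: a thread sets y := x AND z; the agent observes x but not z.
prog = lambda s: {**s, "y": s["x"] and s["z"]}

# Seeing x=False, the agent knows y will be False, whatever z is.
print(knows_after(prog, lambda s: s["y"] is False,
                  ["x"], {"x": False}, ["x", "y", "z"]))  # True
# Seeing x=True, it cannot know y will be True: z might be False.
print(knows_after(prog, lambda s: s["y"] is True,
                  ["x"], {"x": True}, ["x", "y", "z"]))   # False
```

The enumeration here is exponential in the number of variables; the paper's contribution is precisely to avoid it, by translating such knowledge statements into first-order validity queries discharged by SMT solvers.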
We define an epistemic logic or logic of knowledge, PL, and a formalism to undertake privacy-centric reasoning in security protocols, over a Dolev-Yao model. We are able to automatically verify all the privacy requirements that are commonplace in security-protocol verification (i.e., strong secrecy, anonymity, various types of unlinkability including weak unlinkability), as well as privacy notions that are less studied (i.e., privacy regarding lists' membership). Our methodology does not vary with the property: it is uniform no matter the kind of privacy requirement specified and/or verified. We operate in the setting of a bounded number of protocol-sessions. We also implement Phoebe, a proof-of-concept model checker for this methodology. We use Phoebe to check all the aforementioned properties, and we also showcase it on the "benchmark" anonymity and unlinkability requirements of several well-known protocols.
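The privacy notions checked above rest on indistinguishability: a secret stays private against a passive observer iff more than one candidate secret is consistent with the observed transcript. The following is a deliberately tiny, hypothetical sketch of that idea (a one-message XOR "protocol"), not Phoebe or its input language; all names are invented for illustration.

```python
# Toy protocol: the sender transmits secret XOR nonce.
def transcript(secret, nonce):
    return secret ^ nonce

def is_private(observed, candidates, nonces):
    """Privacy as indistinguishability: the observer cannot narrow the
    candidate secrets down to one, i.e. at least two candidates are
    consistent with the observed transcript for SOME nonce."""
    consistent = {s for s in candidates
                  if any(transcript(s, n) == observed for n in nonces)}
    return len(consistent) > 1

# XOR with an unknown fresh nonce hides the bit: both 0 and 1 fit.
print(is_private(observed=1, candidates={0, 1}, nonces={0, 1}))  # True
# With a fixed, known "nonce" (nonces={0}), the secret leaks.
print(is_private(observed=1, candidates={0, 1}, nonces={0}))     # False
```

A real checker quantifies over a Dolev-Yao attacker's deductions rather than raw value equality, but the uniform shape — one indistinguishability check parameterised by the property — is what makes the methodology property-agnostic.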
We propose a new approach to the verification of epistemic properties of programs. First, we introduce the new "program-epistemic" logic L_PK, which is strictly richer and more general than similar formalisms appearing in the literature. To solve the verification problem in an efficient way, we introduce a translation from our language L_PK into first-order logic. Then, we show and prove correct a reduction from the model checking problem for program-epistemic formulas to the satisfiability of their first-order translation. Both our logic and our translation can handle richer specifications than the state of the art, allowing us to express the knowledge of agents about facts pertaining to programs (i.e., agents' knowledge before and after a program is executed). Furthermore, we implement our translation in Haskell in a general way (i.e., independently of the programs in the logical statements), and we use existing SMT-solvers to check satisfaction of L_PK formulas on a benchmark example in the AI/agency field.
We introduce a new framework, TrackDev, for encoding and analysing the tracing or "tracking" of an entity (e.g., a device) via its executions of a protocol or its usages of a system. TrackDev considers multiple dimensions combined: whether the attacker is active or passive, whether an entity is trackable in each of its individual appearances or only across a compound set thereof, and whether the entity can be explicitly or implicitly identified. TrackDev can be applied to most identification-based systems. TrackDev is designed to be applied in practice, over actual executions of systems; to this end, we test TrackDev on real-life traffic for two well-known protocols, the LoRaWAN Join and the 5G handovers, showing new trackability attacks therein and proposing countermeasures. We study the strength of TrackDev's various trackability properties and show that many of our notions are incomparable amongst each other, thus justifying the fine-grained nature of TrackDev. Finally, we detail how the main thrust of TrackDev can be mechanised in formal-verification tools, without any loss; we exemplify this fully on the LoRaWAN Join, in the Tamarin prover. In this process, we also uncover and discuss two important aspects: (a) TrackDev's separation between "explicit" and "implicit" trackability offers new formal-verification insights; (b) our analyses of the LoRaWAN Join protocol in Tamarin against TrackDev, as well as against existing approximations of unlinkability by Baelde et al., concretely show that the latter approximations can be coarser than our notions.
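One of the simplest trackability notions described above, for a passive attacker, can be sketched as follows: two sessions are linkable if some observable field repeats across them. This is a hypothetical toy, not TrackDev's formal definitions; the field names mimic a LoRaWAN-style Join (where the DevEUI is a stable device identifier) but are invented for this example.

```python
def linkable(session_a, session_b, observable_fields):
    """A passive eavesdropper links two sessions iff some observable
    field carries the same value in both, tracking the device across
    protocol executions."""
    return any(session_a[f] == session_b[f] for f in observable_fields)

# A device that reuses a stable identifier across Joins is trackable...
join1 = {"dev_id": "70B3D5", "nonce": "A1"}
join2 = {"dev_id": "70B3D5", "nonce": "9F"}
print(linkable(join1, join2, ["dev_id", "nonce"]))  # True

# ...while fresh per-session pseudonyms and nonces break the link.
j1 = {"dev_id": "X1", "nonce": "A1"}
j2 = {"dev_id": "X2", "nonce": "9F"}
print(linkable(j1, j2, ["dev_id", "nonce"]))  # False
```

This captures only "explicit" tracking via a repeated value; the paper's "implicit" trackability and active-attacker dimensions require reasoning about what an attacker can infer or provoke, which is where mechanisation in a tool such as Tamarin comes in.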