Case study: COMMANDO-HUMANS project
The security of computer systems often depends on the way humans interact with them, but testing systems with humans in the loop brings a number of problems. In addition to time and cost implications, issues include limited or biased samples, lack of ecological validity due to people behaving differently in tests, and the impossibility of running some studies due to ethical, privacy or legal concerns.
Creating software capable of modelling and simulating how humans interact with security systems would therefore help security designers and implementers save time and money – and offer more reproducible, and possibly more accurate, results than human-based testing.
The ‘COMMANDO-HUMANS’ project, which is jointly funded by EPSRC and Singapore’s National Research Foundation (NRF) as a result of the 2015 Joint Singapore-UK Research in Cyber Security call, aims to produce direct evidence that insecurity caused by human behaviours can be detected automatically by applying human cognitive models. Launched in April 2016, the project involves researchers from four different countries (UK, Singapore, Australia and Croatia).
The two-year project aims to develop the first software framework that can automatically detect both security and usability problems without involving real human users. The framework will be developed around human user authentication systems as a focused use case. Once developed, it could be used in sectors such as banking to test both the usability and security of user interfaces, and to compare the performance of different solutions.
The research focuses mainly on ‘micro’ human behaviours at the user interface level, but will also look at ‘macro’ behaviours related to higher-level cognitive processes such as human perception, decision making, human errors and adaptive learning. This will lay the foundation for follow-up research and help increase our knowledge of other human-related security issues such as social engineering and insider threats.
“By leveraging knowledge from cognitive psychology we aim to create models and software tools that can help us automatically analyse the human behaviour-related security of human user authentication systems. It is our belief that the models and tools we will develop in the project will attract researchers from a wide range of fields including cyber security, human computer interface and cognitive science. We plan to make our tools open-source so that the industry and society as a whole can benefit from our research.”
Dr Shujun Li, leading the research at Surrey