Events
We run a variety of events, from regular research seminars and schools activities to international conferences. You can see what events we have coming up by looking through the event calendar.
14th European Symposium on Language for Special Purposes
Monday 18 August 2003
The 14th European Symposium on Language for Special Purposes, 'Communication, Culture, Knowledge', 18th-22nd August 2003, University of Surrey.
4th International Conference of B and Z Users
Wednesday 13 April 2005
4th International Conference of B and Z Users, University of Surrey, Guildford, UK, 13-15 April 2005. Organised by APCB and the Z User Group.
Workshop on Biologically Inspired Information Fusion
Tuesday 22 August 2006
In conjunction with the Department of Psychology and the University of Manchester, in August 2006 we hosted the International Workshop on Biologically Inspired Information Fusion. The workshop was sponsored by the University of Surrey's Institute of Advanced Studies and the EPSRC under grant number EP/E012795/1.
Models of Concurrency and Open Computing
Friday 24 November 2006
Models of Concurrency and Open Computing: A one day seminar to commemorate the retirement of Mike Shields.
An Introductory Overview of Cryptography
Wednesday 24 January 2007
Professor Fred Piper, Royal Holloway, University of London
Gate-level modelling and verification of asynchronous circuits using CSP_M and FDR
Thursday 1 February 2007
Professor Mark Josephs, London South Bank University
The Semantic Gap in Image Retrieval
Wednesday 7 February 2007
Professor Paul Lewis, University of Southampton
Vision Based Measurement in the Power Generation and Food Handling Industries
Wednesday 21 February 2007
Professor Yong Yan, Department of Electronics, University of Kent
Management of software-intensive development projects
Wednesday 7 March 2007
Dr Hugh Deighton, LogicaCMG
Opening up the Black Box: The Semiconductor Industry’s Protected IP Initiative
Wednesday 21 March 2007
Doug Amos, Synplicity Inc.
Careers for students with a doctorate in computing
Wednesday 28 March 2007
Dr Russ Clark, University of Surrey
The Evolution of DRM - From Prevention Towards Deterrence
Wednesday 2 May 2007
Dr Stefan Katzenbeisser, Philips Research Eindhoven
How to Build an Effective Team - Evolving Neural Network Ensembles
Wednesday 9 May 2007
Professor Xin Yao, University of Birmingham
Digital Image Forensics
Wednesday 1 August 2007
Professor Yun Q Shi, New Jersey Institute of Technology
Fragile and Semi-fragile Reversible Data Hiding
Tuesday 7 August 2007
Professor Yun Q Shi, New Jersey Institute of Technology
Steganography and Steganalysis
Wednesday 8 August 2007
Professor Yun Q Shi, New Jersey Institute of Technology
Digital Watermarking
Wednesday 24 October 2007
Associate Professor Chang-Tsun Li, University of Warwick
Multimodal Interaction for Mobile Devices
Wednesday 31 October 2007
Professor Stephen Brewster, University of Glasgow
Augmented Reality for User-centred Urban Navigation Using Mobile Technology
Wednesday 14 November 2007
Dr Vesna Brujic-Okretic, City University
Temporal Verification of Parameterized Systems
Wednesday 12 December 2007
Professor Michael Fisher, University of Liverpool
Network resilience in the presence of adversarial behaviour: new systems and models
Wednesday 23 January 2008
Professor Erol Gelenbe, Imperial College London
A Semiotic Perspective on Pragmatic Web
Wednesday 6 February 2008
Professor Kecheng Liu, University of Reading
Semantic Integrated Services with Wireless Sensors
Wednesday 20 February 2008
Professor Chunming Rong, University of Stavanger & University of Oslo, Norway
Multiple Classifier Systems in Biometrics
Wednesday 5 March 2008
Professor Josef Kittler, University of Surrey
Composing Cryptography and Watermarking for Secure Embedding and Detection of Watermarks - A Marriage of Convenience
Wednesday 30 April 2008
Professor Ahmad-Reza Sadeghi, Horst Görtz Institute for IT Security at Ruhr-University Bochum, Germany
Intrinsic Quantum Computation
Wednesday 8 October 2008
Dr Karoline Wiesner, School of Mathematics and Centre for Complexity Sciences, University of Bristol
Machine learning in astronomy: time delay estimation in gravitational lensing
Wednesday 15 October 2008
Dr Peter Tino, Computer Science Department, University of Birmingham
Balancing security and usability in a video CAPTCHA
Wednesday 19 November 2008
Dr Richard Zanibbi, Rochester Institute of Technology, NY, USA
Neuromorphic Systems: Past, Present and Future
Wednesday 3 December 2008
Professor Leslie Smith, University of Stirling
Secure Channels and Layering of Protocols
Monday 9 February 2009
Prof Gavin Lowe, Computing Laboratory, University of Oxford
Mobile and Metadata Systems for Self-Made Media
Wednesday 18 February 2009
Risto Sarvas, Visiting Research Fellow, Digital World Research Centre, University of Surrey and Research Scientist, Helsinki Institute for Information Technology HIIT, Finland
The Integration of Action and Language in Cognitive Robots
Tuesday 31 March 2009
Prof Angelo Cangelosi, Adaptive Behaviour and Cognition Research Group, School of Computing and Mathematics
Formal Verification of an Occam-to-FPGA Compiler and its Generated Logic Circuits
Tuesday 21 July 2009
As custom logic circuits (e.g. field-programmable gate arrays, or FPGAs) have become larger, the limitations of conventional design flows have become more apparent. For large designs, verification by simulation is now impractical.
Making a Pitch
Wednesday 22 July 2009
Most technical people end up having to make pitches: explaining their work in order to secure the next round of funding, or to get technical work turned into real business. Sometimes you know you are going to have to pitch in a formal setting, but it can also be a spontaneous opportunity from a chance meeting.
Structure your Writing to Help Readers
Wednesday 19 August 2009
I shall briefly review the usefulness and shortcomings of readability formulas. I will then describe CLEAR, an online aid to plain writing which I am developing: it will colour-code submitted text to show the difficulty of every word and every sentence. But simplifying sentences and cutting out jargon is not enough. So the bulk of the talk will consist of advice on structuring documents to help readers find what they need and understand what they find.
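As a rough illustration of the kind of colour-coding CLEAR performs (this sketch is not the CLEAR tool itself; the vowel-group syllable heuristic and the difficulty thresholds are illustrative assumptions), the snippet below tags each word of a text as easy or hard and reports sentence lengths:

```python
import re

def word_difficulty(word: str) -> str:
    """Crude per-word difficulty: long or many-syllable words are flagged as hard."""
    syllables = max(1, len(re.findall(r"[aeiouy]+", word.lower())))
    return "hard" if syllables >= 3 or len(word) > 9 else "easy"

def annotate(text: str) -> None:
    """Print each sentence with a difficulty tag per word and its word count."""
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = re.findall(r"[A-Za-z']+", sentence)
        tags = [f"{w}[{word_difficulty(w)}]" for w in words]
        print(f"({len(words)} words) " + " ".join(tags))

annotate("Readability formulas are useful. Nevertheless, comprehensibility depends on structure.")
```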
International Workshop on Digital Watermarking
Monday 24 August 2009
The 8th International Workshop on Digital Watermarking (IWDW09) is a premier forum for researchers and practitioners working on novel research, development, and applications of digital watermarking, steganography, steganalysis, and forensics techniques for multimedia data. Recent developments apply techniques from advanced coding theory and formal verification in order to improve our understanding of robust watermarking systems and related security protocols.
Computer Pioneers: Ada Lovelace and Alan Turing
Saturday 5 September 2009
Ada Lovelace first suggested how a machine could be programmed to perform different functions. Alan Turing laid the mathematical basis for modern computer theory with his 'Turing Machine'. This display tells their story showing their contributions to the development of modern computers.
Code Wars: Competitive Programming
Wednesday 9 September 2009
How good are you at Connect 4? Could you explain your strategy to someone else? Could you code it up as a computer program? The Arena is a web-based system for hosting strategy game tournaments, but where the matches are played by the computer programs people submit to the system. Join us and learn how to create and submit your own program, and take on the rest of the world at Connect 4!
Web Application Development
Wednesday 9 September 2009
This 3-day course provides a comprehensive, practical introduction to the fundamental web-related programming languages and development environments; the skills and techniques for building general-purpose, state-of-the-art web systems including e-commerce sites, database-driven catalogues and online libraries; and an understanding of Web 2.0 and mashup concepts and requirements.
The Computer Ate My Vote
Thursday 10 September 2009
How will you know if your vote is counted in the next election? With technological advances, and more and more elections being run on computers, trustworthiness and transparency are critical to public confidence. Join us to find out about the latest advances in electronic election systems, and how they can enable you, the voter, to check the integrity of an election.
The Future Starts Here - From Splitting the Atom to Rollable TVs
Thursday 10 September 2009
An insightful look into the exciting, cutting-edge research happening right on your doorstep. Research students from the University of Surrey present a series of short talks showcasing current activities in fields ranging from nuclear physics to the latest developments in nanotechnology and communication devices. Discover how the science and technology of today will impact tomorrow's society.
A Formal Approach to the Analysis of Protocols Protecting IPR
Wednesday 16 September 2009
The primary benefit of digital content, the ease with which it can be duplicated and disseminated, is also the primary concern when endeavouring to protect the rights of those creating the content. Copyright owners wish to deter illicit file sharing of copyrighted material, detect it when it occurs and even trace the original perpetrator.
Enterprise Web Application Development
Wednesday 23 September 2009
The computer industry - and specifically enterprises - requires distributed and interoperable information systems in order to function and remain competitive. Distributed systems evolve continuously due to the cheaper availability of hardware and high-speed communications. This evolution has redrawn the lines of systems architecture design and development, and allows big corporations to build robust, modular and reusable components.
British Computer Society Lecture: The Future of the BCS
Thursday 24 September 2009
Alan will talk about the vision for the future of the BCS, as it undergoes a major transformation. He will outline the challenges and opportunities that the BCS faces, and the strategies that the Society is adopting to increase the engagement and relevance of the BCS within industry and within the wider community in the 21st Century.
Robust and Semi-fragile Watermarking Techniques for Image Content Protection
Tuesday 29 September 2009
The concept of robust and semi-fragile watermarking is described for copyright protection and authentication of digital images. A number of different transforms and algorithms used for robust and semi-fragile image watermarking are reviewed in detail. Four novel robust and semi-fragile transform-based image watermarking schemes are introduced. These include the wavelet-based contourlet transform (WBCT) for both robust and semi-fragile watermarking, the slant transform (SLT) for semi-fragile watermarking, and the application of generalised Benford’s Law to estimate the JPEG quality factor (QF) and then adjust the threshold to improve the semi-fragile watermarking technique.
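As a rough illustration of the Benford’s-Law idea mentioned above (this is not the authors' scheme; the block size, the coefficient threshold, the chi-square-style divergence and the random stand-in image are illustrative assumptions, and the generalised law used in the talk has additional parameters not reproduced here), the sketch below compares the first-digit distribution of block-DCT coefficients against the classical first-digit law:

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis, so a block's 2D DCT is M @ block @ M.T."""
    k = np.arange(n)
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0, :] /= np.sqrt(2.0)
    return M

def first_digit_distribution(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Empirical distribution of leading digits (1-9) of block-DCT coefficients."""
    M = dct_matrix(block)
    h, w = image.shape
    counts = np.zeros(10)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = M @ image[y:y + block, x:x + block].astype(float) @ M.T
            for m in np.abs(coeffs).ravel():
                if m >= 1.0:                          # skip near-zero coefficients
                    counts[int(str(int(m))[0])] += 1
    return counts[1:] / counts[1:].sum()

benford = np.log10(1.0 + 1.0 / np.arange(1, 10))      # classical first-digit law
image = np.random.randint(0, 256, (256, 256))         # stand-in for a decompressed JPEG
observed = first_digit_distribution(image)
print("divergence from Benford:", round(float(np.sum((observed - benford) ** 2 / benford)), 4))
```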
Adaptable Models and Semantic Filtering for Object Recognition in Street Images
Thursday 1 October 2009
The need for a generic and adaptable object detection and recognition method in static images is becoming a necessity today, given the rapid development of the internet and multimedia databases in general. Compared with human vision, computer vision is outperformed in terms of efficiency, accuracy and depth of understanding, since human recognition is achieved at a contextual level. In order to achieve recognition at a semantic level, computer vision systems must not only be capable of recognising objects, regardless of changes in appearance, location and action, but also be able to interpret abstract non-observable concepts.
IET Surrey Local Network Event
Wednesday 7 October 2009
As the number of electronic files containing commercially sensitive or confidential information increases every day, the importance of identifying the originality of files is becoming a hot topic for enterprises as well as computing professionals. But how can this be done in a world where ever more sophisticated image manipulation tools are widely available?
End-to-End Verifiable Voting With Prêt a Voter
Monday 19 October 2009
Democracy depends on elections --- the people elect those to lead them and to make decisions for them. Any election is the difficult marriage of secrecy and verifiability: we want all the votes to be secret, so that no voter feels intimidated but free to vote according to her own heart, and we want the election to be verifiable, so that we can all rest assured that the outcome of the election does reflect the will of the people. Elections depend on people, procedures, software and hardware --- people stand for office, vote and count the votes, and if, in the heat of the moment, they get a chance, many of them would cheat to get ahead. To make cheating hard we have put in place procedures that have to be followed: the ballot box is shown to be empty at the start of election day and then it is sealed; ballots are cast into it one by one; at the close of the election the box is signed; it is safely transported to a counting place; and only after checking signatures and lists is it opened and, finally, the votes are counted under close watch from election observers.
British Computer Society Lecture: Web Standards - Tomorrow's Web Today
Thursday 22 October 2009
Web evangelist Henny Swan will give an overview of trends in Web standards, with practical demonstrations showcasing new technologies and what they can do.
- Opera Software: innovative products and commitment to open Web standards
- Widgets and developing for the mobile Web
- Emerging standards: HTML5, CSS3, SVG/canvas and video
- Relevance for the Web, today and tomorrow
Linguistic Steganography using Automatically Acquired Paraphrases
Wednesday 25 November 2009
Linguistic Steganography aims to provide techniques for hiding information in natural language texts, through manipulating properties of the text, for example by replacing some words with their synonyms. Unlike image-based steganography, linguistic steganography is in its infancy with little existing work.
Mark My Words: Binary Watermarking Robust to Printing and Scanning
Thursday 26 November 2009
Binary Watermarking, robust to printing and scanning, is the process of imperceptibly hiding information in binary documents, typically text documents, so that the hidden information can still be recovered following the printing and scanning of a document. It presents a challenging problem, both in finding an imperceptible way to hide data within a sparse text document, and providing an embedding strategy that can handle the myriad of distortions introduced during printing and scanning. Our goal was to develop a scheme that had sufficient capacity to embed our proposed authenticating and localising watermark. Existing schemes did not provide sufficient capacity, requiring us to develop techniques to increase the embedding capacity whilst maintaining the robustness to printing and scanning.
Using High Assurance Components to Improve the Directed Use of Human Expertise
Wednesday 2 December 2009
Information Assurance solutions are usually made up of a variety of techniques which together provide a level of assurance to the information being used. It is currently beyond the state of the art to automate the identification, understanding and response to developing threat vectors, and the operation of Information Assurance solutions over the long term demands very heavy levels of human expertise, with all of the associated costs. Part of the issue here is that, whereas we as the information assurance community (developers and assessors) do have a rigorous framework for the assessment of quality in certain types of high assurance components, these techniques do not exist for large parts of the infrastructure on which assured components are reliant.
VitalPAC - ward-based patient data
Thursday 28 January 2010
A British Computer Society Lecture given by Dr Paul Schmidt, Portsmouth Hospitals NHS Trust
Information Security Research for the Move from "Need to Know" to "Need to Share"
Friday 19 February 2010
Information Security Research for the Move from "Need to Know" to "Need to Share" is part of a series of presentations during the semester. Dr Adrian Waller is a Technical Consultant for Information Security at Thales Research and Technology (UK).
Spatial Representation in Robotic Navigation by a Combination of Grid Cells and Place Cells
Tuesday 23 February 2010
Animals are capable of navigating through an environment. This requires them to recognise, remember and relate positions. Spatial representation is one of the main tasks during navigation. The brain seems to have a world-centric positioning system, such that we remember positions of objects in relation to a reference frame. Such spatial representation is believed to be constructed in the hippocampus and related brain areas. Place cells, head direction cells, grid cells and border cells seem to transform vestibular information into spatial information in the brain. However, although psychological studies reveal how these areas are connected together, the process of transforming vestibular information into the kind of representation seen in place cells is still in question.
Student Pot Pourri
Thursday 25 February 2010
Selected undergraduate students from the Department of Computing at the University of Surrey will present the background and design stages of their final year project work. The event will provide the opportunity for the students to obtain feedback from a diverse audience of professionals, and will also allow the audience to discover the challenges facing current generations of near-graduate level students.
This year, we will hear about the Microsoft XNA computer game framework, an RSS tool that runs on the Google Android operating system, and a novel audio application.
Real World Application Security
Friday 26 February 2010
This presentation, on security issues, is given by Chris Seary, Charteris plc. Chris will be sharing his background both as a developer and as an auditor of large scale applications.
The talk will focus on common pitfalls in development, touching on all aspects of the lifecycle, from design through to testing and deployment. Different technologies used within the Microsoft technology stack will be analysed, demonstrating how tools have progressed.
Although Microsoft technologies are used for the demonstrations, much of the advice is applicable to other platforms. The presentation will also cover security solutions directly under the control of developers, such as WS-Security and fine-grain applications of encryption.
Chris Seary is an independent security consultant, providing advice to both Banking and Government. He is MVP, CISSP, ISO 27001 Lead Auditor, PCI DSS trained and CLAS. He frequently gives presentations and writes articles on IT Security. His specialism is securing enterprise scale applications.
Mobile CSP||B
Friday 26 February 2010
This presentation introduces Mobile CSP||B, a formal framework based on CSP||B which enables us to specify and verify concurrent systems with a mobile architecture as well as the previous static architecture. In Mobile CSP||B, a parallel combination of CSP processes acts as the controller for the B machines, and these B machines can be transferred between CSP processes during system execution.
BIMA Seminar
Tuesday 2 March 2010
Interference between two competing stimuli has been extensively studied in many research areas including attention, information processing and cognitive control. For this study, both competition and cooperation of stimuli are explained by the developed Hopfield-based Stroop model within the classical colour-word Stroop effect paradigm. Competition of stimuli occurs when the task is to name the colour of an incompatible colour-word (e.g. the word RED written in green), whereas cooperation among stimuli can be observed when congruence between word and colour (e.g. the word RED written in red) facilitates the response to the colour name. The Hopfield network is chosen for several reasons: we address the Stroop phenomenon as an association problem, the competition and cooperation of Stroop stimuli suits the pattern-processing nature of the Hopfield network, and the recall algorithm in the Hopfield network is biologically realistic. We have shown that, with a relatively simple but biologically plausible neural network consisting of a single Hopfield network, our model is able to predict the Stroop effect in comparison with human performance.
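For readers unfamiliar with the recall mechanism such a model relies on, here is a minimal, generic Hopfield associative-memory sketch (this is not the seminar's Stroop model; the stored patterns, the network size and the synchronous update rule are illustrative assumptions):

```python
import numpy as np

def train_hopfield(patterns: np.ndarray) -> np.ndarray:
    """Hebbian weight matrix for bipolar (+1/-1) patterns, with a zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W: np.ndarray, probe: np.ndarray, steps: int = 20) -> np.ndarray:
    """Synchronous recall: repeatedly threshold the net input until it settles."""
    state = probe.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Two stored patterns (toy stand-ins for competing response representations)
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, -1, -1, 1,  1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, 1, 1, -1])     # pattern 0 with one element flipped
print(recall(W, noisy))                    # converges back to pattern 0
```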
Project Argus Professional
Friday 5 March 2010
Project Argus Professional is a multimedia Counter Terrorism presentation that takes the audience through an attack on a crowded city location. It analyses issues in the built environment that made the attack possible. It then challenges the audience to think how the likelihood of, or impact from, such an attack could be reduced by the intelligent use of design and materials. The challenge for the 21st century is to make our built environment more resilient to terrorist attack without impinging on our ability to enjoy such places without draconian security measures.
Exploring Virtual Worlds from IRC to Second Life
Wednesday 10 March 2010
Second Life is a 3D multi-user role-playing online environment. Unlike other virtual worlds, created as games with set rules and stock characters, most of what goes on in Second Life is created by its users. This makes it an ideal playground for all sorts of creative people. At any given time, there are no fewer than 20,000 people active in Second Life. Over a period of 60 days, as many as one and a half million registered users log in.
Quality as a prerequisite for Security in Inter-operable Systems
Friday 12 March 2010
Considerable effort goes into specifying security protocols and the equipment in which they are embodied. In most cases the specification concentrates on positive cases, with very little attention paid to failure modes.
This talk will concentrate on the limitations that are imposed on our ability to make assertions about the security of a system when we are unable to understand the quality of its implementation. It will do so by examining the types of failure that have led to security system failures. Finally, the talk will examine some of the extant security protocols and show that these provide very little support for identifying and guaranteeing the quality of components networked together in a distributed system.
The Principles of Evolutionary Programming
Tuesday 16 March 2010
This talk demonstrates the basic principles of biological evolution from the point of view of copying and mixing genetic data and goes on to illustrate how these principles have been used in evolutionary programming. Several published examples are examined in detail. Evolutionary programming is of interest for improving algorithms where the population of all possible algorithms is too large to be completely examined.
Corporate Espionage: Secrets Stolen, Fortunes Lost
Friday 19 March 2010
Paul King, Senior Security Adviser to Cisco, will give an overview of how organisations are at risk from corporate espionage - how organisations might be attacked and how they might reduce the risk. The talk will use Cisco's own organisation as an example. Paul will also discuss some of the research he is doing in this space.
7th Annual Computing Department PhD Conference
Tuesday 23 March 2010
On Tuesday 23 March, the Department will hold its 7th Annual PhD Conference.
The conference celebrates the work of all of our PhD students through presentations and posters, recognising their valuable contribution to computer science research. Two keynote speakers (Professor Rudolf Hanka, University of Cambridge, and David Krause, Varian Medical Systems Inc) will present motivational talks during the event.
Identity Defines the Perimeters in the Clouds
Thursday 25 March 2010
A British Computer Society lecture:
Adrian's presentation will aim to answer these questions:
* What are some of the key cloud choice drivers?
* Can you identify primary transformational SHIFTS required to enable secure, but collaborative, clouds?
* Why does Identity and Access Management have to SHIFT?
Can we have too much Security in our Information Systems? How much is good enough?
Friday 26 March 2010
Security is rarely seen as a business enabler - more an irksome and expensive disabler. Mike St John Green will construct the argument to show that it is an enabler, albeit expensive. We need to know how much to spend on security. He will show how to determine, in a systematic manner, the security features that are proportionate, answering the question, how much is good enough? Although Mike will be talking about how government tackles this problem, this issue applies to every business that relies on IT systems. This is really about risk management when applied to the security of IT systems.
Empirical Framework for Building and Evaluating Bayesian Network Models for Defect Prediction
Tuesday 27 April 2010
Software reliability is a crucial factor to consider when developing software systems or defining optimal release time. For many organisations ‘time to market’ is critical and avoiding unnecessary testing time whilst retaining reliable software is important.
The Cyber Threats, Managing the Risk to an Enterprise
Friday 30 April 2010
From the recent Google Aurora attacks, to the 'dark market' organised crime networks, we are entering a new era of especially organised, motivated and sophisticated cyber-threats. It is therefore more critical than ever that businesses pro-actively manage the risks to their Information.
The Delivery of Managed Security Services
Friday 7 May 2010
Tony Dyhouse will discuss some standards applicable to the fields of Information Assurance and Service Delivery, illustrating areas of commonality with regard to aim and approach.
The Future of Computer Forensics and What Industry Needs
Wednesday 12 May 2010
Our invited speaker is Dr Godfried Williams from the company Intellas UK, experts in Business Intelligence Security and Intelligent Forensics using AI techniques.
Security Awareness - The Common Sense Attribute
Friday 14 May 2010
A lecture delivered by Clinton Walker, Security Consultant at Logica.
Recent media reports covering major breaches of security claimed that they might have been prevented if staff awareness of security, procedures, appropriate data handling and security controls had been more reliable. Human error has become the biggest security concern for IT directors, end users and all parties concerned with data that’s held about them.
"Enterprises must recognise that simply trusting employees will inevitably prove detrimental to their security, their risk postures and their business interests," wrote Perry Carpenter, a research director at Gartner. Vnunet.com (10th Oct 2008).
An Overview of Image Processing Technology for Military Applications
Wednesday 19 May 2010
The presentation provides a brief review of current military needs together with an assessment of how image and data processing technology can be used to meet these capability gaps.
A range of examples are presented that show how image and data processing can be used within a variety of different applications ranging from airborne, maritime, and land-based platforms. These system examples are based on specific activities and programmes undertaken by Waterfall Solutions Ltd.
From T-cells to Robotic Sniffer Dogs
Wednesday 26 May 2010
There are many areas of bio-inspired computing, where inspiration is taken from a biological system and 'magically' transplanted into some engineered system.
In this talk, Jon Timmis will explore thoughts on a slightly more principled approach to bio-inspired system development, one that hopefully does not include any magic, and discuss, in the context of immune-inspired systems, some of the potential and pitfalls of using biological systems as inspiration. To help ground the talk, he will explore a case study from recent work with DSTL on the development of an immune-inspired robotic sniffer dog detection system, inspired by a signalling mechanism in the T-cells present in the immune system.
Education, Education, Exploitation: it’s not just the economy, stupid!
Thursday 27 May 2010
Lecture by Dr. Bill Mitchell, Director of the BCS Academy
Computing education in schools is in a perilous state, university computing departments are under considerable strain and there is a pressing need for much better integration between the academic and business communities.
If income is less than food plus mortgage then do plan B
Wednesday 2 June 2010
Computers and software have played a huge if relatively unexpected role in my engineering career, in my three businesses (and in most other people’s), in my early 1970s rugby club and in my life. I first saw a computer 43 years ago, I first used one 41 years ago (and helped build them), and hardware still hates me! But software, engineering in the head, opened the doors for my future.
The question is, what is today opening the doors for your future?
Tracking Surgical Instruments for Dexterity Assessment with Particle Filters
Wednesday 9 June 2010
Phil Smith will be discussing the tracking of surgical instruments for dexterity assessment with particle filters as his presentation for his MPhil-PhD transfer.
Medical training has placed increasing emphasis on standardized and objective assessment of clinical, academic and surgical knowledge. Traditionally in ophthalmology, surgical skills are assessed in the operating theatre environment, with the supervising surgeon directly observing or providing feedback whilst watching a recording of the operation. This is subject to considerable variability and is not readily reproducible. Certain components of surgical skill can be determined by analyzing the movement of the instruments.
Block-based Image Steganalysis: Methodology and Performance Evaluation
Monday 5 July 2010
Traditional image steganalysis techniques are conducted with respect to the entire image. In this work, we aim to differentiate a stego image from its cover image based on the steganalysis results of decomposed image blocks. As a natural image often consists of heterogeneous regions, its decomposition will lead to smaller image blocks, each of which is more homogeneous.
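A minimal sketch of the block-based idea (this is not the authors' method; the block size, the toy pairs-of-values statistic, the decision threshold and the majority-vote fusion are illustrative assumptions, and a real detector would use calibrated features and a trained classifier) might look like this:

```python
import numpy as np

def block_classifier(block: np.ndarray) -> float:
    """Toy pairs-of-values statistic on an 8-bit block: LSB embedding tends to
    equalise the counts of each value pair (2k, 2k+1), so a small value hints
    at hidden data."""
    hist = np.bincount(block.ravel(), minlength=256).astype(float)
    even, odd = hist[0::2], hist[1::2]
    mask = (even + odd) > 0
    return float(np.sum((even[mask] - odd[mask]) ** 2 / (even[mask] + odd[mask])))

def block_scores(image: np.ndarray, block: int = 64) -> np.ndarray:
    """Decompose a 2-D uint8 image into non-overlapping blocks and score each one."""
    h, w = image.shape
    return np.array([block_classifier(image[y:y + block, x:x + block])
                     for y in range(0, h - block + 1, block)
                     for x in range(0, w - block + 1, block)])

def image_decision(image: np.ndarray, threshold: float = 50.0) -> bool:
    """Fuse block-level results by majority vote: flag the image as stego if more
    than half of its blocks look suspicious.  The threshold would need to be
    calibrated on real cover/stego training data."""
    scores = block_scores(image)
    return bool((scores < threshold).mean() > 0.5)
```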
Summer Meeting of NCAF in Department
Monday 12 July 2010
On 12th and 13th July the Department of Computing is hosting the Summer meeting of NCAF, the Natural Computation Applications Forum, a platform for exchange of ideas between academia and industry. The special theme is: Making Sense of Data - Theory and Practice.
Did Turing Dream of Electric People?
Wednesday 14 July 2010
Despite many advances in computational intelligence, it is clear that we have yet to achieve Turing’s dream of “intelligent machinery” – machines with human-level understanding. A broader reference point for intelligent behaviour which encompasses artificial agents is now being advocated (Cristianini 2010).
In this seminar, I will prompt discussion by asking some key questions, which include “have we forgotten Turing’s dream?” and “should we abandon human intelligence as our reference point?” And of course, I will also provide my own opinion as to what I think the way forward might be.
Neurodynamical Approach to Biologically Inspired Information Processing Model
Wednesday 28 July 2010
Biologically inspired computing studies the properties and mechanisms of information processing in nature and embeds this knowledge into artificial systems. Due to their adaptability to a wide range of applications, neural networks have been of interest in many research areas. Furthermore, growing evidence from the neuroscience field has led to the evolution of artificial neural networks (ANNs). From the simple McCulloch-Pitts models, ANNs are now in their third generation with spiking neural network (SNN) models. SNN-based models provide a more meaningful interpretation of biological neural systems. However, information encoding is a major challenge, as the trade-off for their realism.
An Attempt at Formalising the Species Concept
Wednesday 4 August 2010
Computer Scientists have a tendency to look at the systems they model and research from the angle of computer simulations. However, to clarify the nature of the objects we simulate and model, it is often worth leaning back and thinking a bit more deeply about the nature of the objects we are dealing with in a formal and mathematical framework, since this can bring inconsistencies to light and give directions for new simulations.
Intelligent Systems and Bio-Inspired Optimization
Thursday 12 August 2010
Dr Runkler will give a short overview of the research activities on intelligent systems at Siemens Corporate Technology. In this talk, particular focus will be given to bio-inspired optimization methods and their applications, including ant colony optimization, wasp swarm optimization, fuzzy decision making and fuzzy weighted aggregation. These methods will be illustrated on several real-world industrial applications, including delivery logistics, cash management, car manufacturing, communication networks, maintenance scheduling, and electronics assembly.
What can optical illusions teach us about vision?
Wednesday 25 August 2010
The first in a series of NICE Research Group presentations.
We see the world around us in immense detail, in real time, and with no apparent effort. Yet we also seem to consistently misinterpret certain stimuli: optical illusions.
In this presentation, Dr Corney will be demonstrating a variety of optical illusions and discussing their nature and cause. Dr Corney will present some recent work using artificial neural networks as a simple model of vision, including the perception of illusions, and he will also discuss the implications for other visual agents, including animals and robots.
Tracking instruments in cataract surgery
Wednesday 1 September 2010
This is the next lecture in the series of NICE presentations.
Phacoemulsification is one of the core surgical skills in ophthalmic training and the most common procedure in ophthalmology. This work is part of a project that aims to develop a tool to measure surgical competence and technical skill by analysing instrument movement in surgical videos.
In this talk, Phil Smith will describe an approach that is able to track surgical instruments in cataract surgery using particle filters with a motion- and colour-based detector. Experiments have shown that it is possible to track an instrument even when prior information regarding its appearance is limited.
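A generic bootstrap particle filter for 2-D position tracking, sketched below, illustrates the predict / weight / resample cycle such a tracker builds on (this is not the presented system; the random-walk motion model, the Gaussian observation likelihood and all parameter values are illustrative assumptions, and a real tracker would score particles against motion and colour cues in the video frame):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, motion_std=5.0, obs_std=10.0):
    """One bootstrap-filter update for 2-D position tracking.

    particles: (N, 2) candidate positions; weights: (N,) normalised weights.
    """
    # 1. Predict: propagate particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 2. Update: weight particles by a Gaussian likelihood of the observed position.
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2 * obs_std ** 2))
    weights /= weights.sum()
    # 3. Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Toy usage: track a point moving along a diagonal.
N = 500
particles = rng.uniform(0, 100, (N, 2))
weights = np.full(N, 1.0 / N)
for t in range(20):
    observation = np.array([t * 3.0, t * 3.0]) + rng.normal(0, 2.0, 2)
    particles, weights = particle_filter_step(particles, weights, observation)
print("estimated position:", np.round(weights @ particles, 1))
```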
A Neural Fraud Detection Framework Automatic Rule Discovery
Wednesday 15 September 2010
Fraud is a serious and long-term threat to a peaceful and democratic society; the total cost of fraud to the UK alone was estimated by the Association of Chief Police Officers to be at least £14bn a year. One such fraud is payment card fraud. To detect this fraud, organisations use a range of methods, with the majority employing some form of automated rules-based Fraud Management System (FMS). These rules are normally produced by experts, which is often an expensive and time-consuming task requiring a high degree of skill. This analytical approach fails to address the fraud problem where the data and relationships change over time.
Military Tactics in Agent-based Intrusion Detection for Wireless ad hoc Networks
Wednesday 22 September 2010
Wireless Ad hoc Networks (WAHNs) offer a challenging environment for conventional Intrusion Detection Systems (IDSs). In particular, WAHNs have a dynamic topology, intermittent connectivity, resource-constrained device nodes and possibly high node churn. Researchers over the past years have encouraged the use of agent-based IDS to overcome these challenges. In this work we propose the use of military tactics to optimise the operation of agent-based IDS for WAHNs.
Reliability, Security and Privacy Issues Raised when Monitoring the Elderly
Thursday 23 September 2010
British Computer Society Debate
Adrian Seccombe (formerly Chief Information Security Officer, Eli Lilly), Ian Wells (Royal Surrey County Hospital), chaired by Roger Peel (Department of Computing, University of Surrey)
From Language to Vision: Dynamic Context Analysis in Large-Scale Systems
Friday 1 October 2010
In this talk, Dr Lilian Tang will discuss the information variability in natural language and vision systems, the ambiguity caused by noisy data and their processing modules, and how context/reasoning can be modelled in order to perform application tasks in large-scale systems. This is to enable a system not just to deal with uncertainty and variability, but also to adapt to its unpredictable environment. She will review her progress so far in this form of contextual modelling, and this will lead on to some open research challenges which now need to be addressed.
Bio-inspired mechanisms for arrays of custom processors
Tuesday 5 October 2010
Until recently, the ever-increasing demand of computing power has been met on one hand by increasing the operating frequency of processors and on the other by designing architectures capable of exploiting parallelism at the instruction level through hardware mechanisms such as super-scalar execution. However, both these approaches seem to be reaching (or possibly have already reached) their practical limits, mainly due to issues related to design complexity and cost-effectiveness.
Computing at School Hub Launch Event
Wednesday 20 October 2010
On 20th October, the Department of Computing will be hosting the launch event of the newest Computing at School Hub. The event is primarily aimed at teachers in and around the Guildford area, and is intended to form connections with and amongst local Computing and ICT teachers, as well as to elaborate an agenda for future Hub events.
A Formal Analysis of Buyer-Seller Watermarking Protocols
Monday 25 October 2010
Copyright owners are faced with the task of limiting illicit file sharing of multimedia content. With this aim, Buyer-Seller Watermarking protocols are proposed to act as a suitable deterrent to file sharing by providing the copyright owner with adequate evidence of illegal distribution if and only if such illicit behaviour has occurred. A recent survey of BSW protocols concluded that only heuristic approaches to the security analysis of such protocols had been adopted in the literature and that formal analysis of the security of such schemes is a research direction worth pursuing.
Computing at School - what a BCS Branch can do
Thursday 28 October 2010
Our meeting this month features the recent formation of a "Computing At School" Hub based around the BCS Guildford Branch and the Department of Computing at the University of Surrey.
There is an increasing realisation that the study of computing topics in schools is not laying the foundations for a lifelong appreciation of the subject, but instead just training students in skills related to particular current technologies. The BCS, Microsoft Research and several other industrial and educational organisations have created "Computing At School" (CAS; see http://www.computingatschool.org.uk/) as a vehicle to incubate improvements in the school-age study of computing, and to support teachers with high-quality resources to enable this change.
The BCS Guildford Branch is the first BCS Branch [to our knowledge!] to launch a local Computing At School Hub. This will be based in Guildford, but attracted teachers from south London and West Sussex as well as from Surrey and Hampshire to its Launch event last week. So far, there are about 10 Hubs country-wide, but we hope to promote our model to other Branches.
This month's meeting will introduce the Surrey CAS Hub to the BCS Guildford Branch, and outline the role that our industrial and commercial members can play to help it to achieve its goals. In particular, we will be exploring particular technologies that could be used to provide motivating learning experiences for mass teaching of the foundations of computing - such as the mobile phone.
Evolving Legged Robots Using Biologically Inspired Optimization Strategies
Friday 29 October 2010
When designing a legged robot, a small change in one variable can have a significant effect on a number of the robot’s characteristics, meaning that making trade-offs can be difficult. The algorithm presented here uses biologically inspired optimization techniques to identify the effects of changing various robot design variables and to determine whether there are any general rules which can be applied to the design of a legged robot. Designs produced by this simulation are compared to existing robot designs and biological systems, showing that the algorithm produces results which require less power and lower-torque motors than similar existing designs, and which share a number of characteristics with biological systems.
Zipf’s Law for Image Forensics
Monday 1 November 2010
Zipf’s law is an empirical law originally used to describe, in mathematical form, the probability of occurrence of an event. For instance, it can be used to describe the relationship between the popularity rank of words and their frequency of use in a natural language. Similarly, it can be shown that there is a mathematical pattern between the size of the population in a country and the sizes of its cities.
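As a quick illustration of the rank-frequency relationship described above (a generic sketch, not the image-forensics method of the talk; the corpus file name, the tokeniser and the number of ranks shown are assumptions), Zipf’s law predicts that rank multiplied by frequency stays roughly constant for the most common words of a text:

```python
import re
from collections import Counter

def zipf_check(text: str, top: int = 20) -> None:
    """Rank words by frequency and print rank * frequency, which Zipf's law
    predicts should stay roughly constant across the top ranks."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words).most_common(top)
    for rank, (word, freq) in enumerate(counts, start=1):
        print(f"{rank:2d}  {word:12s} freq={freq:5d}  rank*freq={rank * freq}")

# Usage with any plain-text corpus, e.g.:
# zipf_check(open("corpus.txt").read())
```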
MBDA and Open Innovation
Tuesday 2 November 2010
The speaker at this seminar will be Mr Mohan Ahad, Assistant to Chief Technologist at MBDA, and Managing Director at Microlaunch Systems Ltd. MBDA is Europe's largest supplier of Guided Weapons. With reductions in defence budgets it has to look for innovative ways of developing technology for its military customers by leveraging capability from the civil sector.
The talk will give an overview of the company, the funding mechanisms available for low-level technology, and its ambitions for participating in EU Framework Programmes.
All are welcome so please make a note in your diary to attend.
Computational Intelligence to design self-organising manufacturing systems
Tuesday 9 November 2010
Designing complex, self-organising systems is challenging. It requires finding local, decentralised rules for the agents which result in good global performance of the overall system. In this talk, two approaches are presented using the example of a self-organising manufacturing system where local dispatching rules are used for decentralised scheduling.
The first approach supports a human designer by revealing the weaknesses of an examined manufacturing system. This is achieved by automatically searching for easy-to-analyse problem instances where the applied dispatching rule performs poorly.
The other approach is to generate the dispatching rules automatically by simulation-based Genetic Programming.
Supervised Learning Algorithms for Multilayer Spiking Neural Networks
Thursday 11 November 2010
The current report explores the available supervised learning algorithms for multilayered spiking neural networks. Gradient descent based algorithms are one of the most used learning methods for rate neurons. The back-propagation version for spiking neurons firing a single spike, SpikeProp, promises the same learning abilities as for artificial neural networks. Systematic investigations on this learning method show that SpikeProp requires more computations than back-propagation and a reference start time is critical for convergence. These issues require significant improvements to the gradient descent learning method for spiking neural networks in order for an efficient algorithm to be developed. Further developments include a learning algorithm for input and output neurons with multiple spikes, and a general learning rule for recurrent networks.
Implementation: A Practical Route to take Neural Computing Research into Business and Industry Applications
Friday 12 November 2010
Translational Research is one of the Government's latest key words; nowadays, if you want to get a grant then your scientific research needs to be translated into practical applications that have impact. Working within a commercial environment is not straightforward, but it is a two-way street: often the practical application helps to further drive the research. The application of neural computing to help industry address some pressing business needs has the potential to improve its performance in a number of key areas. Drawing on over 15 years of delivering innovation into business, a series of projects and approaches is discussed. Some have been successes, some have been failures, but what is the common theme?
Passive Image Forensic Techniques for Source Identification
Thursday 18 November 2010
Recently, much interest has developed in identifying reliable techniques that are capable of accurately ‘uncovering the truths’ regarding the pre and post- processing of a digital image, without the requirement of actively injecting a digital watermark or signature into the image data. Whilst watermarking schemes have been shown to be useful for protecting the integrity of the image, there always exists the underlying risk that the watermark data might be forcibly or accidentally removed. When this happens, the image is effectively stripped of its identity, and its integrity is extremely difficult to prove. Forensic techniques aspire to achieve similar objectives but do not rely on the strength of embedded data. Instead, the ambition is to identify the facts of an image, based solely on the data provided.
Multi-bit watermarking robust to Stirmark
Monday 22 November 2010
Substantial interest in digital watermarking over the last 15 years has resulted in a considerable number of different watermarking systems being proposed. However, despite this extensive literature on watermarking, not much progress has been made in tackling one of the most devastating attacks on these systems: the original Stirmark attack introduced by Petitcolas et al.
The attack has been described as the software equivalent of a high-resolution print-scan attack, which is part of a much larger class of random bending attacks (RBAs).
Optimised Agent-Based Intrusion Detection for Wireless Ad Hoc Networks
Tuesday 23 November 2010
Wireless Ad hoc Networks (WAHNs) offer a challenging environment for conventional intrusion detection systems (IDS). In particular, WAHNs have a dynamic topology, intermittent connectivity, resource-constrained device nodes and possibly high node churn. Researchers over the past years have encouraged the use of agent-based IDS to overcome these challenges.
Towards a Unified Framework for Intelligent Systems & Robotics
Thursday 25 November 2010
This talk introduces a theoretical framework based on approximate reasoning. It first extends Euclidean transformations into quantity space via the proposed fuzzy qualitative algebra; next, system behavior is represented by a set of automatically generated sampling data; then, data analysis methods are selected to extract features of the dataset according to the individual application context; finally, system behavior and the corresponding data features are integrated at the system level. The framework is presented in terms of robotics and has been adapted into applications with encouraging results, such as hand gesture recognition and human motion analysis. A unified approach to human/prosthetic hand gesture recognition and results in vision/capture-based human motion analysis will be reported in the talk. The framework is intended to provide a foundation towards a unified representation by “gluing” hybrid representations.
The Young Rewired State
Thursday 25 November 2010
The Identity Dilemmas, can Watermarking Help?
Monday 29 November 2010
A discussion aimed at exploring how Watermarking might be able to help with the Identity Dilemmas. All are welcome to attend so please enter this event in your diary.
Understanding Technological Paradigm Formation: Modelling Industries as Parallel Adaptive Search Mechanisms
Wednesday 1 December 2010
The combination of a dominant (de-facto standard) design and associated search heuristics constitute a `technological paradigm'. Such technological paradigms may emerge as industries evolve, altering the nature of innovative search from exploration to incremental improvement along a `technological trajectory'. Disagreements exist as to the conditions of design standardisation and the relationship between standardisation and related shifts in innovation emphasis.
Lionhead Studios and Fable III
Wednesday 1 December 2010
Jonathan Shaw from Lionhead Studios will be giving a talk in the normal CompSoc slot on Wednesday 1st December 2010 at 5:00pm in LTM (note the different venue for this week only).
Jonathan will be bringing along the Fable III SDK for a demo. All are welcome.
Accessibility Assessment and Simulation
Thursday 2 December 2010
The core concept of the proposed PhD research is to improve the accessibility of ICT and non-ICT technologies by introducing an innovative user modelling technique for the elderly and disabled. This new user modelling methodology will be able to describe in detail all the possible disabilities, the tasks affected by those disabilities, as well as the physical, cognitive and behavioural/psychological characteristics of any user. An extension of the UsiXML language will be developed in order to express the Virtual User Models in a machine-readable format. Research will be conducted in order to determine how the values of various disability parameters vary over individuals and whether these values follow any common probability distribution (e.g. Gaussian, Poisson, etc.).
Advanced Signal Processing Algorithms for Brain Signal Analysis
Friday 3 December 2010
Most of the techniques and algorithms used for other applications such as communication, acoustics, and different biomedical engineering modalities can be extended to brain signal analysis. Spatial or temporal resolution limitation and the effect of noise and artifacts in the brain signals can be mitigated by processing of multichannel and/or multi-modal (such as joint EEG-fMRI) data using appropriate algorithms. Here, we may look at very recent techniques developed for analysis (noise and artifact removal, dynamics, source detection, localization, and tracking, prediction, etc.) of brain signals, and discuss various directions for future research.
Security of Near Field Communication Transactions with Mobile Phones
Monday 6 December 2010
Google CEO Eric Schmidt announced on 15th November 2010 the plan for their next generation of Android-based mobile phones to become electronic wallets by making use of Near Field Communication (NFC). NFC is a contactless technology based on high-frequency RF tags already found in contactless cards like the Oyster card. Little research has been carried out on how secure the services offered by NFC are, one of the reasons being its reliance on proximity (~10cm). Attacks that have been carried out used expensive antennas and other equipment. They have also been targeted at contactless cards and not mobile phones, where other side channels such as Bluetooth and Wi-Fi exist, making crosstalk and information leakage a security concern.
Using MDE to Generate Formal Models
Thursday 9 December 2010
Formal analysis is based on ensuring that a model preserves particular system properties. Defining the model and specifying the properties requires specialist expertise; the formal model and properties are typically written by hand, based on some informal definition. These definitions range from English written requirements and Unified Modelling Language (UML) models to Domain Specific Language (DSL) descriptions. Our work focuses on investigating whether the formal models can be automatically generated from their corresponding informal definition.
Intelligent Information Retrieval in the Deep Web: an adaptable semantic model for retrieving, indexing and visualising Web Knowledge
Thursday 9 December 2010
Humans communicate through different signals. Due to an ability to perceive things, our species can perform this communication accurately and efficiently even when noise is introduced or the signal is presented in different formats. Computer technologies aid in cognitive processing and can, to some degree, support intellectual performance and enrich individuals’ minds. We operate (and design systems that operate) using analogies, such as reasoning, comparisons, and synonymity. In the end: is analogy a shared abstraction? Does it derive from mathematics? Is it high-level perception in shared structure theory?
A Dynamically Adaptive Semantic-Driven Model for Efficiently Managing and Retrieving Resources in Large Scale P2P Networks
Friday 10 December 2010
To build a scalable, robust and accurate P2P network, the network must be able to manage large amounts of information efficiently. Thus, a critical challenge in P2P networks is to collectively transform resources into a repository of semantic knowledge so that resources can be discovered accurately and efficiently.
Digital Forensics for JPEG2000 and Motion JPEG2000
Friday 17 December 2010
With the advancement of imaging devices and image manipulation software, the production, development and manipulation of digital images can now be done by almost everyone. For this reason, the task of tracking and protecting digital data (images, videos etc.) has become very difficult. To provide adequate policing over the use of digital content, both active and passive security techniques are followed. Digital watermarking is an active approach that involves pre-processing an image in order to protect it.
Is Arguing in the Real World too Costly? An exploration into the practicality of implementing argumentative reasoning software components
Monday 20 December 2010
In everyday life, human decision-making is often based on arguments and counter-arguments. Decisions made in this way have a basis that can easily be referred to for explanation purposes, as not only is a best choice suggested, but the reasons for this recommendation can also be provided in a format that is easy to grasp.
Formal Verification of Systems Modelled in fUML
Thursday 13 January 2011
Much research work has been done on formalizing UML diagrams, but less has focused on using this formalization to analyze the dynamic behaviours between formalized components. In this work we propose using a subset of fUML (Foundational Subset for Executable UML) as a semi-formal language, and formalizing it to the process algebraic specification language CSP, to make use of FDR2 as a model checker.
Associative Network Models of Hippocampal Declarative Memory Function
Friday 21 January 2011
The hippocampus is widely believed to mediate mammalian declarative memory function, and it has been demonstrated that single pyramidal neurons in this cortical region can encode for the presence of multiple spatial and non-spatial stimuli. Furthermore, the rate and phase of firing - with respect to theta oscillations in the local field potential – can be dissociated, and may thus encode for separate variables. This has led to the suggestion that hippocampal processing may operate using a dual (rate and temporal) coding mechanism.
Generative Web Information Systems
Monday 24 January 2011
This PhD project aims to realize a new type of information system, more dynamic and less opaque to its owners, specified with structured natural language models and queried through hypermedia. To accomplish this, we focus on Semantics of Business Vocabulary and Rules (SBVR) as a modelling language, Representational State Transfer (REST) as an interface paradigm and Relational Databases as the persistence mechanism. All three of these technologies have declarative underpinnings, focusing on the ‘what’ rather than the ‘how’, which is why their combination is feasible and effective. By creating appropriate mappings to align these technologies, we create a core platform for Generative Information Systems.
Advanced Signal Processing Algorithms for Brain Signal Analysis
Friday 28 January 2011
Most of the techniques and algorithms used for other applications such as communication, acoustics, and different biomedical engineering modalities can be extended to brain signal analysis. Spatial or temporal resolution limitation and the effect of noise and artifacts in the brain signals can be mitigated by processing of multichannel and/or multi-modal (such as joint EEG-fMRI) data using appropriate algorithms.
Here, we may look at very recent techniques developed for analysis (noise and artifact removal, dynamics, source detection, localization, and tracking, prediction, etc.) of brain signals, and discuss various directions for future research.
Department's First Olympic Event
Monday 31 January 2011
The Department is holding a sports event on Monday 31st January, to which all Year 3, PGT and PGR students are invited. Sports such as 5-a-side football, badminton and squash, along with fun games, will be set up in the newly built Surrey Sports Park.
The "First Computing Olympics" is a new event and it is hoped that competing teams will be formed and a large number of entrants will contribute to its success. The Bench will provide refreshments after the hard work of competing.
Energy Consumption and Information Processing in Neurons
Tuesday 1 February 2011
The nervous system is under selective pressure to generate adaptive behavior but at the same time is subject to costs related to the amount of energy neural signalling consumes. Characterizing this cost-benefit trade-off is essential for understanding the function and evolution of nervous systems, including our own.
Self-Organization of Neural Systems - An Evolutionary and Developmental Perspective
Friday 4 February 2011
New Methods for EEG and ERP-based Analysis of Mental Fatigue
Thursday 10 February 2011
Fatigue is a common phenomenon in everyday life: a state of reduced performance that can have a mental or a physical component.
This fatigue-related state of reduced performance in operators has caused many disasters, many of which are not well known to the public.
Department of Computing UCAS Day
Wednesday 16 February 2011
The Department will be holding another UCAS day to meet, interview and welcome invited prospective students. Staff will be setting up presentations and be present to speak to and answer queries from students. Please contact our UG Admissions office for further information.
Prêt a Voter with Acknowledgement Codes
Thursday 17 February 2011
A scheme is presented in which a Pretty Good Democracy style acknowledgement code mechanism is incorporated into Prêt a Voter. Voters get immediate confirmation, at the time of casting, that their receipt has been correctly registered on the Web Bulletin Board. As with PGD, the registration and revealing of the acknowledgement code is performed by a threshold set of Trustees. Verification of the registration of the vote is now part of vote casting and therefore more convenient for voters. This verification mechanism supplements the usual Web Bulletin Board verification, which is still available to voters.
Multi-Level Security (MLS) - What is it, why do we need it, and how can we get it?
Friday 18 February 2011
MLS has been a field of study in computer science for decades, and MLS systems have been developed and deployed for high assurance defence and government applications. However, in recent years other users with less stringent security requirements have been talking about their need for "MLS", and have been attempting to use traditional MLS solutions in their systems. In this talk, we take a look at the varied applications that are claimed to require "MLS" and attempt to reconcile their different interpretations of the term. We then survey existing and proposed MLS technologies, discuss some of their drawbacks when compared with these applications' requirements, and propose some areas for future research.
The speaker is Dr Adrian Waller, Technical Consultant at Thales.
A blind steganalysis scheme for H.264/AVC video based on collusion sensitivity and two-stage noise classification
Monday 21 February 2011
For an H.264/AVC video stream carrying covert data embedded by steganography that is not resistant to collusion, a blind video steganalysis scheme is proposed based on collusion sensitivity and noise classification. It exploits temporal frame weighted averaging (TFWA) for collusion, instead of the traditional temporal frame averaging (TFA), to improve host approximation and watermark removal. To overcome interference from motion and illumination variation, a content change factor (CCF) is defined to adaptively classify the noise present in prediction error frames (PEF). For passive steganalysis, the final decision is made using the centre of mass (COM) feature of the histogram characteristic function (HCF). Experimental results demonstrate that the proposed approach can cope with temporal-domain, transform-domain and even spread-spectrum based steganographic algorithms. For a stego-video with an embedding strength of 10%, it achieves a probability of correct detection (PCD) of about 99.82%.
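As a rough illustration of the collusion step described above, the following Python sketch computes a weighted temporal average of neighbouring frames as a host estimate and subtracts it to obtain a noise residual; the function names and the uniform default weights are my own assumptions, not the authors' implementation.

import numpy as np

def tfwa_host_estimate(frames, weights=None):
    # Estimate the watermark-free host frame by temporal frame weighted
    # averaging (TFWA). `frames` is an array of consecutive luminance frames
    # (T x H x W); uniform weights reduce this to plain TFA.
    frames = np.asarray(frames, dtype=np.float64)
    if weights is None:
        weights = np.ones(len(frames))
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()              # normalise the collusion weights
    return np.tensordot(weights, frames, axes=(0, 0))

def prediction_error_frame(frame, frames, weights=None):
    # Noise residual: the observed frame minus the collusion-based host estimate.
    return np.asarray(frame, dtype=np.float64) - tfwa_host_estimate(frames, weights)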
Compressed Sensing and its Applications
Thursday 24 February 2011
The Compressed Sensing (CS) framework, which is linked with the sparse recovery problem, has recently been introduced and applied to solve numerous problems. The measurement matrix plays a key role in CS, sampling the signal or images. It has recently been shown that optimization of this matrix can increase the quality of reconstruction. In this talk we first introduce CS theory. Then, the advantages of measurement matrix optimization and our proposed strategies for this purpose are discussed.
Finally, we review some applications and extensions of CS such as functional Magnetic Resonance Imaging (fMRI) and Watermarking.
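As a minimal, self-contained illustration of the sparse recovery problem underlying CS (not the speaker's optimised measurement matrices), the sketch below recovers a sparse vector from random Gaussian measurements by L1 minimisation, recast as a linear programme; all names and sizes are illustrative.

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    # min ||x||_1 subject to A x = y, written as an LP by splitting x = u - v
    # with u, v >= 0 and minimising sum(u + v).
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Toy example: a 5-sparse signal measured with 40 random projections.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = basis_pursuit(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))   # typically near zero in this regime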
The Delivery of Managed Security Services
Friday 25 February 2011
The second in the Technologies and Applications seminar series, presented by Tony Dyhouse.
Tony Dyhouse will discuss some standards applicable to the fields of Information Assurance and Service Delivery, illustrating areas of commonality with regard to aim and approach. Different mechanisms for the protection of CIA will be discussed from the point of view of risk transference and third-party provision of services, including a look at potential conflicts of interest and how they can be addressed. Finally, he will offer a view on advancing technology and Cloud services.
Non-negative Matrix Factorization and its Application to fMRI
Thursday 3 March 2011
Non-negative matrix factorization (NMF) has been widely used for analyzing multivariate data. NMF is a method which creates a low-rank approximation of a positive data matrix and, because of its non-negativity constraint, it has found interesting applications in image processing, where the data is inherently positive. Functional Magnetic Resonance Imaging (fMRI) is an imaging technique which provides useful anatomical and functional information about the brain. Analyzing the data provided by fMRI helps to investigate brain function.
In this talk, we first give a brief introduction to different algorithms for fMRI analysis. Then, the application of non-negative matrix factorization to fMRI data and our proposed algorithm for this purpose will be discussed, and its superiority for such data over other decomposition techniques such as BSS will be emphasised.
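For readers unfamiliar with NMF, the snippet below shows the generic factorisation X ≈ WH with scikit-learn on a synthetic non-negative matrix standing in for voxel-by-time fMRI data; it is a generic illustration, not the speaker's proposed algorithm.

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
X = np.abs(rng.standard_normal((500, 120)))    # stand-in for a voxels x time-points matrix

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)    # 500 x 5 non-negative "spatial" factors
H = model.components_         # 5 x 120 non-negative "temporal" factors
print(W.shape, H.shape, model.reconstruction_err_)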
Quality as a prerequisite for Security in Interoperable Systems
Friday 4 March 2011
Considerable effort goes into specifying secure systems and security protocols, and the equipment in which these are embodied. In most cases the specification concentrates on positive cases, with very little attention given to failure modes.
This talk will concentrate on limitations that are imposed on our ability to make assertions about the security of a system where we are unable to understand the quality of the implementation. It will do so by examining the types of failure that have led to security system failures.
Finally, the talk will examine some of the extant security protocols and show that these provide very little support for identifying and guaranteeing the quality of components networked together in a distributed system.
Break our Stego System - The BOSS Challenge
Monday 7 March 2011
Supervised Learning Algorithm for Spiking Neural Networks
Thursday 10 March 2011
Neural networks based on temporal encoding with single spikes are biologically more realistic models, as experimental evidence suggests that biological neural systems use the exact timing of action potentials to encode information. Moreover, it has been demonstrated that networks of spiking neurons are computationally more powerful than sigmoidal neurons. To harness the computational power of spiking neurons, efficient learning algorithms must be used. This presentation explores the available supervised learning methods for artificial and spiking neural networks.
Security and Commerce: Why Businesses Care and What's Happening in Practice
Friday 11 March 2011
A brief introduction into why IT security has become increasingly important to businesses over recent years: what has driven the increasing use of IT in transactional business and why this has caused a focus on security. We will also discuss the type of threats that business is aware of and what it is they believe they are responding to. This we will use as the backdrop to describing some of the worst “real” incidents and how these might differ from the threat that business was preparing to meet. We will then go on to talk about how software vendors view IT security and how this is driving their efforts to secure their products. This will focus primarily on the approach that Microsoft have taken over recent years.
Synapse Complexity: Origins and Organization
Thursday 17 March 2011
Professor Seth Grant from the Genes to Cognition Programme, Wellcome Trust Sanger Institute, Hinxton, Cambridge, will visit the University of Surrey to give a presentation to the Department of Computing and all are welcome to attend.
For over a century it has been known that the synapse – the junction between nerve cells – is of fundamental importance in organizing brain circuits and behavior.
Corporate Espionage: Secrets Stolen, Fortunes Lost
Friday 18 March 2011
This presentation is given by Mr Paul King, Senior Security Advisor at CISCO. Paul will give an overview of how organisations are at risk from corporate espionage - how organisations might be attacked and how they might reduce the risk. The talk will use Cisco's own organisation as an example. Paul will also discuss some of the research he is doing in this space.
NFC Technology: What is it? Protocols used? Future research
Monday 21 March 2011
Near field communication (NFC) is a standard-based wireless communication technology that allows data to be exchanged between devices that are a few centimeters apart. In this presentation three major elements will be discussed: the concept of NFC, the different protocols used (their pros and cons) and the potential areas of research.
Error Concealment Techniques for Multi-View Sequences
Thursday 24 March 2011
The H.264/MVC standard offers good compression ratios for multi-view sequences by exploiting spatial, temporal and inter-view image dependencies. The performance of this coding scheme is optimal in error-free channels; however, in the event of transmission errors it leads to the propagation of distorted macro-blocks, degrading the user's quality of experience. In this presentation we will review the state-of-the-art error concealment solutions and look into low-complexity concealment methods that can be used with multi-view video coding. Error resilience techniques that help error concealment will also be discussed.
Dealing with the Transition from Existing to Future systems
Thursday 24 March 2011
A British Computer Society event. This event is open to Members and Non-Members. Students are particularly welcome.
Please see the Branch website for further details.
Assuring the security of our Information Systems: How much is good enough?
Friday 25 March 2011
Universally Verifiable Electronic Voting Schemes With Re-encryption Mixnets
Monday 28 March 2011
Democracy depends entirely on elections, which must be robust and fair, free from cheating and electoral fraud. Voters must be sure that their vote has remained unaltered and has been correctly tallied. The election system should prevent any possible coercion and should be robust even if the official authorities are not trusted. There is a recent example where fraud and system misbehaviour were reported by voters (Florida, 2000). When security properties such as integrity, privacy, anonymity, confidentiality and verifiability are not supported, or have only limited functionality, attacks can be made that enable a third party to learn a voter's vote. All this leads to a fundamental question: how can the voter trust the voting procedure and the announced results?
Department's 8th PhD Student Conference
Wednesday 30 March 2011
Bridging the Computational Sensory Gap
Thursday 31 March 2011
A long standing aim for computer science has been to build ‘intelligent machines’. Building computer models of the human brain has the potential to achieve this aim, either through developing an artificial brain, or by understanding how the brain computes to replicate intelligence. However, despite significant advances, we have yet to realise this potential. There appears to be a clear gap between modelling the brain for neuroscience and applying what we have learnt about brain-like computation to real-world, practical problems. On the one hand, models based on high-level cognition have been developed which can process real-world inputs. These cognitive architectures may show us broadly how the brain achieves certain function, but are too simplistic for practical purposes. On the other hand, large-scale brain simulations have been developed which model brain dynamics, but are not designed to replicate intelligence. In this seminar we will explore these issues and debate some of the possible long-term answers which involve bridging the gap between cognitive architectures and large-scale simulations, particularly for sensory processing.
The Cyber Threats, Managing the Risk to an Enterprise
Friday 1 April 2011
From the recent Google Aurora attacks, to the 'dark market' organised crime networks, we are entering a new era of especially organised, motivated and sophisticated cyber-threats. It is therefore more critical than ever that businesses pro-actively manage the risks to their information.
The Law of Tendency to Executability and its Implications
Wednesday 6 April 2011
The Law of Tendency to Executability states that all useful descriptions of processes have a tendency towards executability. Attempts to rise above the perceived low abstraction level of executable code can produce increased expressive power, but the notations they engender have a tendency to become executable. This has many consequences for software; its creation, evolution and deployment. It also has wider implications. The automation that drives this tendency also raises fundamental questions about how human decision making can remain inside the execution loop.
A Unified Computational Model of the Genetic Regulatory Networks Underlying Synaptic, Intrinsic and Homeostatic Plasticity
Thursday 7 April 2011
It is well established that the phenomena of synaptic, intrinsic and homeostatic plasticity are mediated – at least in part – by a multitude of activity-dependent gene transcription and translation processes. Various isolated aspects of the complex genetic regulatory network (GRN) underlying these interconnected plasticity mechanisms have been examined previously in detailed computational models. However, no study has yet taken an integrated, systems biology approach to examining the emergent dynamics of these interacting elements over longer timescales. Here, we present theoretical descriptions and kinetic models of the principal mechanisms responsible for synaptic and neuronal plasticity within a single simulated Hodgkin-Huxley neuron. We describe how intracellular Calcium dynamics and neural activity mediate synaptic tagging and capture (STC), bistable CaMKII auto-phosphorylation, nuclear CREB activation via multiple converging secondary messenger pathways, and the activity-dependent accumulation of immediate early genes (IEGs) controlling homeostatic plasticity. We then demonstrate that this unified model allows a wide range of experimental plasticity data to be replicated. Furthermore, we describe how this model can be used to examine the cell-wide and synapse-specific effects of various activity regimes and putative pharmacological manipulations on neural processing over short and long timescales. These include an examination of the interaction between intrinsic and synaptic plasticity, each dictated by the level of activated CREB; and the differences in functionality generated by STC under regimes of reduced protein synthesis. Finally, we discuss how these processes might contribute to maintaining an appropriate regime for transient dynamics in putative cell assemblies within contemporary neural network models of cognitive processing.
Security Issues for Developers using Microsoft Technologies
Friday 8 April 2011
Chris Seary, Consultant at Charteris, will be giving two talks. The first will cover real world application security from an auditor's perspective. It goes through many of the common security issues arising from lack of secure development practice. This will give demonstrations of injection attacks on a web site.
The second talk will cover the newer WS-Security toolset for SOAP web services. It will show examples of code, configuration and the communications used.
A new robust watermarking system based on the DCT domain
Monday 11 April 2011
The algorithm takes full advantage of the local correlation of the host image pixels and the masking characteristics of the human visual system; it chooses DCT blocks by comparing the value of the low-frequency DCT coefficients and the number of non-zero DCT coefficients in each block. After the embedding process is completed, transforming the DCT coefficients from the frequency domain back to the spatial domain produces some rounding errors, because the conversion of real numbers to integers causes some information loss. The paper uses a genetic algorithm to deal with these rounding errors. The experimental results show that the algorithm not only ensures the quality of the embedded image and the invisibility of the watermark, but is also robust to common image operations and JPEG compression.
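The sketch below illustrates the general idea of block-DCT watermark embedding by quantising one mid-frequency coefficient per 8x8 block; the coefficient position, quantisation step and parity rule are my own illustrative choices and are much simpler than the scheme described above (in particular, no genetic algorithm is used to compensate the rounding errors mentioned in the abstract).

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed_bit(block, bit, pos=(3, 4), strength=8.0):
    # Embed one bit into an 8x8 block by forcing the parity of a quantised
    # mid-frequency DCT coefficient.
    c = dct2(block.astype(np.float64))
    q = int(np.round(c[pos] / strength))
    if q % 2 != bit:
        q += 1                      # flip parity to encode the bit
    c[pos] = q * strength
    # Rounding this result back to integer pixels is where the rounding
    # errors discussed in the abstract arise.
    return idct2(c)

def extract_bit(block, pos=(3, 4), strength=8.0):
    c = dct2(block.astype(np.float64))
    return int(np.round(c[pos] / strength)) % 2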
Free Java Workshop for Computing Teachers
Tuesday 12 April 2011
The topics available come from our 11 week first year undergraduate course. Participants would be able to work at their own pace through practical examples with support from University staff.
Cultural-Based Particle Swarm Optimization for Multiobjective Optimization
Thursday 14 April 2011
Our next speaker in this series of seminars will be Professor Gary Yen from the Oklahoma State University. All are welcome to attend.
Evolutionary computation is the study of biologically motivated computational paradigms which draw novel ideas and inspiration from natural evolution and adaptation. The application of population-based heuristics to solving constrained and dynamic optimization problems has been receiving growing interest from the computational intelligence community. Most practical optimization problems involve constraints and uncertainties, in which the fitness function changes through time and is subject to multiple constraints.
Introduction to Identity and Access Management
Wednesday 20 April 2011
A Pareto-based Approach to Multi-Objective Machine Learning
Thursday 12 May 2011
Machine learning is inherently a multi-objective task. Traditionally, however, either only one of the objectives is adopted as the cost function or multiple objectives are aggregated to a scalar cost function. This can be mainly attributed to the fact that most conventional learning algorithms can only deal with a scalar cost function. Over the last decade, efforts on solving machine learning problems using the Pareto-based multi-objective optimization methodology have gained increasing impetus, thanks to the great success in multi-objective optimization using evolutionary algorithms and other population-based stochastic search methods.
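To make the contrast with scalarised cost functions concrete, the short sketch below computes the Pareto (non-dominated) set of candidate models trading off error against complexity; it is a naive O(n^2) illustration rather than any specific algorithm from the talk.

import numpy as np

def dominates(a, b):
    # a Pareto-dominates b if it is no worse in every objective and strictly
    # better in at least one (all objectives minimised).
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    points = np.asarray(points)
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# Toy trade-off between model error and model complexity.
candidates = [(0.10, 50), (0.12, 20), (0.08, 90), (0.12, 60), (0.20, 10)]
print(pareto_front(candidates))   # [0, 1, 2, 4]; candidate 3 is dominated by candidate 0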
A Cauchy Distribution based Video Watermark Detection for H.264/AVC in DCT Domain
Monday 16 May 2011
Compared with the Generalized Gaussian distribution (GGD), the Cauchy distribution is better at describing the statistical distribution of the Intra-coded DCT coefficients in H.264/AVC. For the bipolar additive watermark in an H.264/AVC video stream, a Cauchy distribution based detection algorithm is proposed using ternary hypothesis testing. Experimental results show that the proposed approach achieves an average watermark detection accuracy of more than 80%.
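For reference, the Cauchy model referred to above is the heavy-tailed density (standard parameterisation, not taken from the paper itself)

f(x;\, x_0, \gamma) \;=\; \frac{1}{\pi \gamma \left[ 1 + \left( \frac{x - x_0}{\gamma} \right)^{2} \right]}, \qquad \gamma > 0,

whose heavier tails, compared with the Generalized Gaussian, are what make it a better fit for the Intra-coded DCT coefficients according to the abstract.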
Building on Existing Security Infrastructures
Wednesday 18 May 2011
Professor Chris Mitchell, from Royal Holloway, will be our next speaker.
A Software Engineering Cock-up
Wednesday 25 May 2011
This cautionary tale is about an apparently trivial software project which didn't go too well. After posing some questions, it explains what happened, mentions some useful survival tools and techniques that would certainly have made things go better, and ends with a couple of stories with enduring relevance. Relevant here are 'luck' and quality: cornerstones of personal success which are hard to define and even harder to achieve.
Difficulties in Learning and Teaching Object-Oriented Programming Concepts in the Computer Science Higher Education Community
Wednesday 25 May 2011
Programming is a major subject in Computer Science (CS) departments. However, students often face difficulties in basic programming courses for several reasons. Perhaps the most important is the lack of problem-solving abilities that many students show. Because of this lack of general problem-solving ability, students do not know how to create algorithms and consequently do not know how to program.
Exploration of Working Memory
Thursday 26 May 2011
Working memory refers to a limited-capacity part of the human memory system that is responsible for the temporary storage and processing of information while cognitive tasks are performed. We will explore how memories can be represented by extensively overlapping groups of neurons that exhibit stereotypical time-locked spatiotemporal spiking patterns, called polychronous patterns, and make further assumptions regarding the polychronous group span of different brain regions in association with working memory.
Cyberwarfare - Threats and Responses
Thursday 26 May 2011
Cyberwarfare is a subject that has received a great deal of publicity since the attacks on Estonia and Georgia and the Stuxnet malware in particular. Nation states are now devoting much more attention to the damage that can be inflicted upon them without a shot being fired, and to the probability that it will happen to them.
Transitioning a Clinical Unit to Data Warehousing
Friday 27 May 2011
This research proposes a method for developing a data warehouse in a clinical environment while particularly focusing on the requirements specification phase. It is conducted primarily to target organizations whose requirements are not clearly defined and are not yet aware of the benefits of implementing a data warehouse. By integrating key ideas such as the agile manifesto, maintaining data quality, and incremental and prototyping approaches, it provides a platform for collaboration and participation between users and designers, as well as identifying relevant processes and their additional value. It is also important to note that this work was performed in the context of a Clinical Unit with limited experience of IT, and limited budget. An important research objective was to demonstrate how to obtain significant “buy-in” to a data warehouse solution at low-cost, and minimal risk to the clinical unit.
Formal Verification of Trustworthy Voting Systems
Tuesday 31 May 2011
Fair elections have been essential processes in representative democracies since ancient Greece. As an indispensable part of fair elections, a variety of trustworthy voting systems have been designed and improved over the decades. However, due to an insufficient body of proofs, the lack of trustworthiness of such systems still gives rise to system attacks that violate citizens' privacy and modify election results, leading to controversial elections and unfair democracies.
Ensembles of Classification Methods for Data Mining Applications
Thursday 2 June 2011
Data Mining is the use of algorithms to extract the information and patterns derived by the knowledge discovery in databases process. Classification is a major data mining task.
Classification maps data into predefined groups or classes. It is often referred to as supervised learning because the classes are determined before the data are examined. In this research work, new hybrid classification methods are proposed using classifiers in a heterogeneous environment with voting and stacking mechanisms, and their performance is analysed in terms of error rate and accuracy.
A classifier ensemble was designed using k-Nearest Neighbour (k-NN), Radial Basis Function (RBF), Multilayer Perceptron (MLP) and Support Vector Machine (SVM) classifiers. The feasibility and benefits of the proposed approaches are demonstrated on data sets such as intrusion detection in computer networks, direct marketing and signature verification. Experimental results demonstrate that the proposed hybrid methods provide a significant improvement in prediction accuracy compared to individual classifiers.
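A minimal scikit-learn sketch of such a heterogeneous ensemble is shown below, combining k-NN, an SVM and an MLP by majority voting and by stacking on synthetic data; note that the RBF network in the abstract is approximated here by an RBF-kernel SVM, and the data set, parameters and meta-learner are illustrative assumptions rather than the authors' setup.

from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
base = [("knn", KNeighborsClassifier(5)),
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),   # stands in for an RBF network
        ("mlp", MLPClassifier(max_iter=1000, random_state=0))]

voting = VotingClassifier(estimators=base, voting="hard")    # majority vote
stacking = StackingClassifier(estimators=base)               # logistic-regression meta-learner by default
for name, clf in [("voting", voting), ("stacking", stacking)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())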
The Case for and Against Biomimetic Brain Machine Interface
Wednesday 8 June 2011
Dr Kianoush Nazarpour from the Motor Control Group at the Institute of Neuroscience, Newcastle University will be our speaker on this occasion. All are welcome to attend.
Biomimetic brain-machine interfaces (BMI) have evolved from experimental paradigms exploring the neural coding of natural arm and hand movements to mathematically advanced real-time neural firing rates decoders. However, despite recent decoding algorithms with increasing levels of performance and sophistication, BMI control remains slow and clumsy in comparison to natural movements. Therefore, considerable improvements are required if these devices are to have real-life clinical applications.
Emergent Constraints in Technological Change: The Formation of Exemplar Technologies and their Effect on the Direction of Future Search
Thursday 9 June 2011
This work suggests that population-level selection of artefact designs produced by firms facing an ill-structured design problem favours the formation of a dominant design with a set of 'high pleiotropy' elements, affecting many product functions. Selective expansion of a technological artefact's active 'design space' may embed a negative heuristic within the design, effectively 'locking-in' earlier design choices. In the absence of sufficient variety-generating mechanisms, competition will result in a dominant design within an industry. This research describes how selection at the population level may interact with the local search routines of firms to produce a dominant design embodying 'frozen' dimensions. Such a design may be seen to form part of a technological paradigm. The investigation nests Koen Frenken's existing model of technological paradigms within an evolutionary population-based model. Entropy statistics indicate several exploratory stages that emerge endogenously via interaction of selection at the firm and population levels.
Tools for CSP - Overview and Perspectives
Wednesday 15 June 2011
Our speaker will be Dr Markus Roggenbach, from the Department of Computer Science, Swansea University.
Taking the "Children & Candy Puzzle" (see below) as a running example, we discuss what the various tools for the process algebra CSP offer for modelling and verifying systems. Here, we focus especially on interactive theorem proving for CSP, as exemplified in Steve Schneider's work or in our tool CSP-Prover. Besides the power to analyse infinite-state systems, the theorem proving approach offers the possibility for deeper reflections on CSP. Here we discuss how it allows one to verify the algebraic laws of the language and, furthermore, how it allows one to prove meta-results such as the completeness of axiomatic semantics.
Children & Candy Puzzle: "There are k children sitting in a circle. In the beginning, each child holds an even number of candies. The following step is repeated indefinitely: Every child passes half of her candies to the child on her left; any child who is left with an odd number of candies is given another candy from the teacher. Claim: Eventually, all children will hold the same number of candies."
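A quick simulation (not a proof) of the puzzle's claim can be written in a few lines of Python; the parameter choices below are arbitrary.

import random

def candy_round(candies):
    # One round: each child passes half her candies to the child on her left;
    # anyone left with an odd count receives one extra candy from the teacher.
    k = len(candies)
    half = [c // 2 for c in candies]
    kept = [c - h for c, h in zip(candies, half)]
    new = [kept[i] + half[(i + 1) % k] for i in range(k)]   # receive from the right neighbour
    return [c + 1 if c % 2 else c for c in new]

def simulate(k=7, seed=0):
    rng = random.Random(seed)
    candies = [2 * rng.randint(1, 10) for _ in range(k)]    # even starting counts
    rounds = 0
    while len(set(candies)) > 1:
        candies = candy_round(candies)
        rounds += 1
    return candies, rounds

print(simulate())   # all children end up with the same number of candies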
Landscape Analysis of Bayesian Network Structure Learning Algorithms
Wednesday 22 June 2011
Bayesian Networks (BN) are an increasingly important tool for mining complex relations in large data sets. A major focus of current research is in efficient and effective ways of learning those essential interactions between variables, known as structure, that allow efficient factorisation of the joint probability distribution of the data. This in turn provides a platform for prediction, inference and simulation.
Watermarking Seminar
Monday 27 June 2011
In this research, we propose a camera identification technique based on conditional probability features (CP features). Specifically, we focus on its performance in identifying the source of images taken by cameras of different models. Using 4 cameras, we demonstrate that the CP features are able to perfectly match the test images with their source in 8 out of 10 independent tests conducted. Additionally, the CP features are also able to perfectly match cropped and compressed test images with their source in 9 out of 10 independent tests. These findings provide a good indication that CP features are beneficial in image forensics.
The Modelling and Analysis of Buyer-Seller Watermarking Protocols
Wednesday 29 June 2011
The primary benefit of digital content, the ease with which it can be duplicated and disseminated, is also the primary concern when endeavouring to protect the rights of those creating the content. Copyright owners wish to deter illicit file sharing of copyrighted material, detect it when it occurs and even trace the original perpetrator. Embedding a unique identifying watermark into licensed multimedia content enables those selling digital content to trace illicit acts of file sharing back to a single transaction with a single buyer. However, for a seller to prove such behaviour to an arbitrator, evidence of the illicit activity must be gathered if and only if the buyer truly shared the content. For this purpose, Buyer-Seller Watermarking (BSW) protocols have been developed to be used in conjunction with digital watermarking schemes.
Variable-Length Codes for Joint Source-Channel Coding
Thursday 30 June 2011
Since the introduction of Huffman codes back in 1952, variable-length codes have been used in several data compression standards, including the latest video coding standards such as H.264, usually as part of their entropy coding sub-systems. Although not as good as other data compression schemes, such as arithmetic coding, they remain popular in practical implementations due to their simplicity. However, it was realized early on that variable-length codes suffer from error propagation under noisy conditions. Several techniques have been proposed to mitigate this behaviour, including the use of synchronisation codewords, self-synchronising codes and reversible variable-length codes.
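For readers who have not met them, the sketch below builds a Huffman prefix code from symbol frequencies with a heap; it is a textbook construction for illustration only and does not address the error-propagation issue the talk is about.

import heapq
from collections import Counter

def huffman_code(text):
    # Build a Huffman prefix code (a minimal sketch) for the symbols in `text`.
    freq = Counter(text)
    # Each heap entry: (weight, unique tie-breaker, {symbol: codeword}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol case
        return {s: "0" for s in heap[0][2]}
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, i2, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, i2, merged))
    return heap[0][2]

code = huffman_code("variable-length codes")
print(sorted(code.items(), key=lambda kv: (len(kv[1]), kv[0])))   # shorter codewords for frequent symbols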
When Computational Intelligence Meets Computational Biology
Wednesday 6 July 2011
In this talk, Dr Shan He will briefly introduce his multi-disciplinary research in the areas of computational intelligence and computational biology.
Firstly he will introduce his ongoing research in applying Computational Intelligence, e.g., evolutionary computation to metabolomics. Then Dr He will present a novel ensemble-based feature selection algorithm for discovering putative biomarkers from high-dimensional omics data. Finally, he will present his work in simulating the evolution of animal self-organising behaviour using evolutionary agent-based modelling.
Developmental Evaluation in Genetic Programming
Monday 11 July 2011
We investigate interactions between evolution, development and lifelong layered learning in a combination we call Evolutionary Developmental Evaluation (EDE). It is based on a specific implementation, Developmental Tree-Adjoining Grammar Guided GP (DTAG3P).
Enhancement of Multiple Fibre Orientation Reconstruction in Diffusion Tensor Imaging by Single Channel ICA
Monday 11 July 2011
To date, diffusion tensor imaging (DTI) is the only non-invasive tool available to reveal the neural architecture of human brain white matter. Advances in DTI techniques have shown great potential in the study of brain white matter related diseases such as depression, traumatic brain injury and Alzheimer's disease (AD). In DTI, a reliable reconstruction of neural fibre structure relies on the accurate estimation of fibre orientation distribution function (fODF) from each individual voxel in diffusion weighted images (DWI).
Robust and Semi-fragile Watermarking Techniques for Image Content Protection
Tuesday 12 July 2011
With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important and of major concern to many government and commercial sectors. In the past decade or so, digital watermarking has attracted much attention and offers some real solutions for protecting the copyright and authenticating digital images. Four novel robust and semi-fragile transform-based image watermarking schemes are introduced. These include the wavelet-based contourlet transform (WBCT) for both robust and semi-fragile watermarking, the slant transform (SLT) for semi-fragile watermarking, and applying the generalised Benford's Law to estimate JPEG compression and then adjust the appropriate threshold to improve the semi-fragile watermarking technique.
Video Watermarking and Forensics
Monday 18 July 2011
Video Watermarking and Forensics is the next talk in the Watermarking series of events.
Scalability Aspects of Remote Voting Systems
Tuesday 19 July 2011
Advances in electronic voting have made it possible to run more robust and transparent elections than previously possible with traditional paper-based voting. The higher security guarantees given by electronic voting are intended to raise standards of democracy in modern society and reduce the possibility of election malpractice. Despite these obvious benefits, the rate at which electronic voting systems have been adopted across the world, especially for legally binding elections, has been very slow. Only a few countries, such as Estonia and Switzerland, have successfully conducted elections at national level. The voter populations in these countries are comparatively small and the voting systems are in most instances integrated with existing security infrastructures. However, in recent elections there has been speculation as to the adequacy of the security used.
Estimation of Single Trial ERPs and EEG Phase Synchronization with Application to Mental Fatigue
Thursday 21 July 2011
Monitoring mental fatigue is a crucial and important step for the prevention of fatal accidents. This may be achieved by understanding and analysing brain electrical potentials. Electroencephalography (EEG) is the record of the electrical activity of the brain and offers the possibility of studying brain function with a high temporal resolution. EEG has been used as an important tool by researchers for detection of the fatigue state. However, the proposed methods have been limited to classical statistical solutions and the results reported by different researchers are somewhat conflicting. Therefore, there is a need to modify the existing methods for reliable analysis of mental fatigue and detection of the fatigue state.
Participatory Sensing: Qualitative Changes in Information and Social Networks
Thursday 21 July 2011
Recent technological advances have caused an infrastructural paradigm shift and the rapid growth of communities that are connected by virtual means. The value of the Web is growing constantly, with ever more users joining and contributing to the network. Fortunately, unlike conventional social networks, the connections in a virtual setting are clearly visible for analysis.
Orthogonal Least Squares Regression: An Efficient Approach for Parsimonious Modelling from Large Data
Wednesday 27 July 2011
The orthogonal least squares (OLS) algorithm, developed in the late 1980s for nonlinear system modelling, remains highly popular with nonlinear data modelling practitioners, because the algorithm is simple and efficient, and is capable of producing parsimonious nonlinear models with good generalisation performance. Since its derivation, many enhanced variants of OLS forward regression have been developed by incorporating recent developments from machine learning. Notably, regularisation techniques, optimal experimental design methods and leave-one-out cross validation have been combined with the OLS algorithm. The resultant class of OLS algorithms offers the state of the art for parsimonious modelling from large data.
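A much simplified sketch of OLS forward regression is given below: at each step the candidate regressor with the largest error reduction ratio is orthogonalised against the already-selected terms and added. It omits the regularisation, experimental design and leave-one-out refinements mentioned above, and all names are my own.

import numpy as np

def ols_forward_select(P, y, n_terms):
    # P: candidate regressor matrix (N x M), y: target vector (N,).
    # Greedy selection by error reduction ratio (ERR) with Gram-Schmidt
    # orthogonalisation against previously chosen terms.
    N, M = P.shape
    selected, Q = [], []
    yy = float(y @ y)
    for _ in range(n_terms):
        best, best_err, best_q = None, -1.0, None
        for j in range(M):
            if j in selected:
                continue
            q = P[:, j].astype(float).copy()
            for qk in Q:                          # orthogonalise against selected terms
                q -= (qk @ q) / (qk @ qk) * qk
            qq = q @ q
            if qq < 1e-12:
                continue
            err = (q @ y) ** 2 / (qq * yy)        # error reduction ratio
            if err > best_err:
                best, best_err, best_q = j, err, q
        selected.append(best)
        Q.append(best_q)
    return selected

# Toy usage: pick the 3 most useful of 10 candidate regressors.
rng = np.random.default_rng(0)
P = rng.standard_normal((200, 10))
y = 2.0 * P[:, 1] - 1.5 * P[:, 4] + 0.5 * P[:, 7] + 0.1 * rng.standard_normal(200)
print(ols_forward_select(P, y, 3))    # typically indices 1, 4 and 7, in some order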
Perception at the Green Man Festival
Friday 19 August 2011
Einstein's Garden at the Green Man Festival this year will be supported by the Department of Computing. As part of the "Science at Play" exhibit, Matthew Casey will be lending his expertise to a series of activities themed around perception.
The Cyber Threat: Into the Danger Zone!
Wednesday 21 September 2011
Our next Department Seminar speaker will be Dr Alastair MacWillson, Global Managing Director at Accenture Technology Consulting, London.
With the number of ‘cyber attacks’ on the rise, targeting government and industry across the globe, it is clear that most organisations are now facing a whole new category of threat. At the same time, inherent weaknesses in enterprise IT and ineffective approaches to information security are putting organisations at risk as never before. There is a growing realisation that confronting these advanced threats calls for a whole new doctrine of defence. Keeping pace with the digital arms race requires constantly re-evaluating your position against the threats and adapting your information security strategies. Intelligence gathering has become an essential core competency for every security team.
Development and Analysis of Advanced Image Steganalysis Techniques
Wednesday 5 October 2011
Steganography is the art of providing a secret communication channel for the transmission of covert information. At the same time, it can be used by cybercriminals to conceal their activities. This potential illegal use of steganography is the basis for the objectives of this thesis. The thesis initially reviews possible flaws in current implementations of steganalysis. By using images from different camera types, it confirms the expectation that steganalysis performance is significantly affected by differences in image sources. We also show that image compression in a steganalysis process has an impact on steganalysis performance, as claimed in the literature. A review of currently available steganalysis techniques, along with a proposal to overcome the said problems, is also presented.
An Artificial Neuromodulatory System for Improved Control of a Walking Robot
Thursday 6 October 2011
The autumn series of Nature Inspired Computing and Engineering Research Group seminars begins with our first seminar on Thursday 6th October.
This talk will present a controller tuning algorithm inspired by the 'Bayesian brain' hypothesis: the idea that the brain models its environment in terms of probabilities and uses approaches similar to those of Bayesian statistics to make decisions. The tuning algorithm combines this theory with current understanding of the neuromodulatory system, specifically the idea that neuromodulation is a mechanism for adjusting the hyperparameters of learning algorithms. It has been applied to three different components of a walking robot controller: the leg coordination component, which guides the robot towards a target while avoiding obstacles; the trajectory planning component, which calculates the paths of each individual leg; and the tracking controller, which ensures the desired path is followed. The final controller demonstrates adaptability and robustness, as well as being reliable and improving efficiency by reducing power and torque requirements.
MSF Seminar 1
Friday 7 October 2011
This seminar given by Dr Shujun Li will kick off a series of seminars run by the MSF group.
A general method for recovering missing DCT coefficients in DCT-transformed images is presented in this work. We model the DCT coefficient recovery problem as an optimization problem and recover all missing DCT coefficients via linear programming. The visual quality of the recovered image gradually decreases as the number of missing DCT coefficients increases. For some images, the quality is surprisingly good even when more than 10 of the most significant DCT coefficients are missing. When only the DC coefficient is missing, the proposed algorithm outperforms existing methods in experiments conducted on 200 test images. The proposed recovery method can be used for cryptanalysis of DCT-based selective encryption schemes, among other applications. We also discuss possible extensions of the optimization model to other problems in multimedia coding, security and forensics.
Singular Spectrum Analysis and its Application in Physiological Signal Separation
Thursday 13 October 2011
Most subspace-based signal separation methods are applicable when a sufficient number of signal mixtures is available, which requires multichannel recordings. Separation of signal sources from single-channel recordings, on the other hand, is often of very poor quality, if not impossible, since the so-called subspaces of the signal components are unknown.
Singular spectrum analysis (SSA) deals with decomposition of the data into more meaningful subspaces where the desired signal components are characterised. Periodic signals, spikes, and those for which some a priori knowledge is available can be well defined in the eigenspace of the SSA. Some applications of this approach will be explained and a new SSA-based adaptive filter for recovery of periodic physiological signals from their single channel mixtures will be presented.
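The following sketch shows the basic SSA pipeline (embedding into a trajectory matrix, SVD, grouping of singular components and diagonal averaging) on a noisy sinusoid; it is a generic illustration, not the adaptive SSA filter presented in the talk.

import numpy as np

def ssa_components(x, L, groups):
    # x: 1-D signal, L: embedding window length,
    # groups: lists of singular-component indices to combine.
    # Returns one reconstructed series per group (via diagonal averaging).
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix (L x K)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    out = []
    for g in groups:
        Xg = (U[:, g] * s[g]) @ Vt[g, :]                  # rank-|g| reconstruction
        y = np.zeros(N)
        counts = np.zeros(N)
        for i in range(L):                                # diagonal (Hankel) averaging
            for j in range(K):
                y[i + j] += Xg[i, j]
                counts[i + j] += 1
        out.append(y / counts)
    return out

# Toy usage: separate a slow sinusoid from noise in a single-channel mixture.
t = np.arange(500)
x = np.sin(2 * np.pi * t / 50) + 0.5 * np.random.default_rng(0).standard_normal(500)
periodic, = ssa_components(x, L=100, groups=[[0, 1]])     # the leading pair spans the sinusoid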
MSF Seminar 2b
Friday 14 October 2011
This paper introduces a novel area of research to the Image Forensic field; identifying High Dynamic Range (HDR) digital images. We create and make available a test set of images that are a combination of HDR and standard images of similar scenes. We also propose a scheme to isolate fingerprints of the HDR-induced haloing artefact at “strong” edge positions, and present experimental results in extracting suitable features for a successful SVM-driven classification of edges from HDR and standard images. A majority vote of this output is then utilised to complete a highly accurate classification system.
MSF Seminar 2a
Friday 14 October 2011
In the past few years, semi-fragile watermarking has become increasingly important for verifying the content of images and localizing tampered areas, while tolerating some non-malicious manipulations. Moreover, some researchers have proposed self-restoration schemes to recover the tampered area in semi-fragile watermarking schemes. In this paper, we propose a novel fast self-restoration scheme, resistant to JPEG compression, for semi-fragile watermarking. In the watermark embedding process, we embed ten watermarks (six for authentication and four for self-restoration) into each 8×8 block of the original image. We then utilise four 4×4 sub-blocks' mean pixel values (the extracted watermarks) to restore the corresponding 8×8 block's first four DCT coefficients for image content recovery. We compare our results with the DCT-based schemes of Li et al. and Chamlawi et al. The PSNR results indicate that the imperceptibility of our watermarked image is high at 37.61 dB, approximately 4 dB greater than the other two schemes. Moreover, the restored image is at 24.71 dB, approximately 2 dB higher than the other two methods on average. Our restored image also achieves 24.39 dB, 22.98 dB, 21.18 dB and 19.98 dB after JPEG compression with QF=95, 85, 75 and 65, respectively, which is approximately 2.5 dB higher than the other two self-restoration methods.
NICE Seminar 3
Thursday 20 October 2011
Next in the series of the autumn seminars. All are welcome to attend. Details to follow.
MSF Seminar 3
Monday 24 October 2011
Digital videos are widely used in today's society due to the availability of a wide range of affordable digital video cameras with different specifications and functions. The manipulation of digital video is made simple by easily available processing tools, making it harder to trust video content. This is where digital forensics becomes important: to help restore trust in the integrity of the evidence. Digital forensics helps by providing essential information about a video, such as tracing the source of a digital video to the device that captured it. In this research, we propose a video camera identification technique based on conditional probability features (CP features). Specifically, we focus on its performance in identifying the source of videos captured by cameras of different models. Using three cameras of different models, we demonstrate that the CP features are able to correctly match the test video frames with their source. These findings provide a good indication that CP features are suitable for digital video forensics.
NICE Seminar 4
Thursday 27 October 2011
The next in the series of NICE seminars. Details to follow.
BCS Seminar
Thursday 27 October 2011
A British Computer Society event. This event is open to Members and Non-Members. Students are particularly welcome.
Please see the Branch website for further details.
NICE Seminar 5
Thursday 3 November 2011
Dr Daniel Bush is our next seminar speaker in this series of NICE seminars. All are welcome to attend.
MSF Seminar 4 (A paper reading seminar)
Friday 4 November 2011
Virtually all optical imaging systems introduce a variety of aberrations into an image. Chromatic aberration, for example, results from the failure of an optical system to perfectly focus light of different wavelengths. Lateral chromatic aberration manifests itself, to a first-order approximation, as an expansion/contraction of color channels with respect to one another. When tampering with an image, this aberration is often disturbed and fails to be consistent across the image. We describe a computational technique for automatically estimating lateral chromatic aberration and show its efficacy in detecting digital tampering.
NICE Seminar 6
Thursday 10 November 2011
Dr André Grüning will be our NICE seminar presenter this week. Details to follow. We look forward to a good turnout for this interesting talk.
MSF Seminar 5
Friday 11 November 2011
In this paper we present SULFA (Surrey University Library for Forensic Analysis) for benchmarking video forensics. This novel video library has been built for the purpose of video forensics, specifically camera identification and integrity verification. It contains original as well as forged video files, which will be freely available through the University of Surrey website. There are about 150 videos, collected from three camera sources: a Canon SX220 (H.264 codec), a Nikon S3000 (MJPEG codec) and a Fujifilm S2800HD (MJPEG codec). Each video is approximately 10 seconds long with a resolution of 240x320 at 30 frames per second. All videos were shot after carefully considering both temporal and spatial video characteristics. In order to represent life-like scenarios, various complex and simple scenes were shot with and without camera support (a tripod).
MSF Seminar 6
Friday 18 November 2011
Near field communication (NFC) is a standard-based wireless communication technology that allows data to be exchanged between devices (computers, TVs, mobiles, ...) that are a few centimeters apart. There will be a significant number of interesting applications (payment, ticketing, e-keys, mobile coupons, ...). One of these applications is mobile coupons (mCoupons), where users can obtain coupons from an NFC mCoupon issuer (a smart poster on the street or an NFC-tagged newspaper) just by touching it with their NFC-capable mobiles. However, it would cause huge losses for companies if these coupons were issued in an uncontrolled way. Therefore, a secure protocol is needed to meet the requirements of mCoupons. Moreover, it must be formally verified, as all security protocols must be, before the system is built in reality.
An mCoupon protocol has been proposed in the literature. Ali formally analysed the protocol using CasperFDR2, which resulted in an attack being found. However, whether this attack is feasible in reality is a further challenge, especially given the different communication nature of NFC.
Dream or Nightmare? An HR director's perspective of managing the graduate talent pool into work
Wednesday 23 November 2011
NICE Seminar 8
Thursday 24 November 2011
MSF Seminar 10
Monday 23 January 2012
In this discussion, we build upon our novel research in High Dynamic Range (HDR) imaging for Image Forensics by comparing the 'halo' artifact with a similar artifact caused by JPEG compression, known as the 'ringing' artifact (the Gibbs phenomenon). We briefly discuss the relationship between these two artifacts, how they appear in the Fourier transform space, and how reliable our scheme is at distinguishing between the two. We also analyse and evaluate each step of our algorithm in order to optimise it for producing more accurate results. Finally, a new framework for detecting the halo artifact is presented with reference to existing schemes that detect the ringing artifact, and our initial results will be discussed.
Cyber Security Threat Landscape and Microsoft Strategy
Wednesday 25 January 2012
MSF Seminar 11
Monday 6 February 2012
This is the last MSF group seminar scheduled for the last semester, originally planned for 30th January but postponed to 6th February. After this seminar we will begin a new series of group seminars for the new semester. This time Miss Hui Wang will report on her study of a recent paper related to her research on semi-fragile watermarking for self-restoration.
Email forensics
Wednesday 8 February 2012
Practical Optimisation of Novel Continuously Variable Transmission Design
Thursday 9 February 2012
Advanced optimisation techniques have been empirically successful in hundreds of applications; however, their complexity often means they are not utilised by individuals from other disciplines who do not necessarily have the background or time to fully understand their potential. Whilst these techniques perform exceptionally well on test problems designed to catch out simpler optimisation approaches, in practical situations they are not always necessary. This presentation presents a remarkably simple optimisation technique that has been successfully used to improve the specific efficiency and general characteristics of a novel transmission system. Results are compared to a canonical genetic algorithm and discussed in relation to practical considerations.
The Accumulation Theory of Ageing
Thursday 16 February 2012
Lifespan distributions of populations of quite diverse species, such as humans and yeast, seem to follow surprisingly well the same empirical Gompertz-Makeham law, which basically predicts an exponential increase of mortality rate with age. This empirical law can, for example, be grounded in reliability theory, where individuals age through the random failure of a number of redundant essential functional units.
However, ageing and subsequent death can also be caused by the accumulation of "ageing factors", for example noxious metabolic end products or genetic anomalies, such as self-replicating extra-chromosomal DNA in yeast.
We first show how Gompertz-Makeham behaviour arises when ageing factor accumulation follows a deterministic self-reinforcing process. We then go on to demonstrate that such a deterministic process is a good approximation of the underlying stochastic accumulation of ageing factors, where the stochastic model can also account for the old-age levelling off of the mortality rate.
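For reference, the Gompertz-Makeham law mentioned above is usually written as a hazard (mortality) rate of the form

\mu(t) \;=\; \lambda + \alpha\, e^{\beta t}, \qquad \alpha, \beta, \lambda \ge 0,

where \lambda is the age-independent (Makeham) term and \alpha e^{\beta t} is the exponentially increasing (Gompertz) term; this is standard textbook notation, not taken from the talk itself.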
Identity Issues and Management
Friday 17 February 2012
6th Form Lecture Series - Public-Key Cryptography
Wednesday 22 February 2012
Cryptography has been around for thousands of years, but the newer public-key cryptography is the foundation of secure internet communication and you use it every time you access a secure website. This talk will give a brief history and explain why public-key cryptography is so extraordinary, how it works using surprisingly simple mathematics (prime numbers, multiplication and powers), how you can construct your own public-key system, and how it is used in the real world across a wide range of applications.
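In the spirit of "construct your own public-key system", here is a textbook RSA toy with tiny primes, written in Python; it illustrates the prime numbers, multiplication and powers mentioned above and is far too small to be secure.

# Textbook RSA with tiny primes: fine for a classroom demo, not for real use.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi), Python 3.8+

message = 65
ciphertext = pow(message, e, n)      # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)    # decrypt with the private key (d, n)
print(ciphertext, recovered)         # 2790 65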
Double our research income: Sharing my personal experience for EU Funding
Thursday 23 February 2012
While NICE group seminars are supposed to share specific research activities, Prof Jianmin Jiang believes that it might be more useful to talk about funding, to encourage the sharing of experiences and secret weapons among colleagues in the department. There is an old Chinese saying, "I throw stones out in the hope of attracting pieces of gold", which can be used to describe Prof Jianmin Jiang's motivation for this talk.
In this presentation, Prof Jianmin Jiang will talk about: (i) a brief introduction of European FP7 open funding schemes under call-9 (deadline 17th of April); (ii) How to bid for funding in FP7, his personal view; (iii) his personal plans and research activities.
The Delivery of Managed Security Services
Friday 24 February 2012
Tony Dyhouse will discuss some standards applicable to the fields of Information Assurance and Service Delivery, illustrating areas of commonality with regard to aim and approach. Different mechanisms for the protection of CIA will be discussed from the point of view of risk transference and third-party provision of services, including a look at potential conflicts of interest and how they can be addressed. Finally, he will offer a view on advancing technology and Cloud services.
Automatic Detection of Apneas and Other Medical Conditions Through Analysis of Breathing Patterns During Sleep
Thursday 1 March 2012
Snoring may not only be unpleasant, but may also indicate some serious medical problems. Quite a few medical conditions are related to certain breathing sounds during sleep. Thus pattern analysis on such information becomes necessary. We present a system that is capable of recording overnight breathing sounds along with sensor data, and performs almost real time analysis using signal processing techniques. The analysis results can be used as a first indication concerning some serious medical conditions such as apnea, hypopnea and others. The system may also potentially be used for general sleep quality analysis and measurement. The first results from 400 patients show a high level of accuracy.
Also let me add that, since this is a sleep-related talk and I won't be too technical, it is suitable for people from the sleep clinic I know exists in the University.
Quality as a prerequisite for security in interoperable systems
Friday 2 March 2012
Considerable effort goes into specifying secure systems and security protocols, and the equipment in which these are embodied. In most cases the specification concentrates on positive cases, with very little attention given to failure modes.
This talk will concentrate on limitations that are imposed on our ability to make assertions about the security of a system where we are unable to understand the quality of the implementation. It will do so by examining the types of failure that have led to security system failures.
Finally, the talk will examine some of the extant security protocols and show that these provide very little support for identifying and guaranteeing the quality of components networked together in a distributed system.
Professional Information and Network Security – A Risky Business
Wednesday 7 March 2012
Dr Nowill will present a view of Cyber and Cyber Security from the Telecommunications perspective, as well as covering the sort of work that professionals address in information and network security and related opportunities. This will be drawn from a wide range of experiences and examples with links to the National Cyber Security Strategy and how industry contributes, as well as how individuals may contribute at a personal level.
Reservoir Computing and Spike Timing Dependent Plasticity
Thursday 8 March 2012
Reservoir computing and the liquid state machine model have received much attention in the literature in recent years. We investigate the use of a reservoir composed of a network of spiking neurons, with synaptic delays, whose synapses are allowed to evolve using a tri-phasic spike timing-dependent plasticity (STDP) rule. The effects of the tri-phasic STDP rule on the network properties are compared to those found using the more common exponential form of the rule. It is found that each rule causes the synaptic weights to evolve in significantly different ways, giving rise to different network dynamics.
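To fix ideas, a minimal NumPy sketch of the two STDP window shapes being compared is given below; the functional forms and all parameter values are illustrative assumptions, not those used in the study.
    # Illustrative comparison of exponential and tri-phasic STDP windows.
    # Functional forms and constants are assumptions for this sketch only.
    import numpy as np

    def stdp_exponential(dt, a_plus=0.10, a_minus=0.12, tau=20.0):
        """Classic exponential STDP: potentiate when pre fires before post (dt > 0)."""
        return np.where(dt > 0, a_plus * np.exp(-dt / tau), -a_minus * np.exp(dt / tau))

    def stdp_triphasic(dt, a=0.25):
        """Tri-phasic window: a potentiation lobe near dt ~ +15 ms flanked by depression."""
        return a * np.exp(-(dt - 15.0) ** 2 / 200.0) - 0.1 * a * np.exp(-(dt - 15.0) ** 2 / 2000.0)

    dts = np.linspace(-80, 80, 9)          # pre/post spike-time differences (ms)
    print(np.round(stdp_exponential(dts), 4))
    print(np.round(stdp_triphasic(dts), 4))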
Cyber: The Industry – Government Partnership
Friday 9 March 2012
In recent years it has become clear that Government carries explicit or implicit responsibility for maintaining a nation’s security. Doctrine, policy and Government spend is well established in areas such as defence. However, in the emerging areas of cyber security, now identified by the UK Government as a Tier 1 threat, the relationship is different. In this case, security is provided primarily by industry experts, following the rapidly changing capabilities of the wider IT and communications industry. As such, Government(s) have little control over these global industries and must seek more sophisticated leveraged models to ensure the nation’s security is protected. Andrew will discuss some of the issues and developments associated with this subject, drawing upon his recent work for the Cabinet Office and parts of industry.
Data Mining of Portable EEG Signals for Sports Performance Analysis
Thursday 15 March 2012
The mental ability of an athlete is as crucial as their physical performance. Achievement in high performance sport requires an appropriate ‘state of mind’, which is trained alongside the physical activity. However, quantification of mental state is needed to identify, train and improve it through coaching. With the advent of a new generation of portable, compact EEGs we can measure the neurocognitive activity of an athlete’s brain. We present evidence suggesting that the ‘state of mind’ of an athlete can be measured and compared. Measurements are taken from youth, near elite and elite (GB team) archers investigating:
- quantification of archer EEG signals
- correlation of EEG data across shots
- correlation of EEG data across archers
Results demonstrate that there are measurable changes in EEG patterns during a shot, with evidence suggesting that the patterns vary as a function of skill level, but not necessarily as a function of score.
This work was sponsored by the Surrey EPSRC KTA award and was done in collaboration with
- Matthew Casey & Alan Yau, Department of Computing, University of Surrey
- Keith M Barfoot, Alpha-Active Ltd
- Andrew Callaway, Centre for Event & Sport Research, Bournemouth University
9th Annual Computing PhD Conference
Wednesday 21 March 2012
On Wednesday 21st March 2012, the Department will hold its 9th Annual PhD Conference. The conference celebrates the work of all of our PhD students through presentations and posters, recognising their valuable contribution to computer science research. The purpose of the conference is to give students an opportunity to experience a conference environment as well as providing a showcase of the current research being performed in the department.
Professor Sir Christopher Snowden, the Vice Chancellor of the University of Surrey, will give an opening address at the conference, and two keynote speakers will present motivational talks (one in the morning and the other in the afternoon): Dr Alastair MacWillson, the Global Managing Director of Accenture Technology Consulting's global security practice, and Professor Dave Robertson, Head of Informatics, University of Edinburgh.
More detail about the conference can be found here. The tentative programme of the conference is available at http://www.compconf.org.uk/agenda_phd_conference.pdf.
Hierarchical Multi-Label Classification
Thursday 22 March 2012
Hierarchical Multi-Label Classification is a complex classification problem where an instance can be assigned to more than one class simultaneously, and these classes are hierarchically organized into superclasses and subclasses, i.e., an instance can be classified as belonging to more than one path in a hierarchical structure. We investigate the use of a neural network method and a genetic algorithm for this classification task, and compare their performance with other methods in the literature.
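As a toy illustration of the hierarchy constraint (an instance assigned to a class also belongs to all of its superclasses), the following sketch closes a predicted label set under a made-up class hierarchy; it is not the neural or genetic method investigated here.
    # Toy hierarchy-consistency step: every predicted class implies its ancestors.
    # The hierarchy and the predicted labels are invented for illustration.
    parents = {                      # child -> parent (None marks the root)
        "animal": None, "mammal": "animal", "dog": "mammal",
        "bird": "animal", "eagle": "bird",
    }

    def with_ancestors(labels):
        """Return the label set closed under the superclass relation."""
        closed = set()
        for lab in labels:
            while lab is not None:
                closed.add(lab)
                lab = parents[lab]
        return closed

    print(sorted(with_ancestors({"dog", "eagle"})))
    # ['animal', 'bird', 'dog', 'eagle', 'mammal']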
Security in cyberspace: what is "good enough"?
Friday 23 March 2012
Cyberspace is the new buzz word but what is cyberspace? And what do we mean to "make it secure"? What is the role of the developer? Security is rarely seen as a business enabler - more an irksome and expensive disabler. How can we make security a valued capability? How does a developer determine what security features are proportionate, how much is good enough? And how do you demonstrate necessity and sufficiency to someone else, such as the person paying for it? This is really about risk management when applied to the security of IT systems but in this new connected world of cyberspace where the stakes are so much higher.
Surgical Skill Assessment through Instrument Motion Analysis (SENTIMENT)
Thursday 29 March 2012
The formal assessment of surgical skills has grown in importance over recent years, with increasing evidence that unstructured systems of assessment have poor reproducibility, large inter-observer variation and a lack of quantifiable measures. A paradigm shift has therefore begun, with the emergence of more objective and quantitative tools devised to complement current practice. In this work we aim to determine, through computer vision algorithms, whether instruments can be tracked with sufficient accuracy during surgery for the quantitative data obtained to be directly related to surgical performance, across a range of cataract types, surgeons and equipment.
This therefore has the potential to contribute to feedback on dexterity during surgery, formative assessment of surgical performance, surgical training and, through a combination of all of the above, improved patient safety. We present a robust algorithm based upon SURF point detection and optical flow that is capable of measuring instrument movement throughout the course of many surgical procedures. In addition, current experiments have shown that such measurements can separate surgeons of different levels based on their operation videos and estimate their surgical skill.
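A generic sparse-feature tracking sketch with OpenCV is shown below to indicate the kind of per-frame motion measurement involved; it uses Shi-Tomasi corners and Lucas-Kanade optical flow as a stand-in for the SURF-based pipeline described in the talk, and the video file name is a placeholder.
    # Generic sparse-feature tracking sketch (not the authors' SURF pipeline).
    # "surgery.mp4" is a placeholder file name.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("surgery.mp4")
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_new, good_old = new_pts[status == 1], pts[status == 1]
        motion = np.linalg.norm(good_new - good_old, axis=1).mean()  # mean displacement (px)
        print(f"mean feature motion this frame: {motion:.2f} px")
        prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
    cap.release()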
The Cyber Threats, Managing the Risk to an Enterprise
Friday 30 March 2012
The corporate IT network is a battleground for a range of modern threats. Cybercriminals looking to make financial gain, attention-seekers trying to create headlines, professional hackers aiming to steal sensitive documents, and even moles acting as legitimate employees all create significant risks for those tasked with defending the network. To make matters worse, the traditional technology and methods of these defenders have failed to keep pace with the ingenuity of the attackers and the industrialisation of their methods. This seminar will give an overview of the threat landscape and draw on real case studies from investigations done by the Detica Treidan Cyber Intrusion Detection team. Examples of social engineering tricks, exploits, custom malware, and threat correlation will be presented along with descriptions of some of the cutting-edge techniques being used to detect them.
Innovations in data storage: MB to TB
Wednesday 2 May 2012
Steve will give a view on data storage from the inside of some of the key innovations that facilitated the growth of storage density and arguably underpinned the Information Age.
Reservoir Computing
Thursday 3 May 2012
Reservoir Computing is a paradigm that has emerged over the last decade as a viable model of how generic neural circuits can be applied to a range of classification and regression machine learning tasks. Shown to deal elegantly with a range of real-world spatio-temporal signals, such as speech and human motion, Reservoir Computing has demonstrated an advantage over more orthodox neural network techniques.
In this talk, we present our work on extending a Reservoir Computing model so that it can learn long-term, non-stationary data -- 50 years of weather data in our case. To allow the neural circuit to adapt to seasonal and climatic changes over this period, we apply a range of regulated Hebbian-based plasticity rules to the synapses, as observed in real biological neural networks in neuroscience.
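For readers unfamiliar with the paradigm, a minimal echo state network in NumPy is sketched below as the standard Reservoir Computing baseline; the plasticity-adapted reservoir described in the talk would replace the fixed random recurrent weights, and all sizes and scalings here are illustrative.
    # Minimal echo state network (fixed random reservoir + linear readout).
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res = 1, 200
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

    def run_reservoir(u):
        """Collect reservoir states for an input sequence u of shape (T, n_in)."""
        x = np.zeros(n_res)
        states = []
        for u_t in u:
            x = np.tanh(W_in @ u_t + W @ x)
            states.append(x.copy())
        return np.array(states)

    u = np.sin(np.linspace(0, 20, 500)).reshape(-1, 1)
    target = np.roll(u, -1)                            # toy one-step-ahead prediction task
    X = run_reservoir(u)
    W_out = np.linalg.lstsq(X, target, rcond=None)[0]  # train only the linear readout
    print("train MSE:", np.mean((X @ W_out - target) ** 2))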
Multi-Level Security (MLS) – What is it, why do we need it, and how can we get it
Thursday 3 May 2012
MLS has been a field of study in computer science for decades, and MLS systems have been developed and deployed for high assurance defence and government applications. However, in recent years other users with less stringent security requirements have been talking about their need for "MLS", and have been attempting to use traditional MLS solutions in their systems. In this talk, we take a look at the varied applications that are claimed to require "MLS" and attempt to reconcile their different interpretations of the term. We then survey existing and proposed MLS technologies, discuss some of their drawbacks when compared with these applications' requirements, and propose some areas for future research.
Security issues for developers using Microsoft technologies
Friday 4 May 2012
Chris's first presentation will demonstrate application security threats, showing actual code exploits and how they can be prevented. This is based on his experience as a security consultant and his time working as a developer. The presentation will include live demonstrations of various types of web-site attack, with full code examples. Chris will then give an overview of the secure application lifecycle within a large organisation and some of the issues faced. How do banks keep ahead of both external attackers and internal threats, such as rogue traders?
The second presentation will look at application specific methods for securing communications. This will delve into subjects such as WS-Security and WS-Federation. This is true application-level security, incorporating XML encryption methods. Many third party applications now offer a WS-Security authentication suite, allowing complex web service security facilities, such as federated identity.
Digital & Electronics Forensics Defined
Tuesday 8 May 2012
The talk will look at what Digital Forensics is, as compared to what is termed "computer forensics", and will focus on mobile devices and solid-state "Flash" memory, together with the tools and techniques used to recover extant and deleted data. Mark will look at a case example, the East Midlands Printer Bomb, and will conclude with a look at how current trends in technology could question conventional wisdom in the recovery of data from digital devices.
Diploid Evolution in Varying Environments
Thursday 10 May 2012
This research explores the impact of regional changes in conditions on the development of distinct groups in a population of diploid organisms.
In practice it involves computer modelling the growth of separate colonies of plants in different environments and developing appropriate measures of the changes in the population.
Security Aware - Cloud Computing Security & Best Practices
Friday 11 May 2012
Are you ‘if-ing’ about floating to the clouds? Are you hesitant about losing control and becoming weightless, having to deal with the hassle of changing tapes, refreshing equipment, costly overheads? It’s not as bad as you think…you just need to plan and look ahead and don’t let anything cloud your vision. Just focus on what really matters to your business (longevity, cost, overheads, and the risk). Be aware and understand the pros and cons of migrating to the cloud. Knowledge is power!
Areas to cover include:
- What is cloud computing?
- The drivers
- Barriers to adoption
- Approaches to cloud adoption
- The risks and issues
- Top concerns as quoted by CSA – 7 threats to cloud security
- Compliance and security best practices
Banking Security: Attacks and Defences
Wednesday 16 May 2012
Designers of banking security systems are faced with a difficult challenge of developing technology within a tightly constrained budget, yet which must be capable of defeating attacks by determined, well-equipped criminals. This talk will summarise banking security technologies for protecting Chip and PIN/EMV card payments, online shopping, and online banking. The effectiveness of the security measures will be discussed, along with vulnerabilities discovered in them both by academics and by criminals. These vulnerabilities include cryptographic flaws, failures of tamper resistance, and poor implementation decisions, and have led not only to significant financial losses, but in some cases unfair allocation of liability. Proposed improvements will also be described, not only to the technical failures but also to the legal and regulatory regimes which are the underlying reason for some of these problems not being properly addressed.
Optimisation and Prediction of Computational Fluid Dynamic Mesh using Evolutionary Algorithms and Neural Network Surrogate Models
Thursday 17 May 2012
This research aims to use Evolutionary Algorithms (EAs) to optimise Computational Fluid Dynamic (CFD) meshes for a turbulent jet. The Star-CD CFD package is used to construct, solve and post-process the mesh, and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm is used to optimise the mesh. A Recurrent Neural Network (RNN) is also trained to predict converged CFD results from unconverged data, aiming to reduce computation time when CFD simulations are needed for optimisation of either the CFD mesh or the design of turbulent jets.
Results from a mesh optimisation loop were not positive, so attention was focused on training the RNN to predict converged CFD results and preliminary findings from this work have been encouraging.
In this talk we present the motivation, method, results and conclusions of all the work undertaken to date, as well as our future plans and ways in which the research can be further explored.
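As an indication of how such an optimisation loop might look, the sketch below uses the pycma package in an ask/tell loop; mesh_quality_loss is a hypothetical stand-in for the Star-CD construct/solve/post-process pipeline described above.
    # Sketch of a CMA-ES loop over mesh parameters using pycma (pip install cma).
    # mesh_quality_loss is a hypothetical placeholder objective.
    import cma
    import numpy as np

    def mesh_quality_loss(params):
        # Placeholder: in the real setting this would build the mesh, run the
        # CFD solve and return a scalar error against reference data.
        return float(np.sum((params - 0.3) ** 2))

    es = cma.CMAEvolutionStrategy(x0=[0.5] * 6, sigma0=0.2)
    while not es.stop():
        candidates = es.ask()                       # propose mesh parameter vectors
        es.tell(candidates, [mesh_quality_loss(np.asarray(c)) for c in candidates])
    es.result_pretty()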
The Real Effects of Password Policies
Wednesday 23 May 2012
Users are often considered the weakest link in the security chain because of their poor security behaviour. One area with a vast amount of evidence related to poor behaviour is that of password management.
We have a pretty good idea of the extent to which this behaviour impacts on the individual user’s personal security. Unfortunately, we don’t know what the impact of this kind of behaviour by a number of organisational employees is, on a larger scale, nor do we know how best to intervene so as to improve the general security of an organisation as a whole. Current wisdom mandates the use of policies to curb insecure behaviours but it is clear that this approach has limited effectiveness. Unfortunately, no one really understands how the individual directives contained in the policies impact on the security of the eco-system. Sometimes directives have unexpected side-effects which are not easily anticipated.
It would be very difficult to answer this question in a real-life environment. I will describe a simulation engine which models an organisation with employee agents using a number of systems over an extended period. The simulation is tailorable, allowing tweaking of particular system-wide settings in order to implement policy diktats so as to determine their potential impact on the security of the organisation's systems.
This tool supports security specialists developing policies within their organisations by quantifying the longitudinal impacts of particular rules.
Time-space trade-offs in cryptographic enforcement mechanisms for interval-based access control policies
Wednesday 30 May 2012
The enforcement of authorization policies using cryptography has received considerable attention in recent years. Enforcement mechanisms vary in the amount of storage and the number of key derivation steps that are required in the worst case. These parameters correspond, respectively, to the number of edges and the diameter of the graph that is used to represent the authorization policy. In this talk we will consider a particular class of access control policies and the associated graphs. We then present a number of techniques for constructing a new graph that has a smaller diameter than the original graph but enforces the same authorization policy.
The talk is not really about access control or cryptography. Rather, the problem of trade-offs in cryptographic access control gives rise to interesting constructions for reducing the diameter of directed acyclic graphs without adding too many edges. It should be accessible to a general computer science audience.
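A toy illustration of the trade-off, using a textbook construction rather than the scheme presented in the talk: on a chain of n nodes (a total order of security labels), adding shortcut edges at power-of-two distances cuts the worst-case number of derivation hops from n-1 to about log2(n), at the cost of roughly n*log2(n) edges.
    # Chain graph plus power-of-two "shortcut" edges: diameter drops to O(log n)
    # while the edge count grows to O(n log n). Textbook construction, not the
    # speakers' scheme.
    def chain_with_shortcuts(n):
        edges = {(i, i + 1) for i in range(n - 1)}          # original chain edges
        step = 2
        while step < n:
            edges |= {(i, i + step) for i in range(n - step)}
            step *= 2
        return edges

    def greedy_hops(u, v, edges):
        """Hops from u down to v, always taking the longest available jump."""
        hops = 0
        while u < v:
            jump = 1
            while (u, u + jump * 2) in edges and u + jump * 2 <= v:
                jump *= 2
            u += jump
            hops += 1
        return hops

    n = 1024
    E = chain_with_shortcuts(n)
    print(len(E), "edges;", greedy_hops(0, n - 1, E), "hops from 0 to", n - 1)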
Unsupervised Ensemble Learning and Its Application to Temporal Data Clustering
Wednesday 30 May 2012
Temporal data clustering can provide underpinning techniques for the discovery of intrinsic structures and can condense or summarize information contained in temporal data, demands made in various fields ranging from time series analysis to understanding sequential data. In terms of how they treat data dependency in temporal data, existing temporal data clustering algorithms can be classified into three categories: model-based, temporal-proximity and feature-based clustering. However, unlike static data, temporal data have many distinct characteristics, including high dimensionality, complex time dependency, and large volume, all of which make the clustering of temporal data more challenging than conventional static data clustering. A large number of recent studies have shown that unsupervised ensemble approaches improve clustering quality by combining multiple clustering solutions into a single consolidated clustering ensemble that has the best performance among the given clustering solutions. Hence my research concentrates on ensemble learning techniques and their application to temporal data clustering tasks.
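A minimal co-association (evidence accumulation) consensus sketch is given below as one generic example of combining base clusterings; it is not the specific ensemble methods developed in this research, and the synthetic blob data is purely illustrative.
    # Co-association consensus: base clusterings vote on whether two points
    # belong together; the resulting matrix is re-clustered hierarchically.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
    n = len(X)

    co_assoc = np.zeros((n, n))
    for seed in range(10):                                   # ensemble of base clusterings
        labels = KMeans(n_clusters=3, n_init=5, random_state=seed).fit_predict(X)
        co_assoc += (labels[:, None] == labels[None, :])
    co_assoc /= 10.0

    dist = 1.0 - co_assoc                                    # distance = 1 - co-association
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    final = fcluster(Z, t=3, criterion="maxclust")
    print(np.bincount(final))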
Data Mapping and Transformation – Applications in Healthcare Computing
Wednesday 6 June 2012
The costs of delivering healthcare are made much higher by the poor level of integration between healthcare IT systems. Attempts to define information standards for the exchange of healthcare data, by international organisations such as Health Level 7 (HL7), have been at best partially successful. I describe an approach to integration based on semantic data mapping – mapping diverse data formats onto a common UML model of information, and automatically generating data transforms – which has the potential to reduce the cost and complexity of integrating healthcare IT systems. Progress in applying this approach in the UK is described.
Joint Source Coding and Encryption using Chaos and Fractals
Tuesday 12 June 2012
Source coding and encryption are the major operations required in the transmission of large amounts of confidential information via a public network. Traditionally, these operations are performed independently. For example, the source sequence is first compressed using arithmetic coding; the compressed sequence is then encrypted using the Advanced Encryption Standard (AES). In this seminar, two applications of nonlinear systems for joint source coding and encryption will be presented. For the lossless reconstruction of general binary sequences, a simple piecewise linear chaotic map is employed for simultaneous arithmetic coding and encryption. For lossy image compression, the integration of selective encryption into fractal image coding will be described.
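As a toy illustration of the kind of piecewise linear chaotic map mentioned, the sketch below iterates a skew tent map and uses it as a keystream for a trivial XOR cipher; the real joint arithmetic-coding-and-encryption scheme is considerably more involved, and the map parameter and seed are arbitrary.
    # Skew tent map keystream (illustration of a piecewise linear chaotic map only).
    def skew_tent(x, p=0.37):
        return x / p if x < p else (1.0 - x) / (1.0 - p)

    def keystream(seed, n, p=0.37):
        x, bits = seed, []
        for _ in range(n):
            x = skew_tent(x, p)
            bits.append(1 if x >= 0.5 else 0)     # coarse binary quantisation
        return bits

    plaintext = [1, 0, 1, 1, 0, 0, 1, 0]
    ks = keystream(seed=0.123456, n=len(plaintext))
    ciphertext = [b ^ k for b, k in zip(plaintext, ks)]
    print(ciphertext)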
The interactive metaclustering and visualization approach for Clustering and Visualization of Genomic Data
Tuesday 19 June 2012
When dealing with real data, clustering becomes a very complex problem, usually admitting many reasonable solutions. Moreover, even if completely different, such solutions can appear almost equivalent from the point of view of classical quality measures such as the distortion value. This implies that blind optimisation techniques alone are prone to discard qualitatively interesting solutions.
In this seminar, an alternative approach to clustering is presented, comprising the generation of a number of good solutions through global optimisation, the analysis of those solutions through meta-clustering, and the final construction of a small set of solutions through consensus clustering.
An Analysis of the Ant Swarm Behaviour for Quorum Sensing: A New Direction for Bio-inspired Computing in Optimization
Tuesday 21 August 2012
Ant traffic flow increases with growing density, a characteristic not seen in other traffic flow systems. In this talk, I will describe a computational model for density-independent traffic flow in ant colonies that are transporting to new nests. Ants exhibit two types of swarm behaviour: emigration and foraging. Previous models in computational ecology focused on foraging trails; however, ants move on a much larger scale during emigration. They gauge nest density by frequent close approaches among themselves and time the transport of the colony accordingly. This density assessment behaviour, known as quorum sensing, has not been discussed in the context of traffic flow theory. Based on this behaviour, we model ant traffic flow that is organised without being influenced by changes in the population density of colonies. The proposed model predicts that density-independent ant traffic flow depends only on the frequency of mutual close approaches. I will show how this prediction can be verified against robust empirical data obtained by ant experts in field research, indicate how to organise a study in computational ecology, and suggest directions in which technical contributions using the proposed model may be expected.
Supervised Learning – From Biology to Computational Models
Friday 24 August 2012
The ability to learn from instructions or demonstrations is one of the fundamental properties of the brain, necessary for acquiring new knowledge and developing novel skills and behavioural patterns. Although the concept of instruction-based learning has been studied for several decades, its biological basis remains unresolved.
Some questions that need to be addressed are: where and how should we search for instruction-based learning in the brain? What is the neural representation of instructive signals? How do biological neurons learn to generate the desired outputs given these instructions?
In the talk I will discuss a biologically plausible model of supervised learning that addresses the above questions. I will demonstrate properties of the model in the context of such tasks as prediction, classification or internal representations. I will argue that supervised learning can contribute to reliable and precise spike-based information processing in the nervous system even in the presence of noise of different origin.
Evolvable Systems Engineering – Overview HRI-EU
Wednesday 29 August 2012
After an overview of the activities of the Honda Research Institute Europe, Prof Dr Bernhard Sendhoff will outline evolutionary optimization under the constraint of robustness, with different application domains demonstrating the practical aspects of robust optimization. Systems engineering aims to integrate many different criteria into the optimal design of systems. He will argue that for biological as well as technical systems, a high degree of spatial and temporal variability is an integral aspect of system design. Looking beyond robustness, he will introduce the biological concept of evolvability in the technical context of systems engineering, interpreting evolvability as the capability of a technical system to respond to changes in its environment rapidly, efficiently and successfully.
Finally, he will present a simplified application targeting the improvement of the evolvability of a technical system.
Internet Voting in Australia
Thursday 13 September 2012
Internet voting is in the process of being adopted in Australia. The state of New South Wales recently used Internet voting on a large scale in the 2011 NSW State General Election. However there was a wide range of serious failures with the NSW iVote system in areas including transparency, scrutiny, engineering, risk assessment and oversight. In particular iVote did not provide verifiability, and moreover it experienced a number of critical incidents during the election. In this talk I will discuss these problems and the steps needed to avoid them in future. I will also give a brief overview of the plans in Victoria for Internet voting.
PhD Within Three Years: How to Make a Mission Impossible Possible
Thursday 4 October 2012
In this seminar, I'll talk about some do's and don'ts in PhD study, in particular when our students are required to complete their PhD thesis in no more than 3.5 years after undergraduate study, which is almost a mission impossible. This talk is based on "Useful Things to Know About PhD Thesis Research" by H.T. Kung and "10 easy ways to fail a PhD" by Matt Might, and of course my own experience and understanding of doing a PhD. Hopefully these discussions can help our PhD students to achieve a mission impossible.
From Captcha to Captchæcker: Can we automate security and usability analysis of CAPTCHAs?
Monday 8 October 2012
CAPTCHAs are everywhere these days. Security and usability evaluation of CAPTCHA schemes is still an art rather than a science, in the sense that it has to be done on an ad hoc basis and many steps have to be done manually. In this talk, the focus will be on the following questions: can we automate the security and usability evaluation process and, if so, to what extent? A new concept called Captchæcker (= Captcha + Checker) is proposed to automate the usability evaluation based on machine learning, and to semi-automate the security evaluation based on a dataflow programming framework called Reconfigurable Multimedia Coding (RMC, formerly known as Reconfigurable Video Coding, RVC). Some preliminary research results will be described and future work will be outlined.
The State of the Art of Multiple-Winners-Take-All Networks: Formulations, Models, and Applications
Tuesday 9 October 2012
Winner-take-all is a general rule commonly used in many applications such as machine learning and data mining. K-winners-take-all is a generalization of winner-take-all with multiple winners. Over the last twenty years, many K-winners-take-all neural networks and circuits have been developed, with varied complexity and performance. In this talk, I will start with several mathematical problem formulations of the K-winners-take-all solution via neurodynamic optimization, then present several K-winners-take-all networks of decreasing model complexity based on our neurodynamic optimization models. Finally, I will introduce the best one, with the simplest model complexity and maximum computational efficiency. Analytical and Monte Carlo simulation results will be shown to demonstrate the computing characteristics and performance. Applications to parallel sorting, rank-order filtering, and information retrieval will also be discussed.
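The k-winners-take-all operation itself is easy to state; the sketch below shows its input/output behaviour as a plain top-k selection in NumPy, whereas the talk is about recurrent neurodynamic circuits that converge to this output.
    # The k-winners-take-all operation as a plain top-k selection.
    import numpy as np

    def kwta(u, k):
        """Return a 0/1 vector marking the k largest entries of u."""
        winners = np.argpartition(u, -k)[-k:]
        out = np.zeros_like(u)
        out[winners] = 1.0
        return out

    u = np.array([0.3, 2.1, -0.7, 1.5, 0.9, 1.5])
    print(kwta(u, k=3))   # -> [0. 1. 0. 1. 0. 1.]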
Hyperspectral Imaging and its Applications
Wednesday 10 October 2012
What do bruised fruit, the United States of America's Declaration of Independence and crime scenes have in common? Our understanding of all of them has been enhanced by the use of hyperspectral imaging.
Hyperspectral imaging cameras can determine if objects being viewed are hot or cold, wet or dry, their fat and sugar content and the presence of certain chemical elements. Therefore, it has a diverse range of applications in areas such as pharmaceuticals, food technology and homeland security. Whereas conventional colour cameras capture light in just three spectral windows, hyperspectral cameras have the ability to capture an entire section of the electromagnetic spectrum at every pixel. There are a number of different techniques for hyperspectral image capture including pushbroom and optically tuned filters. New capture techniques are also being developed.
In the past hyperspectral cameras were bulky and expensive and so were mostly used by the military for remote sensing and surveillance applications. Today’s hyperspectral cameras are almost as small as a standard video camera. These latest developments in camera technology are moving hyperspectral imaging from the aircraft and the military surveillance station to the laboratory and the production line.
The astonishing range of industries where these laboratories and factories are based emphasises the relevance and importance of the growth of hyperspectral imaging technology.
In agriculture hyperspectral imaging can be used to determine if soft fruit, such as apples, are bruised below the surface and likely to have a short shelf life. Similarly, HSI technology can be used in Biomedical Engineering to reveal the extent of burns and bruises below the skin of the human body.
Hyperspectral imaging is playing an increasingly important role in forensic technologies. The detection of fingerprints at crime scenes and the analysis of inks to detect forged documents can all be carried out using the technology. Hyperspectral imaging has even helped to bring new insights to old documents. The Library of Congress’ Preservation Research and Testing Division has carried out work on discarded drafts of the American Declaration of Independence to uncover crossed out words. This research has helped to give modern historians a deeper understanding into Thomas Jefferson’s thought process.
As hyperspectral imaging generates an entire section of the electromagnetic spectrum in real time for every pixel of an image, the sheer volume of data it produces can be enormous. Therefore, it requires large data storage and throughput; efficient data reduction algorithms; and intelligent and selective image capture to develop a complete system.
In general, the end user in a laboratory or factory only needs a standalone turnkey system to solve a particular problem, such as whether a pharmaceutical product is counterfeit or the extent of bruising in fruit. Such systems require new, state-of-the-art image processing algorithms to reach correct decisions in real time.
The keynote will give an overview of Hyperspectral Imaging technology and its applications to information-processing tasks.
Challenges in Interpreting Electronic Health Records and a Case Study in Calibrating eGFR
Thursday 11 October 2012
Electronic health records contain a wealth of information that has not been fully exploited. In the UK, clinical practices have been computerised since the 1990s, whereas hospital episode data have been available since about 2005. Clinical informatics has seen significant advancement, and it is now possible to retrieve millions of patient records over time and across vendors and clinical practices.
In the first part of the talk, I will present some of the challenges in processing and modelling health records. In the second part, I will present a case study based on the Quality Improvement Chronic Kidney Disease data set, which contains nearly a million patient records. This case study shows how machine learning and pattern recognition techniques can be used to solve a data calibration problem which would otherwise have prevented the data from being used for epidemiological studies and, worse, could have led to unnecessary referral of patients to specialists.
I will conclude the talk with a personal but possibly biased view of where research should be focused. There is plenty of room for contributions and opportunities for collaboration.
Network Analysis Characterising Structure, Complexity and Learning
Wednesday 17 October 2012
This talk will focus on how graph structures can be compactly characterised using measurements motivated by diffusion processes and random walks. It will commence by explaining the relationship between the heat equation on a graph, the spectrum of the Laplacian matrix (the degree matrix minus the weighted adjacency matrix) and the steady-state random walk. The talk will then focus in some depth on how the heat kernel, i.e. the solution of the heat equation, can be used to characterize graph structure in a compact way. One of the important steps here is to show that the zeta function is the moment generating function of the heat kernel trace, and that the zeta function is determined by the distribution of paths and the number of spanning trees in a graph. We will then explore a number of applications of these ideas in image analysis and computer vision.
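To make the quantities concrete, the sketch below computes the Laplacian spectrum, the heat kernel trace and a zeta-function value for a small example graph; the graph and the choices of t and s are arbitrary illustrations.
    # Heat kernel trace and Laplacian zeta function for a small graph,
    # using the Laplacian L = D - A mentioned in the abstract.
    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)     # weighted adjacency (here 0/1)
    L = np.diag(A.sum(axis=1)) - A                # combinatorial Laplacian
    eigvals = np.linalg.eigvalsh(L)

    t, s = 1.0, 2.0                               # arbitrary illustrative choices
    heat_trace = np.sum(np.exp(-eigvals * t))     # Tr exp(-L t)
    nonzero = eigvals[eigvals > 1e-10]
    zeta = np.sum(nonzero ** (-s))                # zeta(s) = sum over nonzero lambda of lambda^-s
    print(f"heat kernel trace at t={t}: {heat_trace:.4f}, zeta({s}) = {zeta:.4f}")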
The Theory of Darwinian Neurodynamics
Thursday 18 October 2012
There are informational replicators in the brain. Response characteristics, e.g. orientation selectivity in visual cortex, are known to be copied from neuron to neuron. Through STDP small neuronal circuits can undertake causal inference on other circuits to reconstruct the topology of those circuits based on their spontaneous activity. Patterns of synaptic connectivity are replicating units of evolution in the brain. The space of predictions is unlimited; brains do sparse search in this model space. We've shown that Darwinian dynamics is efficient for sparse search compared to algorithms that lack information transfer between adaptive units. Darwinian dynamics in the brain implements approximate Bayesian inference.
Heterogeneous Classifier Ensembles for EEG-Based Motor Imagery Detection
Thursday 25 October 2012
EEG signal classification is a challenging task in that the nature of the EEG data may vary from subject to subject and change over time for the same subject. To improve classification performance, we propose constructing heterogeneous classifier ensembles, in which not only are the base classifiers of different types, but they also have different input features. The classification performance of the proposed method has been examined on Berlin BCI competition III data set IVa. Our comparative results clearly show that heterogeneous ensembles outperform single models as well as ensembles having the same input features.
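A toy scikit-learn sketch of the general idea is given below: base classifiers of different types are trained on different feature subsets and combined by majority vote; the synthetic data and the particular classifiers are stand-ins for the EEG features and models used in the study.
    # Toy heterogeneous ensemble: different classifier types, different feature
    # subsets, majority vote. Synthetic data replaces the BCI EEG features.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=300, n_features=12, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    members = [                                   # (classifier, feature columns it sees)
        (LogisticRegression(max_iter=1000), slice(0, 4)),
        (SVC(), slice(4, 8)),
        (RandomForestClassifier(random_state=0), slice(8, 12)),
    ]
    preds = []
    for clf, cols in members:
        clf.fit(X_tr[:, cols], y_tr)
        preds.append(clf.predict(X_te[:, cols]))

    vote = (np.mean(preds, axis=0) >= 0.5).astype(int)     # simple majority vote
    print("ensemble accuracy:", np.mean(vote == y_te))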
Is DC-less JPEG compression possible?
Monday 29 October 2012
In this talk, Kadoo will present his research conducted at Surrey Computing on the application of discrete optimisation to JPEG image compression. The main idea is to use global discrete optimisation to achieve more advanced intra prediction of DC coefficients, with the goal of improving compression efficiency by significantly reducing the number of DC coefficients that need to be encoded; the technique is therefore called DC-less JPEG image compression.
The 5th Wave of Computing – do we have the skills to deliver?
Wednesday 31 October 2012
The latest wave of technology development is being matched by equally significant changes in the way businesses, and organisations of all kinds, are doing business. Could these changes make the UK a third world economy?
- This combination of change is totally unprecedented
- Application of technology is everything
- But who has the combination of skills and knowledge to take full advantage of these changes?
One thing is for certain, those who have the skills and see the opportunities will be the winners.
Innovative Water and Energy Processes – Modelling Opportunities
Thursday 1 November 2012
Water is not just the essential ingredient for life, but also a fundamental factor in the economy and security of any country. Coupled with population growth and the effects of climate change, the availability of food, water and energy is among the biggest challenges that the world faces. Over the next two decades water demand will exceed water supply by about 40%, according to many scientific studies and reports. Food and energy shortages have also been described by the UK Government's Chief Scientific Advisor, Prof. Sir John Beddington, as creating the "perfect storm" by 2030.
The provision of drinkable supplies through desalination could offer a sustainable solution to the drinking water problem, but it also presents a technical challenge.
Seawater and brackish water are desalinated by thermal distillation and by membrane methods such as reverse osmosis (RO) and electrodialysis. All of these methods involve high operating and investment costs. RO is the most widely used desalination technique, while thermal methods are mainly used in the Gulf countries. The high operating cost of RO is due to essential pre-treatment, scaling, bio-fouling and high energy consumption.
Novel desalination and renewable power generation membrane processes have been invented and developed at the Centre for Osmosis Research and Applications (CORA) at the University of Surrey in collaboration with Modern Water plc. The modelling opportunities in membrane processes involve flow and mass transfer in forward and reverse osmosis processes, as well as hydrodynamic and colloidal interactions. The CORA team is also working on thermally driven membrane processes such as membrane distillation, as well as direct contact heat transfer exchangers for both desalination and power generation.
The talk will present the principles of the desalination and power generation processes, results, and the areas which need modelling and optimisation.
Stereopsis and 3D content conversion (from cinema-to-3DTV)
Monday 5 November 2012
The ability to provide a more exciting, informative and entertaining end-user visual experience has created enormous interest among viewers in 3D content. Whereas traditional 2D video is sufficient for describing the details of captured scenes, 3D video can provide a more realistic representation of the same scene with the additional value of depth. The success of today's 3D video services requires that end users experience a satisfactory level of perceptual quality. 3D cinema and 3DTV have grown in popularity in recent years, and filmmakers have a significant opportunity in front of them given the recent success of 3D films. In this talk we investigate whether this opportunity could be extended from cinema to the home in a meaningful way.
The "3D" perceived from viewing stereoscopic content depends on the viewing geometry. This implies that stereoscopic 3D content should be captured for a specific viewing geometry in order to provide a satisfactory 3D experience. However, although it would be possible, it is clearly not viable to produce and transmit multiple streams of the same content for different screen sizes. To solve this problem, we analyse the performance of several disparity-based transformation techniques which could be used for cinema-to-3DTV content conversion.
A Model for Learning the Optimal Control of Saccadic Gaze Shifts
Thursday 8 November 2012
Human beings and many other species redirect their gaze towards targets of interest through rapid gaze shifts known as saccades, which are made approximately three to four times every second. While small saccades only rely on eye movements, larger ones result from coordinated movement of both eyes and head at the same time. Experimental studies have revealed that during saccades, the motor system manifests certain characteristics such as a stereotyped relationship between the relative contribution of eye and head to total gaze shift. Various optimality principles and several neural architectures have been suggested by researchers to explain these characteristics, but they do not involve incremental learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with an adaptation mechanism which minimizes a proposed cost function. Simulations show that the characteristics of gaze shifts generated by this model match the experimental data in many aspects, in both head-restrained and head-free conditions. Therefore, our model can be regarded as a first step towards bringing together an optimality principle, a neural architecture, and an incremental learning mechanism into a unified control theory of saccadic gaze shifts.
Stegobot: a covert social network botnet
Tuesday 13 November 2012
Stegobot is a proof of concept new-generation botnet where bots communicate over unobservable communication channels. It is designed to spread via social-malware attacks and steal information from its victims. Unlike conventional botnets, Stegobot traffic does not introduce new communication endpoints between bots. Command and control information as well as stolen sensitive information are relayed using steganographic techniques piggybacking over the image sharing behavior of users in a social network. Hence stolen information travels along the edges of the victims' social network. The current implementation is based on a simple routing algorithm called restricted flooding. The tuning of the steganographic channels is a key security parameter. It works surprisingly well in real world experimental deployments; even when tuned very conservatively (against detection) it is capable of channeling sensitive payloads of close to 100MB to the botmaster. See press coverage in the New Scientist, MSNBC, Times of India, and a few others.
Emergence and Dynamics of Sensorimotor Circuits for Language, Memory and Action in a Model of Frontal and Temporal areas of the Human Brain
Wednesday 14 November 2012
I will present a neurocomputational model that we developed to simulate and explain, at cortical level, word learning and language processes as they are believed to occur in motor and sensory primary, secondary and higher association areas of the (inferior) frontal and (superior) temporal lobes of the human brain. Mechanisms and connectivity of the model aim to reflect, as much as possible, functional and structural features of the corresponding cortices, including well-documented (Hebbian) associative learning mechanisms of synaptic plasticity. The model was able to explain and reconcile seemingly incongruous results on neurophysiological patterns of brain responses to well-learned, familiar sensory input (words) and new, unfamiliar linguistic material (pseudowords), and made novel predictions about the complex interactions between language and attention processes in the human brain. To test the validity of these predictions we carried out a new MEG study in which we presented subjects with familiar words and matched unfamiliar pseudowords during attention demanding tasks and under distraction. The experimental results indicated strong modulatory effects of attention on the brain responses to pseudowords, but not on those to words, fully confirming the model's predictions.
In the second part I will illustrate how the same six-area network architecture, implementing the same functional features, can also be used to model and explain the cortical mechanisms underlying working memory processes, in the language as well as in the visual domain. In particular, I will present new simulation results that provide a mechanistic answer to the question of why "memory cells" (neurons exhibiting persistent activity in working memory tasks that require stimulus information to be kept in mind in view of future action) are found more frequently in prefrontal cortex and higher sensory areas than in primary cortices, i.e. far away from the sensorimotor activations that bring about their formation (a phenomenon that we refer to as "disembodiment" of memory). The results point to the intrinsic connectivity of the sensorimotor cortical structures within which the correlation learning mechanisms operate as the main factor determining the observed topography of memory cells.
Benchmarking for semi-fragile DCT based watermarking for Image Authentication and Restoration
Monday 19 November 2012
High-quality image recovery requires much more information to be embedded as the watermark. However, if we want to guarantee a certain level of watermark invisibility and robustness to normal image processing, the watermark capacity is limited, and it is hard to balance the two to achieve good performance. This paper reports a systematic approach for benchmarking semi-fragile watermarking algorithms for authentication and restoration. The benchmarking approach is based on a generic dataflow framework covering all semi-fragile authentication-restoration watermarking systems, and it defines a set of performance metrics that reflect different aspects of overall performance. Both non-malicious manipulations and attacks are modelled by a channel simulator between the sender and the receiver. Benchmarking can be done at two levels: system level and component level. System-level benchmarking is what is normally conducted in the literature, where a number of algorithms are compared without considering their internal structures. Component-level benchmarking is done by reconfiguring only one component of an existing algorithm in order to observe the influence of that component on the final performance of the whole system. Following this approach, a software system was developed to benchmark three selected semi-fragile authentication-restoration watermarking algorithms working in the DCT domain, together with variants of the three algorithms. The benchmarking results lead to some insights into how to further improve the performance of existing algorithms.
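Purely to fix ideas about the kind of schemes being benchmarked, the sketch below embeds one watermark bit per 8x8 block by quantising a mid-frequency DCT coefficient; it is not one of the three algorithms evaluated in the paper, and the coefficient position and quantisation step are arbitrary.
    # Toy DCT-domain embedding of one bit per 8x8 block via coefficient quantisation.
    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_bit(block, bit, q=8.0):
        """Quantise one mid-frequency DCT coefficient to carry a single bit."""
        C = dctn(block, norm="ortho")
        base = 2 * q * np.round(C[3, 4] / (2 * q))
        C[3, 4] = base + (q / 2 if bit else -q / 2)
        return idctn(C, norm="ortho")

    def extract_bit(block, q=8.0):
        C = dctn(block, norm="ortho")
        return int((C[3, 4] / q) % 2.0 < 1.0)     # residue ~0.5 -> bit 1, ~1.5 -> bit 0

    rng = np.random.default_rng(1)
    blk = rng.uniform(0, 255, (8, 8))
    print(extract_bit(embed_bit(blk, 1)), extract_bit(embed_bit(blk, 0)))   # 1 0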
Topology Optimization of Trusses by Vectorized Structure Approach
Thursday 29 November 2012
A new method called the vectorized structure approach is proposed for truss topology optimization; it effectively increases or reduces the structural elements by iteratively adding or removing nodes and bars. The approach is built on a structural vectorization using the nodes' coordinates, and the mapping between structural vectors and realistic structures is established by geometry composition analysis rules. The approach can therefore generate optimal solutions that contain a given set of required nodes. Two kinds of topology optimization method are considered: the first sorts vector elements in every mapping process, while the second treats the sorting of elements as part of the optimization process itself. Numerical experiments have shown that both methods are effective.
Surrey Computing to host the next CryptoForma meeting
Thursday 29 November 2012
Surrey Computing will be hosting a one-day meeting on November 29th on Cryptography and Security. The meeting is one in a series run within the EPSRC CryptoForma network, which aims to build an expanding network in computer science and mathematics to support the development of formal notations, methods and techniques for modelling and analysing modern cryptographic protocols. This work increases security and confidence in such protocols and their applications (e.g. in e-commerce and voting), to the benefit of protocol designers, businesses, governments, and application users.
Rule Based Estimation of JPEG2000 Compression Rates by Using Double Compression Calibration
Monday 3 December 2012
Processing history recovery is a branch of digital forensics that deals with non-malicious processing such as image compression. Different types of compression leave different artefacts, which makes it harder to have a universal audit trail process. This paper investigates the processing history of JPEG2000 compressed images. We present a novel rule-based compression strength estimation technique for JPEG2000 images. The technique uses a no-reference (NR) perceptual blur metric and double compression calibration to produce a heuristic rule-based algorithm which classifies an image into one of three compression categories. These categories are derived subjectively and represent high, medium and low compression over a compression range of 0.1 to 0.9 bits per pixel (bpp). The technique is based on the assumption that the blur artefact contained in a JPEG2000 compressed image can be linked to the blur artefact contained in the calibrated double-compressed image. Based on this observation, three ratios can be calculated by comparing the levels of blur of the target image and its calibrated editions, which can then be used to form three rules, each corresponding to a category of JPEG2000 compression strength. For our experiments we used 100 images to identify the rules and 100 for blind testing of our scheme. Comprehensive testing showed that the compression rate of JPEG2000 images can be estimated into the high, medium and low compression categories with 90% accuracy.
Modelling Brain Cortical Connectivity Using Diffusion Adaptation
Thursday 6 December 2012
The concept of brain connectivity is introduced to explore and understand the organised structure of cortical regions during the execution of a specific task. Popular methods for measuring brain connectivity include synchronisation, coherency and correlation among EEG sites, and exploiting the statistical properties of the EEG signals to define the direction of information flow between cortical regions. Our approach to the problem of brain connectivity uses the theory of time-space adaptive filters. We use a network of adaptive filters that is self-organised in space and evolves through time in order to model a given pattern of propagation of EEG signals. Exploiting the properties of these filters can lead to a robust and flexible method that can be used in a variety of applications.
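A minimal adapt-then-combine diffusion LMS sketch over a small synthetic network is given below as an example of the time-space adaptive filtering referred to; the network topology, combination weights, step size and data are illustrative assumptions.
    # Adapt-then-combine (ATC) diffusion LMS over a toy 5-node network.
    import numpy as np

    rng = np.random.default_rng(0)
    M, N, T, mu = 4, 5, 2000, 0.01            # filter length, nodes, samples, step size
    w_true = rng.normal(size=M)
    A = np.array([[0.6, 0.2, 0.0, 0.2, 0.0],  # combination weights (columns sum to 1)
                  [0.2, 0.6, 0.2, 0.0, 0.0],
                  [0.0, 0.2, 0.6, 0.0, 0.2],
                  [0.2, 0.0, 0.0, 0.6, 0.2],
                  [0.0, 0.0, 0.2, 0.2, 0.6]])

    W = np.zeros((N, M))                       # per-node filter estimates
    for t in range(T):
        psi = np.empty_like(W)
        for k in range(N):                     # adapt step at each node
            u = rng.normal(size=M)             # local regressor
            d = u @ w_true + 0.05 * rng.normal()
            e = d - u @ W[k]
            psi[k] = W[k] + mu * e * u
        W = A.T @ psi                          # combine step: mix neighbours' estimates
    print("mean estimation error:", np.linalg.norm(W - w_true, axis=1).mean())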
Safeguarding identity using biometrics: Is it time-tested?
Monday 10 December 2012
There is an ever-increasing need for identity authentication in our daily life, from making purchases online and unlocking personal devices to accessing secure premises. Although physical locks, PINs and gestures (which are common for mobile devices) offer convenient solutions, only biometrics-based authentication can provide the ultimate means of validating an identity credential. The use of biometric authentication raises many questions, especially when it is used over a long period of time and in different locations: Will the system's performance degrade over time? How will its performance be affected by different acquisition environments, such as the office, public locations, and outdoors?
Encoding Spatio-Temporal Spiking Patterns Through Reward Modulated Spike-Timing-Dependent Plasticity
Thursday 13 December 2012
Generating temporally precise sequences of spikes in response to spatio-temporal patterns of synaptic input is a fundamental process of neural activity: an important example is coincidence-detecting neurons in the primary auditory cortex, which fire in response to synchronous, temporally precise incoming spike trains. This is a non-trivial task, however, since a neuron typically receives on the order of 10,000 synaptic connections: only a small subset of this input can be considered to convey a meaningful, task-relevant signal, while the rest can be considered noise.
By approximating this random background activity as Gaussian white noise, we investigate how networks of noisy neurons can learn to reliably encode spatio-temporal input spiking patterns as temporally precise output sequences of spikes, specifically through reward modulated spike-timing-dependent plasticity.
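A minimal sketch of the reward-modulated STDP update for a single synapse is shown below: the pre/post pairing is written into an eligibility trace and the weight only changes when a (possibly delayed) reward arrives; all constants and the spike/reward schedule are illustrative.
    # Reward-modulated STDP with an eligibility trace, for one synapse.
    # All constants and the spike/reward timing are illustrative assumptions.
    import numpy as np

    tau_e, a_plus, a_minus, lr = 50.0, 1.0, 1.2, 0.01
    dt = 1.0                                   # simulation step (ms)

    def step(w, elig, pre_spike, post_spike, pre_trace, post_trace, reward):
        pre_trace = pre_trace * np.exp(-dt / 20.0) + pre_spike
        post_trace = post_trace * np.exp(-dt / 20.0) + post_spike
        # The STDP contribution goes into the eligibility trace, not the weight.
        stdp = a_plus * pre_trace * post_spike - a_minus * post_trace * pre_spike
        elig = elig * np.exp(-dt / tau_e) + stdp
        w = w + lr * reward * elig             # weight moves only when reward != 0
        return w, elig, pre_trace, post_trace

    w, elig, x, y = 0.5, 0.0, 0.0, 0.0
    for t in range(200):
        pre, post = int(t % 20 == 0), int(t % 20 == 2)   # pre fires 2 ms before post
        r = 1.0 if t == 150 else 0.0                      # delayed reward at t = 150 ms
        w, elig, x, y = step(w, elig, pre, post, x, y, r)
    print("final weight:", round(w, 4))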
Common Mistakes When Building Authentication into Apps
Wednesday 16 January 2013
Mobile applications only become really useful if combined with cloud-based services. We have observed that the increasingly short time to market may cause serious design flaws in the security architecture. In this talk I will highlight some flaws discovered in the past.
For example, we looked at nine popular mobile messaging and VoIP applications and evaluated their security models with a focus on authentication mechanisms. We find that a majority of the examined applications use the user's phone number as a unique token to identify accounts; they contain vulnerabilities allowing attackers to hijack accounts, spoof sender-IDs or enumerate subscribers. Other examples pertain to (already fixed) problems in cloud-based storage services such as Dropbox.
MAKING THE ELEPHANT DANCE: The Drive for Innovation in Largescale Software-Intensive Systems
Thursday 24 January 2013
Innovation in software and product delivery is essential to every company’s success. However, larger companies have particular challenges driving innovation into their products and practices due to their culture, size, and context. In this presentation we look at innovation challenges in largescale software development and delivery organizations. We particularly focus on agile software development methods being used in such organizations and their role in innovation. While there have been many successes in creating innovative software solutions, too often there have been disappointments delivering that software as part of a governed product delivery approach. This presentation looks at the challenges to agile product delivery, and examines the role and characteristics of an agile organization in delivering innovative solutions.
Automated Image Analysis: the Clinical Reality and the Information Technology Challenge
Thursday 31 January 2013
Imaging in ophthalmology has changed faster than our understanding of what these images might mean. The volume of images taken in clinical and research settings and subsequently received for analysis was initially limited by the speed at which slides could be developed or by the quality of polaroids. The last 20 years have seen an unprecedented rise in both the number and the quality of digital images in ophthalmology.
The Ophthalmic Image Analysis Centres (commonly referred to as Reading Centres) now mainly deal with digital images. In broad terms, these come from three sources: screening/clinical settings, epidemiological studies and clinical trials. The requirements for these three are governed by different principles, and the time and money involved are vastly different as well. Screening/clinical settings require fast but reliable approaches, some with immediate decision-making, while others can be slightly delayed, but usually only by days. Epidemiological studies examine thousands of patients and require reliable and reasonably cheap solutions, especially as many patients exhibit no pathology. Clinical trials demand a rigorous and time-honoured grading approach with exhaustive quality control.
Both clinicians and researchers have been looking for (semi)automated approaches in order to be able to provide faster and more reliable care to patients with the elimination of at least part of the human factor.
During this seminar we will discuss the clinical and research requirements of image analysis in ophthalmology.
Local and Global Dynamics of Neuronal Activity in the Neocortex: Implications for the Function of Sleep
Wednesday 6 February 2013
In the last decades a vast empirical and theoretical knowledge about sleep mechanisms has been accumulated. Surprisingly, the function of sleep still remains elusive. Moreover, the question of “why do we sleep?” is now being gradually replaced with a more fundamental one: “what is sleep?” The basic problem with defining sleep is that on one hand it is a state, inasmuch as it is different from other states, such as wakefulness, and on the other hand it is a process, since it is not static, but evolves in time and space in a highly complex manner. There are multiple temporal scales relevant for sleep process – starting from a fast millisecond scale of individual neuron spiking, that changes depending on whether the animal is awake or asleep, and up to a scale of hours or even days, at which the overall amount and architecture of sleep is regulated. With respect to spatial scales sleep also shows astounding complexity – indeed, it is associated with characteristic changes at levels as remote from each other as an individual neuron and the whole brain. Understanding the mechanisms underlying spatio-temporal dynamics of sleep will help us to understand not only what sleep is, but also why it is necessary.
Online Identity, Security and Applications
Friday 15 February 2013
The issue of identity, identity management and identity cards is hotly debated in many countries, but it often seems to be an oddly backward-looking debate that presumes outdated "Orwellian" architectures. Is Big Brother the only alternative to anarchy?
In the modern world, surely we should be debating the requirements for national identity management schemes, in which identity cards may or may not be a useful implementation, before we move on to architecture. If so, what should a national identity management scheme for the 21st century look like? Can we assemble a set of requirements understandable to politicians, professionals and the public? We have certainly had some difficulty to date. One reason might be that we lack a compelling, narrative vision. As a result, we panic into building legacy systems that will subvert the rational goals of a worthwhile scheme, or set up "security theatres".
If you understand the technology, I will argue, it can deliver far more than politicians, professionals and the public imagine: in particular, it can deliver the apparently paradoxical result of more security and more privacy, by exploiting chip cards, mobile phones, the internet, biometrics and cryptography. The UK Cabinet Office "Identity Assurance Architecture" (IDA) and the US Department of Commerce "National Strategy for Trusted Identities in Cyberspace" (NSTIC) are a place to start, but I will set out a high-level vision of what the future identity infrastructure should look like: Dr Who's psychic paper. Not only is this a simple, clear vision that is familiar to expert and layperson alike, but it is a very useful artistic representation of the capabilities of the technology. I will further suggest that a utility implementation of identity infrastructure can deliver on this vision in a practical way, and that all of the technology needed to create the identity scheme of the future already exists.
10th Annual Computing Department PhD Conference
Wednesday 13 March 2013
The 10th Annual Computing Department PhD Conference will be taking place on Wednesday 13 March 2013 at the Treetops and Cedar Rooms, Wates House.

