Friday 8 April 2022
Inaugural University of Surrey Annual Open Research Lecture post event report
The inaugural University of Surrey Annual Open Research Lecture took place on 8 April 2022. The successful event, which welcomed close to 150 attendees, was opened by Professor Emily Farran, Academic Lead for Research Culture and Integrity, and Professor David Sampson, Surrey’s Pro-Vice-Chancellor, Research and Innovation.
Dr Karen Salt, Deputy Director, R&D Culture and Environment, gave an inspirational lecture on the topic of “Open and Transparent Research Culture”.
The engaging poster session that followed showcased 30 posters from individuals and teams across the full range of career stages, representing a broad array of disciplines including Literature and Languages, Physics, Health Sciences and Electrical and Electronic Engineering.
The final session of the day was a stimulating panel discussion on the topic of “Incentivising Open Research across all disciplines”, with panel members Grace Gottlieb (Head of Research Policy, UCL), Dr Karen Salt (UKRI), Dr Marton Ribary (Winner of the Surrey 2021 Open Research Award), and Professor Marcus Munafo (Head of UK Reproducibility Network). The event closed with an opportunity for networking at the drinks reception.
Watch the Open Research lecture
Watch the panel discussion
Open Research poster award 2022
“Exploring Open Research opportunities to enrich the development of a PhD project” by Tamala Anderson was voted 'Most inspiring Open Research poster' by event attendees.
Open Research posters
Posters encompassed the full range of stages along the Open Research journey, from researchers who are just starting out, to researchers who are showcasing their Open Research practices.
Early-stage research posters
These posters describe planned research that has not yet been conducted. The poster session provided an opportunity to receive pre-study feedback, allowing researchers to make meaningful adjustments to their studies before they are run.
Exploring Open Research opportunities to enrich the development of a PhD project
Postgraduate Research Student
This poster will document my exploration of Open Research activities and practices, with the aim of identifying and incorporating these opportunities to enrich my PhD as I develop the overall project. I would like to document my journey, my thoughts and assessments of relevance and appropriateness for this project, and the benefits as well as costs of implementation of an Open Research approach. An ancillary goal of improving non-academic practitioners’ awareness and application of the research might also be met.
Current proposed methods
My project seeks to extend research connecting psychological wellbeing with place attachment through a mixed-methods set of four studies. Study 1 analyses the presence of environmental psychology concepts in practitioner-facing guidance. Study 2 is planned as a qualitative study exploring wellbeing outcomes in, and place characteristics of, local favourite places. Study 3 will likely draw secondary data from the UK Household Longitudinal Study, looking for relationships between neighbourhood belonging and health and wellbeing. Study 4 is proposed as an experimental study investigating the effects of place attachment on psychological restoration from stress.
Initial considerations of Open Research practices include
Pre-registration for all study designs; a collaborative ‘citizen science’ approach and online platform for Study 2; connecting the UK Household Longitudinal Study to additional, cross-disciplinary, secondary data might benefit Study 3 and further the application of secondary data. Additional practices may be discovered as the application of Open Research is investigated.
Input and feedback
What ‘best practice’ steps for investigation and assessment of Open Research practice have been missed? Any additional resources and strategies to overcome perceived barriers? I am especially interested in Open Research resources addressing interdisciplinary issues related to design and the built environment – have any of these been overlooked?
Through the lens of Developmental Coordination Disorder (DCD/Dyspraxia): Experiences of a late diagnosis
Research Assistant, University of Surrey/Assistant Psychologist, CNWL NHS Trust
In this study we aim to investigate the impact of late diagnosis for individuals with Developmental Coordination Disorder (DCD/Dyspraxia). This investigation is important because, whilst the condition has a prevalence rate of 5%, there is a paucity of research into DCD, and it is recognised as one of the least understood neurodevelopmental disorders. Consequently, many individuals with DCD reach adulthood without receiving a diagnosis and/or appropriate support. The study was developed following a consultation with members of the Dyspraxia Foundation, who highlighted the impact of receiving a diagnosis later in life. We are specifically interested in exploring the emotional impact of a late diagnosis and the individuals’ sense of identity, currently and retrospectively.
The study will consist of 15 semi-structured interviews with individuals who received a diagnosis of DCD at age 30 years or later. The data will be analysed using thematic analysis, which will highlight common themes associated with emotions, self-identity and others’ perceptions of individuals who received a late diagnosis of DCD. We have submitted this project as a registered report, which will allow us to receive peer-review feedback ahead of data collection; this will improve the quality of our study and help to eliminate bias. Although a possible drawback of this approach is that preparation of the study can take some time due to the reviewing stages, a benefit of submitting our study as a registered report is that it will allow us to gain detailed feedback from reviewers.
Open research practice
Our study materials, including the protocol and interview schedule, will be available on the Open Science Framework. We hope that sharing our study will support further research in this area.
Areas of feedback
We would appreciate feedback on the efficacy of our research questions and how open research may be used to support professionals working with similar populations, particularly those with neurodevelopmental disorders.
Open research practices in qualitative research: A beginner’s guide
Rumination refers to negative repetitive thoughts about past experiences and is an important predictor of mental health conditions [1-3]. We will conduct a qualitative study to shed light on what rumination means to people experiencing it and how it affects people’s beliefs about themselves and the world. Some qualitative studies have already investigated the role of rumination and its perception in clinical populations, including patients with chronic pain [4] and depression [5], as well as specific healthy or subclinical populations such as women with marital conflicts [6] and vulnerable young people seeking mental health treatment [7]. However, no qualitative research has yet been conducted on the relationship between the perceived functionality of rumination and the cognitive and emotional challenges experienced by young adults prone to rumination. A better understanding of such difficulties could play an important role in developing novel, personalised treatments to boost the resilience of this vulnerable population to common mental health conditions.
Our qualitative research will consist of semi-structured interviews to better understand the subjective experiences of the participants. We will conduct the study with 25 young individuals (aged between 18 and 35 years) experiencing high levels of rumination, as assessed via the Rumination Response Scale (RRS) [8]. We will transcribe the interviews and then employ thematic analysis to identify the themes that emerge and compile the ideas underlying each theme.
Proposed Open Research practice(s)
- Qualitative preregistration on OSF [9]
- Open qualitative data
- Sharing the codebook [10]
- Publication in an Open Access journal
Areas for input and feedback
The use of open data in qualitative research on mental health.
The reproducibility EEG checklist: creating a checklist for cognitive EEG studies to promote Open Science
Anna Maria Manti
The Importance of Open Science in EEG data
Brain imaging techniques such as electroencephalography (EEG) provide a relatively easy, cost-effective, and direct approach to measuring brain function. However, not many EEG datasets are currently available, and best practices on EEG data standards and sharing are only recently emerging (Pernet et al., 2019; Valdes-Sosa et al., 2021). This can be partly explained by logistical constraints on voluminous data storage, transfer, and online computational power (Maciocci et al., 2019). Here we aim to provide an easy-to-follow ‘open research practices’ checklist, together with a dataset and analysis pipelines from an experiment investigating the effects of sound on memory.
The present study
This dataset uses a classic working memory (WM) delayed match-to-sample paradigm (Berger et al., 2019), with and without simultaneous periodic auditory stimulation targeting the theta frequency range (4-8 Hz). Thirty-one subjects (25 female, age range 18 to 29 years, mean age 20.9 ± 2.37 SD, 29 right-handed and 2 left-handed) participated in one laboratory session in which they were exposed to 5 Hz, 7 Hz and sham (i.e., no sound) stimulation. Results reveal frequency-specific effects on the EEG, as well as an increase in midline theta power during maintenance of information in WM (Michels et al., 2008).
Using this project to promote reproducibility
Important steps will be implemented to engage in Open Science practices. These include standardisation of the EEG and behavioural files into BIDS format (https://bids.neuroimaging.io/), sharing of data analysis pipelines, including analysis scripts, and registration of the dataset on the OSF (https://osf.io/dashboard). This dataset could be useful in teaching and research. The proposed checklist can be employed by other labs to improve data sharing and reproducible practices while assisting with ongoing and future multicentre EEG studies such as EEGManyLabs (https://osf.io/yb3pq/).
Assessing the impact of LEGO® construction training on spatial and mathematical skills: reflecting on the strengths and challenges of a stage 1 registered report
Dr Emily McDougal
There is a known association between LEGO® construction ability, spatial thinking and mathematical abilities. The aim of this study is to determine whether this relationship is causal, by measuring the impact of Lego construction training on Lego construction ability and a range of spatial and mathematical abilities. This study has in-principle acceptance as a Stage 1 Registered Report. My unique perspective is that I joined the project as the postdoctoral researcher after the study had received in-principle acceptance. I will use this opportunity to describe the strengths and challenges of this Open Research practice from this personal perspective, as well as in the context of the wider project. For example, the Registered Report allowed for a seamless transition between postdoctoral researchers, as the study protocol and analysis plan were clearly outlined. Conversely, there have been barriers to making changes to the study protocol following piloting, which, due to the pandemic, took place a year after the study received in-principle acceptance.
For context, in this study 206 children aged 7 to 9 years will take part in one of three training packages: physical Lego training, digital Lego training, or control training (craft activities). Each training package comprises twelve 30-minute sessions, delivered over a six-week period as a lunchtime club in schools. Children will complete tasks before and after the training to measure its impact, including Lego construction ability, spatial skills (disembedding, visuo-spatial working memory, spatial scaling, mental rotation, and a number line task) and mathematical abilities (geometry, arithmetic, and mathematical problem solving). We predict improvement in both spatial and mathematical skills for both Lego interventions, relative to the control condition.
Virtual consultations for people with learning disabilities, their families and healthcare providers: A co-design study to aid implementation in everyday practice. A study proposal/early-stage research
Dr Freda Elizabeth Mold
Senior Lecturer in Integrated Care
Virtual consultations (VCs) have existed for some time, but initial adoption was low and problematic. Implementing VCs in primary care (via telephone/email/video) has been expedited in recent months, but they can widen healthcare inequalities. Little is known about the use of online health services by people with learning disabilities (PwLDs). The evidence that exists shows the need to support accessibility for users, by exploring the needs and preferences of PwLDs themselves and developing better guidelines for use.
The aim of this study is to support PwLDs and their families to access and benefit from VCs.
This study will use an experience-based co-design (EBCD) approach. The study comprises VC observations, interviews, and priority-setting and co-design events with people with learning disabilities, their families and healthcare providers (primary and community/social care providers).
Proposed Open Research practice(s)
The central premise of this study is on co-production – working with our health and social care collaborators and experts by experience throughout. Study findings will be used to co-design tangible resources, such as best-practice guidance, training, and support materials to positively change VC experiences and practice for PwLDs, their families and healthcare providers. Our research team, collaborators and experts have worked closely, from study inception to make this study possible. We aim to continue this close partnership through to dissemination of project outputs through various openly available outlets.
Input and feedback
We would welcome ideas about how to widen our dissemination plans for our project outputs.
A study protocol for the validation of a primary care-based data-driven algorithm to predict pancreatic cancer in the UK setting: challenges of open research using routine healthcare data
Postgraduate Research Student
Overall cancer survival has increased over recent decades, but the dismal survival rates of pancreatic cancer have not changed in the last 40 years [1]. This is attributed to late diagnosis, as diagnosis is challenging. Data-driven approaches, including prediction algorithms that use a combination of symptoms, have been developed to aid earlier detection and diagnosis. One such algorithm is ENDPAC (Enriching New-Onset Diabetes for Pancreatic Cancer) [2]. ENDPAC was developed in the US primary care setting. The aim of this project is to validate ENDPAC for the UK setting.
A retrospective case-control study using the nationally representative Oxford-Royal College of General Practitioners Clinical Informatics Digital Hub (ORCHID) database will be undertaken. ORCHID holds over 10 million primary care electronic healthcare records including nearly 11,000 people diagnosed with pancreatic cancer (cases). Healthcare records of cases will be compared to matched controls. ENDPAC will be employed to predict pancreatic cancer. Its predictive power will be evaluated using the Area Under the Receiver Operating Characteristic Curve (AUC) analysis.
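As a minimal sketch of the planned AUC evaluation, the following uses entirely synthetic labels and risk scores (the array sizes and effect size are hypothetical illustrations, not ORCHID data or the real ENDPAC score):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical labels: 1 = pancreatic cancer case, 0 = matched control
y_true = np.concatenate([np.ones(100), np.zeros(100)]).astype(int)

# Hypothetical ENDPAC-style risk scores: cases score higher on average
scores = y_true * 1.5 + rng.normal(size=y_true.size)

# Discriminative power of the score, as in the planned validation
auc = roc_auc_score(y_true, scores)
print(f"AUC = {auc:.2f}")
```

An AUC of 0.5 indicates no discrimination and 1.0 perfect discrimination; the validation would report where ENDPAC falls on this scale in the UK data.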
Routinely collected data are a rich resource that should be used to improve healthcare. However, they are currently underutilised in research due to restricted access as well as privacy and ethical implications. This project will serve as a case study to support the safe and trustworthy use of patient data in research.
Open Research practices
The study protocol will be published, enabling peer review of the methods. The software developed in this project will be deposited in repositories such as GitHub to enable scrutiny and reuse. Only de-identified patient records will be accessed, to preserve privacy. Results will be published open access in peer-reviewed journals.
Open Research challenges
Datasets that consist of healthcare records cannot be made open access. Only authorised and trained researchers can access this type of data.
How can Affordance-Based Design improve collaboration efficacy in the workplace to accelerate value-driven Industry 5.0?
Industry 5.0 advocates a more sustainable, human-centric and resilient industry, prioritising and providing the "best conditions for innovation to flourish” (Breque et al., 2021). The research case is a London borough council’s organisational move to hybrid working in office workplaces and maintenance/repair workshop environments. What value-adding tools in a collaborative hybrid workplace can improve service delivery and staff wellbeing through the application of user-centric Affordance-Based Design (ABD) theory (Gibson, 1966)? Gibson developed the discipline of ecological psychology; “affordance” refers to the positive or negative relationship of an object to an animal or human within an environment.
How can ABD improve workplace collaboration efficacy?
The research seeks to enable and measure:
- How intuitive, user-friendly spatial design and equipment can improve wellbeing and collaboration.
- Greater workplace cross-functional collaboration.
- How sensors and artificial intelligence can analyse anonymous spatial behaviour, movement, and collaboration analytics in real-time to inform better workplace design.
The mixed-method research applies the ABD method within a convergent parallel design structure. Multiple forms of evidence will provide a “thick description” to increase understanding for an organisation’s move toward hybrid working and workplaces supporting greater collaboration. Involvement of all levels of staff will create the engagement factors influencing affordance values and will test the value of human-centric collaboration.
We have prospectively registered our research via the Open Science Framework, and over the period of the research the data will be available on the OSF project page (https://osf.io/6tzax). A data management plan (DMP) will be implemented and maintained throughout the project to avoid issues with sharing the research outputs through collaborative networks; with that in mind, all data will be anonymised. The research follows the FAIR guiding principles (Wilkinson et al., 2016): that the data be Findable, Accessible, Interoperable and Reusable.
Promoting Open Research throughout the evaluation of the development of the Burdett National Transition Nursing Network
The transition from child into adult services is a crucial time in the health of young people with a long-term health condition, yet processes are disjointed and often fall short of what is required for effective transition. Following the successful development of an exemplar Model of Care for Transition at Leeds Teaching Hospital Trust (LTHT), a National Transition Nursing Network is being implemented across England. A formal evaluation of this quality improvement model for transition will be conducted by a research team at The University of Surrey, in partnership with LTHT, funded by the Burdett Trust for Nursing.
A multi-centre concurrent mixed methods design will be utilised with qualitative (interviews/case studies), and quantitative descriptive (surveys) data collected simultaneously over three phases. Participants will include young people, parents, and professionals involved in the young person’s transition journey. Phase 1 includes a realist synthesis of the literature to understand what works for whom, under what circumstances during young people’s transition from child into adult services.
Proposed Open Research practices
Promoting Open Research, the study team will work in partnership with lead professionals from the National Transition Nursing Network throughout. This will involve open collaboration on the Realist Synthesis, including early protocol registration, and joint authorship on both the protocol and final realist synthesis publications. Young people will also be instrumental in steering the study, with a Transition Advisory Group already established. Digital newsletters and social media already provide platforms for the sharing of study information with regular updates publicly available on LTHT’s website. These platforms will be used to share emerging findings, and new learning, as well as joint, open access publications.
Ideas to increase study visibility from the start, and suggestions of methods to share our ongoing findings would be helpful. Suggestions for wider dissemination at the end of the study would be greatly appreciated.
Open Community for Food Consumer Science (COMFOCUS)
Dr Lada Timotijevic
Associate Professor; Head of Department of Psychological Sciences; Deputy Director of the Food, Consumer Behaviour and Health Research Centre
In collaboration with: Wageningen University, the Netherlands; Aarhus University, Denmark; German Institute of Food Technology (DIL), Germany; Institute of Agri-food Research and Technology, Spain; University of Bologna, Italy; University of Turku, Finland; Slovakian Institute of Agriculture, Slovakia.
The current food system has reached a crisis point, producing a range of undesirable outcomes such as health inequalities, obesity, loss of biodiversity, and environmental degradation. Understanding food consumption and people’s relationship with food is a major step towards food systems transformation. Food consumer science is addressing this problem as a discipline that aims to understand how people come to learn about, desire, acquire, use, and dispose of foods, food products and services, and activities available in the marketplace to satisfy their needs. However, food consumer science is currently fragmented, its data are scattered and its approaches insufficiently harmonised, which limits its ability to deliver impactful results.
Open Research practice
The mission of the European COMFOCUS project is to advance the open food consumer science community beyond its current fragmentation, which prevents it from becoming the user-relevant, data-rich science it could be in support of healthy food choice public policies and private strategies. The project is developing a research infrastructure that will enable open access to institutions and the COMFOCUS Knowledge platform; open data that are standardised and comply with FAIR and RRI standards; and an open community fostered through networking and a long-term strategy for European behavioural and social scientists interested in people’s relationship with food.
Open Research method
COMFOCUS brings together social and computer scientists to harmonise constructs and measurements and to develop standards for new modes of research. It will develop data structures, ontologies and services for data extraction, as well as appropriate data governance approaches. This will facilitate open food consumer science through open data, open protocols, open facilities and open knowledge.
COMFOCUS would like to receive input on: developing open data management and data governance practices; how to open up the labs and facilities within Surrey; and the technological know-how to enable external researchers to utilise the Virtual Reality (XR) lab.
Completed research (traditional) posters
These posters present research that has already been completed.
Enzymatic Degradation of Polyethylene Terephthalate (PET) Plastic: A Sustainable Approach
Commonly used plastics are synthetic or semi-synthetic polymers derived from fossil fuels; their unique properties give them resistance to natural degradation (Asmita et al., 2015). PET is composed of terephthalic acid and ethylene glycol chains, which give it superior barrier properties (Shah et al., 2008). Some 41.56 and 73.39 million metric tons of PET were produced in 2014 and 2020 respectively (Statista, 2018). Of the one million PET bottles sold every minute, 91% are not recycled, and of the 9% that are recycled only 6% are re-used (Nace, 2017); it takes 400 years for a single PET bottle to degrade naturally.
We used molecular techniques to engineer bacteria to produce enzymes active against PET. These esterases were originally identified in actinomycetes (such as Thermobifida fusca) and β-proteobacteria (such as Ideonella sakaiensis). They were cloned on a broad-host-range plasmid for expression in Pseudomonas and Acinetobacter, for use in the biodegradation of pollutants and the production of environmentally friendly bioplastics such as polyhydroxyalkanoates. The expression system was designed so that different genes of interest could be integrated to overexpress target enzymes for degradation, remediation and/or detoxification following spillages or contamination by petroleum hydrocarbons, their products, or any other targeted pollutant.
We successfully secreted enzymes from our engineered organisms, and their activities were confirmed on polycaprolactone polyester as a substrate. In subsequent experiments, the enzymes degraded PET nanoparticles and crystalline films.
PET esterases can be engineered, recombinantly expressed from host strains, and used to degrade PET plastic. The resulting metabolites could serve as a suitable feedstock for the engineered strains to produce molecules of value. Access to open research information and resources contributed immensely to the success of this research, and the outcomes were made public through workshops and conferences in support of research reproducibility and replicability.
Keywords: plastic, PET, enzymes, cloning, pollution.
Automatic analysis 5.8.0: Demonstration of integrated and responsive open-source development
Dr Tibor Auer
Lecturer in Biological Psychology
Automatic analysis (aa) allows the construction of complex workflows with great flexibility and convenience through tight integration of tools and the automatic connection of the various steps (termed modules in aa). Additionally, aa emphasises quality assurance by providing relevant diagnostics for steps or participants. Its flexibility also allows the assessment of numerical instability and analysis-related variance.
aa has been developed with an initial focus on MRI. We integrated the EEGLAB and the FieldTrip toolboxes and compiled a new set of modules to extend aa’s functionalities to the M/EEG community. We also provided an example workflow based on an openly available dataset to demonstrate the most common use cases.
Targeted projects improved the robustness of aa’s parallelisation and enabled self-sufficient deployment by providing documentation and concise examples. We are currently working on ensuring aa’s compatibility with Windows to support users beyond the *nix ecosystem.
We complemented the existing coordinated development and version control on GitHub with continuous integration (CI), using GitHub Actions and GitHub-hosted runners to facilitate testing and deployment. This solution is freely available and ensures better generalisability because it relies on a generic configuration and environment. The GitHub repository has been linked with Zenodo, which allows DOI assignment and improves the visibility and recognition of the project.
The implemented changes improve aa’s robustness and offer its benefits for a more diverse community. The improved transparency in the development and the application allows more agile and responsive development while it also reduces the technical debt for contribution and application.
aa became a powerful, versatile, and accessible tool for a growing community.
Automatic analysis improves reproducibility and supports a wide range of neuroimaging use cases. GitHub provides a powerful environment for collaborative coding, testing, deployment, and user support.
Physionet ECG database as a resource for pilot studies in cardiovascular research
Paroxysmal Atrial Fibrillation (PAF) is a cardiovascular condition that causes a patient’s heartbeat to be irregular and erratic. Currently the gold standard for diagnosis of PAF is to view an episode on an electrocardiogram (ECG), but due to the paroxysmal nature of the condition, often by the time the patient is in hospital the episode has passed. The patient would then be connected to an ambulatory ECG for 1-2 weeks to record any future episodes; however, this can be uncomfortable for the patient. This motivates the need to determine the likelihood of a patient having PAF from a normal sinus rhythm ECG.
The aim of my research is to develop a machine learning tool that uses three analysis methods to predict with high accuracy whether or not a patient has experienced PAF. I intend to use complexity, restitution and symmetric projection attractor reconstruction (SPAR) as part of my model. Each of these has been used individually to predict PAF and other cardiovascular conditions in humans and animals, but never as a combination of all three methods.
A machine learning tool’s accuracy can be greatly improved by performing cross-validation on a large dataset: the more data available, the better insight we have into its accuracy. However, gaining access to large amounts of ECG signals, particularly PAF recordings, is quite difficult, which is why an open database such as PhysioNet is useful for creating accurate models. It allows us to test the models’ classifications early on, so that parameters and methods can be adjusted or changed to improve overall accuracy.
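The cross-validation step described above can be sketched as follows. Everything here is synthetic: the three features merely stand in for the complexity, restitution and SPAR measures, and logistic regression is a placeholder classifier, not the author's actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(42)
n = 300

# Three hypothetical per-patient features (stand-ins for complexity,
# restitution and SPAR metrics derived from ECG signals)
X = rng.normal(size=(n, 3))

# Synthetic PAF/non-PAF labels correlated with the features
y = (X @ np.array([1.0, 0.8, 0.6]) + rng.normal(scale=1.0, size=n) > 0).astype(int)

# 5-fold stratified cross-validation: each fold is held out once for testing
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(), X, y, cv=cv, scoring="accuracy")
print(f"mean accuracy = {scores.mean():.2f} ± {scores.std():.2f}")
```

The spread of the fold scores gives the "better insight" mentioned above: a large dataset yields fold estimates that agree closely, so the mean accuracy can be trusted.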
I would be grateful to receive input and feedback not only on my analysis techniques, but also on how I use open data to improve the accuracy of my model.
An Open Research approach to investigating how eye-tracking technology has been used as a tool to evaluate social cognition in intellectual disability
Postgraduate Research Student
Eye-tracking technology can provide information about social-cognitive processes without the need for explicit responses or verbal demands. Here, I present a systematic review, conducted with an Open Research approach, investigating how eye-tracking technology has been used as a tool to evaluate social cognition among individuals with an intellectual disability. When I came to pre-register the review, I noticed that popular guidance was more focused on intervention and outcome than on detailed description of methodology. As the purpose of pre-registration is to facilitate transparency and robustness while constraining reporting bias, I wanted to follow a framework that best suited our review. I used the Non-Interventional, Reproducible, and Open Systematic Reviews (NIRO-SR; Topor et al., 2021) guidelines and framework: a 68-item checklist supporting planning, pre-registering, and reporting of non-interventional studies in systematic reviews. Addressing each item not only maximised the transparency of our review processes but also meant we began the review with a comprehensive protocol written, supporting efficient and reliable data extraction and synthesis across reviewers.
Searches were conducted in PsycINFO, MEDLINE, Embase and Web of Science, and through mailing lists (ID-Research-UK, COGDEVSOC, Dev-Europe). We included both grey literature and peer-reviewed research, as we did not want to overlook the risk of publication bias and, consequently, over-estimate the effectiveness of eye-tracking technology. The review identifies a relatively substantial amount of research, yet variability in eye-tracking protocols and heterogeneity of stimuli used. In addition, studies were often limited by sample size and at times ran exploratory analyses, increasing the potential for sample-dependent results and Type I error.
We recommend presenting protocols transparently, and developing a bank of open-access, validated eye-tracking stimuli, to encourage replication of findings and opportunities for data sharing. Collaborative and open methods will strengthen theoretical and clinical implications regarding social cognition in intellectual disability.
Using open data and code to investigate brain-behaviour relationships
Postgraduate Research Student
Neuromodulation uses sound, electricity, or magnetism to change how the brain works, with the aim of changing behaviour. However, current neuromodulation technology cannot reach its full potential until there is a mechanistic understanding of how stimulation influences brain network dynamics and, as a result, behaviour. Utilising open data and code, I have developed a pipeline that identifies recurrent patterns of brain network connectivity and their dynamics, as well as measures of complexity. I will then assess how these metrics relate to behaviour across a diverse set of cognitive tasks.
The Human Connectome Project (HCP) is an open, high-quality functional magnetic resonance imaging (fMRI) dataset of 1200 young adults resting or completing tasks. Using a subset of 200 participants, we will use open-source code for Leading Eigenvector Dynamics Analysis (LEiDA) to identify the shared connectivity states within and between experimental conditions.
We have made several additions to the LEiDA pipeline, including statistical and algorithmic complexity metrics. Complexity is a measure of entropy and provides clues about the predictability of the brain’s dynamics at both the regional and whole-brain level.
Finally, we compare these metrics to each other and behavioural performance metrics to assess which brain metrics, if any, are predictive of task performance. I have created and tested this pipeline on n=60 participants and am now scaling up to n=200.
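The core LEiDA step, extracting the leading eigenvector of the instantaneous phase-coherence matrix at each time point, can be illustrated with a short numpy/scipy sketch. This is a toy example on synthetic signals, not the project's actual pipeline, and the function and variable names are my own:

```python
import numpy as np
from scipy.signal import hilbert

def leida_eigenvectors(bold):
    """Leading eigenvectors of the phase-coherence matrix per time point.

    bold: array (T, N) of T time points for N brain regions.
    Returns array (T, N) of unit-norm leading eigenvectors.
    """
    # Instantaneous phase of each regional signal via the Hilbert transform
    phase = np.angle(hilbert(bold, axis=0))           # (T, N)
    T, N = phase.shape
    vecs = np.empty((T, N))
    for t in range(T):
        # Phase-coherence matrix: cosine of pairwise phase differences
        d = phase[t][:, None] - phase[t][None, :]
        coh = np.cos(d)                               # (N, N), symmetric
        w, v = np.linalg.eigh(coh)
        lead = v[:, -1]                               # eigenvector of largest eigenvalue
        # Fix the sign convention (eigenvectors are defined up to a sign)
        if lead.mean() > 0:
            lead = -lead
        vecs[t] = lead
    return vecs

# Toy usage: two partially synchronised "regions" plus one noise region
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
bold = np.stack([np.sin(2 * np.pi * 0.5 * t),
                 np.sin(2 * np.pi * 0.5 * t + 0.3),
                 rng.standard_normal(500)], axis=1)
V = leida_eigenvectors(bold)
print(V.shape)  # (500, 3)
```

In the full pipeline, these per-timepoint eigenvectors would then be clustered (e.g. with k-means) to identify recurrent connectivity states.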
Proposed Open Research practices
My study utilises validated open-source data and code; likewise, any additions to the LEiDA pipeline and any papers from this study will be made open access. Further recommendations to increase the openness and replicability of any part of this study are welcome. Collaboration, big data, and open access code are crucial tools needed to understand our brain and effectively utilise neuromodulation for clinical and research purposes.
Fully open-sourced music source separation and speech quality enhancement systems
Postgraduate Research Student
Music source separation (MSS) aims to separate different sound sources (e.g. drums, bass) from a mixture audio file, with applications such as karaoke and music remixing. Our study proposed a new model that predicts a complex-valued ideal ratio mask using a deep ResUNet architecture and a channel-wise subband feature, a subband representation that reduces computational cost. On the MUSDB18HQ test set, our model achieves 8.92 dB separation performance on the vocal track and an average score of 6.97 dB across the vocals, drums, bass and other tracks. In the ISMIR 2021 Music Demixing Challenge, our system ByteMSS ranked 2nd on the vocal track and 5th on average score. Following open research practice, the training code, pretrained model and a demo Colab notebook are fully open-sourced.
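The complex ideal ratio mask being predicted can be illustrated in a few lines. The sketch below is a toy numpy example of the oracle mask, not the ByteMSS model itself: the mask is the elementwise ratio of the source STFT to the mixture STFT, and applying it to the mixture recovers the source.

```python
import numpy as np

def complex_ideal_ratio_mask(S, Y, eps=1e-8):
    """Oracle cIRM: complex ratio of source STFT S to mixture STFT Y.
    A model trained to predict this mask recovers the source as M * Y."""
    return S / (Y + eps)

# Toy complex STFT frames: a "source" plus interference forms the mixture
rng = np.random.default_rng(1)
S = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
N = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
Y = S + N

M = complex_ideal_ratio_mask(S, Y)
S_hat = M * Y                      # masked mixture approximates the source
print(np.allclose(S_hat, S, atol=1e-5))  # True (oracle mask reconstructs it)
```

Because the mask is complex-valued, it corrects both magnitude and phase, which is why cIRM targets can outperform magnitude-only masks.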
Speech quality enhancement is a crucial topic in improving online communication and recorded speech quality, especially as remote working has become popular in recent years. In our work, we proposed a novel speech quality enhancement architecture, VoiceFixer, based on a mel-spectrogram restoration module and a speaker-independent neural vocoder synthesis module. We found that utilising the prior knowledge of a pretrained vocoder can facilitate high-quality speech generation, especially on super-resolution tasks, and that jointly performing multiple quality enhancement tasks becomes feasible on the low-dimensional mel-spectrogram. Our proposed system can achieve general speech super-resolution, dereverberation, de-clipping, enhancement, and equalisation in one pass. Objective scoring experiments show that our system has clear advantages over baselines and achieves good restoration quality on real-world test cases. Our training code, pretrained models and datasets are publicly available to facilitate open research practice.
Showering smartly in tourism accommodations
Dr Pablo Pereira Doel
Lecturer in Hospitality Information Technology; Digital Lab Commercialisation Officer; ESRC-SeNSS Research Fellow; Sustainability Fellow at Institute for Sustainability
Depletion of freshwater is a global environmental threat, increasing with the climate breakdown. Yet water literacy and conservation are still under-researched. Showering is one of the most water-intensive behaviours, also contributing to energy use and carbon emissions. This project aimed at fostering pro-environmental shower behaviour among tourism accommodation guests using smart technology and persuasive messages.
Randomised, covert controlled trials were conducted in seven tourism accommodations across four countries, installing in shower cubicles an innovative smart technology that detects showers through different sensors and, via a displayed timer, informs the user of their shower length in real time. Different persuasive messages appealing to personal values were also used in combination with the technology. Actual shower duration was unobtrusively collected through the technology, measuring the effect of the behavioural intervention.
Shower duration was found to be 13.56% shorter when the real-time information was provided compared to the control. The messages further reduced duration, with showers 21.27% shorter with the message appealing to selfless values.
This intervention provided shower data about a hidden behaviour. It enhanced the role of personal values in pro-environmental behaviour, contributing to current behavioural change theories. Methodologically, the research used innovative technology for the intervention and data collection. The research fostered pro-environmental shower behaviour, achieved water and energy reductions, and contributed to the Sustainable Development Goals 6, 7, 12, & 13.
This research, the first author’s PhD, represents a learning journey towards Open Research. Through the University of Surrey Open Research training and materials, the first author started to learn the values of openness, collaboration, sharing and transparency in research, now embedded in his way of planning, developing, and communicating research for the benefit of all. This research represents the author’s first steps: the study’s preprint and the data collected. The data will be available when the peer review process ends.
A systematic review and meta-analysis to quantify weight loss in pancreatic cancer: Challenges using published research based on healthcare data
Postgraduate Research Student
Pancreatic cancer is rare but has dismal survival rates, making it the sixth leading cause of UK cancer mortality [1]. It often presents with non-specific symptoms, including weight loss, which makes early diagnosis, when curative treatment is still possible, challenging. However, because weight loss (often severe) occurs in most pancreatic cancer patients, it could be a useful marker. The study aims to quantify the pattern (amount and timing) of pre-diagnosis weight loss in pancreatic cancer, improving its utility for early detection.
A systematic review and meta-analysis of the literature to quantify weight-loss patterns in pancreatic cancer. Embase, PubMed, Scopus, Web of Science and The Cochrane Library were searched using key words including pancreatic cancer and weight loss. Observational studies containing quantitative data on pre-diagnosis weight loss were included. Risk of bias was assessed using ROBINS-I [2]. The main outcome was weight loss, synthesised separately for each study type for meta-analysis using standardised mean differences.
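The standardised-mean-difference synthesis step can be sketched in Python. The summary statistics below are made up for illustration and are not the review's data: each study yields a Hedges' g (an SMD with small-sample correction) and its variance, which are then pooled with a DerSimonian-Laird random-effects model.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference (Hedges' g) and its variance."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                        # Cohen's d
    J = 1 - 3 / (4 * (n1 + n2) - 9)           # small-sample correction factor
    g = J * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

def random_effects_pool(g, v):
    """DerSimonian-Laird random-effects pooled estimate and standard error."""
    g, v = np.asarray(g), np.asarray(v)
    w = 1 / v
    fixed = np.sum(w * g) / np.sum(w)
    Q = np.sum(w * (g - fixed)**2)            # heterogeneity statistic
    tau2 = max(0.0, (Q - (len(g) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)                     # weights inflated by between-study variance
    pooled = np.sum(w_re * g) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return pooled, se

# Illustrative (made-up) study summaries:
# (case mean kg lost, SD, n, control mean kg lost, SD, n)
studies = [(8.0, 3.0, 40, 2.0, 2.5, 50),
           (7.5, 4.0, 25, 1.5, 3.0, 30),
           (9.0, 3.5, 60, 2.5, 2.8, 55)]
gs, vs = zip(*(hedges_g(*s) for s in studies))
pooled, se = random_effects_pool(gs, vs)
print(round(pooled, 2), round(se, 2))
```

Synthesising separately per study type, as described above, simply means running the pooling step once per group of comparable studies.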
40 studies, including 15 case-control studies, were retrieved. Preliminary findings show that 70-75% of pancreatic cancer patients experienced unintentional weight loss in the six months pre-diagnosis. The odds ratio of weight loss >15% is 15.40 (95% CI, 10.65-22.26). Weight loss is 8.1±0.7 kg, with a BMI change of -1.21 kg/m². Greater weight loss is associated with poorer prognosis.
Quantifying the pattern of weight loss in pancreatic cancer may help clinicians to identify at-risk patients. This information could also improve data-driven algorithms designed to flag high-risk patients.
Open Research practices
Registered the systematic review protocol on PROSPERO (CRD42022302985). Results will be published open access in a peer reviewed journal.
Open Research challenges
Reported measurement outcomes lack consistency in units and conventions; standards for reporting weight loss are required to improve research quality. Transparency is impacted when studies present data only graphically and are unable to share original datasets.
Two Decades of Open Data for language diversity at the Surrey Morphology Group
Professor Erich Round
British Academy Global Professor of Linguistics
The Surrey Morphology Group (SMG) is a world-leading research group in the School of Literature and Languages. Since the 1990s, a key ingredient in the SMG recipe for success has been Open Research language data.
In the 1990s, language databases were often created alongside research projects, but weren’t made publicly available. The SMG committed to high-value databases that were freely accessible.
Linguistic databases require analysis of source materials. We ensure users can access the source information that supports our claims. This is a necessary condition for reproducible science, which has now become a high-priority issue. Currently, the SMG maintains 21 web databases, created between 1999 (Surrey Person Syncretism Database) and 2021 (Surrey Lexical Splits Database, among others), including dictionaries with social media interactivity (Nuer Lexicon, Archi dictionary), descriptions of linguistic phenomena, and curated collections of cultural artefacts (The Mian & Kilivila Collection). Their creation has improved access for linguists and community members alike. Today, a priority is enhancing the interoperability of the field’s databases beyond the SMG.
SMG is renowned for its outstanding databases, which showcase our excellent research. Around the world, linguists from undergraduates to professors use these resources for learning, teaching and research. Our data is inherently connected by complex relationships, which led to technical challenges alongside theoretical breakthroughs. Early concerns that data would be used without citation were addressed by the systematic usage of DOIs and user-friendly cite buttons. Databases are not on the radar of every committee and ensuring technical continuity has proven a challenge still awaiting a stable solution.
SMG has played a leading role internationally in promoting Open Research in linguistics. Our pioneering open databases have promoted the idea that such contributions are fundamental to science.
Open-source toolbox for harmonised analysis of clinical angiography images to support discovery of novel biomarkers
Accurate assessment of the microvasculature (the smallest vessels in the human body) could identify biomarkers that help drive a decline in vascular disease mortality. Optical coherence tomography angiography (OCTA) is a non-invasive modality capable of imaging microvasculature, but inconsistencies in data processing protocols across institutions and devices represent a major barrier to progress in applying OCTA to reduce the burden of disease. Our project aims to remove this barrier.
We have acquired and used OCTA images to develop and optimize a toolbox for OCTA image analysis. We validated the optimized software using OCTA images from different commercial and non-commercial instruments and samples.
We have created an integrated MATLAB – ImageJ toolkit (OCTAVA – OCTA Vascular Analyser) with a user-friendly interface for processing and analysis of OCTA images. Quantitative results from various OCTA images showed that OCTAVA can accurately and reproducibly determine metrics for characterization of the microvasculature.
Wide adoption of OCTAVA is possible and could enable studies and aggregation of data on a scale sufficient to develop reliable microvascular biomarkers for early detection, to guide treatment, and thereby to reduce the burden of microvascular disease.
Open Research wins/challenges/learnings
There are no large-scale OCTA data sets yet widely available. Making OCTAVA open access means that it can be further validated via international laboratory and clinical research communities and eventually become standardized software for OCTA data analysis. This would enable building large cross-institution normative databases of the microvascular system in health and disease. Such large data sets will enable defining the most sensitive biomarkers to distinguish between health and disease. Working on OCTAVA we have learnt about various repositories to host data and software for the public, how to prepare documentation for users, and when to consider IP protection in software research.
Considerations for Open Practices in quantum technologies research
Postgraduate Research Student
The field of quantum technologies is a fast-paced research environment where new developments are constantly being reported. However, recent controversies, such as the retraction of a 2018 Nature article claiming to find evidence of the elusive Majorana fermion, have made me reflect on the research practices of my field and my own practice. When conducting future research, I plan to engage in a variety of open research practices to ensure the reproducibility and transparency of my work, thereby encouraging others in my field to engage with my data, build on it and critically engage with my conclusions.
Regarding my current research, which is due to be published soon, I aim to implement several practices to ensure my data is freely accessible and to facilitate reproducibility and scrutiny of my studies. This will be achieved by publishing under an open licence as well as depositing datasets and code on an open repository such as Zenodo. In future studies, I would like to explore producing a Registered Report alongside publishing an open online lab notebook through my own personal website. This will improve the transparency of my research through the entire process, not just at the time of publication.
Additionally, I aim to engage with the open datasets of others in my field to help foster a culture of shared datasets that different groups will analyse in different ways, extracting more information from results as well as improving the reliability of our knowledge through collaboration. To assess the efficacy of these ideas for open research practices in the field of quantum technologies, I would benefit from reflections on how successful such practices are in other areas, particularly that of analysing the open data of others within the context of my own research topic.
Efficient audio-based convolutional neural networks via filter pruning
Dr Arshdeep Singh
Research Fellow A in AI4S Project + Sustainability Fellow at the Institute for Sustainability
Recent trends in artificial intelligence employ convolutional neural networks (CNNs) [1, 2], which provide remarkable performance compared with other existing methods. However, the large size and high computational cost (including training and inference cost) of CNNs is a bottleneck to deployment on resource-constrained devices such as smartphones. Moreover, training CNNs for several hours leads to significant CO2 emissions. For instance, a computing device (NVIDIA GPU RTX-2080 Ti) used to train a CNN for 48 hours generates as much CO2 as an average car driven for 20 km¹.
One direction for compressing CNNs is “pruning”, where unimportant filters are explicitly removed from the original network to build a compact, pruned network. After pruning, the pruned network is fine-tuned to regain the performance loss. In this study, we propose a cosine-distance-based greedy algorithm to measure similarity among filters in filter space, where each filter is approximated using a rank-1 approximation. The proposed work uses the following open research practices:
- “Open data”: Experiments are performed on a publicly available audio scene classification dataset² and CNN.
- “Open access”: To reproduce the results, the proposed code is made publicly available (link: https://gitlab.surrey.ac.uk/as0150/passive-pruning)³.
- “Open access”: To estimate CO2 emissions, an openly available online tool is used.
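The similarity-based selection idea can be sketched as follows. This is a toy numpy illustration under my own assumptions, not the study's actual algorithm: each filter is replaced by its rank-1 SVD approximation, and a greedy pass keeps the subset of filters that are most mutually dissimilar under cosine similarity, marking the rest as redundant.

```python
import numpy as np

def rank1_approx(f):
    """Rank-1 approximation of a 2-D filter: keep only the top singular triplet."""
    U, s, Vt = np.linalg.svd(f, full_matrices=False)
    return s[0] * np.outer(U[:, 0], Vt[0])

def greedy_prune(filters, keep):
    """Greedily select `keep` mutually dissimilar filters.

    Filters are compared by cosine similarity of their flattened
    rank-1 approximations; the unselected filters would be pruned.
    """
    flat = np.stack([rank1_approx(f).ravel() for f in filters])
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)  # unit vectors
    kept = [0]                                    # seed with the first filter
    while len(kept) < keep:
        # Highest similarity of each candidate to anything already kept
        sim_to_kept = np.max(flat @ flat[kept].T, axis=1)
        sim_to_kept[kept] = np.inf                # never re-select kept filters
        kept.append(int(np.argmin(sim_to_kept)))  # most dissimilar candidate next
    return sorted(kept)

rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 3, 3))          # 8 toy 3x3 conv filters
print(greedy_prune(filters, keep=4))
```

In a real network the surviving filter indices would be used to slice the convolution layer's weights before fine-tuning.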
The initial analysis suggests the following:
- The number of parameters and the computation time during inference are reduced by 50% and 30% respectively, at approximately a 3% drop in accuracy compared with the unpruned network.
- Utilising only 10% of the training dataset (reducing computation during the fine-tuning process by 90%), the performance of the pruned network can be improved significantly.
In future, the study aims to improve pruned-network performance and to extend the analysis to larger networks such as PANNs and VGGish⁴.
Experience of publishing in Open Research notes journal
Dr Paul Stevenson
Reader, Head of the Theoretical Nuclear Physics Group, Senior Personal Tutor
In 2020, IOP Publishing launched a new journal, IOP SciNotes, whose remit is to publish brief pieces of work not appropriate for a full-length article but which would otherwise go unpublished and be lost to the research record. These could include preliminary results; reproduced results or observations; descriptions of a new method, protocol or dataset; negative results; or registered methodologies for a planned piece of research. The journal is peer reviewed and open access.
I wanted to support the journal and had just been making use of a major published computer code in a novel way not documented in our original paper. Writing a short research note to describe this method of using the code seemed like an ideal article for the new journal.
The poster describes the process of writing, submitting, revising after referees’ comments, publication and subsequent impact. In summary, the paper was published and is now freely accessible to the research community. Without such a route to publication, this method and use of the published code would have remained unknown. On the other hand, IOP SciNotes is not a well-known journal in my research community, and my paper, on a method in nuclear physics, is somewhat lost in a large sea of articles spanning a very wide range of research areas. Publishing in this way thus requires more active promotion of the work to the relevant community if it is to be noticed.
The published article is "Internuclear potentials from the Sky3d code", P. D. Stevenson, IOP SciNotes 1, 025201 (2020) doi: 10.1088/2633-1357/ab952a