
Dr Su Wang

Research Staff, NICE

Academic and research departments

Department of Computer Science.



I was born in Beijing, P. R. China, in 1988. I received a BSc degree in Computer Science and Technology from Century College, Beijing University of Posts and Telecommunications, China, in 2010, and an MSc degree in Information Systems from the University of Surrey, UK, in 2011.

My publications


Yin Hu, Su Wang, N Ma, Suzie Hingley-Wilson, Andrea Rocco, Johnjoe McFadden, Hongying Tang (2017) Trajectory Energy Minimisation for Cell Growth Tracking and Genealogy Analysis, In: Royal Society Open Science 4 170207 The Royal Society

Cell growth experiments with a microfluidic device produce large-scale time-lapse image data, which contain important information on cell growth and patterns in their genealogy. To extract such information, we propose a scheme to segment and track bacterial cells automatically. In contrast to most published approaches, which often split segmentation and tracking into two independent procedures, we focus on designing an algorithm that describes cell properties evolving between consecutive frames by feeding segmentation and tracking results from one frame to the next. The cell boundaries are extracted by minimising the Distance Regularised Level Set Evolution model. Each individual cell is identified and tracked by detecting the cell septum and membrane and by minimising a trajectory energy function along the time-lapse series. Experiments show that by applying this scheme, cell growth and division can be measured automatically. The results, from tests on different datasets and comparisons with other existing algorithms, show the efficiency of the approach. The proposed approach demonstrates great potential for large-scale bacterial cell growth analysis.
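
The frame-to-frame tracking step can be illustrated as an energy-minimising assignment between cells segmented in consecutive frames. The energy terms, weights, and brute-force solver below are illustrative assumptions only, not the paper's actual formulation, which also models septum detection and cell division:

```python
import math
from itertools import permutations

def match_cells(prev, curr, w_pos=1.0, w_area=0.5):
    """Match cells between consecutive frames by minimising a simple
    energy combining centroid displacement and relative area change.
    Toy brute-force assignment; illustrative only."""
    def cost(p, c):
        disp = math.dist(p["centroid"], c["centroid"])
        darea = abs(p["area"] - c["area"]) / max(p["area"], 1e-9)
        return w_pos * disp + w_area * darea

    n = min(len(prev), len(curr))
    best, best_energy = None, math.inf
    # Try every assignment of previous cells to current cells and
    # keep the one with the lowest total energy.
    for perm in permutations(range(len(curr)), n):
        energy = sum(cost(prev[i], curr[j]) for i, j in enumerate(perm))
        if energy < best_energy:
            best, best_energy = list(enumerate(perm)), energy
    return best, best_energy
```

In practice, a polynomial-time assignment solver would replace the exhaustive search, but the principle of comparing candidate trajectories by accumulated energy is the same.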

In any diabetic retinopathy screening program, about two-thirds of patients have no retinopathy. However, on average, it takes a human expert about one and a half times longer to decide that an image is normal than to recognize an abnormal case with obvious features. In this work, we present an automated system for filtering out normal cases to facilitate a more effective use of grading time. The key aim with any such tool is to achieve high sensitivity and specificity to ensure patients' safety and service efficiency. There are many challenges to overcome, given the variation of images and characteristics to identify. The system combines computed evidence obtained from various processing stages, including segmentation of candidate regions, classification and contextual analysis through Hidden Markov Models. Furthermore, evolutionary algorithms are employed to optimize the Hidden Markov Models, feature selection and heterogeneous ensemble classifiers. In order to evaluate its capability of identifying normal images across diverse populations, a population-oriented study was undertaken comparing the software's output to grading by humans. Population-based studies collect large numbers of images from subjects expected to have no abnormality and require timely, cost-effective grading. Altogether 9954 previously unseen images taken from various populations were tested. All test images were masked so that the automated system had not been exposed to them before. The system was trained using image subregions taken from about 400 sample images. Sensitivities of 92.2% and specificities of 90.4% were achieved, varying between populations and population clusters. Of all images the automated system decided were normal, 98.2% were true normals when compared to the manual grading results. These results demonstrate the scalability and strong potential of such an integrated computational intelligence system as an effective tool to assist a grading service.
© 2013 Tang et al.
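
As a rough illustration of evolutionary tuning of an ensemble, the sketch below optimises per-classifier vote weights with a toy (1+1) hill-climbing strategy. The fitness function, mutation scheme, and weighted vote are simplified assumptions, not the published system's HMM-based design:

```python
import random

def evolve_ensemble_weights(preds, labels, generations=200, seed=0):
    """Tune per-classifier weights for a weighted-vote ensemble with a
    toy (1+1) evolutionary strategy. A much-simplified stand-in for the
    evolutionary optimisation described in the paper."""
    rng = random.Random(seed)
    n_clf = len(preds)

    def accuracy(w):
        # Weighted vote: each classifier votes +w (abnormal) or -w (normal).
        correct = 0
        for i, y in enumerate(labels):
            score = sum(w[c] * (1 if preds[c][i] else -1) for c in range(n_clf))
            correct += (score > 0) == bool(y)
        return correct / len(labels)

    weights = [1.0] * n_clf
    best = accuracy(weights)
    for _ in range(generations):
        # Mutate all weights with Gaussian noise, clamp at zero,
        # and keep the candidate if it is at least as accurate.
        cand = [max(0.0, w + rng.gauss(0, 0.3)) for w in weights]
        acc = accuracy(cand)
        if acc >= best:
            weights, best = cand, acc
    return weights, best
```

The same hill-climbing loop generalises to other fitness functions, such as a weighted combination of sensitivity and specificity.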

Su Wang, Hongying Tang, LI Al turk, Yin Hu, Saeid Sanei, GM Saleh, T Peto (2016) Localising Microaneurysms in Fundus Images Through Singular Spectrum Analysis, In: IEEE Transactions on Biomedical Engineering 64(5) pp. 990-1002 IEEE

Goal: Reliable recognition of microaneurysms is an essential task when developing an automated analysis system for diabetic retinopathy detection. In this work, we propose an integrated approach for automated microaneurysm detection with high accuracy. Methods: Candidate objects are first located by applying a dark-object filtering process. Their cross-section profiles along multiple directions are processed through singular spectrum analysis. The correlation coefficient between each processed profile and a typical microaneurysm profile is measured and used as a scale factor to adjust the shape of the candidate profile. This increases the difference in profiles between true microaneurysms and non-microaneurysm candidates. A set of statistical features of those profiles is then extracted for a K-Nearest Neighbour classifier. Results: Experiments show that by applying this process, microaneurysms can be separated well from the retinal background and from the most common interfering objects and artefacts. Conclusion: The results demonstrate the robustness of the approach when tested on large-scale datasets, with clinically acceptable sensitivity and specificity. Significance: The approach has great potential when used in an automated diabetic retinopathy screening tool or for large-scale eye epidemiology studies.
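
A minimal sketch of SSA-based profile smoothing, assuming a standard trajectory-matrix embedding, truncated SVD, and diagonal averaging. The window length, rank, and the correlation-based score below are illustrative choices, not the published parameters:

```python
import numpy as np

def ssa_smooth(profile, window=5, rank=2):
    """Reconstruct a 1-D cross-section profile from the leading `rank`
    components of singular spectrum analysis (SSA). Simplified sketch
    of this style of profile pre-processing."""
    x = np.asarray(profile, dtype=float)
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged copies of the profile.
    traj = np.column_stack([x[i:i + k] for i in range(window)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    # Diagonal averaging (Hankelisation) back to a 1-D series.
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        out[i:i + k] += approx[:, i]
        counts[i:i + k] += 1
    return out / counts

def profile_score(candidate, template):
    """Correlation coefficient between a candidate profile and a
    template, usable as a shape-scaling factor."""
    return float(np.corrcoef(candidate, template)[0, 1])
```

Each candidate's cross-section profile would be smoothed this way along several directions, scored against a typical microaneurysm profile, and the resulting statistics fed to the classifier.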

GM Saleh, J Wawrzynski, S Caputo, T Peto, LI Al turk, S Wang, Y Hu, L Da Cruz, P Smith, HL Tang (2016)An automated detection system for microaneurysms that is effective across different racial groups, In: Journal of Ophthalmology Hindawi Publishing Corporation

Patients without diabetic retinopathy (DR) represent a large proportion of the caseload seen by the DR screening service, so reliable recognition of the absence of DR in digital fundus images (DFIs) is a prime focus of automated DR screening research. We investigate the use of a novel automated DR detection algorithm to assess retinal DFIs for the absence of DR. A retrospective, masked, controlled image-based study was undertaken. 17,850 DFIs of patients from six different countries were assessed for DR by the automated system and by human graders. The system's performance was compared across DFIs from the different countries/racial groups. The sensitivities for detection of DR by the automated system were: Kenya 92.8%, Botswana 90.1%, Norway 93.5%, Mongolia 91.3%, China 91.9%, and UK 90.1%. The specificities were: Kenya 82.7%, Botswana 83.2%, Norway 81.3%, Mongolia 82.5%, China 83.0%, and UK 79%. There was little variability in the calculated sensitivities and specificities across the six countries involved in the study. These data suggest the possible scalability of an automated DR detection platform that enables rapid identification of patients without DR across a wide range of races.

MB Hansen, Hongying Tang, Su Wang, L Al Turk, R Piermarocchi, M Speckauskas, H-W Hense, I Leung, T Peto (2016) Automated detection of Diabetic Retinopathy in Three European Populations, In: Journal of Clinical & Experimental Ophthalmology 7(4) 1000582 OMICS International

Objective: Currently 1/12 of the world's population has diabetes mellitus (DM); many are or will be screened by having retinal images taken. This study aims to evaluate the DAPHNE software's ability to detect DR in three different European populations against human grading carried out at the Moorfields Eye Hospital Reading Centre (MEHRC). Participants: Retinal images were taken from participants of the HAPIEE study (Lithuania, n=1014), the PAMDI study (Italy, n=882) and the MARS study (Germany, n=909). Methods: All anonymized images were graded by human graders at MEHRC for the presence of DR. Independently, and without any knowledge of the human graders' results, the DAPHNE software analysed the images and divided the participants into DR and no-DR groups. Main outcome measures: The primary outcomes were sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of the DAPHNE software with regard to the identification of DR or no-DR on retinal images, taking the human grading as the reference standard. Results: A total of 2805 participants were enrolled from the three study sites. The sensitivity of the DAPHNE software was above 93% in all three studies; specificity was above 80%, the PPV was above 28% and the NPV was not below 98.8% in any of the studies. The DAPHNE software did not miss any vision-threatening DR. The areas under the curve (AUC) for all three studies were above 0.96. DAPHNE reduced manual human workload by 70% but had a total false positive rate of 63%. Conclusions: The DAPHNE software proved reliable in detecting DR in three different European populations, using three different imaging settings. Further testing is required to assess scalability and performance on live DR screening systems and on camera settings different from those in these studies.

Lutfiah Al Turk, Su Wang, Paul Krause, James Wawrzynski, George M. Saleh, Hend Alsawadi, Abdulrahman Zaid Alshamrani, Tunde Peto, Andrew Bastawrous, Jingren Li, Hongying Lilian Tang (2020) Evidence Based Prediction and Progression Monitoring on Retinal Images from Three Nations, In: Translational Vision Science & Technology 9(2) 44 Association for Research in Vision and Ophthalmology

Purpose: The aim of this work is to demonstrate how a retinal image analysis system, DAPHNE, supports the optimization of diabetic retinopathy (DR) screening programs for grading color fundus photography. Method: Retinal image sets, graded by trained and certified human graders, were acquired from Saudi Arabia, China, and Kenya. Each image was subsequently analyzed by the DAPHNE automated software. The sensitivity, specificity, and positive and negative predictive values for the detection of referable DR or diabetic macular edema (DME) were evaluated, taking human grading or clinical assessment outcomes to be the gold standard. The automated software's ability to identify co-pathology and to correctly label DR lesions was also assessed. Results: In all three datasets the agreement between the automated software and human grading was between 0.84 and 0.88. Sensitivity did not vary significantly between populations (94.28%-97.1%), with specificity ranging from 90.33% to 92.12%. Negative predictive values were excellent, above 93%, in all image sets. The software was able to monitor DR progression between baseline and follow-up images, with the changes visualized. No cases of proliferative DR or DME were missed in the referable recommendations. Conclusions: The DAPHNE automated software demonstrated its ability not only to grade images but also to reliably monitor and visualize progression. Therefore it has the potential to assist timely image analysis in patients with diabetes in varied populations and also to help discover subtle signs of sight-threatening disease onset. Translational Relevance: This article takes research on machine vision and evaluates its readiness for clinical use.