William Vale
About
My research project
Deep Learning for Improving Photoacoustic Imaging of CAR-T Cells
During CAR-T cell cancer therapy, a patient's T cells are modified to recognise and attack cancer cells. My work aims to use deep learning paired with reporter genes and photoacoustic imaging to track CAR-T cells in vivo.
Supervisors
Research
Research interests
Medical Physics, Medical Imaging, Artificial Intelligence, Computed Tomography.
Publications
CAR-T cell immunotherapy is a promising technique for cancer treatment. To better understand and improve its efficacy for solid tumours, methods for in-vivo imaging and quantifying the CAR-T cell distribution are necessary. One approach involves inserting a reporter gene into the CAR-T cells, causing them to express photochromic proteins that provide strong near-infrared (NIR) optical contrast. NIR photoacoustic (PA) imaging is then used to image these proteins, and implicitly the CAR-T cells. The laser pulse in PA imaging causes a systematic and repeatable variation in the contrast provided by the photochromic proteins between successive scans that is distinguishable from the constant background contrast. In this study, machine learning (ML) techniques are used to classify and predict the spatial concentration of the proteins by analysing time-series PA images. To address the need for large training datasets, we developed a novel 3D simulation framework, which generates labelled PA images of CAR-T cells expressing the reporter gene. The framework was used to procedurally generate, and simulate imaging of, 629 digital samples, each of which was scanned sequentially by 32 laser pulses, resulting in 20,128 images. Neural networks, specifically a Multi-Layer Perceptron (MLP) and a U-Net, were applied for the pixel-wise binary classification and regression of the reporter protein. These exceeded the performance of a Random Forest (RF) algorithm which was previously applied in another study using a small (n=3) in-vivo dataset. The U-Net achieved a coefficient of determination (R²) of 0.96 and a root mean squared error (RMSE) of 4.3 × 10⁻⁹ M, which represents a significant improvement when compared with the R² of 0.72 and RMSE of 1.1 × 10⁻⁸ M achieved by the RF.
This study proposes a potential advancement in the accurate non-invasive image detection and quantification of CAR-T cells, with the goal of accelerating preclinical research in cancer immunotherapy for solid tumours.
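The abstract above reports per-pixel regression quality as R² and RMSE. As a minimal sketch of how these two metrics are computed over concentration maps (the arrays and values below are illustrative, not data from the study):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination over all pixels."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean squared error over all pixels."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Illustrative pixel-wise protein concentration maps (molar)
rng = np.random.default_rng(0)
truth = rng.uniform(0, 1e-7, size=(64, 64))
prediction = truth + rng.normal(0, 5e-9, size=(64, 64))

print(f"R²   = {r2_score(truth, prediction):.3f}")
print(f"RMSE = {rmse(truth, prediction):.2e} M")
```

Note that RMSE carries the units of the concentration itself (here molar, matching the 10⁻⁹ M scale quoted above), while R² is dimensionless.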
Quantitative photoacoustic imaging aims to determine the spatial distribution of the tissue's optical absorption coefficient from photoacoustic (PA) signals measured at its surface. We combine large-scale optical and acoustic modelling to estimate the optical absorption coefficient from simulated PA signal measurements using a band-limited transducer array that provides limited angular coverage. We validated our approach using a digital mouse atlas, and a PA imaging forward model based on the MSOT inVision 256™ system (iThera GmbH, Munich). We were able to recover the absorption coefficient when it was assumed that the scattering coefficient was known exactly, and that the digital phantom was an extrusion out of the 2D imaging plane. We then investigated how the performance was affected when these two assumptions were relaxed, and when substantial negative pressure artifacts were present in the reconstructed images.
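The quantitative problem described above is nonlinear because the initial pressure p₀ = Γ·μₐ·Φ depends on the light fluence Φ, which itself depends on the unknown absorption coefficient μₐ. A toy 1D sketch of this coupling, using an illustrative exponential fluence model and a simple fixed-point update (none of this is the paper's actual model, which uses full 3D optical and acoustic simulation):

```python
import numpy as np

def fluence(mu_a, mu_s_prime, depth):
    """Toy 1D fluence model: exponential decay with the effective
    attenuation coefficient mu_eff = sqrt(3 * mu_a * (mu_a + mu_s'))."""
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))
    return np.exp(-mu_eff * depth)

gamma = 0.2          # Grüneisen parameter (dimensionless, illustrative)
mu_a_true = 0.25     # true absorption coefficient, mm^-1
mu_s_prime = 1.0     # reduced scattering, mm^-1 (assumed known, as in the study)
depth = 1.0          # imaging depth, mm

# Forward model: initial pressure at this depth
p0 = gamma * mu_a_true * fluence(mu_a_true, mu_s_prime, depth)

# Fixed-point iteration: mu_a <- p0 / (gamma * Phi(mu_a))
mu_a = 0.1
for _ in range(50):
    mu_a = p0 / (gamma * fluence(mu_a, mu_s_prime, depth))

print(f"recovered mu_a = {mu_a:.4f} mm^-1 (true value {mu_a_true})")
```

The iteration converges here because the toy fluence model is known exactly; the paper's harder setting relaxes exactly this kind of assumption (unknown scattering, out-of-plane structure, reconstruction artifacts).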