Dr Tao Chen


Reader
BEng, MEng, PhD
+44 (0)1483 686593
03 BC 02

Academic and research departments

Department of Chemical and Process Engineering.

Biography

Areas of specialism

Process systems engineering; Computer modelling; Data analytics; Applications of modelling and data analytics in skin penetration, food engineering, radiotherapy, and manufacturing processes

University roles and responsibilities

  • Member, Faculty International Relations Committee
  • Professional Training Tutor, Department of Chemical & Process Engineering

    Highlights

    Kattou, P., Lian, G., Glavin, S., Sorrell, I., Chen, T., 2017. Development of a Two-Dimensional Model for Predicting Transdermal Permeation with the Follicular Pathway: Demonstration with a Caffeine Study. Pharm Res 34, 2036–2048. https://doi.org/10.1007/s11095-017-2209-0

    Chen, T., Lian, G., Kattou, P., 2016. In Silico Modelling of Transdermal and Systemic Kinetics of Topically Applied Solutes: Model Development and Initial Validation for Transdermal Nicotine. Pharm Res 33, 1602–1614. https://doi.org/10.1007/s11095-016-1900-x

    Kajero, O.T., Thorpe, R.B., Chen, T., Wang, B., Yao, Y., 2016. Kriging meta-model assisted calibration of computational fluid dynamics models. AIChE J. 62, 4308–4320. https://doi.org/10.1002/aic.15352

    Tariq, I., Chen, T., Kirkby, N.F., Jena, R., 2016. Modelling and Bayesian adaptive prediction of individual patients’ tumour volume change during radiotherapy. Phys. Med. Biol. 61, 2145. https://doi.org/10.1088/0031-9155/61/5/2145

    Gao, X., Jiang, Y., Chen, T., Huang, D., 2015. Optimizing scheduling of refinery operations based on piecewise linear models. Computers & Chemical Engineering 75, 105–119. https://doi.org/10.1016/j.compchemeng.2015.01.022

    He, B., Chen, T., Yang, X., 2014. Root cause analysis in multivariate statistical process monitoring: Integrating reconstruction-based multivariate contribution analysis with fuzzy-signed directed graphs. Computers & Chemical Engineering 64, 167–177. https://doi.org/10.1016/j.compchemeng.2014.02.014

    Publications

    Increasing concern over climate change and greenhouse gas emissions, together with diminishing global oil reserves, has pushed research into alternative energy. Reducing the cost of microalgae, a promising source of alternative energy, is a key step in commercialising biodiesel production. Avenues such as waste-stream-based nutrients, cost-effective cultivation systems and efficient harvesting options are currently being explored, with the common goal of establishing commercially viable microalgae production and utilisation schemes. From a review of the progress reported in the literature, this research identified several aspects of importance to commercialising biofuel production. Having identified gaps in the literature on the direct comparison of microalgal biomass production between temperate and hot regions, a novel investigation using a refined computer model was undertaken to compare upstream cultivation in open systems in both temperate and hot climates. The outcome suggested the relative importance of light over temperature for the cultivation of microalgae in an open pond system. This was then explored experimentally by reproducing temperate light intensity, photoperiod and temperature conditions over three months, representing the summer and winter seasons. The results of this novel adaptation of seasonal highs and lows for a temperate climate (UK) indicated that investment in additional light supply, rather than in a heating system, is the more effective intervention and is likely to yield higher algal biomass for biofuel production. Finally, the more economical aspects of the process were addressed: upstream cultivation on waste-stream-based nutrients (leachate) with a native microalgae strain for the first time, and downstream dewatering of algal biomass with improvements to energy-efficient forward osmosis technology, uniquely assessing a microalgae nutrient-based draw solution. Both sets of results indicated the real potential of these cost-efficient methods at laboratory scale. The ultimate goal of the project was to combine the research efforts on cultivation (upstream) and harvesting (downstream) to aid understanding of the commercial viability of biofuel production from microalgae.
    Coleman Lucy, Chen Tao, Sorrell Ian, Lian Guoping, Glavin Stephen In Silico Simulation of Simultaneous Percutaneous Absorption and Xenobiotic Metabolism: Model Development and a Case Study on Aromatic Amines, In: Pharmaceutical Research 37, 241. Springer
    To advance physiologically-based pharmacokinetic modelling of xenobiotic metabolism by integrating metabolic kinetics with percutaneous absorption. Kinetic rate equations were proposed to describe the metabolism of a network of reaction pathways following topical exposure and incorporated into the diffusion-partition equations of both xenobiotics and metabolites. The published ex vivo case study of aromatic amines was simulated. Diffusion and partition properties of xenobiotics and subsequent metabolites were determined using physiologically-based quantitative structure property relationships. Kinetic parameters of metabolic reactions were best fitted from published experimental data. For aromatic amines, the integrated transdermal permeation and metabolism model produced data closely matched by experimental results following limited parameter fitting of metabolism rate constants and vehicle:water partition coefficients. The simulation was able to produce dynamic concentration data for all the dermal layers, as well as the vehicle and receptor fluid. This mechanistic model advances the dermal in silico functionality. It provides improved quantitative spatial and temporal insight into exposure of xenobiotics, enabling the isolation of governing features of skin. It contributes to accurate modelling of concentrations of xenobiotics reaching systemic circulation and additional metabolite concentrations. This is vital for development of both pharmaceuticals and cosmetics.
    Yan W, Hu S, Yang Y, Gao F, Chen T (2011) Bayesian migration of Gaussian process regression for rapid process modeling and optimization, In: Chemical Engineering Journal 166(3), pp. 1095-1103. Elsevier
    Data-based empirical models, though widely used in process optimization, are restricted to a specific process being modeled. Model migration has proven to be an effective technique for adapting a base model from an old process to a new but similar process. This paper proposes to apply the flexible Gaussian process regression (GPR) for empirical modeling, and develops a Bayesian method for migrating the GPR model. The migration is conducted by a functional scale-bias correction of the base model, as opposed to the restrictive parametric scale-bias approach. Furthermore, an iterative approach that jointly accomplishes model migration and process optimization is presented. This is in contrast to the conventional “two-step” method whereby an accurate model is developed prior to model-based optimization. A rigorous statistical measure, the expected improvement, is adopted for optimization in the presence of prediction uncertainty. The proposed methodology has been applied to the optimization of a simulated chemical process, and a real catalytic reaction for the epoxidation of trans-stilbene.
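    To illustrate the migration idea, the sketch below fits a Gaussian process to the residuals between a few new-process measurements and the base model's predictions, i.e. a functional bias correction; the paper's full Bayesian treatment, with a scale term and priors, is not reproduced here, and all data and parameters are synthetic.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # "Old" process: plenty of data; "new" process: only a few runs.
    f_old = lambda x: np.sin(3 * x)              # hypothetical base process
    f_new = lambda x: 1.1 * np.sin(3 * x) + 0.3  # similar but scaled and shifted

    X_old = rng.uniform(0, 2, (40, 1)); y_old = f_old(X_old).ravel()
    X_new = rng.uniform(0, 2, (6, 1));  y_new = f_new(X_new).ravel()

    kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-3)
    base = GaussianProcessRegressor(kernel=kernel).fit(X_old, y_old)

    # Migrate: model the discrepancy between new-process data and base predictions.
    resid = y_new - base.predict(X_new)
    corr = GaussianProcessRegressor(kernel=kernel).fit(X_new, resid)

    X_test = np.linspace(0, 2, 5).reshape(-1, 1)
    y_pred = base.predict(X_test) + corr.predict(X_test)
    print(np.c_[f_new(X_test).ravel(), y_pred])  # true values vs migrated predictions
    ```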

    Model Predictive Control (MPC) is a route towards more energy-efficient waste treatment without compromising treatment quality. A key component is the process model describing how the inputs and outputs correlate. MPC uses this model to predict future outputs over a finite horizon and to decide on step changes to make at the input. These step changes are made so that the output reaches, and is maintained at, a user-specified set point. For MPC to be effective, the process model needs to describe the process behaviour accurately. This is a difficult challenge in waste treatment processes due to a combination of slow response, process complexity and large disturbances.
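    A minimal receding-horizon sketch of this idea, using a hypothetical scalar first-order model y[k+1] = a·y[k] + b·u[k] (real waste-treatment models are far more complex, as noted above):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    a, b = 0.9, 0.5          # assumed model parameters
    H, setpoint = 10, 1.0    # prediction horizon and target output

    def predict(y0, u_seq):
        """Simulate the model response to a candidate input step sequence."""
        y, traj = y0, []
        for u in u_seq:
            y = a * y + b * u
            traj.append(y)
        return np.array(traj)

    def cost(u_seq, y0):
        # Penalise deviation from the set point and aggressive input moves.
        traj = predict(y0, u_seq)
        return np.sum((traj - setpoint) ** 2) + 0.01 * np.sum(np.diff(u_seq) ** 2)

    y = 0.0
    for k in range(20):       # closed loop: re-optimise at every step
        res = minimize(cost, np.zeros(H), args=(y,), bounds=[(-2, 2)] * H)
        y = a * y + b * res.x[0]   # apply only the first optimised move
    print(f"output after 20 steps: {y:.3f}")
    ```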

    This research project investigated two research avenues towards developing better modelling techniques. This would result in more accurate models or achieve a sufficiently accurate model with fewer experiments. The first avenue is Constrained Model Identification (CMI). Model identification is an optimisation problem to estimate the model parameters. In CMI, process knowledge from first principles and operator experience is translated into optimisation constraints to aid data-driven model identification.

    The second avenue is Sequential Optimal Experiment Design (SOED). This uses the concept of measuring a value representing the information content of a dataset. Like MPC, SOED uses the model to make output predictions. The expected output response to a sequence of input steps forms a dataset, and SOED is an optimisation problem that maximises the information content of that expected dataset by changing the input step sequence. Once optimised, this step sequence is applied in the next experiment.
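    The sketch below illustrates the information-content idea with a D-optimality criterion, the log-determinant of the Fisher information matrix, for a hypothetical two-parameter FIR model y[k] = θ₁u[k-1] + θ₂u[k-2]; a simple random search stands in for the optimiser, and the model structure is illustrative rather than the thesis's.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 12  # length of the designed input step sequence

    def log_det_information(u):
        """D-optimality measure: log det(Phi' Phi) for the FIR regressors."""
        Phi = np.column_stack([u[1:-1], u[:-2]])   # rows: [u[k-1], u[k-2]]
        sign, logdet = np.linalg.slogdet(Phi.T @ Phi)
        return logdet if sign > 0 else -np.inf

    best_u, best_val = None, -np.inf
    for _ in range(2000):
        u = rng.choice([-1.0, 1.0], size=N)        # bounded step moves
        val = log_det_information(u)
        if val > best_val:
            best_u, best_val = u, val

    print("designed step sequence:", best_u)
    print("log-det information:", round(best_val, 2))
    ```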

    The third part of this work focused on farm-fed anaerobic digestion, a renewable energy technology fuelled by agricultural waste. These units rely on government incentives to be profitable, but the incentives have steadily been decreased. This project investigated methods to help farmers in the day-to-day operation of the unit, including biogas production estimation, automated fault identification and partial diagnosis.

    Objectives: The accuracy of the delivered dose depends directly upon the initial beam calibration and the subsequent maintenance of the beam output. The uncertainty associated with these measurements and its impact on clinical outcomes is not well documented. This work gives an evidence-based approach to determining this variation and its clinical impact. Novelty: This work quantifies for the first time the variations present in the routine maintenance of beam output on a national scale. These dosimetric uncertainties are then applied to radiobiological models to predict the variation in clinical outcome for specific clinical cases, including both tumour control and associated treatment complications, for individual patients and patient populations. Results: The linear-quadratic and Lyman-Kutcher-Burman models have been implemented to allow flexibility in the modelling of individual patient doses on a fraction-by-fraction basis. The variation in delivered doses due to beam output variations is seen to be normally distributed with a standard deviation of 0.7%. These variations may lead to a typical patient experiencing a range in treatment outcome probabilities of over 10% for cancers with a steep dose response curve, such as head and neck, both for an individual patient and for a patient population. Conclusions: The precise control of beam output is shown to be a major factor in the overall uncertainty of dose delivery in modern treatment techniques. With reductions in other uncertainties in radiotherapy treatments, now may be the time to consider reducing tolerance levels to allow optimal patient treatment and outcomes.
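    For orientation, the two radiobiological models named above have the following standard literature forms (the exact parameterisation used in this work is not reproduced here): the linear-quadratic model gives the surviving fraction S after n fractions of dose d, from which tumour control probability (TCP) follows Poisson statistics with initial clonogen number N₀, and the Lyman-Kutcher-Burman model expresses normal tissue complication probability (NTCP) as a probit function of dose D.

    ```latex
    \[
      S = \exp\!\left(-n\left(\alpha d + \beta d^{2}\right)\right),
      \qquad
      \mathrm{TCP} = \exp\!\left(-N_{0}\,S\right)
    \]
    \[
      \mathrm{NTCP} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t} e^{-x^{2}/2}\,\mathrm{d}x,
      \qquad
      t = \frac{D - \mathrm{TD}_{50}}{m\,\mathrm{TD}_{50}}
    \]
    ```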
    Li Daoliang, Wang Zhenhu, Chen Tao, Li Hui, Hao Yinfeng, Miao Zheng, Peng Fang, Wang Liang, Zheng Yingying (2020) Automatic counting methods in aquaculture: a review, In: Journal of the World Aquaculture Society. Wiley
    Object counting in aquaculture is an important task and has been widely applied to fish population estimation, lobster abundance estimation and scallop stock assessment. However, underwater object counting is challenging for biologists and marine scientists because of the diversity of lake and ocean backgrounds, the uncertainty of object motion, and the occlusion between objects. With the rapid development of sensor, computer vision and acoustic technologies, advanced and efficient counting methods have become available in aquaculture. We reviewed underwater object counting methods in aquaculture, provided a survey covering more than 50 papers from the past 10 years, and analysed the pros and cons of the counting methods and their applicable scenarios. Finally, the major challenges and future trends of underwater object counting in aquaculture are discussed.
    Wang Hua, Gu Sai, Chen Tao (2020) Experimental Investigation of the Impact of CO, C₂H₆, and H₂ on the Explosion Characteristics of CH₄, In: ACS Omega 5(38), pp. 24684-24692. American Chemical Society
    Gas explosions are destructive disasters in coal mines. Coal mine gas is a multi-component gas mixture, with methane (CH₄) being the dominant constituent. Understanding the process and mechanism of mine gas explosions is of critical importance to the safety of mining operations. In this work, three flammable gases (CO, C₂H₆, and H₂) which are commonly present in coal mines were selected to explore how they affect a methane explosion. The explosion characteristics of the flammable gases were investigated in a 20 L spherical closed vessel. Experiments on binary- (CH₄/CO, CH₄/C₂H₆, and CH₄/H₂) and multicomponent (CH₄/CO/C₂H₆/H₂) mixtures indicated that the explosion of such mixtures is more dangerous and destructive than that of methane alone in air, as measured by the explosion pressure. Furthermore, a self-promoting microcirculation reaction network is proposed to help analyze the chemical reactions involved in the multicomponent (CH₄/CO/C₂H₆/H₂) gas explosion. This work will contribute to a better understanding of the explosion mechanism of gas mixtures in coal mines and provide a useful reference for determining the safety limits in practice.
    Yates James W.T., Byrne Helen, Chapman Sonya C., Chen Tao, Cucurull-Sanchez Lourdes, Delgado-SanMartin Juan, Di Veroli Giovanni, Dovedi Simon J., Dunlop Carina, Jena Rajesh, Jodrell Duncan, Martin Emma, Mercier Francois, Ramos-Montoya Antonio, Struemper Herbert, Vicini Paolo (2020) Opportunities for Quantitative Translational Modeling in Oncology, In: Clinical Pharmacology & Therapeutics 108(3), pp. 447-457. Wiley

    A 2-day meeting was held by members of the UK Quantitative Systems Pharmacology Network (http://www.qsp-uk.net/) in November 2018 on the topic of Translational Challenges in Oncology. Participants from a wide range of backgrounds were invited to discuss current and emerging modeling applications in nonclinical and clinical drug development, and to identify areas for improvement. The resulting perspective explores opportunities for impactful quantitative pharmacology approaches. Four key themes arose from the presentations and discussions, leading to the following recommendations:

    • Evaluate the predictivity and reproducibility of animal cancer models through precompetitive collaboration.

    • Apply mechanism of action (MoA) based mechanistic models derived from nonclinical data to clinical trial data.

    • Apply MoA reflective models across trial data sets to more robustly quantify the natural history of disease and response to differing interventions.

    • Quantify more robustly the dose and concentration dependence of adverse events through mathematical modelling techniques and modified trial design.

    Li Weijun, Chen Tao, Gu Sai, Li Hui (2020) Process fault diagnosis with model- and knowledge-based approaches: Advances and opportunities, In: Control Engineering Practice 105, 104637. Elsevier
    Fault diagnosis plays a vital role in ensuring safe and efficient operation of modern process plants. Despite the encouraging progress in its research, developing a reliable and interpretable diagnostic system remains a challenge. There is a consensus among many researchers that appropriate modelling, representation and use of fundamental process knowledge may be the key to addressing this problem. Over the past four decades, different techniques have been proposed for this purpose. They use process knowledge from different sources, in different forms and at different levels of detail, and are also named model-based methods in some of the literature. This paper first briefly introduces the problem of fault detection and diagnosis, its research status and its challenges. It then reviews widely used model- and knowledge-based diagnostic methods, including their general ideas, properties and important developments. Afterwards, it summarises studies that evaluate their performance in real processes in the process industry, including the process types, scales, considered faults and achieved performance. Finally, perspectives on challenges and potential opportunities are highlighted for future work.
    Chu Fei, Zhao Xu, Yao Yuan, Chen Tao, Wang Fuli (2019) Transfer learning for batch process optimal control using LV-PTM and adaptive control strategy, In: Journal of Process Control 81, pp. 197-208. Elsevier
    In this study, we investigate a data-driven optimal control for a new batch process. Existing data-driven optimal control methods often ignore an important problem, namely, because of the short operation time of the new batch process, the modeling data in the initial stage can be insufficient. To address this issue, we introduce the idea of transfer learning, i.e., a latent variable process transfer model (LV-PTM) is adopted to transfer sufficient data and process information from similar processes to a new one to assist its modeling and quality optimization control. However, due to fluctuations in raw materials, equipment, etc., differences between similar batch processes are always inevitable, which lead to the serious and complicated mismatch of the necessary condition of optimality (NCO) between the new batch process and the LV-PTM-based optimization problem. In this work, we propose an LV-PTM-based batch-to-batch adaptive optimal control strategy, which consists of three stages, to ensure the best optimization performance during the whole operation lifetime of the new batch process. This adaptive control strategy includes model updating, data removal, and modifier-adaptation methodology using final quality measurements in response. Finally, the feasibility of the proposed method is demonstrated by simulations.
    Xu M, Chen T, Yang X (2011) Optimal replacement policy for safety-related multi-component multi-state systems, In: Reliability Engineering and System Safety 99(1), pp. 87-95. Elsevier
    This paper investigates replacement scheduling for non-repairable safety-related systems (SRS) with multiple components and states. The aim is to determine the cost-minimizing time for replacing the SRS while meeting the required safety. Traditionally, such scheduling decisions are made without considering the interaction between the SRS and the production system under protection, the interaction being essential to formulate the expected cost to be minimized. In this paper, the SRS is represented by a non-homogeneous continuous time Markov model, and its state distribution is evaluated with the aid of the universal generating function. Moreover, a structure function of the SRS with a recursive property is developed to evaluate the state distribution efficiently. These methods form the basis to derive an explicit expression of the expected system cost per unit time, and to determine the optimal time to replace the SRS. The proposed methodology is demonstrated through an illustrative example.
    Li Weijun, Gu Sai, Zhang Xiangping, Chen Tao (2020) A pattern matching and active simulation method for process fault diagnosis, In: Industrial & Engineering Chemistry Research. American Chemical Society
    Fault detection and diagnosis is a crucial approach to ensure safe and efficient operation of chemical processes. This paper reports a new fault diagnosis method that exploits dynamic process simulation and pattern matching techniques. The proposed method consists of a simulated fault database which, through pattern matching, helps narrow down the fault candidates in an efficient way. An optimisation-based fault reconstruction method is then developed to determine the fault pattern from the candidates, and the corresponding magnitude and time of occurrence of the fault. A major advantage of this approach is that it is capable of diagnosing both single and multiple faults. We illustrate the effectiveness of the proposed method through case studies of the Tennessee Eastman benchmark process.
    Ge Z, Chen T, Song Z (2011) Quality prediction for polypropylene production process based on CLGPR model, In: Control Engineering Practice 19(5), pp. 423-432. Elsevier
    Online measurement of the melt index is typically unavailable in industrial polypropylene production processes; soft-sensing models are therefore required for estimation and prediction of this important quality variable. Polymerization is a highly nonlinear process, which usually produces products with multiple quality grades. In the present paper, an effective soft sensor, named combined local Gaussian process regression (CLGPR), is developed for prediction of the melt index. While the introduced Gaussian process regression model can well address the high nonlinearity of the process data in each operation mode, the local modeling structure can be effectively extended to processes with multiple operation modes. The feasibility and efficiency of the proposed soft sensor are demonstrated through application to an industrial polypropylene production process.
    Chen T, Morris J, Martin E (2007) Gaussian process regression for multivariate spectroscopic calibration, In: Chemometrics and Intelligent Laboratory Systems 87(1), pp. 59-67. Elsevier
    Chen T, Martin E (2007) The impact of temperature variations on spectroscopic calibration modelling: a comparative study, In: Journal of Chemometrics 21(5-6), pp. 198-207
    Temperature fluctuations can have a significant impact on the repeatability of spectral measurements and as a consequence can adversely affect the resulting calibration model. More specifically, when test samples measured at temperatures unseen in the training dataset are presented to the model, degraded predictive performance can materialise. Current methods for addressing the temperature variations in a calibration model can be categorised into two classes—calibration model based approaches, and spectra standardisation methodologies. This paper presents a comparative study on a number of strategies reported in the literature including partial least squares (PLS), continuous piecewise direct standardisation (CPDS) and loading space standardisation (LSS), in terms of the practical applicability of the algorithms, their implementation complexity, and their predictive performance. It was observed from the study that the global modelling approach, where latent variables are initially extracted from the spectra using PLS, and then augmented with temperature as the independent variable, achieved the best predictive performance. In addition, the two spectra standardisation methods, CPDS and LSS, did not provide consistently enhanced performance over the conventional global modelling approach, despite the additional effort in terms of standardising the spectra across different temperatures. Considering the algorithmic complexity and resulting calibration accuracy, it is concluded that the global modelling (with temperature) approach should be first considered for the development of a calibration model where temperature variations are known to affect the fundamental data, prior to investigating the more powerful spectra standardisation approaches. Copyright © 2007 John Wiley & Sons, Ltd.
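    A sketch of the global modelling approach that performed best in this study: PLS latent variables extracted from the spectra are augmented with temperature before the final regression (synthetic data; array names and sizes are illustrative only).

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n, p = 80, 200                          # samples x wavelengths
    spectra = rng.normal(size=(n, p))
    temperature = rng.uniform(20, 40, n)
    conc = spectra[:, :5].sum(axis=1) + 0.05 * temperature + rng.normal(0, 0.1, n)

    pls = PLSRegression(n_components=5).fit(spectra, conc)
    scores = pls.transform(spectra)         # latent variables from the spectra

    # Augment the latent variables with temperature as an extra regressor.
    X_aug = np.column_stack([scores, temperature])
    model = LinearRegression().fit(X_aug, conc)
    print("R^2 on training data:", round(model.score(X_aug, conc), 3))
    ```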
    Yan W, Guo Z, Jia X, Kariwala V, Chen T, Yang Y (2012) Model-aided optimization and analysis of multi-component catalysts: Application to selective hydrogenation of cinnamaldehyde, In: Chemical Engineering Science 76, pp. 26-36. Elsevier
    Chen T, Sun Y, Zhao SR (2008) Constrained Principal Component Extraction Network, In: 2008 7th World Congress on Intelligent Control and Automation, Vols 1-23, pp. 7135-7139
    Chen LLT, Chen T, Chen J (2016) PID Based Nonlinear Processes Control Model Uncertainty Improvement by Using Gaussian Process Model, In: Journal of Process Control 42, pp. 77-89. Elsevier
    Proportional-integral-derivative (PID) controller design based on the Gaussian process (GP) model is proposed in this study. The GP model, defined by its mean and covariance function, provides predictive variance in addition to the predicted mean. The GP model highlights areas where prediction quality is poor, owing to the lack of data, by indicating higher variance around the predicted mean. The variance information is taken into account in the PID controller design and is used for the selection of data to improve the model at the subsequent stage. This results in a trade-off between safety and performance: to ensure process safety, the controller avoids regions with large variance at the cost of not tracking the set point. The proposed direct method evaluates the PID controller design by gradient calculation. To reduce computation, the characteristics of the instantaneously linearized GP model are extracted for a linearized framework of PID controller design. Two case studies on continuous and batch processes were carried out to illustrate the applicability of the proposed method.
    Wang K, Chen T, Kwa ST, Ma Y, Lau R (2014) Meta-modelling for fast analysis of CFD-simulated vapour cloud dispersion processes, In: Computers & Chemical Engineering 69, pp. 89-97. Elsevier
    Released flammable chemicals can form an explosible vapour cloud, posing a safety threat in both industrial and civilian environments. Due to the difficulty of conducting physical experiments, computational fluid dynamics (CFD) simulation is an important tool in this area. However, such simulation is computationally too slow for routine analysis. To address this issue, a meta-modelling approach is developed in this study; it uses a small number of simulations to build an empirical model, which can be used to predict the concentration field and the potential explosion region. The dimension of the concentration field is reduced from around 43,421,400 to 20 to allow meta-modelling, by using the segmented principal component transform-principal component analysis. Moreover, meta-modelling-based uncertainty analysis is explored to quantify the prediction variance, which is important for risk assessment. The effectiveness of the methodology has been demonstrated on CFD simulation of the dispersion of liquefied natural gas.
    Chen T, Zhang J (2009) On-line statistical monitoring of batch processes using Gaussian mixture model, In: IFAC Proceedings: Advanced Control of Chemical Processes 7(Part 1), pp. 667-672
    The statistical monitoring of batch manufacturing processes is considered. It is known that conventional monitoring approaches, e.g. principal component analysis (PCA), are not applicable when the normal operating conditions of the process cannot be sufficiently represented by a Gaussian distribution. To address this issue, Gaussian mixture model (GMM) has been proposed to estimate the probability density function of the process nominal data, with improved monitoring results having been reported for continuous processes. This paper extends the application of GMM to on-line monitoring of batch processes, and the proposed method is demonstrated through its application to a batch semiconductor etch process.
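    A minimal sketch of the GMM monitoring idea described above: fit a mixture to normal-operating-condition (NOC) data and flag new samples whose log-likelihood falls below a control limit set from the training data (synthetic two-mode data; the percentile limit is illustrative).

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    # NOC data drawn from two operating regions -> clearly non-Gaussian overall.
    noc = np.vstack([rng.normal(0, 1.0, (200, 2)),
                     rng.normal(5, 0.5, (200, 2))])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(noc)
    loglik = gmm.score_samples(noc)
    limit = np.percentile(loglik, 1)        # 99% control limit on log-likelihood

    new_samples = np.array([[0.2, -0.5],    # within the first operating mode
                            [2.5, 2.5]])    # between modes -> likely a fault
    flags = gmm.score_samples(new_samples) < limit
    print("fault flags:", flags)
    ```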
    Gao X, Shang C, Jiang Y, Huang D, Chen T (2014) Refinery scheduling with varying crude: A deep belief network classification and multimodel approach, In: AIChE Journal 60(7), pp. 2525-2532. Wiley-Blackwell
    Chen T, Morris J, Martin E (2004) Particle filters for the estimation of a state space model, In: Barbosa-Povoa AP, Matos H (eds.), European Symposium on Computer-Aided Process Engineering - 14, vol. 18, pp. 613-618
    Goodarzi M, Chen T, Freitas MP (2010) QSPR predictions of heat of fusion of organic compounds using Bayesian regularized artificial neural networks, In: Chemometrics and Intelligent Laboratory Systems 104(2), pp. 260-264
    Chuang Y-C, Chen Tao, Yao Y, Wong D (2018) Transfer Learning for Efficient Meta-Modeling of Process Simulations, In: Chemical Engineering Research and Design 138, pp. 546-553. Elsevier
    In chemical engineering applications, computationally efficient meta-models have been successfully used in many instances as surrogates for high-fidelity computational fluid dynamics (CFD) simulators. Nevertheless, substantial simulation effort is still required to generate representative training data for building meta-models. To solve this problem, an efficient meta-modeling method is developed in this work based on the concept of transfer learning. First, a base model is built which roughly mimics the CFD simulator. With the help of this model, the feasible operating region of the simulated process is estimated, within which computer experiments are designed. After that, CFD simulations are run at the designed points for data collection. A transfer learning step, based on the Bayesian migration technique, is then conducted to build the final meta-model by integrating the information of the base model with the simulation data. Because of the incorporation of the base model, only a small number of simulation points are needed in meta-model training.
    Kajero Olumayowa, Thorpe Rex, Chen Tao (2016) Kriging meta-model assisted calibration of computational fluid dynamics models, In: AIChE Journal 62(12), pp. 4308-4320. Wiley
    Computational fluid dynamics (CFD) is a simulation technique widely used in chemical and process engineering applications. However, computation has become a bottleneck when calibration of CFD models with experimental data (also known as model parameter estimation) is needed. In this research, the kriging meta-modelling approach (also termed Gaussian process) was coupled with expected improvement (EI) to address this challenge. A new EI measure was developed for the sum of squared errors (SSE), which conforms to a generalised chi-square distribution, so that existing normal distribution-based EI measures are not applicable. The new EI measure suggests the CFD model parameter values to simulate with, hence minimising the SSE and improving the match between simulation and experiments. The usefulness of the developed method was demonstrated through a case study of single-phase flow in both a straight-type and a convergent-divergent-type annular jet pump, where a single model parameter was calibrated with experimental data.
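    For orientation, the sketch below computes the standard normal-theory EI used in kriging-based optimisation; the chi-square EI for the SSE derived in this paper is different and is not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, best):
        """EI of Gaussian predictions N(mu, sigma^2) against the current best (minimisation)."""
        sigma = np.maximum(sigma, 1e-12)       # guard against zero predictive variance
        z = (best - mu) / sigma
        return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    # Three candidate parameter settings predicted by a kriging model (illustrative).
    mu = np.array([1.0, 0.8, 1.2])
    sigma = np.array([0.05, 0.3, 0.6])
    print(expected_improvement(mu, sigma, best=0.9))
    ```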
    Si R, Wang K, Chen T, Chen Y (2011) Chemometric determination of the length distribution of single walled carbon nanotubes through optical spectroscopy, In: Anal Chim Acta 708(1-2), pp. 28-36. Elsevier
    Current synthesis methods for producing single walled carbon nanotubes (SWCNTs) do not ensure uniformity of the structure and properties, in particular the length, which is an important quality indicator of SWCNTs. As a result, sorting SWCNTs by length is an important post-synthesis processing step. For this purpose, convenient analysis methods are needed to characterize the length distribution rapidly and accurately. In this study, density gradient ultracentrifugation was applied to prepare length-sorted SWCNT suspensions containing individualized surfactant-wrapped SWCNTs. The length of sorted SWCNTs was first determined by atomic force microscope (AFM), and their absorbance was measured in ultraviolet-visible near-infrared (UV-vis-NIR) spectroscopy. Chemometric methods are used to calibrate the spectra against the AFM-measured length distribution. The calibration model enables convenient analysis of the length distribution of SWCNTs through UV-vis-NIR spectroscopy. Various chemometric techniques are investigated, including pre-processing methods and non-linear calibration models. Extended inverted signal correction, extended multiplicative signal correction and Gaussian process regression are found to provide good prediction of the length distribution of SWCNTs with satisfactory agreement with the AFM measurements. In summary, spectroscopy in conjunction with advanced chemometric techniques is a powerful analytical tool for carbon nanotube research.
    Zhou Le, Chuang Yao-Chen, Hsu Shao-Heng, Yao Yuan, Chen Tao Prediction and Uncertainty Propagation for Completion Time of Batch Processes based on Data-driven Modeling, In: Industrial and Engineering Chemistry Research. American Chemical Society
    Batch processes play a crucial role in the flexible production of low-volume, high-value-added products. Due to fluctuations in raw materials and operating conditions, the batch duration often varies. Prediction of batch completion time is important for process scheduling and optimization. Existing studies of this subject have focused on prediction accuracy, while the importance of prediction uncertainty has been under-explored. When the key variable defining the completion time changes slowly towards the end of a batch, the prediction uncertainty tends to be large. Under such situations, we argue that the uncertainty should always be considered along with the mean prediction for practical use. To this end, two data-driven prediction methods using probabilistic principal component analysis (PPCA) and bootstrapping case-based reasoning (bootstrapping CBR) are developed, followed by uncertainty quantification in the probabilistic framework. Finally, two batch processes are used to demonstrate the importance of prediction uncertainty and the efficiency of the proposed schemes.
    Chen T, Morris J, Martin E (2005) Bayesian control limits for statistical process monitoring, In: 2005 International Conference on Control and Automation (ICCA), Vols 1 and 2, pp. 409-414
    Liu Y-J, Yao Y, Chen Tao (2014) Nonlinear process monitoring and fault isolation using extended maximum variance unfolding, In: Journal of Process Control 24(6), pp. 880-891. Elsevier
    Kernel principal component analysis (KPCA) has become a popular technique for process monitoring, owing to its capability of handling nonlinearity. Nevertheless, KPCA suffers from two major disadvantages. First, the underlying manifold structure of the data is not considered in process modeling. Second, the selection of kernel parameters is problematic. To avoid such deficiencies, a manifold learning technique named maximum variance unfolding (MVU) is considered as an alternative. However, such a method can only deal with the training data and has no means of handling new samples. Therefore, MVU cannot be applied to process monitoring directly. In this paper, an extended MVU (EMVU) method is proposed, extending the utilization of MVU to new samples by approximating the nonlinear mapping between the input space and the output space with a Gaussian process model. Consequently, EMVU is suitable for nonlinear process monitoring. A cross-validation algorithm is designed to determine the dimensionality of the EMVU output space. For online monitoring, three different types of monitoring indices are developed, including the squared prediction error (SPE), Hotelling's T², and the prediction variance of the outputs. In addition, a fault isolation algorithm based on missing data analysis is designed for EMVU to identify the variables contributing most to the faults. The effectiveness of the proposed methods is verified by case studies on a numerical simulation and the benchmark Tennessee Eastman (TE) process. © 2014 Elsevier Ltd. All rights reserved.
    Recent developments in non-destructive optical techniques, such as spectroscopy and machine vision, have laid a good foundation for real-time monitoring and precise management of crop N status. However, their advantages and disadvantages have not been systematically summarized and evaluated. Here, we review the state of the art of non-destructive optical methods for monitoring the N status of crops, and summarize their advantages and disadvantages. We focus mainly on the contribution of spectral and machine vision technology to the accurate diagnosis of crop N status from three aspects: system selection, data processing and estimation methods. Finally, we discuss the opportunities and challenges of the application of these technologies, followed by recommendations for future work to address the challenges.
    Yan W, Jia X, Chen T, Yang Y (2013) Optimization and statistical analysis of Au-ZnO/Al2O3 catalyst for CO oxidation, In: Journal of Energy Chemistry 22(3), pp. 498-505. Elsevier
    Gao Xiaoyong, Wang Yuhong, Feng Zhenhui, Huang Dexian, Chen Tao (2018) Plant planning optimization under time-varying uncertainty: Case study on a poly(vinyl chloride) plant, In: Industrial & Engineering Chemistry Research 57(36), pp. 12182-12191. American Chemical Society
    Planning optimization considering various uncertainties has attracted increasing attention in the process industry. In existing studies, the uncertainty is often described with a time-invariant distribution function over the entire planning horizon, which is a questionable assumption. In particular, for long-term planning problems, the uncertainty tends to vary with time, and it usually increases when a model is used to predict a parameter (e.g. price) far into the future. In this paper, time-varying uncertainties are considered in robust planning problems, with a focus on a polyvinyl chloride (PVC) production planning problem. Using stochastic programming techniques, a stochastic model is formulated and then transformed into a multi-period mixed-integer linear programming (MILP) model by chance-constrained programming and piecewise linear approximation. The proposed approach is demonstrated on industrial-scale cases originating from a real-world PVC plant. The comparisons show that the model considering time-varying uncertainty is superior in terms of robustness under uncertainties.
    Yao Y, Chen T, Gao F (2010) Multivariate statistical monitoring of two-dimensional dynamic batch processes utilizing non-Gaussian information, In: Journal of Process Control 20(10), pp. 1188-1197. Elsevier
    Dynamics are inherent characteristics of batch processes, and they may exist not only within a particular batch, but also from batch to batch. To model and monitor such two-dimensional (2D) batch dynamics, two-dimensional dynamic principal component analysis (2D-DPCA) has been developed. However, the original 2D-DPCA calculates the monitoring control limits based on the multivariate Gaussian distribution assumption, which may be invalid because of the existence of 2D dynamics. Moreover, the multiphase features of many batch processes may lead to more significant non-Gaussianity. In this paper, the Gaussian mixture model (GMM) is integrated with 2D-DPCA to address the non-Gaussian issue in 2D dynamic batch process monitoring. Joint probability density functions (pdf) are estimated to summarize the information contained in the 2D-DPCA subspaces. Consequently, for online monitoring, control limits can be calculated based on the joint pdf. A two-phase fed-batch fermentation process for penicillin production is used to verify the effectiveness of the proposed method.
    Chen T, Zhang J (2010) On-line multivariate statistical monitoring of batch processes using Gaussian mixture model, In: Computers & Chemical Engineering 34(4), pp. 500-507. Elsevier
    Huang C-C, Chen T, Yao Y (2013) Mixture Discriminant Monitoring: A Hybrid Method for Statistical Process Monitoring and Fault Diagnosis/Isolation, In: Industrial and Engineering Chemistry Research 52(31), pp. 10720-10731. American Chemical Society
    To better utilize historical process data from faulty operations, supervised learning methods, such as Fisher discriminant analysis (FDA), have been adopted in process monitoring. However, such methods can only separate known faults from normal operations, and they have no means to deal with unknown faults. In addition, most of these methods are not designed for handling non-Gaussian distributed data; however, non-Gaussianity is frequently observed in industrial processes. In this paper, a hybrid multivariate approach named mixture discriminant monitoring (MDM) was proposed, in which supervised learning and statistical process control (SPC) charting techniques are integrated. MDM is capable of solving both of the above problems simultaneously during online process monitoring. Then, for known faults, a root-cause diagnosis can be automatically achieved, while for unknown faults, abnormal variables can be isolated through missing variable analysis. MDM was used on the benchmark Tennessee Eastman (TE) process, and the results showed the capability of the proposed approach.
    Lau R, Lee PHV, Chen T (2012) Mass transfer studies in shallow bubble column reactors, In: Chemical Engineering and Processing: Process Intensification 62, pp. 18-25
    Mass transfer studies are carried out in a bubble column with an internal diameter of 14 cm and various static liquid heights. The mass transfer coefficient is evaluated using an oxygen sorption method. A model considering the gas holdup flushing and the sensor response is used. The interfacial mass transfer area is determined according to the measured bubble size distribution. The liquid-side mass transfer coefficient is also estimated from the volumetric mass transfer coefficient and the interfacial mass transfer area. Results show that the effect of static liquid height on gas-liquid mass transfer is primarily on the interfacial mass transfer area. The mass transfer process is also governed by the type of gas distributor used. A single nozzle distributor is not suitable for shallow bubble column operations due to the large initial bubbles and the large volume of dead zone generated. It is also found that the different dependences of the liquid-side mass transfer coefficient on the superficial gas velocity observed in the literature are due to the different bubble rising regimes. © 2012 Elsevier B.V.
    Zhang Yanling, Lane Majella E., Hadgraft Jonathan, Heinrich Michael, Chen Tao, Lian Guoping, Sinko Balint (2019) A comparison of the in vitro permeation of niacinamide in mammalian skin and in the Parallel Artificial Membrane Permeation Assay (PAMPA) model, In: International Journal of Pharmaceutics 556, pp. 142-149. Elsevier
    The in vitro skin penetration of pharmaceutical or cosmetic ingredients is usually assessed in human or animal tissue. However, there are ethical and practical difficulties associated with sourcing these materials; variability between donors may also be problematic when interpreting experimental data. Hence, there has been much interest in identifying a robust and high throughput model to study skin permeation that would generate more reproducible results. Here we investigate the permeability of a model active, niacinamide (NIA), in (i) conventional vertical Franz diffusion cells with excised human skin or porcine skin and (ii) a recently developed Parallel Artificial Membrane Permeation Assay (PAMPA) model. Both finite and infinite dose conditions were evaluated in both models using a series of simple NIA solutions and one commercial preparation. The Franz diffusion cell studies were run over 24 h while PAMPA experiments were conducted for 2.5 h. A linear correlation between both models was observed for the cumulative amount of NIA permeated in tested models under finite dose conditions. The corresponding correlation coefficients (r²) were 0.88 for porcine skin and 0.71 for human skin. These results confirm the potential of the PAMPA model as a useful screening tool for topical formulations. Future studies will build on these findings and expand further the range of actives investigated.
    Patient-reported outcome measures (PROMs) are a useful way of recording patient perceptions of the impact of their cancer and the consequences of treatment. Understanding the impact of radiotherapy longer term requires tools that are sensitive to change but also meaningful for patients. PROMs are useful in defining symptom severity but also the burden of illness for cancer patients. Patient-reported outcomes are increasingly being seen as a way to improve practice by enhancing communication, improving symptom management as well as identifying patient care needs. This paper provides an overview of the use of PROMs in radiotherapy and considerations for tool choice, analysis and the logistics of routine data collection. Consistent assessment is essential to detect patient problems as a result of radiotherapy, but also to address emerging symptoms promptly.
    Chi G, Hu S, Yang Y, Chen T (2012) Response surface methodology with prediction uncertainty: A multi-objective optimisation approach, In: Chemical Engineering Research and Design 90(9), pp. 1235-1244
    In the field of response surface methodology (RSM), the prediction uncertainty of the empirical model needs to be considered for effective process optimisation. Current methods combine the prediction mean and uncertainty through certain weighting strategies, either explicitly or implicitly, to form a single objective function for optimisation. This paper proposes to address this problem under the multi-objective optimisation framework. Overall, the method iterates through initial experimental design, empirical modelling and model-based optimisation to allocate promising experiments for the next iteration. Specifically, the Gaussian process regression is adopted as the empirical model due to its demonstrated prediction accuracy and reliable quantification of prediction uncertainty in the literature. The non-dominated sorting genetic algorithm II (NSGA-II) is used to search for Pareto points that are further clustered to give experimental points to be conducted in the next iteration. The application study, on the optimisation of a catalytic epoxidation process, demonstrates that the proposed method is a powerful tool to aid the development of chemical and potentially other processes. © 2011 The Institution of Chemical Engineers.
    He B, Yang X, Chen T, Zhang J (2012) Reconstruction-based multivariate contribution analysis for fault isolation: A branch and bound approach, In: Journal of Process Control 22(7), pp. 1228-1236
    Identification of faulty variables is an important component of multivariate statistical process monitoring (MSPM); it provides crucial information for further analysis of the root cause of the detected fault. The main challenge is the large number of combinations of process variables under consideration, usually resulting in a combinatorial optimization problem. This paper develops a generic reconstruction based multivariate contribution analysis (RBMCA) framework to identify the variables that are the most responsible for the fault. A branch and bound (BAB) algorithm is proposed to efficiently solve the combinatorial optimization problem. The formulation of the RBMCA does not depend on a specific model, which allows it to be applicable to any MSPM model. We demonstrate the application of the RBMCA to a specific model: the mixture of probabilistic principal component analysis (PPCA mixture) model. Finally, we illustrate the effectiveness and computational efficiency of the proposed methodology through a numerical example and the benchmark simulation of the Tennessee Eastman process. © 2012 Elsevier Ltd. All rights reserved.
    Wang K, Chen T, Lau R (2011) Bagging for robust non-linear multivariate calibration of spectroscopy, In: Chemometrics and Intelligent Laboratory Systems 105(1), pp. 1-6. Elsevier
    This paper presents the application of the bagging technique for non-linear regression models to obtain more accurate and robust calibration of spectroscopy. Bagging refers to the combination of multiple models obtained by bootstrap re-sampling with replacement into an ensemble model to reduce prediction errors. It is well suited to “non-robust” models, such as the non-linear calibration methods of artificial neural network (ANN) and Gaussian process regression (GPR), in which small changes in data or model parameters can result in significant change in model predictions. A specific variant of bagging, based on sub-sampling without replacement and named subagging, is also investigated, since it has been reported to possess similar prediction capability to bagging but requires less computation. However, this work shows that the calibration performance of subagging is sensitive to the amount of sub-sampled data, which needs to be determined by computationally intensive cross-validation. Therefore, we suggest that bagging is preferred to subagging in practice. Application study on two near infrared datasets demonstrates the effectiveness of the presented approach.
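    A minimal sketch of the bagging scheme described above: B bootstrap resamples, one GPR model per resample, with predictions averaged across the ensemble (synthetic one-dimensional data stand in for NIR spectra).

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 5, (60, 1))
    y = np.sin(X).ravel() + rng.normal(0, 0.2, 60)

    B, models = 25, []
    for _ in range(B):
        idx = rng.integers(0, len(X), len(X))     # bootstrap: resample with replacement
        gpr = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.05))
        models.append(gpr.fit(X[idx], y[idx]))

    X_test = np.linspace(0, 5, 6).reshape(-1, 1)
    preds = np.stack([m.predict(X_test) for m in models])
    print("bagged prediction:", preds.mean(axis=0).round(3))
    ```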
    Chen T, Sun Y (2009) Probabilistic contribution analysis for statistical process monitoring: A missing variable approach, In: Control Engineering Practice 17(4), pp. 469-477. Elsevier
    Probabilistic models, including probabilistic principal component analysis (PPCA) and PPCA mixture models, have been successfully applied to statistical process monitoring. This paper reviews these two models and discusses some implementation issues that provide an alternative perspective on their application to process monitoring. Then a probabilistic contribution analysis method, based on the concept of missing variables, is proposed to facilitate the diagnosis of the source behind detected process faults. The contribution analysis technique is demonstrated through its application to both PPCA and PPCA mixture models for the monitoring of two industrial processes. The results suggest that the proposed method, in conjunction with the PPCA model, can reduce the ambiguity in identifying the process variables that contribute to process faults. More importantly, it provides a fault identification approach for the PPCA mixture model, where conventional contribution analysis is not applicable.
    Chen T, Morris J, Martin E (2005) Particle filters for state and parameter estimation in batch processes, In: Journal of Process Control 15(6), pp. 665-673. Elsevier
    Chen T, Yang Y (2011) Interpretation of non-linear empirical data-based process models using global sensitivity analysis, In: Chemometrics and Intelligent Laboratory Systems 107(1), pp. 116-123. Elsevier
    Non-linear regression techniques have been widely used for data-based modeling of chemical processes, and they form the basis of process design under the framework of response surface methodology (RSM). These non-linear models typically achieve more accurate approximation to the factor–response relationship than traditional polynomial regressions. However, non-linear models usually lack a clear interpretation as to how the factors contribute to the prediction of process response. This paper applies the technique of sensitivity analysis (SA) to facilitate the interpretation of non-linear process models. By recognizing that derivative-based local SA is only valid within the neighborhood of certain “nominal” values, global SA is adopted to study the entire range of the factors. Global SA is based on the decomposition of the model and the variance of response into contributing terms of main effects and interactions. Therefore, the effect of individual factors and their interactions can be both visualized by graphs and quantified by sensitivity indices. The proposed methodology is demonstrated on two catalysis processes where non-linear data-based models have been developed to aid process design. The results indicate that global SA is a powerful tool to reveal the impact of process factors on the response variables.
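    A minimal Monte Carlo sketch of the variance-based decomposition described above, estimating first-order Sobol indices with the Saltelli-style estimator (toy model; in practice the fitted data-based model would take the place of the lambda).

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    model = lambda X: X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 0] * X[:, 2]

    d, N = 3, 100_000
    A = rng.uniform(-1, 1, (N, d))           # two independent sample matrices
    B = rng.uniform(-1, 1, (N, d))
    fA, fB = model(A), model(B)
    var_y = np.var(np.concatenate([fA, fB]))

    for i in range(d):
        AB = A.copy(); AB[:, i] = B[:, i]    # A with column i taken from B
        S_i = np.mean(fB * (model(AB) - fA)) / var_y   # first-order Sobol index
        print(f"S_{i + 1} = {S_i:.3f}")
    ```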
    Wu Anthony, Lovett D, McEwan M, Cecelja Franjo, Chen Tao (2016) A spreadsheet calculator for estimating biogas production and economic measures for UK-based farm-fed anaerobic digesters, In: Bioresource Technology 220, pp. 479-489. Elsevier
    This paper presents a spreadsheet calculator to estimate biogas production and the operational revenue and costs for UK-based farm-fed anaerobic digesters. Sophisticated biogas production models exist in the published literature, but their application to farm-fed anaerobic digesters is often impractical, due to limited measuring devices, financial constraints, and the operators being non-experts in anaerobic digestion. The proposed biogas production model is designed to use the measured process variables typically available at farm-fed digesters, accounting for the effects of retention time, temperature and imperfect mixing. The estimation of the operational revenue and costs allows the owners to assess the most profitable approach to running the process, which would support the sustained use of the technology. The calculator is first compared with data reported in the literature, and then applied to the digester unit on a UK farm to demonstrate its use in a practical setting.
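    The abstract does not give the calculator's equations. As an illustration of how retention time and temperature typically enter such steady-state models, the sketch below uses the widely cited Chen-Hashimoto kinetic expression, an assumption rather than this paper's actual model.

    ```python
    def biogas_yield(B0, S0, HRT, T, K=0.8):
        """Volumetric methane productivity (m3 CH4 per m3 digester per day),
        Chen-Hashimoto steady-state model (assumed here for illustration).
        B0: ultimate methane yield (m3/kg VS), S0: influent VS (kg/m3),
        HRT: hydraulic retention time (days), T: temperature (deg C),
        K: dimensionless kinetic parameter."""
        mu_m = 0.013 * T - 0.129           # max specific growth rate (1/day)
        B = B0 * (1 - K / (HRT * mu_m - 1 + K))
        return B * S0 / HRT

    # Example: mesophilic digester at 37 deg C, 30-day retention time.
    print(round(biogas_yield(B0=0.3, S0=40, HRT=30, T=37), 3))
    ```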
    Ni W, Wang K, Chen T, Ng WJ, Tan SK (2012) GPR model with signal preprocessing and bias update for dynamic processes modeling, In: Control Engineering Practice 20(12), pp. 1281-1292
    This paper introduces a Gaussian process regression (GPR) model which could adapt to both linear and nonlinear systems automatically without prior introduction of kernel functions. The applications of GPR model for two industrial examples are presented. The first example addresses a biological anaerobic system in a wastewater treatment plant and the second models a nonlinear dynamic process of propylene polymerization. Special emphasis is placed on signal preprocessing methods including the Savitzky-Golay and Kalman filters. Applications of these filters are shown to enhance the performance of the GPR model, and facilitate bias update leading to reduction of the offset between the predicted and measured values. © 2012 Elsevier Ltd.
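    A sketch of the preprocessing-plus-bias-update scheme described above: Savitzky-Golay smoothing of the raw signal before GPR training, then an additive, filtered bias correction of the online predictions (synthetic data; the filter settings are illustrative).

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(6)
    t = np.linspace(0, 10, 200)
    raw = np.sin(t) + rng.normal(0, 0.15, t.size)        # noisy process signal
    smooth = savgol_filter(raw, window_length=15, polyorder=3)

    X_train, y_train = t[:150].reshape(-1, 1), smooth[:150]
    gpr = GaussianProcessRegressor().fit(X_train, y_train)

    bias = 0.0
    for k in range(150, 160):                            # online phase
        y_hat = gpr.predict(np.array([[t[k]]]))[0] + bias
        bias = 0.5 * bias + 0.5 * (smooth[k] - y_hat)    # filtered bias update
    print("last corrected prediction:", round(y_hat, 3))
    ```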
    Wang JG, Shen T, Zhao JH, Ma SW, Wang XF, Yao Y, Chen T (2016) Soft-Sensing Method for Optimizing Combustion Efficiency of Reheating Furnaces, In: Journal of the Taiwan Institute of Chemical Engineers
    Rolling mill reheating furnaces are widely used in large-scale iron and steel plants, the efficient operation of which has been hampered by the complexity of the combustion mechanism. In this paper, a soft-sensing method is developed for modeling and predicting combustion efficiency since it cannot be measured directly. Statistical methods are utilized to ascertain the significance of the proposed derived variables for the combustion efficiency modeling. By employing the nonnegative garrote variable selection procedure, an adaptive scheme for combustion efficiency modeling and adjustment is proposed and virtually implemented on a rolling mill reheating furnace. The results show that significant energy saving can be achieved when the furnace is operated with the proposed model-based optimization strategy.
    Liu Y, Chen T, Chen J (2015)Auto-Switch Gaussian Process Regression-Based Probabilistic Soft Sensors for Industrial Multigrade Processes with Transitions, In: INDUSTRIAL & ENGINEERING CHEMISTRY RESEARCH54(18)pp. 5037-5047 AMER CHEMICAL SOC
    Prediction uncertainty has rarely been integrated into traditional soft sensors in industrial processes. In this work, a novel auto-switch probabilistic soft sensor modeling method, different from traditional deterministic soft sensors, is proposed for online quality prediction across a whole industrial multigrade process with several steady-state grades and transitional modes. Several single Gaussian process regression (GPR) models are first constructed for each steady-state grade, and a new index is proposed to evaluate each GPR-based steady-state grade model. For the online prediction of a new sample, a prediction variance-based Bayesian inference method is proposed to assess the reliability of the existing GPR-based steady-state models. The prediction can be obtained from the related steady-state GPR model if the reliability under that model is sufficiently high; otherwise, the query sample is treated as belonging to a transitional mode and a local GPR model is built online in a just-in-time manner. Moreover, to improve efficiency, detailed implementation steps of the auto-switch GPR soft sensors for a whole multigrade process are developed. The superiority of the proposed method over other soft sensors is demonstrated on an industrial process in Taiwan, in terms of online quality prediction.
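    A hedged sketch of the switching logic only: use a grade-specific GPR model when its predictive uncertainty is low, otherwise fall back to a just-in-time local GPR built from the nearest historical samples. The threshold, neighbourhood size and data handling here are illustrative stand-ins, not the paper's index or inference scheme.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def autoswitch_predict(x_query, grade_models, X_hist, y_hist,
                               std_threshold=0.5, n_local=30):
            # Evaluate every steady-state grade model; keep the most confident one.
            preds = [m.predict(x_query[None, :], return_std=True) for m in grade_models]
            best = min(preds, key=lambda p: p[1][0])
            if best[1][0] < std_threshold:
                return best[0][0]            # reliable steady-state model
            # Transitional mode: build a local GPR just in time from nearest samples.
            idx = np.argsort(np.linalg.norm(X_hist - x_query, axis=1))[:n_local]
            local = GaussianProcessRegressor(normalize_y=True).fit(X_hist[idx], y_hist[idx])
            return local.predict(x_query[None, :])[0]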
    Chen T (2010)On reducing false alarms in multivariate statistical process control, In: Chemical Engineering Research and Design88(4)pp. 430-436 Elsevier
    The primary objective of this note is to reduce the false alarms in multivariate statistical process control (MSPC). The issue of false alarms is inherent within MSPC as a result of the definition of control limits. It has been observed that under normal operating conditions, the occurrence of “out-of-control” data, i.e. false alarms, conforms to a Bernoulli distribution. Therefore, this issue can be formally addressed by developing a Binomial distribution for the number of “out-of-control” data points within a given time window, and a second-level control limit can be established to reduce the false alarms. This statistical approach is further extended to consider the combination of multiple control charts. The proposed methodology is demonstrated through its application to the monitoring of a benchmark simulated chemical process, and it is observed to effectively reduce the false alarms whilst retaining the capability of detecting process faults.
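    The second-level limit admits a short worked example: with a per-sample false alarm probability p implied by the usual control limits and a window of n samples, one finds the smallest count k whose exceedance probability under Binomial(n, p) falls below a chosen overall rate. The values of p, n and the target rate below are illustrative, not the paper's settings.

        from scipy.stats import binom

        p, n, target_rate = 0.01, 20, 0.001
        for k in range(1, n + 1):
            if binom.sf(k - 1, n, p) < target_rate:  # P(X >= k) under normal operation
                print(f"Flag a fault only if >= {k} of the last {n} points are out of control")
                break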
    Chen T, Hadinoto K, Yan W, Ma Y (2011)Efficient meta-modelling of complex process simulations with time-space-dependent outputs, In: Computers and Chemical Engineering35(3)pp. 502-509 Elsevier
    Process simulations can become computationally too complex to be useful for model-based analysis and design purposes. Meta-modelling is an efficient technique to develop a surrogate model using “computer data”, which are collected from a small number of simulation runs. This paper considers meta-modelling with time–space-dependent outputs in order to investigate the dynamic/distributed behaviour of the process. The conventional method of treating temporal/spatial coordinates as model inputs results in a dramatic increase in the modelling data and is computationally inefficient. This paper applies principal component analysis to reduce the dimension of the time–space-dependent output variables whilst retaining the essential information, prior to developing the meta-models. Gaussian process regression (also termed kriging) is adopted for meta-modelling, for its superior prediction accuracy when compared with more traditional neural networks. The proposed methodology is successfully validated on a computational fluid dynamics simulation of an aerosol dispersion process, which is potentially applicable to industrial and environmental safety assessment.
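    A minimal sketch of this workflow on invented data: compress the time-dependent simulator outputs with PCA, fit one GPR per retained score, and reconstruct the full output trajectory for a new input. The toy simulator, input dimension and number of components are assumptions for illustration.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, size=(40, 2))              # simulator inputs
        t = np.linspace(0, 1, 100)
        # Hypothetical time-dependent outputs, one 100-point curve per run.
        Y = np.array([np.exp(-3 * x[0] * t) * np.sin(6 * x[1] * t) for x in X])

        pca = PCA(n_components=3).fit(Y)                 # 100 outputs -> 3 scores
        scores = pca.transform(Y)
        models = [GaussianProcessRegressor(normalize_y=True).fit(X, s)
                  for s in scores.T]                     # one GPR per score

        x_new = np.array([[0.4, 0.7]])
        s_new = np.array([[m.predict(x_new)[0] for m in models]])
        y_new = pca.inverse_transform(s_new)             # predicted full trajectory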
    Chen T, Morris J, Martin E (2008)Dynamic data rectification using particle filters, In: Computers and Chemical Engineering32(3)pp. 451-462 PERGAMON-ELSEVIER SCIENCE LTD
    The basis of dynamic data rectification is a dynamic process model. The successful application of the model requires the fulfilling of a number of objectives that are as wide-ranging as the estimation of the process states, process signal denoising and outlier detection and removal. Current approaches to dynamic data rectification include the conjunction of the Extended Kalman Filter (EKF) and the expectation-maximization algorithm. However, this approach is limited due to the EKF being less applicable where the state and measurement functions are highly non-linear or where the posterior distribution of the states is non-Gaussian. This paper proposes an alternative approach whereby particle filters, based on the sequential Monte Carlo method, are utilized for dynamic data rectification. By formulating the rectification problem within a probabilistic framework, the particle filters generate Monte Carlo samples from the posterior distribution of the system states, and thus provide the basis for rectifying the process measurements. Furthermore, the proposed technique is capable of detecting changes in process operation and thus complements the task of process fault diagnosis. The appropriateness of particle filters for dynamic data rectification is demonstrated through their application to an illustrative non-linear dynamic system, and a benchmark pH neutralization process.
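    A minimal bootstrap particle filter (sequential Monte Carlo) sketch for a generic non-linear state-space model; the scalar dynamics and noise levels below are illustrative, not the pH neutralization benchmark from the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        n_particles, n_steps = 500, 50

        def f(x):
            # Hypothetical non-linear state transition.
            return 0.9 * x + 1.0 * np.sin(x)

        x_true, y_obs = 0.5, []
        for _ in range(n_steps):
            x_true = f(x_true) + rng.normal(0, 0.1)
            y_obs.append(x_true + rng.normal(0, 0.3))    # noisy measurement

        particles = rng.normal(0.5, 1.0, n_particles)
        estimates = []
        for y in y_obs:
            particles = f(particles) + rng.normal(0, 0.1, n_particles)  # propagate
            w = np.exp(-0.5 * ((y - particles) / 0.3) ** 2)             # likelihood
            w /= w.sum()
            estimates.append(np.sum(w * particles))      # rectified state estimate
            idx = rng.choice(n_particles, n_particles, p=w)             # resample
            particles = particles[idx]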
    Thomas RAS, Bolt Matthew, Bass G, Nutbrown R, Chen Tao, Nisbet Andrew, Clark CH (2017)Radiotherapy reference dose audit in the United Kingdom by the National Physical Laboratory: 20 years of consistency and improvements., In: Physics & Imaging in Radiation Oncology3pp. 21-27 Elsevier
    Background and Purpose

    Audit is imperative in delivering consistent and safe radiotherapy, and the UK has a strong history of radiotherapy audit. The National Physical Laboratory (NPL) has undertaken audit measurements since 1994, and this work examines results from these audits.

    Materials and Methods

    This paper reviews audit results from 209 separate beams from 82 on-site visits to National Health Service (NHS) radiotherapy departments conducted between June 1994 and February 2015. Measurements were undertaken following the relevant UK code of practice. The accuracy of the implementation of absorbed dose calibration across the UK is quantified for MV photon, MeV electron and kV x-ray radiotherapy beams.

    Results

    Over the measurement period the standard deviation of MV photon beam output has reduced from 0.8% to 0.4%. The switch from an air kerma- to an absorbed dose-based electron code of practice contributed to a reduction in the difference of electron beam output of 0.6% (p < 0.01). The mean difference between NPL and local measurement for radiation output calibration was less than 0.25% for all beam modalities.

    Conclusions

    The introduction of the 2003 electron code of practice based on absorbed dose to water decreased the difference between absolute dose measurements by the centre and NPL. The use of a single photon code of practice over the period of measurements has contributed to a reduction in measurement variation. Within the clinical setting, on-site audit visits have been shown to identify areas of improvement for determining and implementing absolute dose calibrations.
    Chen T, Martin E, Montague G (2009)Robust probabilistic PCA with missing data and contribution analysis for outlier detection, In: COMPUTATIONAL STATISTICS & DATA ANALYSIS53(10)pp. 3706-3716 ELSEVIER SCIENCE BV
    Chi G, Yan W, Chen T (2010)Iterative data-based modelling and optimization for rapid design of dynamic processes, In: IFAC Proceedings: Dynamics and Control of Process Systems9(PART 1)pp. 475-480
    We consider an off-line process design problem where the response variable is affected by several factors. We present a data-based modelling approach that iteratively allocates new experimental points, updates the model, and searches for the optimal process factors. A flexible non-linear modelling technique, kriging (also known as Gaussian process regression), forms the cornerstone of this approach. The kriging model is capable of providing an accurate predictive mean and variance, the latter being a quantification of its prediction uncertainty. Therefore, the iterative algorithm is devised by jointly considering two objectives: (i) to search for the best predicted response, and (ii) to adequately explore the factor space so that the predictive uncertainty is small. This method is further extended to consider dynamic processes, i.e. the process factors are time-varying and the problem thus becomes the design of a time-dependent trajectory of these factors. The proposed approach is demonstrated through its application to a simulated chemical process, with promising results being achieved.
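    The joint exploitation-exploration criterion can be illustrated with the standard expected improvement (EI) formula, computed from a kriging/GPR model's predictive mean and standard deviation; the minimization convention below is one common form, not necessarily the exact criterion of the paper.

        import numpy as np
        from scipy.stats import norm

        def expected_improvement(mu, sigma, f_best):
            # mu, sigma: predictive mean/std at candidate points;
            # f_best: best observed response so far (minimization).
            sigma = np.maximum(sigma, 1e-12)      # guard against zero variance
            z = (f_best - mu) / sigma
            return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)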
    Kariwala V, Odiowei P-E, Cao Y, Chen T (2010)A branch and bound method for fault isolation through missing variable analysis, In: IFAC Proceedings: Dynamics and Control of Process Systems9(PART 1)pp. 121-126
    Fault detection and diagnosis (FDD) is a critical approach to ensure safe and efficient operation of manufacturing and chemical processing plants. Multivariate statistical process monitoring (MSPM) has received considerable attention for FDD since it does not require a mechanistic process model. The diagnosis of the source or cause of the detected process fault in MSPM largely relies on contribution analysis, which is ineffective in identifying the joint contribution of multiple variables to the occurrence of fault. In this work, a missing variable analysis approach based on probabilistic principal component analysis is proposed for fault isolation. Furthermore, a branch and bound method is developed to handle the combinatorial nature of the problem involving finding the variables, which are most likely responsible for the occurrence of fault. The efficiency of the method proposed is shown through a case study on the Tennessee Eastman process.
    Kattou Panayiotis, Lian Guoping, Glavin S, Sorrell I, Chen Tao (2017)Development of a two-dimensional model for predicting transdermal permeation with the follicular pathway: Demonstration with a caffeine study, In: Pharmaceutical Research34(10)pp. 2036-2048 Springer
    Purpose: The development of a new two-dimensional (2D) model to predict follicular permeation, with integration into a recently reported multi-scale model of transdermal permeation, is presented. Methods: The follicular pathway is modelled by diffusion in sebum. The mass transfer and partition properties of solutes in lipid, corneocytes, viable dermis, dermis and systemic circulation are calculated as reported previously [Pharm Res 33 (2016) 1602]. The mass transfer and partition properties in sebum are collected from the existing literature. None of the model input parameters was fitted to the clinical data with which the model prediction is compared. Results: The integrated model has been applied to predict published clinical data on the transdermal permeation of caffeine. The relative importance of the follicular pathway is analysed. Good agreement of the model prediction with the clinical data has been obtained. The simulation confirms that for caffeine the follicular route is important; the maximum bioavailable concentration of caffeine in systemic circulation with open hair follicles is predicted to be 20% higher than when hair follicles are blocked. Conclusions: The follicular pathway contributes not only to short-time fast penetration, but also to the overall systemic bioavailability. With such an in silico model, useful information can be obtained for caffeine disposition and localised delivery in lipid, corneocytes, viable dermis, dermis and the hair follicle. Such detailed information is difficult to obtain experimentally.
    Yang S, Li L, Chen Tao, Han L, Lian Guoping (2018)Determining the Effect of pH on the Partitioning of Neutral, Cationic and Anionic Chemicals to Artificial Sebum: New Physicochemical Insight and QSPR Model, In: Pharmaceutical Research35141 Springer Verlag, for American Association of Pharmaceutical Scientists
    Purpose:

    Sebum is an important shunt pathway for transdermal permeation and targeted delivery, but there have been limited studies on its permeation properties. Here we report a measurement and modelling study of solute partition to artificial sebum.

    Methods:

    Equilibrium experiments were carried out for the sebum-water partition coefficients of 23 neutral, cationic and anionic compounds at different pH.

    Results:

    Sebum-water partition coefficients not only depend on the hydrophobicity of the chemical but also on pH. As pH increases from 4.2 to 7.4, the partition of cationic chemicals to sebum increases rapidly. This appears to be due to increased electrostatic attraction between the cationic chemical and the fatty acids in sebum. For anionic chemicals, by contrast, the sebum partition coefficients are negligibly small, which might result from their electrostatic repulsion by the fatty acids. An increase in pH also resulted in a slight decrease in the sebum partition of neutral chemicals.

    Conclusions:

    Based on the observed pH impact on the sebum-water partition of neutral, cationic and anionic compounds, a new quantitative structure-property relationship (QSPR) model has been proposed. This mathematical model considers the hydrophobic interaction and electrostatic interaction as the main mechanisms for the partition of neutral, cationic and anionic chemicals to sebum.
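    A purely hypothetical sketch of how such a QSPR might be fitted: the descriptors below (log P for hydrophobicity and a Henderson-Hasselbalch ionised fraction for the electrostatic term) and all numbers are illustrative assumptions, not the published model or its coefficients.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Hypothetical basic solutes: cationic fraction at pH 5.5 from pKa.
        pH = 5.5
        pKa = np.array([9.5, 10.2, 4.0, 7.1])
        logP = np.array([2.1, 3.0, 1.2, 2.5])
        frac = 1.0 / (1.0 + 10.0 ** (pH - pKa))   # Henderson-Hasselbalch relation

        X = np.column_stack([logP, frac])          # [hydrophobicity, charge term]
        y = np.array([1.8, 2.9, 1.5, 2.2])         # invented log K(sebum-water)
        qspr = LinearRegression().fit(X, y)
        print(qspr.coef_, qspr.intercept_)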

    Gao Xiaoyong, Xie Yi, Wang Shuqi, Wu Mingyang, Wang Yuhong, Tan Chaodong, Zuo Xin, Chen Tao (2020)Offshore oil production planning optimization: An MINLP model considering well operation and flow assurance, In: Computers and Chemical Engineering133106674 Elsevier
    With increasing energy requirements and decreasing onshore reserves, offshore oil production has attracted increasing attention. A major challenge in offshore oil production is to minimize both the operational costs and risks; one of the major risks is anomalies in the flows. However, optimization methods that simultaneously consider well operation and flow assurance in operation planning have not been explored. In this paper, an integrated planning problem considering both well operation and flow assurance is reported. In particular, a multi-period mixed integer nonlinear programming (MINLP) model is proposed to minimize the total operation cost, taking into account well production state, polymer flooding, energy consumption, platform inventory and flow assurance. By solving this integrated model, each well's working state, flow rates and chemical injection rates can be optimally determined. The proposed model was applied to a case originating from a real-world offshore oil site, and the results illustrate its effectiveness.
    Li Weijun, Gu Sai, Zhang Xiangping, Chen Tao (2020)Transfer learning for process fault diagnosis: Knowledge transfer from simulation to physical processes, In: Computers & Chemical Engineering139106904 Elsevier
    Deep learning has shown great promise in process fault diagnosis. However, due to the lack of sufficient labelled fault data, its application has been limited. This limitation may be overcome by using data generated from computer simulations. In this study, we consider using simulated data to train deep neural network models. As there inevitably is model-process mismatch, we further apply a transfer learning approach to reduce the discrepancies between the simulation and physical domains. This approach allows the diagnostic knowledge contained in the computer simulation to be applied to the physical process. To this end, a deep transfer learning network is designed by integrating a convolutional neural network with advanced domain adaptation techniques. Two case studies are used to illustrate the effectiveness of the proposed method for fault diagnosis: a continuously stirred tank reactor and the pulp mill plant benchmark problem.
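    One common ingredient of such domain adaptation, shown here only as a hedged illustration rather than the paper's specific network, is the maximum mean discrepancy (MMD) with an RBF kernel, which measures the mismatch between simulated (source) and physical (target) feature distributions and can be minimised as an auxiliary loss.

        import numpy as np

        def rbf_mmd2(Xs, Xt, gamma=1.0):
            # Squared MMD between source features Xs and target features Xt.
            def k(A, B):
                d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-gamma * d2)
            return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2 * k(Xs, Xt).mean()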
    Chen T, Martin E (2009)Bayesian linear regression and variable selection for spectroscopic calibration., In: Anal Chim Acta631(1)pp. 13-21 Elsevier
    This paper presents a Bayesian approach to the development of spectroscopic calibration models. By formulating the linear regression in a probabilistic framework, a Bayesian linear regression model is derived, and a specific optimization method, i.e. Bayesian evidence approximation, is utilized to estimate the model "hyper-parameters". The relation of the proposed approach to the calibration models in the literature is discussed, including ridge regression and Gaussian process model. The Bayesian model may be modified for the calibration of multivariate response variables. Furthermore, a variable selection strategy is implemented within the Bayesian framework, the motivation being that the predictive performance may be improved by selecting a subset of the most informative spectral variables. The Bayesian calibration models are applied to two spectroscopic data sets, and they demonstrate improved prediction results in comparison with the benchmark method of partial least squares.
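    The evidence approximation admits a compact implementation; the sketch below is a generic numpy version of the standard hyper-parameter re-estimation equations (as in MacKay's and Bishop's treatments), not the authors' code, and assumes a design matrix X of spectral variables and a response vector y are given.

        import numpy as np

        def evidence_approximation(X, y, n_iter=100):
            N, D = X.shape
            alpha, beta = 1.0, 1.0                     # prior and noise precisions
            eig = np.linalg.eigvalsh(X.T @ X)          # eigenvalues of X'X
            for _ in range(n_iter):
                A = alpha * np.eye(D) + beta * X.T @ X # posterior precision
                m = beta * np.linalg.solve(A, X.T @ y) # posterior mean
                lam = beta * eig
                gamma = np.sum(lam / (lam + alpha))    # effective no. of parameters
                alpha = gamma / (m @ m)                # re-estimate hyper-parameters
                beta = (N - gamma) / np.sum((y - X @ m) ** 2)
            return m, alpha, beta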
    Chen T, Morris J, Martin E (2007)Response to the discussion of "Gaussian process regression for multivariate spectroscopic calibration", In: CHEMOMETRICS AND INTELLIGENT LABORATORY SYSTEMS87(1)pp. 69-71 ELSEVIER SCIENCE BV
    Totti Stella, Ng Keng Wooi, Dale Lorraine, Lian Guoping, Chen Tao, Velliou Eirini G. (2019)A novel versatile animal-free 3D tool for rapid low-cost assessment of immunodiagnostic microneedles, In: Sensors and Actuators B: Chemical296126652pp. 1-8 Elsevier
    Microneedle devices offer minimally invasive and rapid biomarker extraction from the skin. However, the lack of effective assessment tools for such microneedle devices can delay their development into useful clinical applications. Traditionally, microneedle performance is evaluated (i) in vivo, using animal models, (ii) ex vivo, on excised human or animal skin, or (iii) in vitro, using homogenised solutions with the target antigen to model the interstitial fluid. In vivo and ex vivo models are considered the gold-standard approach for the evaluation of microneedle devices because of their structural composition; however, they do exhibit limitations. More specifically, they have limited availability and they present batch-to-batch variations depending on the skin origin. Furthermore, their use raises ethical concerns regarding compliance with the globally accepted 3Rs principle of reducing the use of animals for research purposes. At the same time, in vitro models fail to accurately mimic the structure and the mechanical integrity of the skin tissue that surrounds the interstitial fluid. In this study, we introduce for the first time an animal-free, mechanically robust, 3D scaffold that has great potential as an accurate in vitro evaluation tool for immunodiagnostic microneedle devices. More specifically, we demonstrate, for the first time, successful extraction and detection of a melanoma biomarker (S100B) using immunodiagnostic microneedles in the 3D culture system. Melanoma cells (A375) were cultured and expanded for 35 days in the highly porous polymeric scaffold, followed by in situ capture of S100B with the microneedle device. Scanning electron microscopy showed a close resemblance between the 3D scaffold and human skin in terms of internal structure and porosity. The microneedle device detected S100B in the scaffold (with a detection pattern similar to the positive controls), while the biomarker was not detected in the surrounding liquid supernatants. Our findings demonstrate the great potential of this animal-free 3D tool for rapid and low-cost evaluation of microneedle devices.
    Gao X, Huang D, Jiang Y, Chen Tao (2017)A Decision Tree based Decomposition Method for Oil Refinery Scheduling, In: Chinese Journal of Chemical Engineering26(8)pp. 1605-1612 Elsevier
    Refinery scheduling has attracted increasing attention in both the academic and industrial communities in recent years. However, due to the complexity of refinery processes, little successful use in real-world refineries has been reported. In academic studies, refinery scheduling is usually treated as an integrated, large-scale optimization problem, though such complex optimization problems are extremely difficult to solve. In this paper, we propose a way to exploit the prior knowledge existing in refineries, and develop a decision-making system to guide the scheduling process. For a real-world fuel oil oriented refinery, ten adjusting process scales are predetermined. A C4.5 decision tree, driven by the finished oil demand plan, classifies the corresponding category (i.e. adjusting scale), and a specific sub-scheduling problem with respect to the determined adjusting scale is then solved. The proposed strategy is demonstrated with a scheduling case originating from a real-world refinery.
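    A small sketch of the classification step, using scikit-learn's decision tree (CART with an entropy criterion, standing in here for C4.5): a finished oil demand plan, encoded as a numeric feature vector, is mapped to one of the predetermined adjusting scales. The data below are invented placeholders.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        demand_plans = np.random.default_rng(3).uniform(0, 100, size=(200, 6))
        adjust_scale = np.random.default_rng(4).integers(0, 10, size=200)  # 10 scales

        tree = DecisionTreeClassifier(criterion="entropy", max_depth=5)
        tree.fit(demand_plans, adjust_scale)
        print(tree.predict(demand_plans[:1]))   # adjusting scale for a new plan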
    Gao X, Jiang Y, Chen T, Huang D (2015)Optimizing scheduling of refinery operations based on piecewise linear models, In: COMPUTERS & CHEMICAL ENGINEERING75pp. 105-119 PERGAMON-ELSEVIER SCIENCE LTD
    Optimizing scheduling is an effective way to improve the profit of refineries; it usually requires accurate models to describe the complex and nonlinear refining processes. However, conventional nonlinear models will result in a complex mixed integer nonlinear programming (MINLP) problem for scheduling. This paper presents a piecewise linear (PWL) modeling approach, which can describe global nonlinearity with locally linear functions, to refinery scheduling. Specifically, a high level canonical PWL representation is adopted to give a simple yet effective partition of the domain of decision variables. Furthermore, a unified partitioning strategy is proposed to model multiple response functions defined on the same domain. Based on the proposed PWL partitioning and modeling strategy, the original MINLP can be replaced by mixed integer linear programming (MILP), which can be readily solved using standard optimization algorithms. The effectiveness of the proposed strategy is demonstrated by a case study originated from a refinery in China.
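    The core modelling trick can be illustrated with a toy lambda formulation, one textbook way to encode a piecewise linear cost with binaries so the problem stays a MILP; the paper's canonical PWL representation differs in detail, and the breakpoints, cost curve and PuLP interface below are illustrative assumptions.

        import pulp

        xb = [0, 2, 5, 10]                  # breakpoints of the decision variable
        yb = [0, 3, 5, 6]                   # nonlinear cost sampled at breakpoints
        n = len(xb)

        prob = pulp.LpProblem("pwl_demo", pulp.LpMinimize)
        lam = [pulp.LpVariable(f"lam{i}", lowBound=0) for i in range(n)]
        z = [pulp.LpVariable(f"z{j}", cat="Binary") for j in range(n - 1)]

        prob += pulp.lpSum(lam) == 1
        prob += pulp.lpSum(z) == 1
        for i in range(n):                  # lambda weights only on the active segment
            adjacent = [z[j] for j in (i - 1, i) if 0 <= j < n - 1]
            prob += lam[i] <= pulp.lpSum(adjacent)

        x = pulp.lpSum(l * b for l, b in zip(lam, xb))
        cost = pulp.lpSum(l * b for l, b in zip(lam, yb))
        prob += x >= 4                      # illustrative operating requirement
        prob += cost                        # objective: minimize the PWL cost
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print(pulp.value(x), pulp.value(cost))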
    Hossain MI, Chen T, Yang Y, Lau R (2009)Determination of actual object size distribution from direct imaging, In: Industrial and Engineering Chemistry Research48(22)pp. 10136-10146 American Chemical Society
    Lau R, Hassan MS, Wong W, Chen T (2010)Revisit of the wall effect on the settling of cylindrical particles in the inertial regime, In: Industrial and Engineering Chemistry Research49(18)pp. 8870-8876 American Chemical Society
    Lemanska Agnieszka, Chen Tao, Dearnaley DP, Jena R, Sydes MR, Faithfull Sara (2017)Symptom clusters for revising scale membership in the analysis of prostate cancer patient reported outcome measures: a secondary data analysis of the Medical Research Council RT01 trial (ISCRTN47772397), In: Quality of Life Research26(8)pp. 2103-2116 Springer
    Purpose: To investigate the role of symptom clusters in the analysis and utilisation of Patient-Reported Outcome Measures (PROMs) for data modelling and clinical practice; to compare symptom clusters with scales, and to explore their value in PROMs interpretation and symptom management. Methods: A dataset called RT01 (ISCRTN47772397) of 843 prostate cancer patients was used. PROMs were reported with the University of California, Los Angeles Prostate Cancer Index (UCLA-PCI). Symptom clusters were explored with hierarchical cluster analysis (HCA) and the average linkage method (correlation >0.6). The reliability of the Urinary Function Scale was evaluated with Cronbach's Alpha. The strength of the relationship between the items was investigated with Spearman's correlation. The predictive accuracy of the clusters was compared to that of the scales by receiver operating characteristic (ROC) analysis. Presence of urinary symptoms at 3 years, measured with the Late Effects on Normal Tissue: Subjective, Objective, Management tool (LENT/SOM), was the endpoint. Results: Two symptom clusters were identified (Urinary Cluster and Sexual Cluster). The grouping of symptom clusters was different from the UCLA-PCI scales. Two items of the Urinary Function Scale (“Number of pads” and “Urinary leak interfering with sex”) were excluded from the Urinary Cluster; their correlation with the other items in the scale ranged from 0.20-0.21 and 0.31-0.39 respectively. Cronbach's Alpha showed low correlation of those items with the Urinary Function Scale (0.14-0.36 and 0.33-0.44 respectively). All Urinary Function Scale items were subject to a ceiling effect. Clusters had better predictive accuracy (AUC = 0.65-0.70) than scales (AUC = 0.61-0.67). Conclusion: This study adds to the knowledge of how cluster analysis can be applied for the interpretation and utilisation of PROMs. We conclude that multiple-item scales should be evaluated, and that symptom clusters provide an adaptive and study-specific approach for the modelling and interpretation of PROMs.
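    A brief sketch of the clustering recipe described above: hierarchical clustering with average linkage on a correlation-based distance, cutting the dendrogram where within-cluster correlation exceeds 0.6. The PROM item matrix below is an invented placeholder, not the RT01 data.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        items = np.random.default_rng(5).normal(size=(843, 12))  # patients x items
        corr = np.corrcoef(items, rowvar=False)
        dist = 1.0 - corr                                        # correlation distance

        # linkage expects the condensed (upper-triangle) distance vector.
        Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
        clusters = fcluster(Z, t=0.4, criterion="distance")      # 1 - 0.6 threshold
        print(clusters)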
    Wang B, Chen Tao, Xu A (2017)Gaussian process regression with functional covariates and multivariate response, In: Chemometrics and Intelligent Laboratory Systems163(April)pp. 1-6 Elsevier
    Gaussian process regression (GPR) has been shown to be a powerful and effective non-parametric method for regression, classification and interpolation, due to many of its desirable properties. However, most GPR models consider univariate or multivariate covariates only. In this paper we extend the GPR models to cases where the covariates include both functional and multivariate variables and the response is multidimensional. The model naturally incorporates two different types of covariates: multivariate and functional, and the principal component analysis is used to de-correlate the multivariate response, which avoids the widely recognised difficulty in the multi-output GPR models of formulating covariance functions which have to describe the correlations not only between data points but also between responses. The usefulness of the proposed method is demonstrated through a simulated example and two real data sets in chemometrics.
    Gao Xiaoyong, Li Haishou, Wang Yuhong, Chen Tao, Zuo Xin, Zhong Lei (2018)Fault Detection in Managed Pressure Drilling using Slow Feature Analysis, In: IEEE Access Institute of Electrical and Electronics Engineers
    Correct detection of drilling abnormal incidents while minimizing false alarms is a crucial measure to decrease non-productive time and thus the total drilling cost. With the recent development of drilling technology and innovation in down-hole signal transmission methods, abundant drilling data are collected and stored in the electronic driller's database. The availability of such data provides new opportunities for rapid and accurate fault detection; however, data-driven fault detection has seen limited practical application in well drilling processes. One particular concern is how to distinguish “controllable” process changes, e.g. due to set-point changes, from truly abnormal events that should be considered as faults. This is highly relevant for the managed pressure drilling (MPD) technology, where the operating pressure window is often narrow, resulting in necessary set-point changes at different depths. However, classical data-driven fault detection methods, such as principal component analysis (PCA) and independent component analysis (ICA), are unable to distinguish normal set-point changes from abnormal faults. To address this challenge, a slow feature analysis (SFA) based fault detection method is applied. The SFA-based method furnishes four monitoring charts containing richer information, which can be synthetically utilized to correctly differentiate set-point changes from faults. Furthermore, an evaluation of controller performance is provided for the drilling operator. Simulation studies with a commercial high-fidelity simulator, Drillbench, demonstrate the effectiveness of the introduced approach.
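    A compact numpy sketch of linear slow feature analysis, the principle underlying the method: whiten the data, then take the directions in which the time-differenced signal has the smallest variance. The monitoring statistics built on top of the slow features follow the SFA monitoring literature and are not reproduced here.

        import numpy as np

        def slow_feature_analysis(X):
            # X: (n_samples, n_variables) data matrix.
            X = X - X.mean(axis=0)
            w, V = np.linalg.eigh(np.cov(X, rowvar=False))
            # Whitening; in practice near-zero eigenvalues should be truncated.
            S = (X @ V) / np.sqrt(w)
            dS = np.diff(S, axis=0)                  # temporal differences
            w2, V2 = np.linalg.eigh(np.cov(dS, rowvar=False))
            # eigh sorts ascending, so the first columns are the slowest features.
            return S @ V2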
    Wojtasik Arek, Bolt Matthew, Clark Catherine H., Nisbet Andrew, Chen Tao Multivariate log file analysis for multi-leaf collimator failure prediction in radiotherapy delivery, In: Physics & Imaging in Radiation Oncology Elsevier
    Background and Purpose

    Motor failure in multi-leaf collimators (MLC) is a common reason for unscheduled accelerator maintenance, disrupting the workflow of a radiotherapy treatment centre. Predicting MLC replacement needs ahead of time would allow for proactive maintenance scheduling, reducing the impact MLC replacement has on treatment workflow. We propose a multivariate approach to the analysis of trajectory log data, which can be used to predict upcoming MLC replacement needs.

    Materials and Methods

    Trajectory log files from two accelerators, spanning six and seven months respectively, were collected and analysed. The average error in each of the parameters for each log file was calculated and used for further analysis. A performance index (PI) was generated by applying moving-window principal component analysis to the prepared data. Drops in the PI were thought to indicate an upcoming MLC replacement requirement; therefore, the PI was tracked with exponentially weighted moving average (EWMA) control charts complete with a lower control limit.

    Results

    The best compromise between fault detection and minimising the false alarm rate was achieved using a weighting parameter (λ) of 0.05 and a control limit based on three standard deviations and an 80 data point window. The approach identified eight out of thirteen logged MLC replacements one to three working days in advance, whilst raising a false alarm on average 1.1 times a month.

    Conclusions

    This approach to analysing trajectory log data has been shown to enable prediction of certain upcoming MLC failures, albeit at the cost of false alarms.
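    A minimal sketch of the EWMA monitoring step: λ = 0.05 and a three-sigma lower limit mirror the settings reported above, while the PI series and baseline window below are invented placeholders rather than the trajectory log data.

        import numpy as np

        rng = np.random.default_rng(6)
        pi = rng.normal(1.0, 0.02, 300)            # placeholder performance index
        lam, L = 0.05, 3.0
        mu, sigma = pi[:80].mean(), pi[:80].std()  # baseline from an 80-point window

        z = mu
        for t, x in enumerate(pi, start=1):
            z = lam * x + (1 - lam) * z
            # Time-varying lower control limit of the EWMA statistic.
            lcl = mu - L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
            if z < lcl:
                print(f"PI drop flagged at sample {t}: possible upcoming MLC replacement")
                break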
    Xu M, Chen T, Yang X (2011)The effect of parameter uncertainty on achieved safety integrity of safety system, In: Reliability Engineering and System Safety99(1)pp. 15-23
    Lian Guoping, Li L, Yang S, Chen Tao, Han L (2018)A measurement and modelling study of hair partition of neutral, cationic and anionic chemicals, In: Journal of Pharmaceutical Sciences107(4)pp. 1122-1130 Elsevier
    Various neutral, cationic and anionic chemicals contained in hair care products can be absorbed into the hair fiber to modulate physicochemical properties such as color, strength, style and volume. For environmental safety, there is also an interest in understanding hair absorption of a wide range of chemical pollutants. There have been very limited studies on the absorption properties of chemicals into hair. Here, an experimental and modelling study has been carried out on the hair-water partition of a range of neutral, cationic and anionic chemicals at different pH. The data showed that hair-water partition depends not only on the hydrophobicity of the chemical but also on the pH. The partition of cationic chemicals to hair increased with pH, as their electrostatic interaction with hair changed from repulsion to attraction. For anionic chemicals, the hair-water partition coefficients decreased with increasing pH, as their electrostatic interaction with hair changed from attraction to repulsion. An increase in pH did not change the partition of neutral chemicals significantly. Based on this new physicochemical insight into the pH effect on hair-water partition, a new QSPR model has been proposed, taking into account both the hydrophobic and electrostatic interactions of the chemical with the hair fiber.
    He B, Zhang J, Chen T, Yang X (2013)Penalized Reconstruction-Based Multivariate Contribution Analysis for Fault Isolation, In: INDUSTRIAL & ENGINEERING CHEMISTRY RESEARCH52(23)pp. 7784-7794 AMER CHEMICAL SOC
    Yan W, Chen Y, Yang Y, Chen T (2011)Development of high performance catalysts for CO oxidation using data-based modeling, In: Catalysis Today174(1)pp. 127-134 Elsevier
    This paper presents a model-aided approach to the development of catalysts for CO oxidation. This is in contrast to the traditional methodology whereby experiments are guided based on experience and intuition of chemists. The proposed approach operates in two stages. To screen a promising combination of active phase, promoter and support material, a powerful “space-filling” experimental design (specifically, Hammersley sequence sampling) was adopted. The screening stage identified Au–ZnO/Al2O3 as a promising recipe for further optimization. In the second stage, the loadings of Au and ZnO were adjusted to optimize the conversion of CO through the integration of a Gaussian process regression (GPR) model and the technique of maximizing expected improvement. Considering that Au constitutes the main cost of the catalyst, we further attempted to reduce the loading of Au with the aid of GPR, while keeping the low-temperature conversion to a high level. Finally we obtained 2.3%Au–5.0%ZnO/Al2O3 with 21 experiments. Infrared reflection absorption spectroscopy and hydrogen temperature-programmed reduction confirmed that ZnO significantly promotes the catalytic activity of Au.
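    The space-filling design used in the screening stage can be sketched directly: the Hammersley sequence is built from radical-inverse functions in coprime bases and spreads candidate recipes evenly over the scaled factor space. The (i + 0.5)/n first coordinate below is one common variant; the run count matches the 21 experiments mentioned above only for illustration.

        import numpy as np

        def radical_inverse(i, base):
            # Reverse the base-b digits of i about the radix point.
            inv, f = 0.0, 1.0 / base
            while i > 0:
                inv += f * (i % base)
                i //= base
                f /= base
            return inv

        def hammersley(n, dims, bases=(2, 3, 5, 7, 11)):
            pts = np.empty((n, dims))
            pts[:, 0] = (np.arange(n) + 0.5) / n      # first coordinate: i/n variant
            for d in range(1, dims):
                pts[:, d] = [radical_inverse(i, bases[d - 1]) for i in range(n)]
            return pts

        design = hammersley(21, 3)   # 21 runs over 3 scaled factors in [0, 1]^3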
    Tang Q, Lau YB, Hu S, Yan W, Yang Y, Chen T (2010)Response surface methodology using Gaussian processes: Towards optimizing the trans-stilbene epoxidation over Co2+-NaX catalysts, In: Chemical Engineering Journal156(2)pp. 423-431 Elsevier
    Response surface methodology (RSM) relies on the design of experiments and empirical modelling techniques to find the optimum of a process when the underlying fundamental mechanism of the process is largely unknown. This paper proposes an iterative RSM framework, where Gaussian process (GP) regression models are applied for the approximation of the response surface. GP regression is flexible and capable of modelling complex functions, as opposed to the restrictive form of the polynomial models that are used in traditional RSM. As a result, GP models generally attain high accuracy in approximating the response surface, and thus provide a good chance of identifying the optimum. In addition, GP is capable of providing both the prediction mean and variance, the latter being a measure of the modelling uncertainty. Therefore, this uncertainty can be accounted for within the optimization problem, so that the optimal process conditions are robust against the modelling uncertainty. The developed method is successfully applied to the optimization of trans-stilbene conversion in the epoxidation of trans-stilbene over cobalt ion-exchanged faujasite zeolite (Co2+–NaX) catalysts using molecular oxygen.
    Kariwala V, Odiowei P-E, Cao Y, Chen T (2010)A branch and bound method for isolation of faulty variables through missing variable analysis, In: Journal of Process Control20(10)pp. 1198-1206 Elsevier
    Fault detection and diagnosis is a critical approach to ensure safe and efficient operation of manufacturing and chemical processing plants. Although multivariate statistical process monitoring has received considerable attention, investigation into the diagnosis of the source or cause of a detected process fault has been relatively limited. This is partially due to the difficulty of isolating multiple variables that jointly contribute to the occurrence of a fault through conventional contribution analysis. In this work, a method based on probabilistic principal component analysis is proposed for fault isolation. Furthermore, a branch and bound method is developed to handle the combinatorial nature of the problem of finding the contributing variables that are most likely responsible for the occurrence of the fault. The efficiency of the proposed method is shown through benchmark examples, such as the Tennessee Eastman process, and randomly generated cases.
    Yang Senpei, Li Lingyi, Lu Minsheng, Chen Tao, Han Lujia, Lian Guoping (2019)Determination of solute diffusion properties in artificial sebum, In: Journal of Pharmaceutical Sciences108(9)pp. pp 3003-3010 Elsevier
    Although a number of studies have shown that the hair follicular pathway contributes significantly to transdermal delivery, there have been limited studies on the diffusion properties of chemicals in sebum. Here, the diffusion properties of 17 chemical compounds across artificial sebum have been measured using a diffusion cell. The diffusion flux showed two distinct types of behavior: fluxes that reached steady state and those that did not. Mathematical models have been developed to fit the experimental data and derive the sebum diffusion and partition coefficients. The models considered the uneven thickness of the sebum film and the additional resistance of the unstirred aqueous boundary layer and the supporting filter. The derived sebum-water partition coefficients agreed well with the experimental data measured previously using the equilibrium depletion method. The obtained diffusion coefficients in artificial sebum depended only on the molecular size. A change in pH for ionic chemicals did not affect the diffusion coefficients but influenced the diffusion flux, owing to the change in sebum-water partition coefficients. Generally, the measured diffusion coefficients of chemicals in artificial sebum are about one order of magnitude higher than those in the stratum corneum lipids, suggesting that the hair follicle might make a non-negligible contribution to the overall permeation.
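    An illustrative explicit finite-difference sketch of diffusion across a thin sebum film (Fick's second law) with a partition-governed donor boundary and a sink receiver, from which the flux and its approach to steady state can be computed; the parameter values are placeholders, not the fitted values from this study, and the boundary-layer and filter resistances are omitted.

        import numpy as np

        D, K, h = 1e-10, 5.0, 50e-6    # diffusivity (m2/s), partition coef., thickness (m)
        C_donor = 1.0                  # donor concentration (arbitrary units)
        nx = 51
        dx = h / (nx - 1)
        dt = 0.4 * dx**2 / D           # satisfies the explicit stability limit

        C = np.zeros(nx)
        C[0] = K * C_donor             # partition equilibrium at the donor face
        for step in range(50000):      # well beyond the lag time h^2/(6D)
            C[1:-1] += D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
            C[-1] = 0.0                # sink condition at the receiver face
        flux = D * (C[-2] - C[-1]) / dx
        print(flux)                    # approaches D*K*C_donor/h at steady state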
    Sfarra Stefano, Perilli Stefano, Guerrini Mirco, Bisegna Fabio, Chen Tao, Ambrosini Dario (2019)On the use of phase change materials applied on cork-coconut-cork panels: a thermophysical point of view concerning the beneficial effect in term of insulation properties, In: Journal of Thermal Analysis and Calorimetry Springer Verlag / Akadémiai Kiadó
    This work explores the potential of combining a multi-layer eco-friendly panel with a phase change material coating. Although the work is based on a numerical approach implemented in the Comsol Multiphysics® computer program, it can be considered rigorous, robust and optimized, since the most important parameters added to the model were experimentally evaluated. The scientific soundness was ensured by a comparative analysis performed at two different times: the cork-coconut-cork panel was first investigated as it was, and then analysed with a phase change material layer applied to it. In the second step, the panel underwent a mechanical process to create a subsurface defect simulating a detachment. The aim was to conduct a thermal conductivity analysis to characterize the benefits deriving from the application of the coating, as well as the negative effects introduced by the subsurface defect resembling a potential thermal bridge. The experiments were performed in Italy at a location identified in the text by means of geographical coordinates.
    Jimenez Toro Maria J, Dou Xin, Ajewole Isaac, Wang Jiawei, Chong Katie, Ai Ning, Zeng Ganning, Chen Tao (2017)Preparation and optimization of macroalgae-derived solid acid catalysts, In: Waste and Biomass Valorization Springer Verlag
    Solid acid catalysts were synthesized from the macroalga Sargassum horneri via hydrothermal carbonization followed by sulfuric acid sulfonation. A three-variable Box-Behnken design and optimization was used to maximize surface acidity. The optimal preparation conditions were found to be a carbonization temperature of 217 °C, a carbonization time of 4.6 hours and a sulfonation temperature of 108.5 °C. Under these conditions, the highest surface acidity achieved was 1.62 mmol g-1. The physical and chemical properties of the prepared solid acid catalyst were characterized by powder X-ray diffraction (PXRD), Fourier transform infrared (FTIR) spectroscopy, and elemental analysis. The results proved the grafting of -SO3H groups onto an amorphous carbon structure. The catalyst activity was evaluated by the esterification of oleic acid with methanol. The prepared sample achieved a 96.6% esterification yield, which was higher than the 86.7% yield achieved by commercial Amberlyst-15 under the same reaction conditions.
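    The three-factor Box-Behnken design can be constructed directly in coded levels (-1/0/+1), as sketched below; the mapping of coded levels to the real carbonization temperature, carbonization time and sulfonation temperature, and the number of centre points, are left as assumptions.

        import numpy as np
        from itertools import combinations

        def box_behnken_3():
            runs = []
            for i, j in combinations(range(3), 2):   # each pair of factors
                for a in (-1, 1):
                    for b in (-1, 1):
                        row = [0, 0, 0]
                        row[i], row[j] = a, b        # third factor held at centre
                        runs.append(row)
            runs += [[0, 0, 0]] * 3                  # centre-point replicates
            return np.array(runs)

        print(box_behnken_3())   # 12 edge runs + 3 centre points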
    Kajero OT, Thorpe Rex, Yao Y, Wong DSH, Chen Tao (2017)Meta-model based calibration and sensitivity studies of CFD simulation of jet pumps, In: Chemical Engineering & Technology40(9)pp. 1674-1684 Wiley
    Calibration and sensitivity studies in the computational fluid dynamics (CFD) simulation of process equipment such as the annular jet pump are useful for design, analysis and optimisation. The use of CFD for such purposes is computationally intensive; hence, in this study, an alternative approach using kriging-based meta-models was utilised. Calibration was considered via the adjustment of two turbulence model parameters, C_μ and C_2ε, and likewise of two parameters in the simulation correlation for C_μ, while sensitivity studies were based on C_μ as the input. The meta-model based calibration aids the exploration of different parameter combinations, and computational time was also reduced with kriging-assisted sensitivity studies, which explored the effect of different C_μ values on the pressure distribution.
    Li Lingyi, Yang Senpei, Chen Tao, Han Lujia, Lian Guoping (2018)Investigation of pH effect on cationic solute binding to keratin and partition to hair, In: International Journal of Cosmetic Science40(1)pp. 93-102 Wiley

    OBJECTIVE: In the process of hair treatment, various cationic actives contained in hair care products can be absorbed into hair fiber to modulate the physicochemical properties of hair such as color, strength, style and volume. There have been very limited studies on the binding and partition properties of hair care actives to hair. This study aimed to investigate the pH effects on cationic solute absorption into hair and binding to keratin.

    METHODS: The keratin binding and hair partition properties of three cationic solutes (theophylline, nortriptyline and amitriptyline) have been measured at different pH using fluorescence spectroscopy and equilibrium absorption experiment. The binding constants, thermodynamic parameters and hair-water partition coefficients determined at different pH were compared and analyzed.

    RESULTS: Increasing the pH from 2.0 to 6.0 resulted in the net charge of hair keratin changing from positive to negative. As a consequence, the binding constants of the three cationic solutes with keratin increased with increasing pH. This correlated with the variation of the electrostatic interaction between the cationic solutes and keratin from repulsion to attraction. The positive ΔH and ΔS values indicated that hydrophobic interaction also played a major role in the binding of the three cationic solutes to keratin. There was a good correlation between solute binding to keratin and the hair-water partition of the solutes.

    CONCLUSION: It appears that solute binding to hair keratin is driven first by hydrophobic interaction and then by electrostatic interaction. The fitted thermodynamic parameters suggested that hydrophobic interaction dominates the binding of the three cationic solutes to keratin. The finding that binding of cationic solutes to keratin correlates with the partition of the solutes to hair could provide theoretical guidance for further developing mathematical models of hair partition and penetration properties.

    Wang K, Chi G, Lau R, Chen T (2011)MULTIVARIATE CALIBRATION OF NEAR INFRARED SPECTROSCOPY IN THE PRESENCE OF LIGHT SCATTERING EFFECT: A COMPARATIVE STUDY, In: ANALYTICAL LETTERS44(5)pp. 824-836 TAYLOR & FRANCIS INC
    When analyzing heterogeneous samples using spectroscopy, the light scattering effect introduces non-linearity into the measurements and deteriorates the prediction accuracy of conventional linear models. This paper compares the prediction performance of two categories of chemometric methods: pre-processing techniques to remove the non-linearity and non-linear calibration techniques to directly model the non-linearity. A rigorous statistical procedure is adopted to ensure reliable comparison. The results suggest that optical path length estimation and correction (OPLEC) and Gaussian process (GP) regression are the most promising among the investigated methods. Furthermore, the combination of pre-processing and non-linear models is explored with limited success being achieved.
    Chen T, Liu Y, Chen J (2013)An integrated approach to active model adaptation and on-line dynamic optimisation of batch processes, In: JOURNAL OF PROCESS CONTROL23(10)pp. 1350-1359 ELSEVIER SCI LTD
    Liu YJ, Chen T, Yao Y (2013)Nonlinear process monitoring by integrating manifold learning with Gaussian process, In: Computer Aided Chemical Engineering32pp. 1009-1014
    In order to monitor nonlinear processes, kernel principal component analysis (KPCA) has become a popular technique. Nevertheless, KPCA suffers from two major disadvantages. First, the underlying manifold structure of the data is not considered in process modeling. Second, the selection of the kernel function and kernel parameters is always problematic. To avoid such deficiencies, a method integrating manifold learning and Gaussian processes is proposed in this paper, which extends the utilization of maximum variance unfolding (MVU) to online process monitoring and fault isolation. The proposed method is named extendable MVU (EMVU), and its effectiveness is verified by case studies on the benchmark Tennessee Eastman (TE) process.
    Bolt Matthew A, Clark Catharine H, Chen Tao, Nisbet Andrew (2017)A multi-centre analysis of radiotherapy beam output measurement, In: Physics & Imaging in Radiation Oncology4pp. 39-43 Elsevier
    Background and Purpose

    Radiotherapy requires tight control of the delivered dose. This should include the variation in beam output as this may directly affect treatment outcomes. This work provides results from a multi-centre analysis of routine beam output measurements.

    Materials and Methods

    A request for 6MV beam output data was submitted to all radiotherapy centres in the UK, covering the period January 2015 – July 2015. An analysis of the received data was performed, grouping the data by manufacturer, machine age, and recording method to quantify any observed differences. Trends in beam output drift over time were assessed as well as inter-centre variability. Annual trends were calculated by linear extrapolation of the fitted data.

    Results

    Data were received from 204 treatment machines across 52 centres. Results were normally distributed with a mean of 0.0% (percentage deviation from initial calibration) and a 0.8% standard deviation, with 98.1% of results within ±2%. Eight centres relied solely on paper records. Annual trends varied greatly between machines, with a mean drift of +0.9%/year and 95th percentiles of +5.1%/year and -2.2%/year. Of the machines of known age, 25% were over ten years old; however, no significant difference was observed with machine age.

    Conclusions

    Machine beam output measurements were largely within ±2% of 1.00 cGy/MU. Clear trends in measured output over time were seen, with some machines having large drifts, which would result in an additional burden to maintain outputs within acceptable tolerances. This work may act as a baseline for future comparison of beam output measurements.

    Gao X, Qi L, Lyu W, Chen Tao, Huang D (2017)RIMER and SA based Thermal Efficiency Optimization for Fired Heaters, In: FUEL205pp. 272-285 ELSEVIER SCI LTD
    Due to frequent changes in thermal load and drift of the online oxygen analyzer, fired heaters' thermal efficiency optimization systems, operating with limited maintenance resources, seldom work in the long term. To solve this problem, a novel and practical optimization method combining the RIMER (belief rule-base inference methodology using the evidential reasoning) approach and SA (stochastic approximation) online self-optimization is proposed. The optimization scheme consists of (i) an off-line expert system that determines the optimal steady-state operation for a given thermal load, and (ii) an on-line optimization system that further improves the thermal efficiency to alleviate the influence of sensor drift. In more detail, at the off-line stage, a belief-rule-base (BRB) expert system is constructed to determine good initial references of operating conditions for a specified thermal load, which quickly drives the system to a near-optimal operating point when confronted with a thermal load change; this is based on RIMER. During on-line operation, these off-line determined initial values are further refined using the SA approach to optimize the thermal efficiency, and the newly obtained optimal operating condition is updated online to compensate for sensor drift. The optimized profile is implemented through a practical control strategy for the flue gas - air system of fired heaters, which is applied to the flue gas oxygen concentration and chamber negative pressure control on the basis of the flue gas-air control system. Simulation results on the UniSimTM Design platform demonstrate the feasibility of the proposed optimization scheme, and field implementation at a real process illustrates the effectiveness of the optimization system. Both simulation and field application show that the thermal efficiency can be improved by approximately 1%.
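    A hedged sketch of one standard stochastic approximation scheme (SPSA, simultaneous perturbation SA), shown here only to illustrate gradient-free online self-optimization; the paper does not specify this particular variant, and the plant response, scaled inputs and gain sequences below are illustrative stand-ins for the real fired-heater loop.

        import numpy as np

        rng = np.random.default_rng(7)

        def thermal_efficiency(u):
            # Placeholder plant response in scaled coordinates (inputs in [0, 1]).
            return -(u[0] - 0.3) ** 2 - (u[1] - 0.6) ** 2 + rng.normal(0, 1e-4)

        u = np.array([0.8, 0.2])                   # initial operating point
        for k in range(1, 200):
            a_k = 0.1 / k ** 0.602                 # standard SPSA gain decay
            c_k = 0.05 / k ** 0.101
            delta = rng.choice([-1.0, 1.0], size=2)
            g = (thermal_efficiency(u + c_k * delta)
                 - thermal_efficiency(u - c_k * delta)) / (2 * c_k * delta)
            u = u + a_k * g                        # ascend the efficiency estimate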
    Kajero Olumayowa T., Chen Tao, Yao Yuan, Chuang Yao-Chen, Wong David Shan Hill (2017)Meta-modelling in chemical process system engineering, In: Journal of the Taiwan Institute of Chemical Engineers73pp. 135-145 Elsevier
    The use of computational fluid dynamics to model chemical process systems has received much attention in recent years. However, even with state-of-the-art computing, it is still difficult to perform simulations with many physical factors taken into account. Hence, translation of such models into computationally cheap surrogate models is necessary for successful applications of such high-fidelity models to process design optimization, scale-up and model predictive control. In this work, the methodology, statistical background and past applications to chemical processes of meta-model development are reviewed. The objective is to help interested researchers become familiar with the work that has been carried out and the problems that remain to be investigated.
    Gaussian processes have received significant interest for statistical data analysis as a result of their good predictive performance and attractive analytical properties. When developing a Gaussian process regression model with a large number of covariates, the selection of the most informative variables is desired in terms of improved interpretability and prediction accuracy. This paper proposes a Bayesian method, implemented through Markov chain Monte Carlo sampling, for variable selection. The methodology presented here is applied to the chemometric calibration of near infrared spectrometers, and enhanced predictive performance and model interpretation are achieved when compared with the benchmark regression method of partial least squares.
    With the world's population increasing, fossil fuel resources are being consumed at an ever-increasing rate. At the same time, the globalization wave of recent years has brought forward significant humanitarian and environmental concerns, such as the depletion of other natural resources and climate change. With these problems in mind, the production of biofuels is seen as an alternative solution of strategic importance in many countries. The objective of this research is to propose a biofuel supply chain framework that aims to maximize profit and minimize environmental impact. This is done by considering the various sub-components of the biofuel supply chain. Using the superstructure approach and a supply chain block representation, a mathematical formulation is set up to analyse a biomass cultivation site, a biomass storage and distribution facility, a biofuel production plant, a biofuel storage and distribution facility, a by-product storage and distribution facility, and finally the customer. A case study on a wheat-to-bioethanol supply chain in the UK is proposed, along with another case study on product distribution in the UK, to test the validity and robustness of the proposed work. The results of the case studies show that, compared to traditional fossil fuels, biofuel is less competitive in terms of pricing due to the poor conversion ratio and the high price of animal feed wheat in the UK. However, the WTI crude oil price will increase in the long run and, therefore, the biofuel supply chain will have a good chance to compete with the traditional fossil fuel supply chain in the future. As for biofuel distribution, a centralized distribution method is more cost-effective when using diesel trucks for delivery; however, the use of electric trucks would give decentralized distribution an advantage in terms of costs. Overall, this research gives an overview of setting up a biofuel supply chain framework, where each of the sub-components is considered, together with other factors such as government policy and environmental impacts. Although biofuel may not be the most desirable energy source now, it still has great room for improvement in the future with the advancement of technologies.
    Vertically focussed ion beams, such as the Surrey Vertical Nanobeam, are useful tools for examining the effects of particle radiation on individual cells, allowing single cells to be irradiated and observed. The length of time that cells can be observed is limited by environmental factors, such as ambient CO2, temperature and relative humidity. The Automated Microbeam Observation Endstation for Biological Analysis (AMOEBA) is a control system designed for controlling the environment at the end of vertically focussed ion beams, and it has been implemented on three different vertically focussed ion beams in two different labs. The AMOEBA system allows cells to be observed for up to 36 hours after irradiation without having to move the cell dish. An additional drawback of vertically focussed ion beams is that they are currently unable to perform experiments in low or high oxygen conditions. To expand their capabilities, a microfluidic AMOEBA system was developed to allow vertically focussed ion beams to irradiate cells in hypoxic conditions.
    Renewable energy in general, and biofuels in particular, is seen as a viable solution for energy security and climate change problems. For this reason many countries, including Thailand, have set common objectives for utilisation of alternative resources. Thailand is an agricultural country and hence it has a great potential for generating renewable energy from a large amount of biomass resources. In consequence, a 15-year renewable energy development plan has been set by the Thai government, which targets an increase in electricity generation of 32%, from 2,800 MW in 2011 to 3,700 MW in 2022, and also an increase in consumption of ethanol by 200%, from 1,095 million litres in 2011 to 3,285 million litres in 2022 (Department of Alternative Energy Development and Efficiency of Thailand, 2008). Sugarcane and rice are the two main industrial crops in Thailand, with estimated production of 73.50 million tons of sugarcane per year (2009) and 31.50 million tons of rice per year (Sawangphol, 2011), and they are seen as a major source of biomass. This research focuses on the biomass from rice mill and sugar mill processes. In order to develop processing facilities that are capable of utilising available biomass and delivering the above set targets, a comprehensive and systematic methodology is required which will support the decision-making process by accounting for technological, economic and parameters. In this thesis, exhaustive simulation and optimisation are proposed as a tool. The first tool is the technology screening. The aim of the technology screening step is to show all profitability of technologies. This is done by considering various components of rice and sugar mills energy frameworks in Thailand: rice mill technology type, sugar mill technology type, ethanol technology type and biomass based power plant technology type. The modelling of processes for converting sugarcane and rice biomass into electrical energy and ethanol has been performed at the level of superstructure which has been chosen because the scope of the work is to screen available options and to compare them in different configurations in terms of economic aspects. The result of the simulation approach has shown the most profitable (shortest payback period) is the configuration that includes electrical rice mill, automated control sugar mill, gasification biomass based power plant and continuous ethanol plant. The sensitivity analysis has compared the cost of feedstock against profitability (payback period). The sensitivity analysis also compared the price of product against profitability (payback period). The result of the sensitivity analysis showed the change in the price of sugar product is the most sensitive for the rice and sugar mills energy framework. The second tool is the optimisation approach. The aim of the optimisation is to maximise the profit (NPV) impact. This is done by considering the various components of the biofuel supply chain in Thailand. All components were calculated based on candidate points including: the biofield(rice mill and sugar mill), biomass warehouse capacity and location, biofuel plant technology type, plant technology capacity, plant technology location, product warehouse capacity and location, transportation type is considered. There are four scenarios in the case study which were created to examine the proposed biomass optimisation model for Thailand to validate the mathematic formulation. 
The overall conclusion of the optimisation approach is that the biomass power plant is profitable at present, while the lignocellulosic ethanol plant becomes the preferred option when a large amount of ethanol production is demanded. In summary, the proposed research fills a gap at the operational and process levels by modelling a multi-biomass supply chain from biofield to customer, including warehouses and multiple transportation modes. From a business perspective, the research provides data for investors and analyses the risks associated with changes in product price, feedstock cost and transportation cost.
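    As a concrete illustration of the optimisation idea, the sketch below solves a toy version of the supply-chain problem: choosing how much biomass to ship from two biofields to a power plant and an ethanol plant so as to maximise profit under supply and capacity limits. All prices, costs and capacities here are invented placeholders; the thesis formulation is far richer, covering warehouses, technology selection, multiple transport modes and NPV over time.

```python
from scipy.optimize import linprog

# Decision variables x[i]: tonnes shipped on each (biofield -> plant) arc:
# [field1->power, field1->ethanol, field2->power, field2->ethanol]
revenue_per_t = [40.0, 55.0, 40.0, 55.0]   # assumed revenue per tonne processed
transport_per_t = [5.0, 8.0, 7.0, 4.0]     # assumed transport cost per tonne
profit = [r - c for r, c in zip(revenue_per_t, transport_per_t)]

res = linprog(
    c=[-p for p in profit],                # linprog minimises, so negate profit
    A_ub=[[1, 1, 0, 0],                    # biofield 1 supply limit (t)
          [0, 0, 1, 1],                    # biofield 2 supply limit (t)
          [1, 0, 1, 0],                    # power plant capacity (t)
          [0, 1, 0, 1]],                   # ethanol plant capacity (t)
    b_ub=[1000, 800, 1200, 900],
    bounds=[(0, None)] * 4,
)
print("shipments (t):", res.x, "  profit:", -res.fun)
```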
    Computational fluid dynamics (CFD) is a computer-based analysis of the dynamics of fluid flow, and it is widely used in chemical and process engineering applications. However, calibrating CFD models against experimental data, or analysing the sensitivity of outputs to inputs, is usually computationally demanding: a large number of simulation runs is often required, and a single run can take hours or days to complete. Hence, in this research project, the kriging meta-modelling method was coupled with the expected improvement (EI) global optimisation approach to address the CFD model calibration challenge. In addition, a kriging meta-model based sensitivity analysis technique was implemented to study the input-output relationship of the model parameters. A novel EI measure was developed for the sum of squared errors (SSE), which follows a generalised chi-square distribution and to which existing normal-distribution-based EI measures are therefore not applicable. This novel EI measure suggested which CFD model parameter values to simulate next, thereby minimising the SSE and improving the match between simulation and experiments. To test the proposed methodology, a non-CFD numerical simulation of a semi-batch reactor was considered as a case study; it confirmed a saving in computational time and an improved agreement between the simulation model and the actual plant data. The usefulness of the developed method was subsequently demonstrated through a CFD case study of single-phase flow in both a straight-type and a convergent-divergent-type annular jet pump, where a single turbulence model parameter, C_μ, and then two turbulence model parameters, C_μ and C_2ε, were considered for calibration. Sensitivity analysis was subsequently based on C_μ as the input parameter. In calibration with both one and two model parameters, a significant improvement in the agreement with experimental data was obtained, and the novel method gave a significant reduction in computational time compared to traditional CFD. A new correlation was proposed relating C_μ to the flow ratio, which could serve as a guide for future simulations. The meta-model based calibration aids the exploration of parameter combinations that would have been computationally challenging using CFD alone, and kriging-assisted sensitivity analysis likewise reduced the time needed to explore the effect of different C_μ values on the output, the pressure coefficient. The semi-batch reactor case was also used to compare the previous EI measure with the newly proposed one, revealing that the latter achieved a significant improvement in fewer simulation runs. This research has thus proposed and successfully demonstrated a novel methodology for faster calibration and sensitivity analysis of computational fluid dynamics simulations, which is essential in the design, analysis and optimisation of chemical and process engineering systems.
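    The following is a minimal sketch, under stated assumptions, of the kriging-plus-EI workflow described above: a Gaussian-process (kriging) meta-model of the SSE between simulation and experiment is refined by repeatedly simulating at the point of maximum expected improvement. The classic normal-distribution EI is used here purely for illustration; the thesis's contribution is a chi-square-based EI tailored to the SSE. The expensive_sse function is a hypothetical stand-in for a full CFD run.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_sse(c_mu):
    # Stand-in for running the CFD model at parameter value c_mu and
    # returning the sum of squared errors against experimental data.
    return 100.0 * (c_mu - 0.09) ** 2

X = np.array([[0.05], [0.10], [0.15]])        # initial design points for C_mu
y = np.array([expensive_sse(x[0]) for x in X])

for _ in range(10):                           # EI-driven refinement loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)                              # fit the kriging meta-model
    cand = np.linspace(0.03, 0.20, 200).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # classic EI (minimisation)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])                # simulate only where EI is highest
    y = np.append(y, expensive_sse(x_next[0]))

print("calibrated C_mu ~", X[np.argmin(y)][0])
```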
    The focus of this research was to apply mathematical and computational methods to the modelling and prediction of tumour volume during the course of radiotherapy; the developed tools could provide valuable information for the optimisation of radiotherapy in the future. Firstly, the feasibility of modelling the tumour volume dynamics of individual patients, as measured by computed tomography (CT) imaging, was explored. The main objective was to develop a model that adequately describes tumour volume dynamics without being so complex that it lacks support from clinical data. To this end, various modelling options were explored, and rigorous statistical methods, the Akaike information criterion (AIC) and the corrected Akaike information criterion (AICc), were used for model selection. The models were calibrated to data from two cohorts of non-small cell lung cancer patients, one treated by stereotactic ablative radiotherapy and the other by conventionally fractionated radiotherapy. The results showed that a two-population model with exponential tumour growth is the most appropriate for the data studied, as judged by AIC and AICc. Secondly, this model was further equipped with a Bayesian adaptation approach in order to predict individual patients' response to radiotherapy in terms of tumour volume change during treatment. The main idea is to start from a population-average model and update it, through Bayesian parameter estimation, with each individual's tumour volume measurements; the model therefore becomes progressively more personalised, and so does the prediction. The usefulness of the developed method was demonstrated on clinical data. Finally, an attempt was made to link the predicted tumour volume (an important but often secondary indicator of treatment outcome) to tumour control probability (one of the primary indicators of treatment outcome), and this model was demonstrated through a simulation study. Overall, this research has contributed new methods and results in mathematical modelling for the quantitative analysis and prediction of individual patients' response to radiotherapy; it represents a significant development that could be used for improved and personalised planning and scheduling of radiotherapy in the future.
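    Below is a minimal sketch of the Bayesian adaptation idea, under an assumed model form and prior (neither is taken from the thesis): a simple two-population volume model, in which a surviving fraction regrows exponentially while the remainder clears exponentially, is updated with each new volume measurement so that the prediction becomes progressively personalised.

```python
import numpy as np

def volume(t, f, lam_g=0.01, lam_d=0.1, v0=1.0):
    # Illustrative two-population model: a surviving fraction f regrows
    # exponentially while the doomed fraction (1 - f) clears exponentially.
    return v0 * (f * np.exp(lam_g * t) + (1 - f) * np.exp(-lam_d * t))

# Grid prior over the surviving fraction f (other parameters fixed here for
# brevity; a population-level analysis would estimate them from cohort data).
f_grid = np.linspace(0.01, 0.5, 200)
posterior = np.ones_like(f_grid) / f_grid.size
sigma = 0.05                        # assumed measurement noise (relative volume)

t_obs = np.array([7.0, 14.0])       # days since start of treatment
v_obs = np.array([0.70, 0.50])      # assumed relative volume measurements
for t, v in zip(t_obs, v_obs):      # sequential Bayesian update per measurement
    lik = np.exp(-0.5 * ((v - volume(t, f_grid)) / sigma) ** 2)
    posterior *= lik
    posterior /= posterior.sum()

f_hat = f_grid[np.argmax(posterior)]  # personalised MAP estimate
print("predicted relative volume at day 28:", volume(28.0, f_hat))
```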
    The assessment of the follicular penetration of chemicals into human skin is of high importance to topical and transdermal drug delivery and personal care, as well as to the risk assessment of chemical exposure, because the hair follicles contribute significantly to the penetration of chemicals through the epidermal barrier. The purpose of this work is to develop a two-dimensional pharmacokinetic model that quantitatively elucidates the impact of the follicular pathway, in addition to the transcellular and intercellular routes, for a wide range of chemicals. The follicular pathway is modelled by diffusion in the sebum, which is assumed to completely fill the gap between the inner and outer root sheath. The model predicts transdermal permeation kinetics using built-in equations to estimate the input parameters (e.g. the partition and diffusion coefficients in the various skin components). It has been compared, quantitatively or qualitatively, with 18 experimental studies and has demonstrated good predictive capability against the majority of the experimental data. Simulations across a wide chemical space indicate that the follicular pathway has a greater impact on the penetration of lipophilic than of hydrophilic chemicals, and that the larger the molecular weight of the chemical, the greater the impact of the hair follicle on its penetration. The follicular impact has been quantified in various ways (e.g. amount penetrated, bioavailability, permeability difference). The developed model can provide new insight and detailed information regarding a chemical's disposition and localised delivery in the lipid, corneocytes, viable epidermis, dermis and hair follicle.
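    As a rough illustration of the diffusion idea behind the follicular pathway (not the published two-dimensional model), the sketch below solves Fick's second law by explicit finite differences for a solute diffusing down a sebum-filled follicle, reduced to one dimension with assumed parameter values.

```python
import numpy as np

D_seb = 1e-10        # assumed diffusion coefficient in sebum, m^2/s
L = 1e-3             # assumed follicle depth, m
n, dt = 100, 0.05    # grid points and time step (s)
dx = L / (n - 1)
assert D_seb * dt / dx**2 < 0.5   # stability condition for the explicit scheme

c = np.zeros(n)                   # normalised concentration along the follicle
for _ in range(200_000):          # march Fick's second law forward in time
    c[1:-1] += D_seb * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0] = 1.0                    # constant surface concentration (Dirichlet)
    c[-1] = c[-2]                 # zero-flux condition at the follicle base

print("fraction of follicle depth with c > 0.01:", (c > 0.01).mean())
```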

    Additional publications