My research expertise includes the application of mathematical methods to optimise process design, control and operation, as well as model-based experimental analysis in process and energy systems. My current research activities concentrate on the development and application of systems engineering approaches for process intensification and integration, including dynamic process simulation, model-based analysis and experimental verification, modelling and optimization of complex chemical and biological production systems, and energy conversion systems with large structural diversity and a high number of elements. Particular attention is paid to a holistic view of the involved process phenomena, micro and macro processes, process design and the final experimental verification using miniplant technologies, including improved control and monitoring.
Carbon formation and sintering remain the main culprits in catalyst deactivation during the dry and bi-reforming of methane reactions (DRM and BRM, respectively). Nickel-based catalysts (10 wt.%) supported on alumina (Al2O3) have proved no exception in this study, but can be improved by the addition of tin and ceria. The effect of two different Sn loadings on this base has been examined for the DRM reaction over 20 h, before selecting the most appropriate Sn/Ni ratio and promoting the alumina base with 20 wt.% of CeO2. This catalyst then underwent activity measurements over a range of temperatures and space velocities, before undergoing experimentation in BRM. It not only showed good levels of conversion for DRM, but also exhibited stable conversions in BRM, reaching an equilibrium H2/CO product ratio in the process. Indeed, this work reveals how multicomponent Ni catalysts can be effectively utilised to produce flexible syngas streams from CO2/CH4 mixtures as an efficient route for CO2 utilisation.
Rivulet instabilities appear in many engineering applications. In absorption equipment, they affect the interfacial area available for mass transfer, thus reducing efficiency. Here, computational fluid dynamics is used to reproduce the meanders and braids in rivulets flowing down an inclined channel. Fast oscillations of the meander (f = 5.6 Hz) are observed at low flow rates. At higher flow rates, an analysis of the transversal velocity in the retraction waves shows the effect of surface tension, which causes the braiding phenomenon and thus the reduction in gas-liquid interfacial area.
This paper demonstrates the viability of chemical recycling of CO2 via the reverse water-gas shift reaction using advanced heterogeneous catalysts. In particular, we have developed a multicomponent Fe-Cu-Cs/Al2O3 catalyst able to reach high levels of CO2 conversion and complete selectivity to CO at various reaction conditions (temperatures and space velocities). In addition to its excellent activity, the novel Cs-doped catalyst is fairly stable in continuous operation, which suggests its suitability for deeper studies of the reverse water-gas shift reaction. The catalytic activity and selectivity of this new material have been carefully compared to those of Fe/Al2O3, Fe-Cu/Al2O3 and Fe-Cs/Al2O3 in order to understand each active component’s contribution to the catalyst’s performance. This comparison provides some clues to explain the superiority of the multicomponent Fe-Cu-Cs/Al2O3 catalyst.
We investigate homogeneous nuclear matter within the Brueckner-Hartree-Fock (BHF) approach in the limits of isospin-symmetric nuclear matter (SNM) as well as pure neutron matter at zero temperature. The study is based on realistic representations of the internucleon interaction as given by the Argonne v18, Paris, and Nijmegen I and II potentials, in addition to chiral N3LO interactions including three-nucleon forces up to N2LO. Particular attention is paid to the presence of di-nucleon bound-state structures in the 1S0 and 3SD1 channels, whose explicit account becomes crucial for the stability of self-consistent solutions at low densities. A characterization of these solutions and the associated bound states is discussed. We confirm that the coexistence of BHF single-particle solutions in SNM, at Fermi momenta in the range 0.13-0.3 fm−1, is a robust feature under the choice of realistic internucleon potentials.
We investigate the appearance of di-neutron bound states in pure neutron matter within the Brueckner-Hartree-Fock approach at zero temperature. We consider the Argonne v18 and Paris bare interactions as well as chiral two- and three-nucleon forces. Self-consistent single-particle potentials are calculated by explicitly controlling singularities in the g matrix associated with bound states. Di-neutrons are loosely bound, with binding energies below 1 MeV, but are unambiguously present for Fermi momenta below 1 fm−1 for all interactions. Within the same framework we are able to calculate and characterize di-neutron bound states, obtaining mean radii as high as ∼110 fm. Implications of these findings are presented and discussed.
This work presents a novel algorithm and its implementation for the stochastic optimization of generally constrained Nonlinear Programming Problems (NLPs). The basic algorithm adopted is the Iterated Control Random Search (ICRS) method of Casares and Banga (1987), modified such that random points are generated strictly within a bounding box defined by bounds on all variables. The ICRS algorithm serves as an initial point determination method for launching gradient-based methods that converge to the nearest local minimum. The issue of constraint handling is addressed in our work via a filter-based methodology, thus obviating the need for penalty functions as in the basic ICRS method presented in Banga and Seider (1996), which handles only bound-constrained problems. The proposed algorithm, termed ICRS-Filter, is shown to be very robust and reliable, producing very good or global solutions for most of the several case studies examined in this contribution.
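The combination of bounded random sampling with filter-based constraint handling described above can be sketched as follows. This is a hypothetical, heavily simplified stand-in for ICRS-Filter, not the published implementation: the sampling-spread contraction rule, the acceptance test and all function names are illustrative.

```python
import random

def icrs_filter(f, g, lo, hi, iters=2000, seed=0):
    """Sketch of an ICRS-style random search with filter-based
    constraint handling (illustrative simplification): sample within
    the bounding box [lo, hi], contract the sampling spread around the
    incumbent, and accept points not dominated in
    (objective, constraint-violation) space."""
    rng = random.Random(seed)
    n = len(lo)
    x = [(a + b) / 2.0 for a, b in zip(lo, hi)]       # centre start point
    spread = [(b - a) / 3.0 for a, b in zip(lo, hi)]  # sampling std. dev.

    def viol(p):  # total constraint violation of g(p) <= 0
        return sum(max(0.0, gi) for gi in g(p))

    best = (f(x), viol(x), x)
    filt = [(best[0], best[1])]                       # filter entries (f, h)
    for _ in range(iters):
        # sample around the incumbent, clipped to the bounding box
        y = [min(hi[i], max(lo[i], rng.gauss(best[2][i], spread[i])))
             for i in range(n)]
        fy, hy = f(y), viol(y)
        # acceptable only if no filter entry dominates (fy, hy)
        if all(not (fe <= fy and he <= hy) for fe, he in filt):
            filt.append((fy, hy))
            if (hy, fy) < (best[1], best[0]):         # feasibility first
                best = (fy, hy, y)
                spread = [s * 0.98 for s in spread]   # contract the search
    return best
```

For example, minimising (x0-1)^2 + (x1-2)^2 subject to x0 + x1 <= 2.5 on [0, 3]^2 drives the incumbent toward the constrained optimum near (0.75, 1.75); in a full ICRS-Filter implementation this point would then seed a gradient-based local solver.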
Hydrogen produced by microalgae is intensively researched as a potential alternative to conventional energy sources. Scaling up the process is still an open issue, and to this end accurate dynamic modeling is very important. A challenge in the development of these highly nonlinear dynamic models is the estimation of the associated kinetic parameters. This work presents the estimation of the parameters of a revised Droop model for biohydrogen production by Cyanothece sp. ATCC 51142 in batch and fed-batch reactors. The latter reactor type gives rise to an optimal control problem in which the influent nitrate concentration is optimized, an aspect that has not been considered previously. The kinetic model developed is demonstrated to predict experimental data to a high degree of accuracy. A key contribution of this work is the prediction that hydrogen productivity can reach 3365 mL/L through an optimally controlled fed-batch process, corresponding to an increase of 116% over other recently published strategies.
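A generic batch Droop model of the kind referred to above can be sketched as follows. This is illustrative only: the parameter values, the simple Euler scheme and the omission of the hydrogen-production term are assumptions for the sketch, not the revised model or the parameters identified in the paper.

```python
def droop_batch(t_end, dt=0.01):
    """Explicit-Euler integration of a generic Droop model for a batch
    culture (illustrative sketch; hypothetical parameter values).
    States: biomass X, internal nitrogen quota q, external nitrate s."""
    mu_max, kq = 1.0, 0.5     # growth law: mu = mu_max * (1 - kq/q)
    rho_max, ks = 2.0, 0.1    # uptake law: rho = rho_max * s/(ks + s)
    X, q, s, t = 0.1, 1.0, 5.0, 0.0
    while t < t_end - 1e-12:
        mu = mu_max * (1.0 - kq / q)      # Droop growth rate
        rho = rho_max * s / (ks + s)      # Monod-type nitrate uptake
        X += dt * mu * X                  # biomass growth
        q += dt * (rho - mu * q)          # quota: uptake minus dilution
        s = max(0.0, s - dt * rho * X)    # nitrate depletion (clamped)
        t += dt
    return X, q, s
```

In a fed-batch setting, an influent nitrate term would be added to the s balance and its trajectory optimized, which is the optimal control problem described above.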
This work presents the mathematical formulation of a nonlinear programming (NLP) model which simultaneously optimizes crude oil blending and operating conditions for a system of several crude oil distillation units (CDUs) at a Colombian refinery. The CDU system consists of three industrial units processing a blend of five extra-heavy crude oils and producing two commercial fuels, Jet-1A and Diesel. The NLP model involves typical restrictions (e.g., flow rates according to the capacities of pumps, distillation columns, etc.) and the heat integration of streams from the atmospheric distillation towers (ADTs) and vacuum distillation towers (VDTs) with the heat exchanger networks for crude oil preheating. A metamodeling approach is used to represent the ADTs. The preheating networks are modeled with mass and energy balances and the design equations of each heat exchanger. The NLP model has been implemented in GAMS using CONOPT as the solver. Different cases are solved with the NLP model; the optimal case with the smallest profit increment still yielded an economic benefit of 13% with respect to the corresponding non-optimized case. In each optimal case, the extra-heavy crude oils in the feed blend of each CDU required more severe operating conditions, such as a higher crude oil temperature at the entrance to the towers, a greater flow rate of stripping steam at the bottom, and a lower pressure at the tower tops.
The hydroformylation of 1-dodecene on a rhodium-biphephos catalyst complex exploiting a thermomorphic multicomponent solvent system was studied experimentally in a batch reactor in order to describe the kinetics of the main and the most relevant side reactions. The formation of the active catalyst was studied in preliminary experiments. Based on a postulated catalytic cycle, mechanistic kinetic models were developed considering isomerization, hydrogenation and hydroformylation reactions, as well as the formation of catalytically inactive Rh species. The complex overall network was decomposed to support parameter estimation. The isomerization of 1-dodecene, the hydrogenations of iso- and 1-dodecene and the hydroformylations of iso-dodecene and 1-dodecene were investigated as functions of temperature, total pressure and the partial pressures of carbon monoxide and hydrogen, respectively. These four sub-networks of increasing size, and the total network, were analyzed sequentially in order to identify kinetic models and to estimate the corresponding parameters, applying model reduction techniques based on singular value decomposition combined with rank-revealing QR factorization.
When developing a new process with expensive and dangerous reaction partners, it is often of interest to keep pilot plant operation time low. A model of the process can aid in achieving this goal. However, the model is generally prone to errors, thus leading to deviations from the actual operation. In this contribution, the focus is set on the modeling of a novel process concept for the hydroformylation of long-chained alkenes, with particular interest in the three-phase micellar system. Moreover, a methodology is presented to support operators in determining suitable operating policies for mini-plant operation. Two models of the core units of the constructed mini-plant, the reactor and the decanter, are presented and linked. With these, an operating policy for the minimization of model parameter uncertainty is determined.
Chromatographic separation processes such as simulated moving bed (SMB) are widely used in the petrochemical, fine chemical, sugar, and pharmaceutical industries. Their separation efficiency can be improved by optimizing the components’ adsorptivity in the different unit sections through, e.g., the use of solvent, temperature, or pH gradients. In this work, the salt-gradient ion-exchange SMB used to separate two proteins, β-lactoglobulins A and B, is theoretically analyzed, where the protein adsorption is described by the steric mass action model. Detailed model-based sequential optimization studies have been carried out for closed-loop isocratic SMB as well as closed- and open-loop gradient ion-exchange SMB. The separation efficiency is described by the throughput. Different gradient cases were analyzed to identify the influencing factors. The results show that the gradient SMB is more efficient than the isocratic SMB in terms of startup time, throughput, and desorbent-to-throughput ratio. Moreover, the results indicate that open-loop gradient ion-exchange SMB is more effective for protein separation than closed-loop gradient SMB. The influence of factors such as mass transfer efficiency and the maximum flow rate on separation efficiency is discussed for the different cases.
The oxidative coupling of methane (OCM), in which methane is turned into ethylene and ethane, presents an opportunity for using natural gas to produce desired C2 hydrocarbons. The main challenge for the successful implementation of the process is the reduction of undesired by-products, such as CO2. This contribution deals with the optimization of a membrane reactor network for the OCM process with the goal of maximizing yield while simultaneously maintaining a high selectivity towards C2 products. The network consists of a plug-flow/fixed-bed reactor, a conventional packed-bed membrane reactor and a so-called proposed packed-bed membrane reactor.
For more than three decades, the Oxidative Coupling of Methane (OCM) process has been investigated as an attractive alternative to cracking technologies for ethylene production, exploiting the huge resources of natural gas. Developing a suitable catalyst, analyzing proper reactor feeding policies, reviewing and deploying efficient methods for the separation and purification of the desired and undesired products, and identifying possible energy savings and process intensification in each section have each been the subject of much research in the past. In this paper, these aspects are addressed simultaneously in a general overview of the main research activities performed at the Chair of Process Dynamics and Operation at Berlin Institute of Technology in the context of the Unifying Concepts in Catalysis (UniCat) project. Moreover, a cost estimation of the industrial-scale OCM process, guiding the analysis of the potentials and drawbacks of each OCM process structure, highlights possible process intensification opportunities in terms of energy and equipment.
This paper, inspired by the success of adsorptive air separation at large scale (up to 250 tons/day), looks into the possibility of replacing cryogenic distillation with adsorptive separation, and thus improving the downstream processing of OCM. This results in a new process concept. For this purpose, a plug-flow model of a fixed-bed adsorber was developed and several separation schemes were investigated via simulation. Among them, the simultaneous separation of ethylene and carbon dioxide using zeolite 4A is found to be realizable. The results show that by switching from cryogenic distillation to adsorption, the separation cost can be significantly reduced.
A systematic approach for system identification is applied to experimental data of ethanol production from cellulose. Special attention is given to the identification of model parameters which can be reliably estimated from the available measurements. For this purpose, an identifiable parameter subset selection algorithm for nonlinear least-squares parameter estimation is used. The procedure determines the parameters whose effects on the predicted output (measurement) variables are unique and strong. The system is described by a generic process model for simultaneous saccharification and fermentation including three enzyme-catalyzed reactions. The process model is clearly over-parameterized. By applying the subset selection approach, the parameter space is reduced to a reasonable subset whose estimated parameters are still able to predict the experimental data accurately.
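The core idea of selecting an identifiable parameter subset from a sensitivity matrix can be sketched with a greedy column-pivoted Gram-Schmidt procedure. This is a hypothetical stand-in for the subset selection algorithm used in the paper; the function name and the collinearity example are illustrative.

```python
def select_identifiable(S, k):
    """Greedy column-pivoted Gram-Schmidt on a sensitivity matrix S
    (rows = measurements, columns = parameters): repeatedly pick the
    parameter whose sensitivity column has the largest norm after
    removing the components already explained by the selected columns.
    Parameters with weak or collinear sensitivities are left out."""
    m, n = len(S), len(S[0])
    cols = [[S[i][j] for i in range(m)] for j in range(n)]
    chosen = []
    for _ in range(k):
        norms = [sum(c * c for c in cols[j]) ** 0.5 if j not in chosen
                 else -1.0 for j in range(n)]
        p = max(range(n), key=lambda j: norms[j])
        chosen.append(p)
        q = [c / norms[p] for c in cols[p]]      # normalise pivot column
        for j in range(n):                       # deflate remaining columns
            if j not in chosen:
                proj = sum(q[i] * cols[j][i] for i in range(m))
                cols[j] = [cols[j][i] - proj * q[i] for i in range(m)]
    return chosen
```

For instance, if one parameter's sensitivity column duplicates another's (perfect collinearity) or is nearly zero, it is never selected first, which mirrors how over-parameterization is detected.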
In this work, the systematic integration of bio-refineries within oil refineries is considered. This is particularly relevant given the lack of adaptation of existing refineries to diminishing oil supply. Moreover, the integration of oil and bio-refineries can contribute substantially to the reduction of CO2 emissions. For instance, the biodiesel produced in bio-refineries could be integrated with conventional oil refinery processes to produce fuel, thus reducing the dependence on crude oil. This represents a suitable alternative for increasing profit margins while being increasingly environmentally friendly. The identified possible routes of integration are discussed in this contribution. For this purpose, the different proposed alternatives and their configurations were simulated and analysed. The developed models simulated key integrations, e.g. a gasification unit fed with pyrolysis oil, biodiesel, and refinery residue, before being combined into one system involving all three. Varying forms of synthesis for these three feeds were also considered, focusing on novel techniques as well as environmentally friendly options that make use of waste products from other processes. The simulations revealed a valuable gas stream rich in H2, with some CO2 and a slight excess of CO, resulting from the gasification unit. Further upgrading of these products was achieved by coupling the gasifier with a water-gas shift (WGS) unit. This allowed fine-tuning of the H2:CO ratio in the gas stream, which can be further processed to obtain liquid hydrocarbons via Fischer-Tropsch (FT) synthesis or, alternatively, clean hydrogen for fuel cell applications.
Ethylene is the world’s largest commodity chemical and a fundamental building-block molecule in the chemical industry. Oxidative coupling of methane (OCM) is considered a promising route to ethylene due to the potential of natural gas as a relatively economical feedstock. In a recent work, Godini et al. (2013) integrated this route with methane dry reforming (DRM) in a dual membrane reactor, allowing an improved thermal performance. In this work, we have explored a more ambitious integrated system by coupling the production of methane and carbon dioxide via coal gasification with the DRM-OCM unit. Briefly, our process utilises coal to generate value-added methane and ethylene. In addition, CO2 management is achieved through CO2 methanation and dry methane reforming. Potential mass and energy integration between the two systems is proposed, as well as the optimum conditions for synthetic natural gas production. The upstream gasification process is modelled to determine the influence of temperature, pressure, and feed composition on the methane yield. The results suggest that the key variables are temperature and hydrogen concentration, as both significantly affect the methane and CO2 levels in the linking stream. This study reports for the first time a linking stream between the two systems with a high methane concentration and the appropriate amount of CO2 for downstream processing.
The automated construction of physical laws from raw experimental measurements poses a great challenge in modern modelling and remains an open question. This work presents a novel generalized Mixed-Integer Nonlinear Programming (MINLP) approach, a rigorous theoretical formulation for identifying the model that best fits the given data. The proposal is based on a generic representation of analytical functions as binary evaluation trees, which are Directed Acyclic Graphs (DAGs) used to construct a superstructure out of which the optimal fitting model can be identified by solving the resulting (non-convex) MINLP problems. The trees are constructed such that their nodes comprise a linear combination of basic atomic functions, either arithmetic or unary, weighted by binary decision variables. Both single-input single-output (SISO) and multiple-input multiple-output systems are considered, as well as more complex models comprising differential equations or even described by series summations of algebraic terms. The aim and contribution of the methodology proposed in this paper is to present the most general theoretical formulation of how models are constructed for systems quantification via analytical function forms, irrespective of the source of the data. The constructed formulation is shown to contain all formulations presented thus far in the open literature, serving as a starting point either for direct fitting or for the derivation of simplified approaches.
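The superstructure idea, in which each tree node selects among atomic functions via discrete decisions, can be illustrated in miniature, with the MINLP solution replaced by brute-force enumeration over a depth-two tree. All names and the operator set are illustrative, not the paper's formulation.

```python
import itertools

# Each internal node of a small binary evaluation tree selects one
# arithmetic operator; enumerating the choices plays the role of the
# binary decision variables in the MINLP superstructure (toy version).
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def tree(x, op_root, op_left):
    # leaves are (x, x, x): the left subtree combines two leaves, the
    # root combines the left subtree's value with the third leaf
    return OPS[op_root](OPS[op_left](x, x), x)

def fit(data):
    """Enumerate the operator superstructure and return the choice
    minimising the squared fitting error (a brute-force stand-in for
    solving the non-convex MINLP)."""
    best = None
    for ops in itertools.product(OPS, repeat=2):
        err = sum((tree(x, *ops) - y) ** 2 for x, y in data)
        if best is None or err < best[0]:
            best = (err, ops)
    return best
```

On data generated by y = x*x + x, the enumeration recovers the tree (x*x) + x exactly; the full formulation additionally weights nodes with continuous coefficients and handles unary functions, multiple inputs and differential models.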
Integration of Ordinary Differential Equations (ODEs) plays a paramount role in the dynamic simulation of a wide spectrum of processes in Chemical Engineering. This paper presents an approach novel to our discipline, namely the Quantised State Integration (QSI) technique (also known as Quantised State Simulation, QSS), which was introduced in its raw form several decades ago in Electrical Engineering for the simulation of electrical and electronic circuits in dynamic operation. While traditional integration of ODEs takes time as the coordinating parameter and discretises it to allow the calculation of the evolution of the state variables, in QSS methods the states are discretised and time is calculated at the points where the states undergo state events (changes by an amount equal to the discretisation level for each of them); this effectively allows the decoupling of the state integration within accuracy tolerances. In the current work, we present significant theoretical and implementational extensions to the method, rendering it capable of handling large- to huge-scale applications involving stiff systems, state discontinuities (discrete events in hybrid systems), as well as the efficient calculation of sensitivity equations; all of these are aspects that have previously been impossible to incorporate in the QSS suite of techniques presented over the years. Overall, theoretical and preliminary computational demonstrations show it to be a very promising and powerful integration technique with strong potential for future evolution and contributions. A multitude of areas that can benefit from this technique are identified in the paper.
Synthesis gas (syngas), mainly composed of carbon monoxide (CO) and hydrogen (H2), is produced mostly through biomass gasification and methane reforming. In the last decade, the thermochemical route to produce ethanol and higher alcohols from syngas has been gaining ground as a possible route to synthetic fuels and additives. This kind of process presents a series of advantages, such as short reaction times, abundant and lower-priced feedstocks, the use of lignin, and the almost complete conversion of syngas, giving it the potential to exceed ethanol production by the fermentative route. Aiming to produce ethanol through the thermochemical route, a dedicated process (a small-scale plant with the capacity to process 100 kmol/h of syngas) was proposed for a first evaluation using the commercial simulator ASPEN Plus v7.3. Four different Rh-based catalysts were tested in the process (RhFe, RhLa, RhLaV, and RhLaFeV), seeking to take advantage of the characteristics of Rh-based catalysts, such as high ethanol selectivity and hydrocarbon production. The process design took into account the reactor selectivity and conversion. Through sensitivity analysis, the downstream process was configured in search of the best possible design of the separation steps, making it possible to obtain ethanol (>99 wt.%), methanol (>90 wt.%), Liquefied Petroleum Gas (LPG, a mixture of C2H6, C3H8 and C4H10, >99 wt.%) and pentane (>95 wt.%).
Biodiesel has turned out to be an integral part of the discussion of renewable energy sources and has diverse advantages in terms of its flexibility and applicability. Considering the characteristics of the transesterification reaction, a laboratory-scale system has been developed in this work. Waste Vegetable Oil (WVO), mainly sunflower oil, from local sources has been used, and the transesterification carried out using methanol in the presence of a sodium hydroxide catalyst. Characterisation of the biodiesel produced has been carried out using a number of different techniques, including rheology, calorimetry, and gas-liquid chromatography. The main factors affecting the biodiesel yield are temperature, catalyst, and the alcohol-to-triglyceride ratio. Thus, experimental work has been carried out to study the rate and yield of the reaction as a function of these factors. A model has also been developed to validate the experimental data; this should help in increasing the efficiency of these processes and reducing the energy input. Moreover, the novel use of ultrasound as a method of measuring the progression of the reaction is correlated with in-situ pH monitoring of the reaction process.
Coalescence enhancement of water droplets in oil emulsions is commonly considered for the separation of an aqueous phase dispersed in a dielectric oil phase with a considerably lower dielectric constant than that of the dispersed phase. The characteristics and geometry of the electrode system have a large impact on the performance of an electrostatic coalescer and are closely linked to the type of applied electric field and the emulsion used. Furthermore, the addition of chemicals and heating have also been shown to further enhance the electrocoalescence of water droplets. In this work, the coalescence of two water drops sinking in a dielectric oil phase under an applied high-voltage pulsed DC electric field is investigated, with particular regard to the effects of pressure and temperature on coalescence performance. The developed model should help to identify and verify electrocoalescence mechanisms, the dispersion flow direction with respect to the applied electric field, as well as the electric field configuration.
A novel approach for the systematic and hierarchical derivation of process models is presented. Model candidates for different unit phenomena are collected and rated on the basis of the model structure, origin and the modeler's belief. The process model is created as a superstructure containing the competing partial models. Thereby, it is possible to determine the best possible combination through optimization with respect to different objective functions. The systematic procedure has been implemented in the online web modeling platform MOSAIC. Based on the superstructure, optimization code for state-of-the-art optimization and simulation software can be created automatically. The new approach is demonstrated on two case studies, namely a process model for the hydroformylation of long-chain olefins and a model for the pressure drop in packed columns with foaming components.
Mathematical models are essential for chemical processes since they contribute to both the identification and manipulation of process mechanisms, e.g. in reaction systems, separation processes and process control. The methodologies of model-based experiment design aim at reducing the uncertainties of estimated model parameters, thus making the identification and use of these models possible. Up to now, sequential optimization approaches have been applied to solve the related extended optimal control problem. In this contribution, a substantial comparison between the sequential and the simultaneous optimization approach for optimal model-based experimental design is presented with respect to convergence behavior and computational load. Moreover, both approaches are compared regarding high-order and continuous control trajectories as well as restrictions imposed by the process. Process restrictions, especially regarding state variables and the continuity of control trajectories, are of tremendous importance for practical implementation, but have so far mostly been treated superficially or neglected entirely during the optimization steps. The results are discussed using a CSTR example.
The oxidative coupling of methane (OCM) is a promising alternative route to olefins that converts methane to higher hydrocarbons and opens up a new feedstock for the oil-based industry. However, due to the yield limitations of available catalysts and the high separation costs of conventional gas processing, the OCM process has not yet been applied industrially. Starting with process simulation and sensitivity studies, a flexible mini-plant was built in this research to demonstrate the technical feasibility of an efficient OCM process, validate the models and study long-term effects. By this means, a concurrent engineering approach was applied to the whole process while investigating each unit in parallel. Moreover, catalysts combined with several reactor concepts, such as the fluidized bed and membrane reactor, were investigated by CFD simulation, process simulation and experiments, in order to study catalyst lifetime, operating conditions and technical feasibility. Thus, the yield of the reaction section was improved from 16% to 18%. Furthermore, the separation part of the OCM process was energetically improved by an integrated downstream unit for CO2. Thus, an energetic improvement of more than 40% in comparison to a benchmark absorption-desorption based CO2 separation process was achieved. In addition, novel absorbents for CO2 separation were studied, starting with molecular simulation and proceeding to process simulation and experimental validation. The results of the integrated process development and optimization for the OCM will be presented, and an overview of the multi-scale and multilevel Process System Engineering (PSE) approach will be given for the case study.
An important aspect of model-based design and development, as well as of process monitoring and control, is the consideration of uncertain process parameters. One approach for the explicit consideration of such uncertainties is the formulation of chance-constrained optimization problems. In recent years, several different methods for the efficient solution of these problems have been presented. In this work, chance constraints are evaluated following the idea of the variable mapping approach. Because the efficiency of the original approach deteriorates with an increasing number of uncertain parameters, the probability integration has recently been extended to exploit sparse grids. In this work, additional techniques for improving the efficiency of the variable mapping approach are presented. Firstly, the solution of a subproblem, the so-called shooting task, is analyzed in detail and enhanced through an idea referred to here as result recycling. Secondly, possible extensions are presented which make use of second-order derivative information. The new methods are verified by application to an industrially validated process model of a vacuum distillation column for the separation of multicomponent fatty acids.
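The probability evaluation underlying a chance constraint can be illustrated with plain Monte-Carlo sampling. This is a sampling stand-in only: the variable mapping and sparse-grid integration discussed above are different, more efficient techniques, and the function names here are hypothetical.

```python
import random

def chance_constraint_ok(g, sample_theta, x, alpha, n=20000, seed=1):
    """Monte-Carlo estimate of whether P[g(x, theta) <= 0] >= alpha
    for a design x under uncertain parameters theta (illustrative
    sampling surrogate for the probability integration; a
    variable-mapping approach would instead map the uncertain
    parameters through the model to the constrained output)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if g(x, sample_theta(rng)) <= 0.0)
    return hits / n >= alpha
```

For example, with g(x, theta) = x + theta - 1 and theta ~ N(0, 0.1), a design x = 0.8 satisfies the 95% chance constraint while x = 1.0 does not, since the latter is feasible only about half the time.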
Due to the huge methane deposits worldwide and the great need of the chemical process industry for new alternatives for olefins production, especially ethylene as a starting raw material for numerous products, the direct conversion of methane to ethylene has attracted considerable interest. The main motivation for this new approach is to exploit the availability of unreacted methane in the exit flue gas of the OCM reactor, and thus to design an alternative process for methanol and formaldehyde production via OCM with co-generation of electricity, which can make the process economically attractive and suitable for industrial implementation. The total project investment, based on total equipment cost as well as variable and fixed operating costs, was developed from mass and energy balance information taken from Aspen® Process Economic Analyzer simulation results. The feasibility was evaluated in terms of energy savings, CO2-emission reductions and costs, in comparison to the separate production of methanol with conventional technology alone. Before starting the economic study of the OCM process, a preliminary analysis of possible plant locations was carried out. Natural gas is a commodity whose price varies strongly from one region to another. Moreover, not only the price of raw materials is affected by the location of the plant, but also the costs associated with production, namely steam, refrigeration, electricity, fuel, wages, etc., all of which strongly affect the profitability of a petrochemical project. Due to the low natural gas prices in Venezuela, which has the highest production potential in South America, and its strong ethylene sales to the European market, this geographical location has been chosen for the economic analysis of this project.
Kinetic data for the OCM reaction were taken from the experimental fluidized-bed reactor built in our facilities at TU Berlin, which show promising conversion, selectivity and yield values obtained while testing different catalysts developed at the Institute of Chemistry within the scope of the UNICAT project. This analysis suggests areas of research focus that might improve the profitability of natural gas conversion, and the results have also been used for the design of the pilot plant which is now operational at our department.
The oxidation of sulfur dioxide over vanadium pentoxide catalysts represents a basic step in the sulfuric acid production process. In conventional sulfuric acid plants the SO2 oxidation is the limiting step with respect to SO2 emissions. Because the SO2 oxidation is an equilibrium reaction, sulfuric acid plants always have SO2 emissions. In this work, a new process concept is presented, which uses the transient behaviour of the reaction in two reactors operating under unsteady conditions (Saturated Metal Phase, SMP, reactors). Besides several advantages that can drastically increase the efficiency of the whole sulfuric acid process, the SMP reactor is a key component for the efficient operation of a sulfuric acid plant, reducing the emissions down to zero while keeping the conditions required by the hydrogenation unit installed downstream. For this purpose, a mathematical model is used that describes the dynamic effects of the SO2 oxidation. The model has been experimentally verified in a miniplant operated with commercial catalyst pellets.
In this work, a two-dimensional model for a conventional packed-bed membrane reactor (CPBMR) is developed for the oxidative coupling of methane, which uses a nonselective porous membrane to continuously feed oxygen to the catalytic bed. The model incorporates radial diffusion and thermal conduction and assumes convective transport in the axial direction. In addition, two 10 cm long cooling segments for the CPBMR were implemented based on the idea of a fixed cooling temperature outside the reactor shell. The resulting model is discretized using two-dimensional orthogonal collocation on finite elements, combining Hermite polynomials for the radial and Lagrange polynomials for the axial coordinate. The simulation study shows that it is necessary to make all transport coefficients dependent on local temperatures and compositions. This leads to a simulation with roughly 130,000 variables, which is then used to generate initial points for the optimization of CPBMR stand-alone operation. In addition, inequality constraints and variable bounds are introduced to avoid potentially hazardous mixtures of methane and oxygen in both shell and tube, and to keep the temperatures below levels stressing the reactor materials (< 1,100 °C). Moreover, the membrane thickness, feed compositions, temperatures at the reactor inlet and for the cooling segments, diameters of tube and shell, and finally the amount of inert packing in the reactor are considered as decision variables. The optimization procedure uses IPOPT as a solver. Afterwards, the 2D model is integrated into a membrane reactor network (MRN) proposed by Godin (2010) and simulated. Finally, attempts are made to optimize its operation.
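To illustrate the collocation idea underlying the discretization, the following minimal sketch builds a Lagrange-polynomial differentiation matrix on a small set of collocation points and verifies it on a known profile. The points and the test profile are purely illustrative assumptions, not those used in the reactor model, where such matrices are combined for the radial and axial coordinates.

```python
# Minimal sketch of orthogonal collocation in one coordinate:
# D[i][j] = derivative of the j-th Lagrange basis polynomial at x[i],
# so that (D @ u) approximates du/dx at the collocation points.
pts = [0.0, 0.2113, 0.7887, 1.0]   # illustrative points (interior ~ Gauss)

def diff_matrix(x):
    n = len(x)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                # diagonal: L_i'(x_i) = sum over k != i of 1/(x_i - x_k)
                D[i][j] = sum(1.0 / (x[i] - x[k]) for k in range(n) if k != i)
            else:
                # off-diagonal: prod_{k != i,j}(x_i - x_k) / prod_{k != j}(x_j - x_k)
                num = 1.0
                for k in range(n):
                    if k != i and k != j:
                        num *= x[i] - x[k]
                den = 1.0
                for k in range(n):
                    if k != j:
                        den *= x[j] - x[k]
                D[i][j] = num / den
    return D

D = diff_matrix(pts)
u = [p**3 for p in pts]                                   # test profile u = x^3
du = [sum(D[i][j] * u[j] for j in range(4)) for i in range(4)]
print(all(abs(du[i] - 3 * pts[i]**2) < 1e-6 for i in range(4)))
```

With four points the interpolant is cubic, so the collocation derivative of x^3 is exact up to rounding; in the reactor model the same construction turns spatial derivatives into algebraic couplings between collocation points.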
The accurate and efficient evaluation of first- and higher-order derivative information of mathematical process models plays a major role in the field of Process Systems Engineering. Although it is well known that the chosen methods for derivative evaluation may have a major impact on solution efficiency, a detailed assessment of these methods is rarely made by the modeler. Since standard modeling tools and some solvers normally support only their own default methods for derivative evaluation, evaluating further methods can be tedious, requiring the connection of different tools. In this contribution, the implementation of a general method for generating derivative information from the documentation level is presented. Exploiting the code generation capability of the web-based modeling environment MOSAIC (Kuntsche et al. 2011), derivative information of model equations is generated either symbolically or by coupling the models with state-of-the-art automatic differentiation (AD) tools. This offers the modeler different methods of obtaining exact derivative values and opens the possibility of integrating the assessment of derivative evaluation within the modeling and simulation workflow.
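To make the distinction concrete, the following sketch implements forward-mode automatic differentiation via dual numbers and checks it against the symbolic derivative of a simple expression. This is an illustration of the AD principle only, not the MOSAIC interface or any of the coupled AD tools.

```python
# Forward-mode AD with dual numbers a + b*eps, eps**2 = 0:
# the "der" component propagates the exact derivative alongside the value.
import math

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule encoded in the derivative component
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def exp(x):
    # chain rule: d/dx exp(u) = exp(u) * u'
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

def f(x):
    return x * x + exp(x)        # example model equation f(x) = x^2 + exp(x)

x0 = 1.5
ad_deriv = f(Dual(x0, 1.0)).der  # seed dx/dx = 1
symbolic = 2 * x0 + math.exp(x0) # symbolic derivative f'(x) = 2x + exp(x)
print(abs(ad_deriv - symbolic) < 1e-12)  # both routes give the exact value
```

Unlike finite differences, both routes deliver derivatives exact to machine precision, which is the property the assessment in this contribution exploits.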
A new approach to optimal experimental design has been developed to support the work of chemists and process engineers in determining reaction kinetics of complex reaction networks. The methodology is applied to sub-networks of the hydroformylation of 1-dodecene with a Biphephos-modified rhodium catalyst in a DMF-decane thermomorphic solvent system (TMS). The isomerization and hydrogenation sub-networks are systematically analyzed with respect to parameter estimability. Their parameters are determined sequentially using model-based optimal experimental design via perturbations of temperature and synthesis gas pressure, and subsequently used to build up the full reaction network. The focus of this contribution is the parameter estimation procedure at the very early investigation stage, where model uncertainties are high. The sensitivities of estimable parameters are increased, while those of parameters carried over from previously estimated sub-networks, or structurally more difficult to determine, are suppressed. This leads to more reliable parameter estimates.
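The sensitivity-based selection of informative experiments can be sketched with a toy D-optimal design: for a hypothetical Arrhenius rate model, pick the pair of candidate temperatures whose parameter-sensitivity matrix maximises the determinant of the information matrix. All parameter values and candidate conditions below are invented for illustration and are unrelated to the hydroformylation system studied here.

```python
# Toy D-optimal experiment selection for r = k0 * exp(-Ea/(R*T)).
import math
from itertools import combinations

R, k0, Ea = 8.314, 1.0e5, 5.0e4          # hypothetical nominal parameters

def sens(T):
    """Sensitivities of the rate w.r.t. (k0, Ea) at temperature T."""
    e = math.exp(-Ea / (R * T))
    return (e, -k0 * e / (R * T))         # (dr/dk0, dr/dEa)

candidates = [300.0, 325.0, 350.0, 375.0, 400.0]   # candidate experiments, K

def d_criterion(Ts):
    J = [sens(T) for T in Ts]             # 2 experiments x 2 parameters
    a = sum(j[0] * j[0] for j in J)       # (J^T J)[0,0]
    b = sum(j[0] * j[1] for j in J)       # off-diagonal entry
    c = sum(j[1] * j[1] for j in J)       # (J^T J)[1,1]
    return a * c - b * b                  # det of the information matrix

best = max(combinations(candidates, 2), key=d_criterion)
print(best)   # here the high-temperature pair is most informative
```

For these (invented) numbers the magnitude of the Arrhenius sensitivities grows so fast with temperature that the two hottest candidates dominate the determinant; in the actual methodology the same criterion is evaluated on the full sub-network models.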
In this work, a new approach to model identification and PI controller tuning based on model-based experimental design is presented. In the proposed strategy, system identification relies on a closed-loop set-point response. For this purpose, experiments are first executed with a P-controller as an example, so that in this specific case only one design variable, the controller gain, is considered. The results of the system identification step are then used to calculate the controller settings. In order to validate the approach and demonstrate the benefits of the proposed strategy, different scenarios are simulated.
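The identification-then-tuning idea can be sketched as follows: simulate a closed-loop set-point step under a P-controller, read off the closed-loop gain and time constant, back out the open-loop first-order model, and compute PI settings from it. The "true" process, the first-order model structure, and the IMC-style tuning rule are all illustrative assumptions, not the method of this work.

```python
# Closed-loop identification sketch: first-order process under P control.
K_true, tau_true = 2.0, 8.0    # hypothetical "true" process G = K/(tau*s + 1)
Kc_exp = 1.5                   # P-controller gain used in the experiment

# simulate the closed-loop set-point step by explicit Euler integration
dt, r, y, t = 0.001, 1.0, 0.0, 0.0
ts, ys = [], []
while t < 60.0:
    u = Kc_exp * (r - y)                    # P-controller
    y += dt * (K_true * u - y) / tau_true   # first-order process dynamics
    t += dt
    ts.append(t); ys.append(y)

# closed-loop gain and time constant from the recorded response
y_ss = ys[-1]                                             # steady state
tau_cl = next(ti for ti, yi in zip(ts, ys) if yi >= 0.632 * y_ss)

# back out the open-loop model: y_ss = Kc*K/(1+Kc*K), tau_cl = tau/(1+Kc*K)
KcK = y_ss / (1.0 - y_ss)
K_id = KcK / Kc_exp
tau_id = tau_cl * (1.0 + KcK)

# IMC-style PI settings with a chosen closed-loop time constant lam
lam = tau_id / 2.0
Kc_pi, tau_i = tau_id / (K_id * lam), tau_id
print(round(K_id, 2), round(tau_id, 2))   # recovers roughly K = 2, tau = 8
```

The single experiment with one design variable (the P gain) is enough to recover both model parameters here because the closed-loop steady state and time constant each constrain one of them.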
This work presents a Non-Linear Programming (NLP) model developed to simultaneously optimize a crude oil distillation unit (CDU) system, together with several application cases run in a refinery. The model optimizes feedstock composition and operating conditions for a CDU system (ECOPETROL S.A.). It uses a metamodeling approach to represent the atmospheric distillation towers (ADT), while the vacuum distillation towers (VDT) are implemented assuming perfect separation (assay cuttings). The objective function is the economic profit. The CDU system basically consists of five industrial units and fourteen Colombian crude oils. Each metamodel uses as independent variables the crude oil flow rates, operating conditions, jet EBP, and diesel T95% from the ASTM D-86 distillation curve; its output variables are product flows, temperatures and qualities. The NLP model was implemented in GAMS and solves in around 60 s with the CONOPT solver. The results were successfully applied to a Colombian refinery for 3 consecutive weeks. The model was able to find the best use of the installed CDU equipment by preparing a crude oil charge of quasi-constant quality, regardless of the optimization time period. In each week, the optimal crude oil flow rates to each CDU (new scenarios implemented in the refinery) were evaluated in a refinery-wide simulator covering all downstream refining schemes in order to calculate the Refinery Gross Margin (RGM). In each analyzed case, the RGM obtained for the new crude oil feeds was better than in the case without optimization, with an economic benefit of up to 0.043 US$/bl, equivalent to US$ 3,870,000 per year. This shows the effectiveness of a CDU NLP model for short-term planning in the petroleum industry.
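The metamodeling idea can be reduced to a toy example: a fitted surrogate stands in for a rigorous distillation simulation inside an economic objective, which is then optimized over the feed rate. The quadratic surrogate, the prices, and the coarse grid search below are invented for illustration; the actual model is an NLP in GAMS solved with CONOPT.

```python
# Toy metamodel-based profit optimization for a single hypothetical unit.
def product_flow(feed):
    # hypothetical fitted quadratic surrogate: product flow vs. crude feed
    return -0.002 * feed**2 + 1.2 * feed

def profit(feed, p_product=0.9, p_crude=0.5):
    # economic objective: product revenue minus crude feed cost
    return p_product * product_flow(feed) - p_crude * feed

# coarse search over the feed bounds [0, 200] in steps of 0.1
best = max((f / 10 for f in range(0, 2001)), key=profit)
print(best)   # optimum near the stationary point of the concave profit
```

For this concave surrogate the optimum has a closed form (profit'(x) = 0 gives x ≈ 161.1), so the sketch mainly shows how the surrogate decouples the economic optimization from the rigorous tower simulation.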
In this work, a new approach to model identification based on model-based experimental design is presented. In the proposed strategy, system identification relies on a closed-loop set-point response. For this purpose, experiments are first executed with a P-controller as an example, so that in this specific case only one design variable, the controller gain, is considered. In order to validate the approach and demonstrate the benefits of the proposed strategy, different scenarios are simulated.
Operators in chemical plants are confronted with many different measured process variables and parameters. Although measurement precision has improved, individual measurements remain uncertain. This concerns measured quantities such as temperature or pressure, where the measured value is better understood as an expected value with the true value lying in a range around it, as well as process dynamics, which hold exactly only under certain circumstances. In process monitoring and control, the operator has to take such uncertainties into account: on the one hand, to avoid violating safety regulations; on the other hand, to avoid overly conservative control that gives away product or quality. Even though an experienced and skilled operator might handle single uncertain parameters and variables quite efficiently, assessing the combined effect of multiple uncertain parameters is difficult. To handle multiple uncertain parameters simultaneously in optimisation, the concept of chance-constrained optimisation has been developed and extended over recent years. In this work, we present chance-constrained optimisation techniques for process monitoring and control. They allow potential key performance indicators to be calculated from uncertain variables and parameters, which can support operators in decision making. One drawback of chance-constraint techniques, however, is the required computation time, since a significant number of individual calculations is needed. Algorithmic improvements were therefore required to meet the demands of online monitoring and control. The talk presents the application of the developed chance-constrained approach to uncertain parameters in process monitoring and control, gives insight into how the computing-time improvements were achieved, and shows results of a practical evaluation.
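The core mechanism of a chance constraint can be shown on the simplest possible case: a single linear constraint with Gaussian uncertainty, which has an exact deterministic equivalent via the inverse normal CDF. All numbers (model coefficients, noise level, temperature limit) are invented for illustration and do not come from the applications described above.

```python
# Single linear chance constraint under Gaussian uncertainty:
#   P(a*x + b + eps <= T_max) >= alpha,  eps ~ N(0, sigma^2)
# is equivalent to the deterministic back-off constraint
#   a*x + b + z_alpha * sigma <= T_max.
from statistics import NormalDist

a, b = 2.0, 300.0     # hypothetical linear temperature model T = a*x + b + eps
sigma = 5.0           # std. dev. of the model/measurement error eps
T_max = 350.0         # hard temperature limit
alpha = 0.95          # required probability of satisfying the constraint

z = NormalDist().inv_cdf(alpha)             # standard-normal quantile
x_nominal = (T_max - b) / a                 # limit ignoring uncertainty
x_robust = (T_max - b - z * sigma) / a      # limit with probabilistic back-off

print(x_nominal, x_robust)   # the robust setpoint backs off from the limit
```

The back-off z·σ makes the trade-off visible that operators otherwise handle by intuition: a higher required probability α pushes the operating point further from the constraint. With many correlated uncertain parameters no such closed form exists, which is where the computational effort discussed above arises.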