Debugging Cyber-Physical System (CPS) models can be extremely complex. Indeed, detecting a failure alone is insufficient to correct a faulty model. Faults can propagate in time and in space, producing observable misbehaviours in locations completely different from that of the fault. Understanding the reason for an observed failure is typically a challenging and laborious task left to the experience and domain knowledge of the designers. In this work, we propose CPSDebug, a novel approach that combines testing, specification mining, and failure analysis to automatically explain failures in Simulink/Stateflow models. We evaluate CPSDebug on two case studies, involving two usage scenarios and several classes of faults, demonstrating the potential value of our approach.
This is a colloquium course. Its aim is to provide an introduction to mathematical modeling, analysis, system identification, and control of biochemical reaction systems and of rational systems arising in engineering. The course will be of interest to students in bioinformatics, biomathematics, biology, mathematics, and engineering at the master's level.
Stochastic models such as Continuous-Time Markov Chains (CTMCs) and Stochastic Hybrid Automata (SHA) are powerful formalisms for modelling and reasoning about the dynamics of biological systems, due to their ability to capture the stochasticity inherent in biological processes. A classical question in formal modelling with clear relevance to biology is the model checking problem, i.e., calculating the probability that a behaviour, expressed for instance as a temporal logic formula, occurs in a given stochastic process. However, one may be interested not only in satisfiability, but also in the capacity of a system to maintain a particular emergent behaviour unaffected by perturbations, caused e.g. by extrinsic noise or by small changes in the model parameters. To address this issue, researchers from the verification community have recently proposed several notions of robustness for temporal logic, providing suitable definitions of the distance between a trajectory of a (deterministic) dynamical system and the boundary of the set of trajectories satisfying the property of interest. In this talk, we present an extension of this notion of robustness to stochastic systems, showing that it naturally leads to a distribution of robustness degrees. We then show how to exploit this notion to address the system design problem, where the goal is to optimise some control parameters of a stochastic model in order to maximise the robustness of the desired specifications.
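The shift from a single robustness value to a distribution can be illustrated with a minimal sketch: sample many trajectories of a toy stochastic process (an assumed, purely illustrative dynamics, not a model from the talk) and compute, for each, the robustness of a simple "always above a threshold" property, i.e. the minimum over time of the signal's distance to the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_steps=100, dt=0.1):
    # Toy mean-reverting stochastic trajectory (illustrative assumption only).
    x = np.empty(n_steps)
    x[0] = 1.0
    for t in range(1, n_steps):
        x[t] = x[t - 1] + dt * (1.0 - x[t - 1]) + 0.2 * np.sqrt(dt) * rng.normal()
    return x

def robustness_globally_above(x, threshold):
    # Robustness of "globally, x > threshold": min over time of x(t) - threshold.
    # Positive = satisfied, negative = violated; magnitude = distance to the boundary.
    return np.min(x - threshold)

# In a stochastic system, each sampled trajectory yields its own robustness
# degree, so the natural object of study is the *distribution* of these values.
samples = np.array([robustness_globally_above(simulate(), 0.5) for _ in range(1000)])
print("mean robustness:", samples.mean())
print("P(property satisfied):", np.mean(samples > 0))
```

Design optimisation then amounts to tuning model parameters so that this distribution shifts towards higher robustness, rather than merely increasing a satisfaction probability.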
Light, being the fundamental energy source sustaining life on Earth, is the external factor with the strongest impact on photosynthetic microorganisms. Moreover, in biotechnological applications such as the production of energy carriers and commodities in photobioreactors, light supply within the reactor volume is one of the main factors limiting the efficiency of the system. Thus, predicting light availability and its spectral distribution is of fundamental importance for the productivity of photo-biological processes. Additionally, most photosynthesis-related processes, such as Non-Photochemical Quenching or State Transitions in cyanobacteria, are directly regulated by light quality and quantity. Finally, modelling the effect of light on the photosynthetic electron chain and integrating it into an overall model is the ultimate goal for describing real industrial scenarios. In summary, light penetration in photobioreactor cultures, optics-related mechanisms, and the effect of light on the electron chain will be described.
Boolean networks (and more general logic models) are useful frameworks for studying signal transduction across multiple pathways. Logical models can be learned from a prior-knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account time-series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we exhibit a necessary condition that a Boolean network's dynamics must satisfy to be consistent with a discretized time-series trace. Based on this condition, we use a declarative programming approach (Answer Set Programming) to compute an over-approximation of the set of Boolean networks that best fit the experimental data. Combined with model-checking approaches, we end up with a global learning algorithm, which we compare to learning approaches based on static data.
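The core idea of checking a candidate network against a discretized trace can be sketched in a few lines. The example below uses a hypothetical 3-gene network and strict synchronous updates as a simplifying assumption; the necessary condition used in the paper is more permissive than exact step-by-step matching.

```python
# Minimal sketch (assumed example network, not from the paper): test whether a
# Boolean network's synchronous dynamics reproduces a binarized time-series trace.

def step(state, functions):
    # Synchronous semantics: all variables are updated simultaneously.
    return tuple(f(state) for f in functions)

# Hypothetical regulatory logic for genes (A, B, C):
#   A <- not C,  B <- A,  C <- A and B
functions = [
    lambda s: int(not s[2]),
    lambda s: s[0],
    lambda s: int(s[0] and s[1]),
]

def consistent(trace, functions):
    # Each observed state must be mapped by the update to the next observed state;
    # networks failing this check can be discarded from the candidate set.
    return all(step(trace[i], functions) == trace[i + 1]
               for i in range(len(trace) - 1))

trace = [(1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 1)]
print(consistent(trace, functions))
```

In the actual learning procedure this kind of consistency check is encoded declaratively in Answer Set Programming, so that the solver enumerates all candidate networks satisfying the condition instead of testing them one by one.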
Computational modelling of the biological machinery in a living cell (e.g., gene regulatory networks or signalling pathways) often leads to dynamical models whose state spaces surpass those of any human-designed systems by orders of magnitude. Therefore, a profound understanding of biological processes calls for the development of new methods and tools that provide means for formal analysis and reasoning about such large systems. In the first lecture, we consider Probabilistic Boolean Networks (PBNs), a computational framework for the modelling of biological regulatory networks. One of the key aspects in the analysis of PBN models is the comprehension of their long-run (steady-state) behaviour. In particular, we focus on the computation of steady-state probabilities, which are crucial, e.g., for quantifying the impact of genes on other genes or for identifying the network elements with the highest impact. Obtaining the steady-state distribution for large PBNs poses a significant challenge. We discuss the existing solutions and present our approach to this problem. In the second lecture, we demonstrate our results on a case study of a large PBN model of apoptosis in hepatocytes. Finally, we consider future perspectives.
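Since a PBN induces a Markov chain over its network states, the steady-state computation can be illustrated on a toy chain. The 4-state transition matrix below is an assumed example (not from the lectures), and power iteration stands in for the specialised methods needed at PBN scale, where the state space is far too large for such a direct approach.

```python
import numpy as np

# Assumed toy Markov chain over 4 network states; each row sums to 1.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.3, 0.5, 0.0],
    [0.0, 0.1, 0.6, 0.3],
    [0.1, 0.0, 0.4, 0.5],
])

def steady_state(P, tol=1e-12, max_iter=100_000):
    # Power iteration: push a distribution through the chain until it stops
    # changing (assumes the chain is ergodic, as this toy example is).
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.max(np.abs(nxt - pi)) < tol:
            return nxt
        pi = nxt
    return pi

pi = steady_state(P)
print("steady-state distribution:", pi)
```

The challenge discussed in the lectures is precisely that a PBN over n genes yields a chain with up to 2^n states, so the matrix P can never be built explicitly and statistical or structure-exploiting methods are required instead.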
We consider the problem of synthesizing safe and robust values of the timing parameters of cardiac pacemaker models so that a quantitative objective, such as the pacemaker's energy consumption or its cardiac output (a haemodynamic indicator of the human heart), is optimised over a finite path. Indeed, safety is of paramount importance in the design of medical devices, and the patient's physiological properties have to be maintained in a way that is robust to parameter perturbations.
In the first part of the lecture, we introduce the formal modelling framework and the synthesis algorithms. We consider models given as parametric networks of timed I/O automata with data, which extend timed I/O automata with priorities, real variables and real-valued functions, and specifications given as Counting Metric Temporal Logic (CMTL) formulas. We formulate parameter synthesis as a bilevel optimisation problem, where the quantitative objective (the outer problem) is optimised over the solution space obtained from an inner problem that yields the maximal robustness for any parameter of the model. We develop an SMT-based method for solving the inner problem exactly through a discrete encoding, and combine it with evolutionary algorithms and simulations to solve the outer optimisation task.
In the second part, we discuss the application of this method to the synthesis of pacemaker parameters. We provide a new multi-component heart model, which can reproduce patient-specific heart rhythm and a range of heart diseases, and consider a parametric dual chamber pacemaker model. We apply our approach to find the values of multiple timing parameters of the pacemaker for different heart diseases. Finally, we show how our synthesis method can be used to derive personalised heart models from time-series data.
Evolutionary algorithms (EAs) are stochastic heuristic algorithms used to find an optimal solution in the search space of the problem being solved. EAs are inspired by the fundamental principles of Darwinian evolution and Neo-Darwinism (phylogenesis). EAs have been successfully applied to complex problem solving in artificial intelligence, engineering, and optimization. In this talk, the fundamental principles of EAs will be introduced. We focus on Genetic Programming as a tool for identifying the hidden model of the problem being solved, and on accelerating Genetic Programming with coevolutionary algorithms, which also provide the ability to adapt to dynamically changing conditions during the evolutionary process. Examples related to the design and optimization of biological models will be provided.
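The fundamental EA loop of selection, crossover, and mutation can be sketched on a deliberately simple benchmark. Everything below (the "OneMax" fitness, the population sizes, the mutation rate) is an illustrative assumption, not a model from the talk.

```python
import random

random.seed(42)

# Minimal genetic-algorithm sketch: maximise the number of 1s in a bit string.
LENGTH, POP, GENS, MUT = 20, 30, 60, 0.02

def fitness(ind):
    # "OneMax" toy fitness: count of 1-bits.
    return sum(ind)

def tournament(pop):
    # Selection: keep the fitter of two randomly drawn individuals.
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2):
    # One-point crossover: splice a prefix of one parent onto a suffix of the other.
    cut = random.randrange(1, LENGTH)
    return p1[:cut] + p2[cut:]

def mutate(ind):
    # Bit-flip mutation with a small per-gene probability.
    return [1 - g if random.random() < MUT else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]
best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```

In Genetic Programming, the same loop operates on tree-shaped programs instead of bit strings, and the coevolutionary variants discussed in the talk additionally evolve the fitness evaluation itself to speed up the search.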
Process algebras are formalisms that describe the concurrent behaviour of systems. In recent years, they have been applied to the modelling of biological systems. Bio-PEPA is a quantitative process algebra used for this type of modelling and it is based on the stochastic process algebra PEPA which was developed for modelling the performance of human-created systems such as computer networks. In this presentation, the syntax and semantics of Bio-PEPA will be introduced, as well as the different analysis techniques that are available for Bio-PEPA models. A model of protein trafficking within the mammalian cell and related results will be described, and if time permits, semantic equivalences for Bio-PEPA will be briefly discussed. This is joint work with Jane Hillston.
Although (bio)chemical reaction networks have been under intensive study for decades, a general theoretical formalism capturing both the algebraic and the energetic properties of the underlying chemistry is still missing. The structural change of molecules during chemical reactions can conveniently be modeled as graph rewriting, if the participating molecules are treated as edge- and vertex-labeled graphs. The graph grammar approach captures the algebraic structure of chemical transformations nicely, i.e., the fact that reacting molecules yield novel molecules, while retaining the possibility of assigning physico-chemical properties to the molecules via the arsenal of chemoinformatics methods. With a constructive method to expand arbitrary chemical transformation spaces at hand, the focus of interest shifts to finding subnetworks with particular properties, e.g. exhibiting autocatalytic behavior, in these spaces. The problem of finding such subnetworks can be formulated as a network flow problem on the underlying hypergraph, which can be solved using integer linear programming techniques.
Cyanobacteria are the only known prokaryotes with the capability to perform oxygenic photosynthesis and are attracting increasing attention as suitable host organisms for the production of organic products and biofuels. Due to their highly versatile metabolism and their ability to directly convert solar energy into hydrocarbons, biotechnological applications of cyanobacteria are at the forefront of current global challenges, such as the supply of energy from non-fossil resources and the efficient sequestration of atmospheric CO2. The domestication of phototrophic microorganisms remains one of the grand challenges of the 21st century with the potential to transform agriculture on a global scale. One step towards such a domestication of cyanobacteria is an integrated experimental and computational approach to understand the functional properties of phototrophic growth. The focus of the talk is to describe the construction of computational models of cyanobacteria, from molecules to pathways, to the entire organism, and the analysis of such models using kinetic and constraint-based methods. In particular, we are interested in intermediate methods that aim to bridge between large-scale stoichiometric representations and detailed kinetic models of selected cellular processes.
The task of the Cyanonet consortium is to investigate the photosynthesis of a cyanobacterium and to develop in silico models. A phenomenological model will be described that allows direct system identification from measurement data. The development of a larger biochemical reaction model is planned, and its modular structure will be described, together with the overall approach and the resulting problems.