CAST10 Archives

May 2022


Date: Tue, 24 May 2022 02:30:31 -0400
Pre-conference workshops on Process Data Analytics and Reinforcement Learning will be offered at AdCONIP 2022. 

Workshop 1: Process Data Analytics and Network or Flowsheet Reconstruction
Presented by Profs. Sirish Shah (UAlberta), Shankar Narasimhan (IIT Madras) and Arun K. Tangirala (IIT Madras)

- Overview of the broad analytics area, with emphasis on its use in the process industry: basic definitions and an introduction to supervised and unsupervised learning (simple regression, classification, and clustering); data visualization methods in the temporal as well as the spectral domains.
- Multivariate methods for data analysis: principal component analysis (PCA) / singular value decomposition (SVD) and their variants for steady-state model identification and reconstruction of conservation networks.
- Alarm data analysis: detection and removal of nuisance alarms; root-cause analysis of alarms and alarm floods.
- Causal discovery and network reconstruction: causality concepts and definitions; methods for detecting cause-effect links and reconstructing graphical/network models from data.
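As a loose illustration of the PCA/SVD topic listed above (a minimal sketch on synthetic data, not material from the workshop itself), the leading singular vectors of a centered data matrix give the principal components, and the squared singular values give the variance each component explains:

```python
import numpy as np

# Hypothetical synthetic data: 100 samples of 3 variables, where the third
# variable is an exact linear combination of the first two (rank 2).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:, 2] = 2.0 * X[:, 0] - X[:, 1]

Xc = X - X.mean(axis=0)                       # center before decomposing
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = s**2 / np.sum(s**2)               # variance fraction per component
scores = Xc @ Vt[:2].T                        # project onto 2 leading components
```

Because the centered matrix has rank 2, the first two components capture essentially all of the variance; in steady-state model identification the near-zero singular directions are read as linear constraints among the variables.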

Workshop 2: Making reinforcement learning a practical technology for industrial control
Presented by Prof. Philip Loewen (UBC), Dr. Thomas Badgwell (Collaborative Systems Integration), Prof. Jay Lee (KAIST), Prof. Biao Huang (UAlberta), Dr. Panagiotis Petsagkourakis (Illumina), Prof. Antonio del Rio Chanona (Imperial College London), Prof. Mario Zanon (IMT) and Prof. Sebastien Gros (NTNU)

Reinforcement learning (RL) is an emerging technology in process systems engineering (PSE) [1,2]. The objective in RL is to generate an optimal “policy” in a stochastic environment [3]. This general formulation makes RL appealing for both control and operational decision-making tasks, notably without requiring a system model [2]. Despite the enthusiasm surrounding RL, there are also reasons to be skeptical of its viability. For example, RL does not have strong stability or constraint satisfaction guarantees, and it is notoriously data-hungry. Recent work at the intersection of RL and PSE strives to mitigate these issues and ultimately make RL more reliable, scalable, and interpretable [4–7]. This workshop aims to engage academics and industrial practitioners in both the machine learning and controls communities with a lively discussion on the challenges and opportunities surrounding real-world RL.
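To make the "optimal policy without a system model" idea concrete, a minimal tabular Q-learning sketch on a hypothetical 5-state chain MDP (illustrative only; not drawn from the workshop material) learns to reach a rewarding terminal state purely from sampled transitions:

```python
import numpy as np

# Toy chain MDP: states 0..4, action 0 moves left, action 1 moves right;
# reward 1 on reaching the terminal state 4, 0 otherwise.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1           # step size, discount, exploration
rng = np.random.default_rng(1)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: bootstrap on the greedy value of the next state
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

policy = np.argmax(Q, axis=1)               # greedy policy from learned Q
```

The agent never sees the transition model, only sampled (state, action, reward, next state) tuples, yet the greedy policy it recovers moves right in every non-terminal state; the stability, constraint-satisfaction, and data-efficiency caveats raised above concern exactly this kind of purely data-driven learning at industrial scale.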

More details on the workshops, including registration information, can be found at: and

Prof. Jie Bao
Chair, IPC AdCONIP 2022