TuA23 - Formal Synthesis of Control Strategies for Dynamical Systems
Organizer: Calin Belta
Tuesday 13th, 10:00-12:00, Room Ironwood 5
Abstract: In formal verification, the goal is to check whether the executions of a system satisfy a rich property, usually expressed as a temporal logic formula.
While the formal verification problem has received a great deal of attention from the formal methods community over the past thirty years, the dual problem of formal synthesis, in which the goal is to synthesize or control a system from a temporal logic specification, has received much less attention until recently. This tutorial provides a self-contained exposition of formal synthesis of control strategies for some classes of dynamical systems. The tutorial has two parts. Central to the first part is the concept of a transition system, which is shown to be general enough to model a wide variety of dynamical systems. It is shown how abstractions can be constructed by using simulation and bisimulation relations. The control specifications are restricted to formulas of Linear Temporal Logic (LTL) and some fragments of LTL, which are introduced together with the corresponding automata and the automata games used to generate control strategies for finite transition systems. The second part of the tutorial shows how such control strategies can be adapted to continuous-state, continuous-time, and discrete-time dynamical systems. Several examples are provided throughout the tutorial.
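As a toy illustration of the finite-system synthesis idea in Part I (a sketch of ours, not code from the tutorial; the state and action names are hypothetical), a control strategy for a reachability specification ("eventually reach the target set") can be computed by a backward fixed point over a finite transition system, keeping the states from which some action forces all successors into the winning set:

```python
# Hypothetical sketch: synthesis of a control strategy on a finite transition
# system for a reachability specification, via a controllable-predecessor
# fixed point. post[(q, a)] is the set of possible successors (nondeterminism).

def synthesize_reach(states, actions, post, target):
    win = set(target)        # states already known to satisfy "eventually target"
    strategy = {}            # state -> action chosen by the controller
    changed = True
    while changed:
        changed = False
        for q in states:
            if q in win:
                continue
            for a in actions:
                succ = post.get((q, a), set())
                # action a wins at q if every possible successor is winning
                if succ and succ <= win:
                    win.add(q)
                    strategy[q] = a
                    changed = True
                    break
    return win, strategy

# Toy 4-state system: from q0, action u1 may lead to q1 or q2; u2 leads to q3.
post = {
    ("q0", "u1"): {"q1", "q2"},
    ("q0", "u2"): {"q3"},
    ("q1", "u1"): {"q3"},
    ("q2", "u2"): {"q3"},
}
win, strat = synthesize_reach(["q0", "q1", "q2", "q3"], ["u1", "u2"], post, {"q3"})
```

Full LTL specifications require the automata-game machinery covered in the tutorial; this fixed point handles only the reachability fragment.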
Session Structure: Calin Belta, Formal Synthesis of Control Strategies for Dynamical Systems, 120 min.
- Need for Formal Synthesis in Dynamical Systems, 5 min;
- Part I: Finite Systems
- Transition systems, quotients, and bisimulations, 20 min;
- Linear temporal logic and automata, 25 min;
- Synthesis of control strategies for finite systems, 30 min;
- Part II: Infinite Systems
- Synthesis of control strategies for continuous-time systems, 20 min;
- Synthesis of control strategies for discrete-time systems, 20 min.
TuB23 - Differential Privacy in Control and Network Systems
Organizer: Jorge Cortes
Tuesday 13th, 13:30-15:30, Room Ironwood 5
Abstract: The adoption of new automation capabilities and smart technologies relies heavily on public data as well as private information. Data has to be collected from (possibly remote) sources, fused to estimate the current state of the target system, and then used for effective decision making and control. Not surprisingly, much attention has been focused on the design, improvement, and refinement of these technologies under the assumption that aggregators and decision makers receive and manipulate data in a trustworthy fashion. However, this assumption turns out to be restrictive in a wide variety of applications, especially in scenarios where multiple decision makers are distributed over a network. The reason is that, very often, the data collected from individual sources contains sensitive information while the communication channels and/or other involved entities are not perfectly trustworthy. For instance, in smart grids, optimized power forecasting, generation, and distribution require that network moderators and dispatch units have access to detailed usage time series of consumers. This reveals information about consumers’ presence and absence patterns, the appliances they use, and so on. Similarly, smart transportation services rely heavily on traffic monitoring and forecasting, which in turn require huge collections of individual positions and intentions. These issues have motivated the recent body of research on privacy-aware algorithmic solutions for a variety of tasks. This tutorial session focuses on the concept of differential privacy and its applications to control and network systems. This notion was first introduced by Dwork and coauthors in 2006 for privacy preservation of databases of individual records subject to public queries.
In the original framework, the response to each query from the database is randomized by adding carefully chosen noise to it, so that individual records cannot be reliably inferred from the randomized response. In other words, due to the added noise, the presence or absence of any individual record has very little effect on the aggregate output, thus keeping the record “differentially” private. This notion has made its way into systems and controls, where researchers have used it to design privacy-aware algorithms for diverse objectives. The goal of this tutorial session is to introduce the main concepts and recent advances in this emerging area to interested researchers. The session is composed of four talks introducing the audience to the basic concepts of differential privacy and its applications to estimation and filtering, consensus, and distributed optimization.
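A minimal sketch of the noise-adding mechanism described above (our illustration, not material from the talks): a counting query has sensitivity 1, since adding or removing one record changes the count by at most 1, so Laplace noise with scale sensitivity/epsilon makes the response epsilon-differentially private:

```python
# Illustrative Laplace mechanism for a counting query (sensitivity 1).
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon, rng):
    true_count = sum(1 for r in records if predicate(r))
    # Sensitivity of a counting query is 1, so the noise scale is 1/epsilon.
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
records = [32, 41, 57, 63, 70]          # toy "ages" database
noisy = private_count(records, lambda age: age >= 50, epsilon=0.5, rng=rng)
```

Smaller epsilon means stronger privacy but noisier answers; the true count here is 3, and the released value fluctuates around it.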
- George Pappas, Foundations of Differential Privacy, 30 min;
- Jerome Le Ny, Differential Privacy Filtering, 30 min;
- Geir Dullerud, Differential Privacy, Entropy, and Consensus, 30 min;
- Jorge Cortes, Differential Privacy and Distributed Optimization, 30 min.
TuC23 - An overview of compressed sensing
Organizer: Mathukumalli Vidyasagar
Tuesday 13th, 16:00-18:00, Room Ironwood 5
Abstract: Compressed sensing refers to the reconstruction of "large but sparse" objects from a limited number of measurements. Some examples of such "large but sparse" objects include: (i) high-dimensional vectors with very few nonzero components, (ii) large matrices of low rank and/or with very few nonzero elements, (iii) temporal signals whose discrete Fourier transforms contain very few nonzero components, and (iv) two-dimensional images whose wavelet transforms contain very few nonzero components. In addition to these traditional applications from signal and image processing, there is also some scope for applying this approach to problems of system identification; however, such applications are far less numerous.
The topic of compressed sensing is of relatively recent origin, having "taken off" in the mid-2000s. The first set of methods for compressed sensing was based on randomly constructed measurement matrices. This theory is relatively mature but, as it turns out, computationally rather expensive. During the past few years, the focus has shifted from probabilistic construction methods to deterministic ones. Moreover, interesting connections have been made between deterministic construction methods and coding theory based on algebraic geometry over finite fields. This is currently an area of active research. Compressed sensing algorithms based on such approaches may be easier to implement and more computationally efficient than those that use probabilistic methods to generate measurement matrices.
In this tutorial, the organizer will present a comprehensive overview of the current state of the theory, along with some illustrative applications to signal and image reconstruction, matrix completion, and system identification.
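To make the recovery problem concrete (a sketch under our own assumptions, not an algorithm endorsed by the tutorial), orthogonal matching pursuit is one simple greedy recovery method: repeatedly pick the column of the measurement matrix most correlated with the residual, then re-fit the selected columns by least squares:

```python
# Illustrative sparse recovery via orthogonal matching pursuit (OMP).
import numpy as np

def omp(A, b, sparsity):
    support = []
    residual = b.copy()
    for _ in range(sparsity):
        # Column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected columns by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))        # 20 measurements of a 50-dim vector
x_true = np.zeros(50)
x_true[[4, 17, 33]] = [1.5, -2.0, 0.7]   # 3-sparse ground truth
x_hat = omp(A, A @ x_true, sparsity=3)
```

With enough random Gaussian measurements relative to the sparsity level, such greedy (or l1-based) methods recover the sparse vector exactly with high probability; the tutorial covers the conditions under which this holds.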
Session Structure: Mathukumalli Vidyasagar, An overview of compressed sensing, 120 min.
- Compressed sensing: Problem formulations
- Null space-based conditions
- Matrix recovery as an extension of vector recovery
- Construction of measurement matrices
- Optimization algorithms
- Some extensions: Group sparsity, one-bit compressed sensing, nonconvex methods
- Applications from signal recovery, matrix completion, image processing and system identification
WeA23 - Teaching control theory in high school
Organizer: John C. Doyle (道耀, Caltech, email@example.com, www.cds.caltech.edu/~doyle)
Wednesday 14th, 10:00-12:00, Room Juniper 4
Abstract: The goal of this tutorial session is to describe recent progress in control theory that can potentially make the core concepts in the subject vastly more accessible to a wider audience. We will also describe progress in tailoring experiments using robotics and computer games to augment the theory. Pieces of this material have been tested in classes at Caltech and Berkeley, but we will organize and extend existing results, and also challenge the CDC community to contribute to a radical rethinking of control theory education. We will also review experience in teaching this kind of material to physicians in workshops during summer 2016 and at Caltech in a new fall 2016 curriculum. This also continues and extends a trend promoted by recent textbooks by Murray and co-authors. We will have only limited experience with high school students this year, but will report on what we have, including presentations by students. However, the title is not entirely facetious, as one goal is to produce versions of this material that are accessible to high school students, and a surprising amount can be done with proofs requiring only high school algebra.
One driver of our effort to broaden our educational reach is the explosion of the role of dynamics, feedback, robustness, scalability, and evolution and design in complex networks of all sorts. While theories from physics, biology, information theory, computing, statistics, etc. provide crucial components, none provides the needed framework that is both rigorous and unifying. While we have had success in publishing control theory applied to cell biology, medical physiology, internet technology, and multiscale physics, including several papers in domain journals, the tools have not been broadly adopted, partly because they are difficult for domain experts to learn. (Existing domain theorists are also hostile to new theory that appears too mathematical for them to retool for.)
The good news is that concerted efforts to simplify the basic mathematics (some made very recently during the IMA year in control at the University of Minnesota) have led to some major “breakthroughs” in greatly simplifying introductory control. The basic concepts of feedback and dynamics, including the challenges of unstable poles and zeros, delays, actuator saturation, sparse sensing and actuation, and quantization of communications in control loops, can now all be introduced with proofs requiring mostly high school math. Making everything scalable to large systems requires the addition of linear algebra.
None of this is all that new, particularly in retrospect, but taken together it creates a radically different curriculum that appears to be much more broadly accessible.
Many organizational details remain to be worked out, but the plan is that this course would be “flipped”: all lectures would be videos, and the “core” would consist of the key concepts with accessible math and proofs. The intent is to accommodate an extremely broad range of students, from those with minimal background to specialists planning to do PhDs in control theory or related subjects, but all would learn a common language. The core videos would be augmented with more mathematical depth for the latter and more breadth and examples for the former.
- John C. Doyle, Teaching control theory in high school, 120 min.
WeB23 - Distributed Learning
Organizers: Mohammad Amin Rahimian, Ali Jadbabaie
Wednesday 14th, 13:30-15:30, Room Juniper 4
Abstract: This session aims to promote discussion among researchers who actively investigate distributed estimation, learning, and decision making over networks. The proposed contributions cover a diverse range of techniques from distributed computation, online optimization, and game theory, as well as Bayesian and non-Bayesian learning. The papers demonstrate the wide applicability of these tools for addressing the challenges of estimation, learning, and decision making over networks, where information and computational resources are scarce and scattered. The session comprises three contributions that cover the social learning literature, online learning, convergence rates and finite-time analysis, foundations of non-Bayesian social learning and heuristic updating, decentralized hypothesis testing, and distributed optimization, as well as their algorithmic and computational aspects.
- M. Amin Rahimian and Ali Jadbabaie, Group Decision Making and Social Learning, 40 min;
- Angelia Nedich, Decentralized Hypothesis Testing, 40 min;
- Alex Olshevsky, Fast Algorithms for Distributed Optimization and Hypothesis Testing, 40 min.
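As a toy illustration of the non-Bayesian (heuristic) updating mentioned in the abstract (our sketch, not material from the talks), in DeGroot-style social learning each agent repeatedly replaces its belief with a weighted average of its neighbors' beliefs, x(t+1) = W x(t), and a strongly connected network reaches consensus:

```python
# Illustrative DeGroot-style belief averaging on a 3-agent path graph.
import numpy as np

W = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])  # row-stochastic trust weights: each row sums to 1

x = np.array([0.0, 0.5, 1.0])   # initial beliefs of the three agents
for _ in range(100):
    x = W @ x                    # repeated averaging drives beliefs together
```

The consensus value is a convex combination of the initial beliefs, weighted by the stationary distribution of W; convergence rate is governed by the second-largest eigenvalue modulus of W.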
WeC23 - Information acquisition, controlled sensing, and sequential refinement of belief
Organizer: Tara Javidi
Wednesday 14th, 16:00-18:00, Room Juniper 4
Abstract: Information acquisition problems form a class of stochastic decision problems in which a decision maker is faced with utilizing a stochastically varying (and uncontrollable) environment. However, the state of the environment, due to the limited nature of the measurements in terms of dimension/cost/accuracy, is only partially known to the decision maker. The decision maker, by carefully controlling the sequence of actions with uncertain outcomes and noisy measurements, dynamically refines the belief about the stochastically varying parameters of interest. A generalization of hidden Markov models and a special case of partially observable Markov models, information acquisition is both an informational problem as well as a control problem.
In this tutorial, we start with active hypothesis testing as a special case of information acquisition. This problem has been studied in various areas of applied mathematics, statistics, and engineering. The first part of the tutorial discusses the historical developments due to Wald, Blackwell, DeGroot, and Chernoff. We then catalog recent advances in two special cases of the problem: Noisy Search and Bayesian Active Learning. In this context, we connect DeGroot's information utility framework with the Shannon-theoretic concept of uncertainty reduction to introduce a symmetrized divergence measure: the Extrinsic Jensen-Shannon (EJS) divergence. We use this divergence to provide (tight) lower and upper bounds on the optimal performance and, as a corollary, establish the significant (asymptotic) performance gain of sequential and adaptive search strategies over non-adaptive ones.
In the last part of the tutorial, we return to the general information acquisition problem and show its equivalence to generalized tracking of a time-varying state (hypothesis) with partial observation. In a Bayesian and Markov setting, this allows for a dynamic programming formulation of the optimal policy. In addition to this structural result, we consider a few open problems and a variety of applications, such as the news vendor problem with asymmetric observation and cost. In the context of the news vendor problem, for the first time, a class of policies with good competitive ratios is introduced and analyzed.
Session Structure: Tara Javidi, Information acquisition, controlled sensing, and sequential refinement of belief, 120 min.
- Active Hypothesis Testing and Stochastic Control (33+10 min)
- Active Hypothesis Testing and Information Theory (33+10 min)
- Noisy Generalized Tracking and Belief Dynamics (34 min)
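The sequential refinement of belief at the heart of this tutorial can be sketched in its simplest form (our illustration, with assumed likelihoods, not the tutorial's code): with two hypotheses and noisy binary measurements, Bayes' rule is applied after each observation until the belief crosses a confidence threshold, as in Wald's sequential test:

```python
# Illustrative sequential belief refinement for binary hypothesis testing.
import random

def update(belief, obs, p1, p0):
    """Posterior P(H1 | obs) from prior `belief`, with likelihoods
    P(obs=1 | H1) = p1 and P(obs=1 | H0) = p0."""
    l1 = p1 if obs else 1 - p1
    l0 = p0 if obs else 1 - p0
    return belief * l1 / (belief * l1 + (1 - belief) * l0)

rng = random.Random(0)
p1, p0 = 0.8, 0.3               # observations are biased toward 1 under H1
belief, steps = 0.5, 0          # uniform prior over {H0, H1}
while 0.01 < belief < 0.99:     # refine until 99% confident either way
    obs = 1 if rng.random() < p1 else 0   # nature generates data under H1
    belief = update(belief, obs, p1, p0)
    steps += 1
```

Active hypothesis testing adds a control layer on top of this update: the decision maker also chooses which measurement (action) to take at each step so as to refine the belief as quickly as possible.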