Goals

Our research is guided by two goals: (1) performing real-time uncertainty quantification and control, and (2) using high-fidelity system models and large data sets to enable automated systems. Many existing approaches to these problems are either too narrow in application scope or limited by the curse of dimensionality: their computational expense grows exponentially with the number of degrees of freedom. Algorithmic breakthroughs are needed to build high-performance, reliable autonomous robotic systems that can effectively explore, operate in, and learn about complex environments. Our research aims to enable

  1. Optimality guarantees for systems with high-dimensional and nonlinear dynamics
  2. Increased confidence and expansion of the role of simulation and high-fidelity modeling for autonomy
  3. Incorporation of uncertainty in dynamic environments into the decision making process of autonomous agents

These breakthroughs lie at the intersection of (i) computational science and engineering and (ii) autonomy.


Current projects

Below we describe a few of the projects currently ongoing in the lab.

Multifidelity uncertainty quantification

Leveraging multiple low-fidelity simulation models to accelerate the propagation of uncertainty through a high-fidelity model has emerged as one of the prime enabling technologies for uncertainty quantification (UQ) in application areas where complex, large-scale, high-fidelity physical models and data carry prohibitive computational costs.

Predictive models for these application areas typically require fine resolution of the physical state space and the incorporation of a large number of additional degrees of freedom describing design variables, uncertain parameters, and structural parameters. Furthermore, comprehensive analysis algorithms for uncertainty quantification, optimization, and design can easily require millions of model evaluations to fully explore all of the degrees of freedom and to obtain highly resolved simulations. Because many of the most important simulation models require high-performance computing for even a single evaluation at a fixed set of parameters, and the output of a large-scale simulation may require up to a terabyte of storage, performing these analyses with the highest-fidelity models is currently either impractical or requires significant assumptions.

In this research we study new modeling paradigms for fusing information from multiple sources. The approaches we take are mainly statistical and cover topics including control variates, importance sampling, and Bayesian networks.
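
As a concrete illustration of the control-variate idea, the following minimal sketch estimates the mean of a high-fidelity output by correcting a small high-fidelity sample average with many cheap low-fidelity evaluations. The model functions f_hi and f_lo are toy placeholders standing in for expensive and cheap simulations; this is a generic two-fidelity estimator, not the specific algorithms developed in the lab.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder models: f_hi is the "expensive" high-fidelity output,
# f_lo is a cheap, correlated low-fidelity surrogate.
def f_hi(x):
    return np.sin(x.sum(axis=1)) + 0.1 * (x ** 2).sum(axis=1)

def f_lo(x):
    return np.sin(x.sum(axis=1))  # ignores the quadratic correction

d = 5          # number of uncertain parameters
N = 100        # affordable number of high-fidelity evaluations
M = 10_000     # much larger number of low-fidelity evaluations

x_N = rng.uniform(-1, 1, size=(N, d))     # shared samples
x_M = rng.uniform(-1, 1, size=(M, d))     # extra low-fidelity samples

y_hi, y_lo = f_hi(x_N), f_lo(x_N)
y_lo_extra = f_lo(x_M)

# Control-variate weight from the sample covariance on the shared points.
alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)

# Two-fidelity estimate of E[f_hi]: correct the high-fidelity mean with the
# discrepancy between the large and small low-fidelity sample averages.
mf_estimate = y_hi.mean() + alpha * (y_lo_extra.mean() - y_lo.mean())
mc_estimate = y_hi.mean()  # plain Monte Carlo with the same N

print(f"MC (N={N}): {mc_estimate:.4f}   multifidelity: {mf_estimate:.4f}")
```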



Capability-aware multi-agent assignment

We are seeking to enable a group of autonomous agents to jointly accomplish a challenging dynamic task. Most existing methods tackle this problem by greatly simplifying agent dynamics and performing a nearest-neighbor assignment. Our approach instead develops multi-agent assignment methods that allow general, heterogeneous sets of agents to optimally accomplish a task by leveraging their individual capabilities. The main questions we address are:

  1. How can a group of agents be distributed to accomplish a task?
  2. How does the assignment process leverage knowledge of the agent capabilities while retaining optimality?

To answer these, we are developing tools that leverage optimal transport theory, yielding assignment methods that require up to 50% less energy than current state-of-the-art approaches (a generic sketch of transport-based assignment follows below).
Koray G. Kachar, Alex A. Gorodetsky: Dynamic multi-agent assignment via discrete optimal transport. Forthcoming.
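
The assignment computation itself can be posed as a discrete optimal transport problem between agents and tasks. The sketch below is a generic illustration, not the algorithm of the forthcoming paper: it builds a capability-weighted cost matrix (an assumed modeling choice), solves the entropically regularized transport problem with standard Sinkhorn iterations, and reads off a hard assignment. Agent positions, task locations, capability weights, and the regularization level are all placeholders.

```python
import numpy as np

def sinkhorn(cost, a, b, reg=0.05, n_iters=500):
    """Entropic-regularized optimal transport plan between marginals a and b."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(1)
n_agents = 6
agents = rng.uniform(0, 10, size=(n_agents, 2))   # agent positions (placeholder)
tasks = rng.uniform(0, 10, size=(n_agents, 2))    # task locations (placeholder)

# Cost: squared distance scaled by a per-agent "capability" factor, so more
# capable agents pay less to service a distant task (illustrative choice).
capability = rng.uniform(0.5, 1.5, size=n_agents)
cost = ((agents[:, None, :] - tasks[None, :, :]) ** 2).sum(-1) / capability[:, None]
cost = cost / cost.max()                          # scale for numerical stability

a = np.full(n_agents, 1 / n_agents)               # uniform marginal over agents
b = np.full(n_agents, 1 / n_agents)               # uniform marginal over tasks
plan = sinkhorn(cost, a, b)

# Read a hard assignment off the (soft) transport plan.
print("agent -> task:", plan.argmax(axis=1))
```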

Bayesian inference for low-rank tensors

Useful machine learning (ML) models must balance interpretability, computational expense, expressivity, and prediction uncertainty. For example, shallow ML models such as linear regression or the perceptron have been studied for decades, are well understood and efficiently computed, and their uncertainty can be quantified via linear inverse problems. However, they often lack the expressiveness to deal with complex nonlinear phenomena exhibited by many applications. More expressive models use deep networks; however, these are also more data-hungry and lack robustness to noise.

Posterior distribution over tensor elements

In this project, we seek to quantify uncertainty in ML models for a class of modeling formats that is both expressive and simple to analyze: low-multilinear-rank models. Like neural networks, these models introduce nonlinearities, but they facilitate significantly simpler analysis than general black-box neural networks because they exploit an interpretable type of structure, separability. Furthermore, they are a foundational numerical analysis technique that can be embedded within neural networks to reduce the number of unknowns and improve scalability.

Our aim is to embed uncertainty quantification into emerging supervised learning approaches based on low-rank tensor decompositions, which have historically been used for unsupervised learning. These approaches are attractive for widely varying applications because they are interpretable, scalable, and highly expressive. However, their predictions have not yet leveraged uncertainty quantification to improve reliability. In this work we seek to answer the question: how can uncertainty in training and prediction with low-rank tensor decompositions be quantified in the context of supervised learning to enhance reliability?
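
One reason low-rank formats are amenable to Bayesian treatment is that, with all but one factor held fixed, a separable (CP) model is linear in the remaining factor, so its conditional posterior under Gaussian priors and noise is Gaussian in closed form. The sketch below exploits this for a simple Gibbs sampler over the factors of a rank-2 CP model with polynomial univariate features; the basis, rank, priors, and synthetic data are illustrative assumptions, not the project's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

d, p, R = 3, 4, 2              # input dimension, basis size per dim, CP rank
noise_var, prior_var = 0.01, 1.0

def phi(t):
    """Univariate polynomial features (placeholder basis)."""
    return np.stack([t ** k for k in range(p)], axis=-1)       # (n, p)

# Synthetic training data from a separable test function.
n = 200
X = rng.uniform(-1, 1, size=(n, d))
y = np.prod(np.sin(np.pi * X), axis=1) + 0.1 * rng.standard_normal(n)

A = [0.1 * rng.standard_normal((p, R)) for _ in range(d)]       # CP factors
G = [phi(X[:, k]) @ A[k] for k in range(d)]                     # factor outputs

samples = []
for sweep in range(200):                       # Gibbs sweeps
    for k in range(d):
        # With all other factors fixed the model is LINEAR in A[k]:
        # f(x_i) = sum_r (phi(x_i[k]) @ A[k][:, r]) * C[i, r]
        C = np.prod([G[j] for j in range(d) if j != k], axis=0)   # (n, R)
        Phi_k = phi(X[:, k])                                       # (n, p)
        # Design matrix for vec(A[k]) (row-major over (p, R)).
        D = (Phi_k[:, :, None] * C[:, None, :]).reshape(n, p * R)
        # Closed-form Gaussian conditional (Bayesian linear regression).
        prec = D.T @ D / noise_var + np.eye(p * R) / prior_var
        cov = np.linalg.inv(prec)
        cov = 0.5 * (cov + cov.T)              # symmetrize for sampling
        mean = cov @ (D.T @ y) / noise_var
        A[k] = rng.multivariate_normal(mean, cov).reshape(p, R)
        G[k] = Phi_k @ A[k]
    if sweep >= 100:                           # keep post-burn-in samples
        samples.append([a.copy() for a in A])

print(f"collected {len(samples)} posterior samples of the CP factors")
```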


Funding sources


Previous projects

A foundational technique for enabling UQ and control of high-fidelity systems is approximating the computationally intensive aspects of a problem with simpler, cheaper representations. Examples include

  • Approximating the likelihood to accelerate Markov chain Monte Carlo sampling for Bayesian inference (a generic sketch follows this list)
  • Approximate dynamic programming for enabling the solution of large-scale dynamic programming problems arising from Markov decision processes
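
To make the first idea concrete, the sketch below runs random-walk Metropolis where every likelihood evaluation calls a cheap surrogate of an expensive forward model. Both the "expensive" model and its polynomial surrogate here are toy placeholders, not the surrogates developed in our work.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder "expensive" forward model and a cheap surrogate of it.
def forward_expensive(theta):
    return np.sin(3 * theta) + theta ** 2          # stand-in for a PDE solve

def forward_surrogate(theta):
    # cheap approximation (here a Taylor-style polynomial of the stand-in)
    return 3 * theta - 4.5 * theta ** 3 + theta ** 2

y_obs, sigma = 0.8, 0.1

def log_post(theta, forward):
    misfit = (y_obs - forward(theta)) ** 2 / (2 * sigma ** 2)
    return -misfit - 0.5 * theta ** 2              # standard normal prior

# Random-walk Metropolis using the cheap surrogate in place of the
# expensive likelihood evaluation.
theta, chain = 0.0, []
for _ in range(20_000):
    prop = theta + 0.3 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop, forward_surrogate) - log_post(theta, forward_surrogate):
        theta = prop
    chain.append(theta)

print("posterior mean (surrogate-accelerated):", np.mean(chain[5000:]))
```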
Low-rank tensor decompositions

Tensor decompositions, high-dimensional analogues of the matrix SVD, are a promising technique for breaking the curse of dimensionality associated with storing high-dimensional data. For instance, a seven-dimensional array with one hundred points in each dimension contains 10^14 (one hundred trillion) entries, whereas a low-rank representation can produce an accurate approximation with roughly 70,000 entries if certain structure exists. We have developed continuous extensions to low-rank tensor decompositions that enable low-rank approximation and computation with high-dimensional functions. The framework enables adaptive, efficient, and flexible algorithms that scale polynomially with dimensionality. For example, we have performed adaptive integration of functions with up to 600 dimensions and achieved seven orders of magnitude of compression of cost functionals in stochastic optimal control.

Alex Gorodetsky, John Jakeman: Gradient-based Optimization for Regression in the Functional Tensor-Train Format. In: Journal of Computational Physics, 374, pp. 1219–1238, 2018.
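
The core compression step can be illustrated with the classical TT-SVD algorithm, which sweeps through the dimensions and performs truncated SVDs on successive unfoldings. The sketch below compresses a separable (hence exactly low-rank) six-dimensional test tensor; the tolerance, grid, and test function are arbitrary choices, and it operates on a full discrete tensor rather than the continuous, function-level framework described above.

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Compress a full tensor into tensor-train cores via sequential SVDs."""
    dims = tensor.shape
    cores, C, r_prev = [], tensor.reshape(dims[0], -1), 1
    for n_k in dims[:-1]:
        C = C.reshape(r_prev * n_k, -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(S > tol * S[0])))          # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, n_k, r))
        C = S[:r, None] * Vt[:r]
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_eval(cores, idx):
    """Evaluate one entry of the compressed tensor from its cores."""
    v = np.ones((1, 1))
    for core, i in zip(cores, idx):
        v = v @ core[:, i, :]
    return v[0, 0]

# A separable (hence exactly rank-1) six-dimensional test tensor.
grids = [np.linspace(0, 1, 10) for _ in range(6)]
mesh = np.meshgrid(*grids, indexing="ij")
full = np.prod([np.sin(g + 1) for g in mesh], axis=0)     # 10^6 entries

cores = tt_svd(full)
storage = sum(c.size for c in cores)
print("full entries:", full.size, " TT entries:", storage)
print("entry check:", full[1, 2, 3, 4, 5, 6], tt_eval(cores, (1, 2, 3, 4, 5, 6)))
```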
Experimental design
Our optimal design strategy (left) compared to others

A prevalent problem in surrogate modeling is how to allocate resources for analyzing high-fidelity models under a limited computational budget. To enable control and UQ for such models, experimental design strategies must effectively explore the parameter space so that accurate surrogates can be built. Our work contributes to experimental design for Gaussian process regression by developing an optimization algorithm that chooses experiments to minimize the average predicted uncertainty of the surrogate. We showed that the resulting approximation properties are superior to those of other common design choices, especially for problems with complex input parameter geometries.

Alex Gorodetsky, Youssef Marzouk: Mercer Kernels and Integrated Variance Experimental Design: Connections Between Gaussian Process Regression and Polynomial Approximation. In: SIAM/ASA Journal on Uncertainty Quantification, 4 (1), pp. 796-828, 2016, ISSN: 2166-2525.
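
The design criterion can be illustrated with a greedy integrated-variance selection for a Gaussian process: at each step, pick the candidate experiment that most reduces the average posterior variance over a set of integration nodes. The kernel, lengthscale, candidate set, and greedy (rather than continuous-optimization) strategy below are simplifying assumptions for illustration, not the algorithm of the paper.

```python
import numpy as np

def sq_exp_kernel(A, B, lengthscale=0.3):
    """Squared-exponential kernel matrix between point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def avg_posterior_variance(design, quad_pts, noise=1e-8):
    """Average GP predictive variance over quadrature points for a design."""
    K = sq_exp_kernel(design, design) + noise * np.eye(len(design))
    Kxq = sq_exp_kernel(design, quad_pts)
    prior_var = 1.0                                   # k(x, x) for this kernel
    reduction = np.einsum("iq,iq->q", Kxq, np.linalg.solve(K, Kxq))
    return float(np.mean(prior_var - reduction))

rng = np.random.default_rng(4)
candidates = rng.uniform(-1, 1, size=(200, 2))        # candidate experiments
quad_pts = rng.uniform(-1, 1, size=(2000, 2))         # integration (MC) nodes

# Greedy integrated-variance design: add the candidate that most reduces
# the average predictive variance of the GP surrogate.
design_idx = []
for _ in range(10):
    best = min(
        (i for i in range(len(candidates)) if i not in design_idx),
        key=lambda i: avg_posterior_variance(candidates[design_idx + [i]], quad_pts),
    )
    design_idx.append(best)

print("selected design points:\n", candidates[design_idx])
```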
Markov decision processes and stochastic optimal control
Time lapse image showing the quadcopter passing through the window.

Motion planning and control problems are embedded and essential in almost all robotics applications. These problems are often formulated as stochastic optimal control problems and solved using dynamic programming algorithms. Unfortunately, most existing algorithms that guarantee convergence to optimal solutions suffer from the curse of dimensionality.

We have proposed compressed versions of dynamic programming algorithms, such as value iteration and policy iteration, that alleviate the curse of dimensionality in problems that exhibit low-rank structure. The run time of the new algorithms scales polynomially with the dimension and the rank of the value function. The algorithms have been applied to challenging simulated and experimental problems. For instance, we developed a controller for an experimental quadcopter using a nonlinear, non-holonomic, non-control-affine dynamical system model. The compressed controller requires roughly 1 MB of storage; an equivalent lookup table would have required around 20 TB of memory. We embedded the controller on a quadcopter to demonstrate maneuvering through a small window, as shown in the time-lapse image above. The low-rank dynamic programming algorithms are widely applicable, and we have also applied them to syntactically co-safe linear temporal logic specifications and to differential games.

John Irvin Alora, Alex Gorodetsky, Sertac Karaman, Youssef Marzouk, Nathan Lowry: Automated synthesis of low-rank control systems from sc-LTL specifications using tensor-train decompositions. In: 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 1131-1138, Las Vegas, NV, USA, 2016, ISBN: 978-1-5090-1837-6.
Ezra Tal, Alex Gorodetsky, Sertac Karaman: Continuous Tensor Train-Based Dynamic Programming for High-Dimensional Zero-Sum Differential Games. In: 2018 American Control Conference (ACC), pp. 6086-6093, IEEE, Milwaukee, WI, 2018, ISSN: 2378-5861.
Alex Gorodetsky, Sertac Karaman, Youssef Marzouk: Efficient High-Dimensional Stochastic Optimal Motion Control using Tensor-Train Decomposition. In: Proceedings of Robotics: Science and Systems, Rome, Italy, 2015.
Alex Gorodetsky, Sertac Karaman, Youssef Marzouk: High-Dimensional Stochastic Optimal Control using Continuous Tensor Decompositions. In: International Journal of Robotics Research, 37 (2-3), pp. 340-377, 2018.
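
For reference, the recursion these compressed algorithms accelerate is ordinary value iteration, sketched below for a toy one-dimensional stochastic shortest-path problem. The sketch stores the value function as a dense lookup table; the contribution described above is precisely to replace that table with a low-rank (tensor-train) representation, which is not shown here. The costs, slip probability, and grid are illustrative.

```python
import numpy as np

# A tiny grid-world MDP used only to illustrate the value-iteration recursion
# V(s) <- min_a E[ cost(s, a) + gamma * V(s') ].
n = 20                                   # 1-D grid of states
goal, gamma = n - 1, 0.95
actions = [-1, 0, 1]                     # move left, stay, move right
p_slip = 0.1                             # chance the action has no effect

V = np.zeros(n)
for _ in range(500):                     # value-iteration sweeps
    V_new = np.empty_like(V)
    for s in range(n):
        if s == goal:
            V_new[s] = 0.0
            continue
        q_values = []
        for a in actions:
            s_move = int(np.clip(s + a, 0, n - 1))
            cost = 1.0 + 0.5 * abs(a)    # time penalty plus control effort
            q = cost + gamma * ((1 - p_slip) * V[s_move] + p_slip * V[s])
            q_values.append(q)
        V_new[s] = min(q_values)
    delta = np.max(np.abs(V_new - V))
    V = V_new
    if delta < 1e-8:                     # stop once the values have converged
        break

print("cost-to-go near the start:", V[:3])
```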
Low-rank data assimilation
Filtering of Lorenz 1996 System

We seek to extend Bayesian state estimation techniques to dynamical systems with larger state dimensions. This task is important to application areas including autonomy, weather prediction, ocean monitoring, vehicle tracking, and many others. The problem is computationally challenging because it requires performing Bayesian inference repeatedly, for some applications in real time, and because it requires propagating a probability distribution through nonlinear dynamics. Analytic solutions for practical applications often do not exist, and both of these tasks are enormously difficult.

We have developed an algorithm that extends integration-based Gaussian filtering frameworks for nonlinear dynamical systems to problems with tens of dimensions. Historically, this framework has only been applied to dynamical systems with a handful of dimensions. Utilizing low-rank functional approximations, we created a robust and accurate Gaussian filtering algorithm that scales quadratically with dimension and polynomially with rank. Future applications will explore its use in real-time systems.

Alex Gorodetsky, Sertac Karaman, Youssef Marzouk: Low-rank tensor integration for Gaussian filtering of continuous time nonlinear systems. In: 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pp. 2789-2794, Melbourne, VIC, Australia, 2017, ISBN: 978-1-5090-2873-3.
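
The flavor of integration-based Gaussian filtering can be seen in a standard discrete-time cubature (sigma-point) filter, sketched below for a toy two-dimensional system. The dynamics, measurement model, and noise levels are placeholders, and the Gaussian integrals are approximated with ordinary cubature points rather than the low-rank tensor integration developed in the paper.

```python
import numpy as np

def cubature_points(mean, cov):
    """Spherical-radial cubature points for a Gaussian (equal weights 1/2n)."""
    n = mean.size
    L = np.linalg.cholesky(cov)
    offsets = np.sqrt(n) * np.hstack([L, -L])            # (n, 2n)
    return mean[:, None] + offsets                       # columns are points

def gaussian_filter_step(m, P, z, f, h, Q, R):
    """One predict/update cycle of a cubature (sigma-point) Gaussian filter."""
    # Predict: propagate cubature points through the dynamics f.
    X = cubature_points(m, P)
    Xp = np.array([f(x) for x in X.T]).T
    m_pred = Xp.mean(axis=1)
    P_pred = (Xp - m_pred[:, None]) @ (Xp - m_pred[:, None]).T / X.shape[1] + Q
    # Update: propagate predicted points through the measurement model h.
    Xu = cubature_points(m_pred, P_pred)
    Z = np.array([h(x) for x in Xu.T]).T
    z_pred = Z.mean(axis=1)
    S = (Z - z_pred[:, None]) @ (Z - z_pred[:, None]).T / Xu.shape[1] + R
    C = (Xu - m_pred[:, None]) @ (Z - z_pred[:, None]).T / Xu.shape[1]
    K = C @ np.linalg.inv(S)
    return m_pred + K @ (z - z_pred), P_pred - K @ S @ K.T

# Toy 2-D nonlinear system (placeholder dynamics and measurement).
dt = 0.1
f = lambda x: x + dt * np.array([x[1], -np.sin(x[0])])   # pendulum-like step
h = lambda x: np.array([x[0]])                           # observe angle only
Q, R = 1e-4 * np.eye(2), 1e-2 * np.eye(1)

m, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(5)
x_true = np.array([1.0, 0.0])
for _ in range(50):
    x_true = f(x_true) + rng.multivariate_normal(np.zeros(2), Q)
    z = h(x_true) + rng.multivariate_normal(np.zeros(1), R)
    m, P = gaussian_filter_step(m, P, z, f, h, Q, R)

print("true state:", x_true, " filtered mean:", m)
```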
System identification
Eigenvalues of the identified system

One common problem when dealing with autonomous systems is understanding the system model. Often, instead of a specified system model, there exists a large amount of system-response data arranged in the form of a Hankel matrix. The goal of system identification is to use this experimental or simulated data to compute a reduced-order model for use within design, optimization, and control. For systems with many inputs, many outputs, or long time series of system-response data, established methods that factorize the Hankel matrix, such as the singular value decomposition (SVD), are prohibitively expensive. We have developed algorithms that reduce the complexity of traditional algorithms from cubic to quadratic with respect to the Hankel matrix size. This reduction is realized by replacing the SVD with a CUR decomposition that directly seeks a low-rank approximation. The resulting algorithm yields results almost indistinguishable from an SVD-based approach, but with a 25x speedup.

Boris Kramer, Alex Gorodetsky: System identification via CUR-factored Hankel approximation. In: SIAM Journal on Scientific Computing, 40 (2), pp. A848–A866, 2018, ISSN: 1095-7197.
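
The step the CUR factorization replaces is the Hankel-matrix factorization inside the classical eigensystem realization algorithm (ERA). The sketch below realizes a small system from impulse-response data using the usual SVD-based ERA; the true system, Hankel dimensions, and model order are illustrative, and the CUR-based acceleration itself is not shown.

```python
import numpy as np

# Ground-truth discrete-time SISO system used to generate impulse-response data.
A_true = np.array([[0.9, 0.2], [-0.2, 0.8]])
B_true = np.array([[1.0], [0.5]])
C_true = np.array([[1.0, -1.0]])

markov = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item()
          for k in range(60)]

# Hankel matrices built from the Markov (impulse-response) parameters.
rows, cols = 20, 20
H0 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
H1 = np.array([[markov[i + j + 2] for j in range(cols)] for i in range(rows)])

# Classical ERA step: factor the Hankel matrix (here with an SVD; the
# CUR-factored variant replaces exactly this step) and realize (A, B, C).
U, S, Vt = np.linalg.svd(H0, full_matrices=False)
r = 2                                            # chosen model order
Sr = np.sqrt(S[:r])
Obs = U[:, :r] * Sr                              # observability factor
Con = Sr[:, None] * Vt[:r]                       # controllability factor
A_id = np.linalg.pinv(Obs) @ H1 @ np.linalg.pinv(Con)
B_id = Con[:, :1]
C_id = Obs[:1, :]

print("true eigenvalues:      ", np.sort_complex(np.linalg.eigvals(A_true)))
print("identified eigenvalues:", np.sort_complex(np.linalg.eigvals(A_id)))
```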