My research is guided by the desire to (1) perform real-time uncertainty quantification (UQ) and control and (2) use high-fidelity system models and large data sets to enable automated systems. Many approaches to these problems are either too narrow in application scope or limited by the curse of dimensionality: their computational expense grows exponentially with the number of degrees of freedom. Algorithmic breakthroughs are needed to enable high-performance, reliable autonomous robotic systems that can effectively explore, operate in, and learn about complex environments. Our research aims to enable
- Optimality guarantees for systems with high-dimensional and nonlinear dynamics
- Increased confidence in, and an expanded role for, simulation and high-fidelity modeling for autonomy
- Incorporation of uncertainty in dynamic environments into the decision-making process of autonomous agents
These breakthroughs lie at the intersection of (i) computational science and engineering and (ii) autonomy.
Compression for high-fidelity models
A foundational technique that enables UQ and control for high-fidelity systems is approximating the computationally intensive aspects of a problem with cheaper, simpler representations. Examples include
- Approximating the likelihood to accelerate Markov chain Monte Carlo sampling for Bayesian inference
- Approximate dynamic programming to enable the solution of large-scale problems arising from Markov decision processes
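To illustrate the first idea, here is a minimal sketch (the quadratic model and all names are hypothetical stand-ins) in which an expensive log-likelihood is replaced by a cheap polynomial surrogate inside a random-walk Metropolis sampler, so the costly model is never called during sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_loglike(theta):
    """Stand-in for a costly forward-model log-likelihood (hypothetical)."""
    return -0.5 * (theta - 1.0) ** 2

# Build a cheap surrogate: fit a quadratic to a handful of expensive evaluations.
train = np.linspace(-3.0, 3.0, 7)
coeffs = np.polyfit(train, [expensive_loglike(t) for t in train], deg=2)
surrogate_loglike = np.poly1d(coeffs)

def metropolis(loglike, n_steps=5000, step=1.0):
    """Random-walk Metropolis targeting the density proportional to exp(loglike)."""
    samples = np.empty(n_steps)
    x = 0.0
    lx = loglike(x)
    for i in range(n_steps):
        y = x + step * rng.standard_normal()
        ly = loglike(y)
        if np.log(rng.uniform()) < ly - lx:   # accept/reject
            x, lx = y, ly
        samples[i] = x
    return samples

# Every proposal is scored by the surrogate, never by the expensive model.
chain = metropolis(surrogate_loglike)
```

In practice the surrogate must be validated or corrected (e.g., delayed-acceptance schemes) so that surrogate error does not bias the posterior.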
Low-rank tensor decompositions
Tensor decompositions, high-dimensional analogues of the matrix SVD, are a promising technique for breaking the curse of dimensionality associated with storing high-dimensional data. For instance, a seven-dimensional array with one hundred points in each dimension contains 100^7 = 10^14 entries in total, whereas a low-rank representation can provide an accurate approximation with roughly 70,000 parameters if certain structure exists. We have developed continuous extensions of low-rank tensor decompositions that enable low-rank approximation and computation with high-dimensional functions. The framework enables adaptive, efficient, and flexible algorithms whose cost scales polynomially with dimension. For example, we obtain polynomial scaling for adaptive integration of functions with up to 600 dimensions, and seven orders of magnitude of compression of cost functionals in stochastic optimal control.
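As a minimal illustration of the storage savings (this sketch uses the standard discrete TT-SVD on a small array, not our continuous functional framework), the following compresses a separable four-dimensional array into tensor-train cores:

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Compress a full array into tensor-train (TT) cores via sequential SVDs."""
    shape = tensor.shape
    cores, r_prev = [], 1
    C = tensor.reshape(r_prev * shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))        # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = (s[:r, None] * Vt[:r, :]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full array."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=1)
    return out.reshape([c.shape[1] for c in cores])

# A separable (rank-1) four-dimensional array: 160,000 entries...
g = np.exp(-np.linspace(0.0, 1.0, 20))
T = np.einsum('i,j,k,l->ijkl', g, g, g, g)
cores = tt_svd(T)
n_params = sum(c.size for c in cores)                  # ...stored in 80 numbers
err = np.linalg.norm(tt_full(cores) - T) / np.linalg.norm(T)
```

The separable example is the best case (all TT ranks equal one); the storage of the TT format grows as d * n * r^2 for a d-dimensional array with n points per dimension and ranks r, which is the polynomial scaling referenced above.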
A prevalent problem in surrogate modeling is how to allocate resources for analyzing high-fidelity models under a limited computational budget. To enable control and UQ for such models, experimental design strategies must effectively explore the parameter space to build accurate surrogates. Our work contributes to experimental design for Gaussian process regression by developing an optimization algorithm that chooses experiments to minimize the average predictive uncertainty of the surrogate. We showed that the resulting approximation properties are superior to those of other common design choices, especially for problems with complex input parameter geometries.
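A minimal sketch of this style of design criterion, assuming a unit-variance squared-exponential kernel and a simple greedy search over candidate points (a stand-in for the full optimization algorithm):

```python
import numpy as np

def rbf(A, B, ell=0.3):
    """Squared-exponential kernel with unit signal variance."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def avg_posterior_var(X, grid, jitter=1e-8):
    """Average GP predictive variance over `grid` for design points X."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(grid, X)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return float(var.mean())

def greedy_design(candidates, grid, n_pts):
    """Greedily add the candidate that most reduces the average variance."""
    chosen = []
    for _ in range(n_pts):
        remaining = [i for i in range(len(candidates)) if i not in chosen]
        best = min(remaining,
                   key=lambda i: avg_posterior_var(candidates[chosen + [i]], grid))
        chosen.append(best)
    return chosen

candidates = np.linspace(0.0, 1.0, 21)[:, None]   # possible experiments
grid = np.linspace(0.0, 1.0, 101)[:, None]        # where uncertainty is averaged
picked = greedy_design(candidates, grid, 5)       # spreads points over [0, 1]
```

Note the criterion depends only on input locations, not observed outputs, so the whole design can be planned before any expensive model run.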
Decision making under uncertainty
Stochastic optimal control and Markov decision processes
Motion planning and control problems are essential to almost all robotics applications. They are often formulated as stochastic optimal control problems and solved with dynamic programming algorithms. Unfortunately, most existing algorithms that guarantee convergence to optimal solutions suffer from the curse of dimensionality.
We have proposed compressed versions of dynamic programming algorithms, such as value iteration and policy iteration, that alleviate the curse of dimensionality in problems exhibiting low-rank structure. The run time of the new algorithms scales polynomially with the dimension and polynomially with the rank of the value function. We have applied the algorithms to challenging simulated and experimental problems. For instance, we developed a controller for an experimental quadcopter using a nonlinear, non-holonomic, non-control-affine dynamical system model. The compressed controller requires roughly 1 MB of storage; an equivalent lookup table would have required around 20 TB of memory. I embedded the controller on the quadcopter to demonstrate maneuvering through a small window, as shown in Figure 3. The low-rank dynamic programming algorithms are widely applicable, and we have also applied them to syntactically co-safe linear temporal logic specifications and to differential games.
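For reference, the classical (uncompressed) value iteration that the compressed algorithms accelerate can be sketched on a toy one-dimensional MDP; here a full table over states is affordable, which is exactly what becomes infeasible in high dimensions:

```python
import numpy as np

# Toy MDP on a line of n states: actions move left/right; reward 1 for
# reaching the goal state n-1. A full value table over states is affordable
# here; in high dimensions this table is what low-rank compression replaces.
n, gamma = 10, 0.95

def step(s, a):
    """Deterministic dynamics with reward on entering the goal."""
    s2 = min(max(s + a, 0), n - 1)
    return s2, (1.0 if s2 == n - 1 else 0.0)

V = np.zeros(n)
for _ in range(2000):   # value iteration: V <- max_a [ r(s, a) + gamma V(s') ]
    V_new = np.array([max(step(s, a)[1] + gamma * V[step(s, a)[0]]
                          for a in (-1, 1)) for s in range(n)])
    if np.max(np.abs(V_new - V)) < 1e-12:
        V = V_new
        break
    V = V_new
```

Each Bellman update touches every state; with d state variables and n grid points per variable, the table has n^d entries, whereas a low-rank representation of the value function stores only polynomially many parameters.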
Bayesian inference and estimation
We seek to extend Bayesian state estimation techniques to dynamical systems with larger state dimensions. This task is important to application areas including autonomy, weather prediction, ocean monitoring, and vehicle tracking, among many others. The problem is computationally challenging because it requires performing Bayesian inference repeatedly, for some applications in real time, and because it requires propagating a probability distribution through nonlinear dynamics. Analytic solutions to practical applications often do not exist, and both tasks are enormously challenging.
We have developed an algorithm that extends the integration-based Gaussian filtering framework for nonlinear dynamical systems to problems with tens of dimensions. Historically, this framework has been applied only to systems with a handful of dimensions. Using low-rank functional approximations, we created a robust and accurate Gaussian filtering algorithm that scales quadratically with dimension and polynomially with rank. Future applications will explore its use in real-time systems.
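The core operation in integration-based Gaussian filtering is propagating a Gaussian through a nonlinearity with a quadrature rule over sigma points. A minimal sketch using the standard spherical-cubature rule (a classical baseline, not our low-rank method; the two-state model is hypothetical):

```python
import numpy as np

def cubature_propagate(f, mu, P):
    """Propagate a Gaussian N(mu, P) through a nonlinear map f using the
    spherical-cubature rule (2n sigma points; exact for linear f)."""
    n = len(mu)
    L = np.linalg.cholesky(P)                      # P = L L^T
    pts = np.vstack([mu + np.sqrt(n) * L[:, i] for i in range(n)] +
                    [mu - np.sqrt(n) * L[:, i] for i in range(n)])
    Y = np.array([f(x) for x in pts])              # 2n model evaluations
    mu_y = Y.mean(axis=0)
    d = Y - mu_y
    return mu_y, d.T @ d / (2 * n)

# Predicted mean/covariance through a mildly nonlinear map (hypothetical model).
mu = np.array([1.0, 0.5])
P = np.diag([0.1, 0.05])
mu_y, P_y = cubature_propagate(lambda x: np.array([x[0] + 0.1 * x[1] ** 2, x[1]]),
                               mu, P)
```

The number of quadrature points grows with dimension for accurate rules, which is one reason the classical framework has been restricted to low-dimensional systems.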
A common problem when dealing with autonomous systems is understanding the system model. Often, instead of a specified model, there exists a large amount of system-response data in the form of a Hankel matrix. The goal of system identification is to use experimental or simulated data to compute a reduced-order model for use within design, optimization, and control. For systems with many inputs, many outputs, or long time series of system-response data, established methods that factorize the Hankel matrix, such as the singular value decomposition (SVD), are prohibitively expensive. We have developed algorithms that reduce the complexity of traditional approaches from cubic to quadratic in the Hankel matrix size. This reduction is realized by replacing the SVD with a CUR decomposition that directly seeks a low-rank approximation. The resulting algorithm yields results nearly indistinguishable from an SVD-based approach, with a 25-fold speedup.
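The CUR idea can be sketched on a toy Hankel matrix built from the impulse response of a hypothetical two-state system (the matrices below are illustrative, and the naive row/column choice stands in for a proper selection strategy); picking as many rows and columns as the system order reconstructs the matrix without a full SVD:

```python
import numpy as np

# Impulse response of a hypothetical stable 2-state SISO system: h_k = C A^k B.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])
h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(40)]

# Hankel matrix H[i, j] = h[i + j + 1]; its rank equals the state dimension (2).
m = 19
H = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])

def cur(M, rows, cols):
    """CUR sketch: rebuild M from a few columns, rows, and their intersection."""
    W = M[np.ix_(rows, cols)]                  # intersection submatrix
    return M[:, cols] @ np.linalg.pinv(W) @ M[rows, :]

# With as many rows/columns as the true rank, CUR recovers H without an SVD.
H_cur = cur(H, rows=[0, 1], cols=[0, 1])
err = np.linalg.norm(H - H_cur) / np.linalg.norm(H)
```

Because CUR only touches the selected rows and columns plus their intersection, its cost scales with the number of selected indices rather than with a full factorization of the matrix.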