JMM 2019 Baltimore
 

2019 Short Course

This Short Course took place January 14-15, 2019, before the Joint Mathematics Meetings in Baltimore.

Introduction

Interest in the theory and application of sums of squares (SOS) polynomials has exploded in the last two decades, spanning a wide spectrum of mathematical disciplines from real algebraic geometry to convex geometry, combinatorics, real analysis, theoretical computer science, quantum information and engineering. This two-day short course, organized by Pablo Parrilo, Massachusetts Institute of Technology, and Rekha Thomas, University of Washington, will offer six introductory lectures on the theory of SOS polynomials and their applications. The six speakers represent a broad and diverse cross-section of the many aspects of sum of squares techniques and their interconnections.


The origins of SOS polynomials are anchored in the 19th century by Hilbert's famous characterization of the cases in which nonnegative polynomials are SOS. In 1927 Artin gave an affirmative answer to Hilbert's 17th problem, which asked whether every nonnegative polynomial is a sum of squares of rational functions. From this the field of real algebraic geometry was born: the study of real solutions to polynomial systems. While real solutions of polynomial equations are considerably more complicated than their complex counterparts, their role in applications cannot be overstated. SOS polynomials have experienced a renaissance in the last two decades following the work of Shor, Nesterov, Lasserre, and Parrilo that connected them to modern optimization via semidefinite programming. An active interdisciplinary community now exists around SOS polynomials, with a variety of conferences and research programs at many of the top institutions worldwide. While the theory is rich and fascinating with many open questions, the wide array of applications is equally enticing, allowing for many different angles and access points to the field.

Logistics

Each of the six lectures in the course will be broken into two 45-minute sections. Problem sessions are planned for both days, and speakers and organizers will be available to assist participants with these exercises. Some exercises will be facilitated by numerical software such as Macaulay2, Maple™, MATLAB, or Julia. Participants with a desire to experiment with computations and an interest in applications should find this experience especially stimulating. The speakers will also provide participants with a list of research questions to guide their explorations after the course.

Please bring your laptop to the course, and if possible, download Macaulay2 or Julia before the course starts.

This short course is aimed at students with minimal to no background in the theory of SOS polynomials. However, familiarity with properties of polyhedra, convex sets, and polynomials will provide useful background. Appendix A in the book Semidefinite Optimization and Convex Algebraic Geometry, G. Blekherman, P.A. Parrilo and R.R. Thomas (eds.), MOS-SIAM Series on Optimization, SIAM 2012, is recommended as useful preparatory reading. A free pdf of the book can be found at www.mit.edu/~parrilo/sdocag/.

Speakers and Lecture Topics


Overview of SOS polynomials

Grigoriy Blekherman, Georgia Institute of Technology

Nonnegativity of polynomials and its relation to sums of squares are classical topics in real algebraic geometry. This area has a rich and distinguished mathematical history beginning with Hilbert's fundamental work, his 17th problem, and its solution by Artin. Work in this area involves a blend of ideas and techniques from algebraic geometry, convex geometry and optimization. Recently, SOS algorithms and relaxations have found numerous applications in engineering and theoretical computer science. There is also an emerging understanding that the study of nonnegative and SOS polynomials on a variety is inextricably linked to classical topics in algebraic geometry and commutative algebra, such as minimal free resolutions.

This exciting blend of ideas can be demonstrated with a single prominent example: the cone of positive semidefinite (PSD) matrices. On the one hand, this cone can be viewed as the set of all homogeneous quadratic polynomials (forms) that are nonnegative on all of $\mathbb{R}^n$; on the other hand, it is a convex cone in the vector space of real symmetric matrices. It is also known that quadratic forms nonnegative on $\mathbb{R}^n$ are always SOS. An affine linear slice of the PSD cone is called a spectrahedron. Semidefinite programming is the optimization of a linear function over a spectrahedron, and it is the engine that makes SOS algorithms work. SOS algorithms naturally lead to spectrahedra via convex duality, and understanding the algebraic and convexity properties of these spectrahedra is the key to understanding the quality of these algorithms.
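
To make the matrix point of view concrete, consider the small example

$$x^2 - 2xy + 2y^2 \;=\; \begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} \;=\; (x-y)^2 + y^2.$$

The quadratic form on the left is nonnegative, and indeed SOS, precisely because the Gram matrix in the middle is positive semidefinite; a Cholesky-type factorization of that matrix reads off the explicit sum of squares on the right. For polynomials of higher degree the Gram matrix is no longer unique, and searching for a positive semidefinite Gram matrix is exactly the semidefinite programming problem that powers SOS algorithms.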


Lifts of convex sets

Hamza Fawzi, University of Cambridge

A central question in optimization is to maximize (or minimize) a linear function over a given convex set. Such a problem may be easy or hard depending on the geometry of the convex set. Motivated by this, the lecture considers the following question: given a convex set, is it possible to express it as the projection of a simpler convex set in a higher-dimensional space? Such a lift of the convex set allows us to reformulate the original optimization problem as an easier one over the higher-dimensional set. In order to make this question precise, we need a way to measure the complexity of convex sets. We will focus on two classes of lifts, namely polyhedral and spectrahedral lifts, for which a natural notion of complexity can be defined. For spectrahedral lifts, we will see that the existence of a lift is characterized by the existence of SOS certificates for a certain class of nonnegative functions. We will give some examples of convex sets that admit small lifts and others that do not, and we will discuss applications in optimization.
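
A classical polyhedral illustration is the cross-polytope, the unit ball of the $\ell_1$-norm, $B_1^n = \{x \in \mathbb{R}^n : |x_1| + \cdots + |x_n| \le 1\}$. As a subset of $\mathbb{R}^n$ it requires $2^n$ facet inequalities, one inequality $\varepsilon_1 x_1 + \cdots + \varepsilon_n x_n \le 1$ for each sign pattern $\varepsilon \in \{\pm 1\}^n$. Yet it is the projection onto the $x$-coordinates of the polytope

$$\{(x,y) \in \mathbb{R}^n \times \mathbb{R}^n : -y_i \le x_i \le y_i \ (1 \le i \le n), \ \ y_1 + \cdots + y_n \le 1\},$$

which is described by only $2n+1$ inequalities. This is the sense in which a lift can be far simpler than the set it projects onto.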


Engineering applications of SOS polynomials

Georgina Hall, INSEAD

The problem of optimizing over the cone of nonnegative polynomials is a fundamental one that appears in many different areas of engineering and computational mathematics. Long thought to be intractable, it was shown in several breakthrough papers in the early 2000s to be amenable to SOS programming, a class of optimization problems intimately connected to semidefinite programming.
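
To give a flavor of what such a reformulation looks like in software, here is a minimal sketch in Julia, one of the software options mentioned under Logistics. It assumes the SumOfSquares.jl, DynamicPolynomials.jl, JuMP.jl, and CSDP.jl packages are installed and is meant only as an illustration of the general technique: it computes an SOS-certified lower bound on a polynomial by solving a small semidefinite program behind the scenes.

    # A minimal SOS-programming sketch in Julia. Assumes the SumOfSquares.jl,
    # DynamicPolynomials.jl, JuMP.jl, and CSDP.jl packages are installed; any
    # other SDP solver supported by JuMP could replace CSDP.
    using SumOfSquares, DynamicPolynomials, JuMP, CSDP

    @polyvar x y                         # declare polynomial variables
    p = x^4 + y^4 - 4x*y + 1             # a polynomial to bound from below

    model = SOSModel(CSDP.Optimizer)     # an SOS model backed by an SDP solver
    @variable(model, lambda)             # the candidate lower bound
    @constraint(model, p - lambda >= 0)  # in an SOSModel: "p - lambda is SOS"
    @objective(model, Max, lambda)       # ask for the largest certified bound
    optimize!(model)

    println("SOS lower bound on p: ", value(lambda))

For this particular p the bound is tight: at lambda = -1 one has $x^4 + y^4 - 4xy + 2 = (x^2 - y^2)^2 + 2(xy - 1)^2$, so the SOS bound recovers the true minimum; in general the SOS bound can be strictly weaker than the true minimum.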

In this lecture, we present a number of problems where the need to optimize over the cone of nonnegative polynomials arises and discuss how they can be reformulated using SOS programming. Problems of this type include, but are not limited to, problems in power engineering (e.g., the optimal power flow problem), control and robotics (e.g., formal safety verification), machine learning and statistics (e.g., shape-constrained regression), and game theory (e.g., Nash equilibria computation). We conclude by highlighting some directions that could be pursued to further disseminate these techniques within more applied fields. Among other things, we address the current challenge that scalability represents for SOS programming and discuss recent trends in software development.


Connections to theoretical computer science

Ankur Moitra, Massachusetts Institute of Technology

Many hard optimization problems---like finding large cliques in a graph---can be cast as maximizing a linear function over a convex, but highly complicated, domain. For example, if the feasible region is a polytope, it often has an exponential number of vertices and facets, with no obvious way to decide whether a given point lies inside it. The SOS hierarchy gives a sequence of tighter and tighter relaxations. Each of them can be efficiently optimized over, and so the hierarchy yields a sequence of algorithms that trade off complexity with accuracy. However, in many cases, understanding exactly how powerful these algorithms are turns out to be quite challenging.
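
For orientation, one standard way to write the level-$d$ SOS relaxation for minimizing a polynomial $p$ over a feasible set $S = \{x \in \mathbb{R}^n : g_1(x) \ge 0, \dots, g_m(x) \ge 0\}$ is

$$\lambda_d \;=\; \sup\Big\{\lambda \;:\; p - \lambda = \sigma_0 + \sum_{i=1}^m \sigma_i g_i, \ \ \sigma_0,\dots,\sigma_m \text{ SOS}, \ \ \deg(\sigma_0),\ \deg(\sigma_i g_i) \le 2d\Big\}.$$

Any such identity certifies that $p - \lambda \ge 0$ on $S$, so $\lambda_1 \le \lambda_2 \le \cdots \le \min_{x \in S} p(x)$, and each level is a semidefinite program whose size grows with $d$; this is the precise sense in which the hierarchy trades accuracy against complexity.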

In this survey, we will discuss two central problems in theoretical computer science: (1) In the planted clique problem, the goal is to find a large clique that has been added to a random graph; this problem has recently found important applications in establishing computational vs. statistical tradeoffs in machine learning. (2) The unique games conjecture asserts that it is hard to find an assignment that satisfies many constraints in certain two-variable constraint satisfaction problems. If true, it would characterize the best possible approximation ratio achievable by efficient algorithms for a broad class of problems. We will explore how a deeper understanding of the power of SOS proofs has been intertwined with progress on these questions.


Algebraic geometry through the lens of sums of squares

 

Mauricio Velasco, Universidad de los Andes

SOS optimization has a wealth of connections to pure mathematics, allowing us to define new invariants of real projective varieties and to connect algebraic geometry, convexity and optimization. This lecture will illustrate these connections by focusing on a basic example: (*) the classification of those (real, projective) varieties on which nonnegative quadratic forms and sums of squares of linear forms coincide.

In the first part of the lecture, we will give a self-contained introduction to projective algebraic geometry and describe the classification of "varieties of minimal degree", one of the great achievements of the Italian school of algebraic geometry in the 19th century. In the second part of the lecture, we will explain why varieties of minimal degree play a central "extremal" role in the theory of SOS on varieties and are the building blocks for solving problem (*) above. The solution of (*) allows us to generalize and synthesize all known results on equality between nonnegative polynomials and sums of squares (by Choi-Lam-Reznick, Grone, Hilbert, Yakubovich, among others).
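
For reference, a nondegenerate irreducible projective variety $X \subseteq \mathbb{P}^n$ always satisfies $\deg(X) \ge \operatorname{codim}(X) + 1$, and $X$ is said to be of minimal degree when equality holds; classical examples include quadric hypersurfaces, the Veronese surface, and rational normal scrolls.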

 


The geometry of spectrahedra

Cynthia Vinzant, North Carolina State University

A spectrahedron is an affine slice of the convex cone of positive semidefinite real symmetric matrices. Spectrahedra form a rich class of convex bodies that are computationally tractable and appear in many areas of mathematics. Examples include polytopes, ellipsoids, and more exotic convex sets, like the convex hulls of certain curves. The problem of maximizing a linear function over a spectrahedron is called a semidefinite program. Semidefinite programs can be solved numerically in polynomial time; they generalize linear programs and form a powerful tool in optimization.
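
As a tiny example, the closed unit disc is a spectrahedron:

$$\left\{(x,y) \in \mathbb{R}^2 : \begin{pmatrix} 1+x & y \\ y & 1-x \end{pmatrix} \succeq 0\right\} \;=\; \{(x,y) : x^2 + y^2 \le 1\},$$

since a symmetric $2 \times 2$ matrix is positive semidefinite exactly when its diagonal entries and its determinant are nonnegative, and here the determinant is $1 - x^2 - y^2$.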

The geometry of spectrahedra is fundamental to the theory of semidefinite programming, just as the geometry of polyhedra is to that of linear programming. This lecture will introduce the theory of spectrahedra and describe how convex geometry, real algebraic geometry, and topology all contribute to our understanding of their intricate geometry. We will use this to explore examples and applications coming from distance geometry, moment problems, and combinatorial optimization.