Sum-of-squares: proofs, beliefs, and algorithms — Boaz Barak and David Steurer


Is sos an “optimal algorithm”?

We have alluded several times in this course to the intuition that sum of squares might be an “optimal algorithm” in some settings. But why would we think that? And what does this even mean? In this lecture we explore these questions.

Optimal algorithms for optimization problems.

Let’s start with the first question. There are several ways to define optimality, and we will choose one variant for concreteness. In particular, let us assume that we are dealing with NP maximization problems. That is, problems of the following type:

  • Input: Function \(f\from\bits^n\to [0,1]\) in some explicit form that allows us to evaluate it.
  • Goal: Find \(x\in\bits^n\) that maximizes \(f(x)\).

We define \(opt(f)=\max_{x\in\bits^n} f(x)\).

This is a pretty general formalism. For example, in the Max Cut problem, the function \(f\) is the one that maps a cut (encoded as a string) to the fraction of edges that it cuts. In the small set expansion problem, given a \(d\)-regular graph \(G=(V,E)\), we could define \(f\) so that \(f(x)=0\) if \(x\) does not encode a sparse set, and otherwise \(f(x)\) equals \(1-|E(S,\overline{S})|/(d|S|)\) where \(S\) is the set encoded by \(x\).
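To make the formalism concrete, here is a minimal Python sketch (not from the notes themselves) of the Max Cut objective viewed as a function \(f\from\bits^n\to[0,1]\); the edge list and the brute-force computation of \(opt(f)\) are only for illustration.

```python
from itertools import product

def maxcut_objective(edges):
    """Return f: {0,1}^n -> [0,1] mapping a cut (a tuple of bits, one per
    vertex) to the fraction of edges it cuts."""
    m = len(edges)
    def f(x):
        return sum(1 for (u, v) in edges if x[u] != x[v]) / m
    return f

# Example: a 4-cycle.  The cut (0, 1, 0, 1) cuts all edges, so opt(f) = 1.
f = maxcut_objective([(0, 1), (1, 2), (2, 3), (3, 0)])
opt_f = max(f(x) for x in product((0, 1), repeat=4))  # brute-force opt(f)
```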

A computational problem can now be thought of as a set \(\cF = \cup_n \cF_n\) where \(\cF_n\) is some subset of all functions from \(\bits^n\) to \([0,1]\), while a class of problems \(\cC\) is a family of such problems. We will consider algorithms that take the function \(f\) (in some representation) and output an assignment \(x\in\bits^n\). We are restricting ourselves to NP problems, and so assume that the representation of \(f\) has \(\poly(n)\) size and that one can use it to evaluate \(f\) on an input \(x\) in \(\poly(n)\) time.

Let \(T\from\N\to\N\) and \(\epsilon>0\). We say that an algorithm \(A\) is \((T,\epsilon)\)-optimal for a problem \(\cF\) if for every time-\(T(n)\) algorithm \(B\), \[ \limsup_{n\rightarrow\infty} \Bigl[\max_{f\in \cF_n} \bigl(opt(f)-f(A(f))\bigr) -\max_{f\in \cF_n} \bigl(opt(f)-f(B(f))\bigr) \Bigr] \leq \epsilon \label{eq:optimality} \] where \(f(A(f))\) denotes the value of the assignment that \(A\) outputs on input \(f\), and similarly for \(B\).

This is a notion of worst case approximate additive optimality. One can also consider multiplicative approximations, as well as notions such as average case optimality (where we would replace the maximum over \(f\in\cF_n\) in \eqref{eq:optimality} with an expectation over a random \(f\in\cF_n\) for arbitrarily large \(n\)). We can even consider instance optimality (where we would require the inequality to hold for every \(f\) pointwise), though one then has to be careful to rule out algorithms \(B\) that have the solution to a particular instance “hardwired” inside them.
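As a toy illustration of the additive gap \(opt(f)-f(A(f))\) appearing in \eqref{eq:optimality}, the following sketch (reusing the hypothetical maxcut_objective helper from above, with a simple bit-flipping local search standing in for the algorithm \(A\)) computes that gap on one small instance.

```python
from itertools import product

def local_search(f, n):
    """Greedy single-bit local search over {0,1}^n; a stand-in for A."""
    x = [0] * n
    improved = True
    while improved:
        improved = False
        for i in range(n):
            y = x.copy()
            y[i] ^= 1  # flip one bit; keep the flip if it improves f
            if f(tuple(y)) > f(tuple(x)):
                x, improved = y, True
    return tuple(x)

f = maxcut_objective([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])  # 4-cycle plus a chord
opt_f = max(f(x) for x in product((0, 1), repeat=4))            # brute-force opt(f)
gap = opt_f - f(local_search(f, 4))                             # the quantity opt(f) - f(A(f))
```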

Thus, while this is not the be-all and end-all notion of optimality, the definition above does seem like a reasonable starting point for exploring it. Let's start with the following observation:

For every (“nice”, efficiently computable) time complexity function \(T\from\N\to\N\) and every \(\epsilon>0\), there exists an algorithm \(A\) that runs in time \(T(n)\poly(n)\) and is \((T,\epsilon)\)-optimal with respect to any NP problem \(\cF\).

On input \(f\), the algorithm \(A\) will enumerate the first \(n\) Turing machines \(M_1,\ldots,M_n\) and run each one of them for \(T(n)\) steps to obtain \(x_1=M_1(f),\ldots,x_n=M_n(f)\). It then outputs the \(x_i\) that maximizes \(f(x_i)\). We leave the analysis of this as an exercise.
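The following Python sketch mirrors this enumeration argument; the helper run_machine(i, f, steps), which would simulate the \(i\)-th Turing machine on a description of \(f\) for a bounded number of steps, is hypothetical and only marks where that simulation would go.

```python
def enumeration_algorithm(f, n, T, run_machine):
    """Run the first n machines for T(n) steps each and return the best
    assignment found (defaulting to all-zeros if none is valid)."""
    best_x = (0,) * n
    best_val = f(best_x)
    for i in range(1, n + 1):
        x = run_machine(i, f, T(n))  # hypothetical bounded simulation of machine M_i
        if x is not None and len(x) == n:
            val = f(x)
            if val > best_val:
                best_x, best_val = x, val
    return best_x
```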

So, an optimal algorithm exists, but this kind of “diagonalization based” algorithm is not very satisfying. The real question is whether there is a “simple” or “nice” optimal algorithm, where one can think of several ways to define what “nice” means:

  • We know it when we see it.
  • Doesn’t use diagonalization.
  • A concrete enough algorithm that we can prove unconditional lower bounds for it.
  • Ties in to other mathematical notions such as convexity.

Why should a nice optimal algorithm exist?

Should nice optimal algorithms exist? We don't really know, but it is a very important question. At the heart of it is whether we can understand the computational difficulty of problems, in the sense of having a clean criterion (i.e., the performance of a nice optimal algorithm) that separates the easy problems from the hard ones and allows us to make concrete predictions about which problems will be easy or hard. It could also, in principle, reduce the task of designing algorithms, which currently requires significant creativity and “ad hoc tailoring” to each problem, to a more systematic enterprise. This is similar to the way that equation solving in mathematics evolved from a challenge that required the creative genius of people like Leibniz, Euler, and Gauss to solve individual equations into a calculation that is now routinely and automatically done by computer programs.

There are some reasons to be hopeful that such a nice optimal algorithm exists, at least in some restricted domains such as combinatorial optimization:

  • The same algorithmic ideas, including notions such as greedy algorithms, divide and conquer, and convex relaxations, keep recurring in algorithm design. This is particularly true for combinatorial optimization as opposed to, say, algebraic algorithms (e.g., integer factorization), where ad hoc tricks using algebraic identities are more prevalent.
  • In practice, general-purpose software packages for optimization are widely used for a host of different problems, rather than ad hoc programs written for a particular application.

This seems consistent with the assumption that, at least for optimization, there are a few basic underlying ideas that suffice to distinguish between the problems that are inherently intractable and those that can be solved efficiently, while clever ad hoc tricks are useful either for gaining (often very important!) second-order improvements or for facilitating the analysis.

Why sos?

So one might hope that there is a nice optimal algorithm, perhaps even one based on convex optimization, but why sum of squares? Once again, we don't really know, and there are certain algorithmic frameworks (such as hyperbolic programming) that could perhaps be even more powerful. But we can try to get some evidence for the optimality of sos. There are generally two kinds of potential evidence:

  • There are results proving that sos is an optimal algorithm in terms of worst-case approximation ratio. Under the (somewhat controversial) Unique Games Conjecture, Raghavendra (2008) showed that sos (and in fact a very restricted special case of it) is an optimal worst-case approximation algorithm for every constraint satisfaction problem. Without assuming the UGC, the strongest result along these lines is that of Chan (2016), who showed optimality of sos for a restricted subset of CSPs.
  • There are results showing that sos captures other convex programming techniques. It has been shown that sos is stronger than certain linear and semidefinite programming hierarchies that have been considered in the literature, such as those of Sherali and Adams (1990) and Lovász and Schrijver (1991).

A recent result of Lee, Raghavendra, and Steurer (2015) shows that sos is optimal for CSPs among all semidefinite programs of comparable size. This gives powerful evidence that if sos is not the strongest efficient algorithm based on convex optimization, one would have to go beyond semidefinite programming to beat it.

Formalizing such results is tricky, because both linear programming and semidefinite programming are \(P\)-complete. This means that if we do not restrict the way that we formalize a constraint satisfaction problem as an LP or SDP, then we can encode an arbitrary polynomial-time computation, and hence proving optimality of sos would in particular imply (via the sos lower bounds of Grigoriev (2001) and Schoenebeck (2008)) that no polynomial-time algorithm can solve 3SAT, and hence that \(P\neq NP\).

References

Chan, Siu On. 2016. “Approximation Resistance from Pairwise-Independent Subgroups.” J. ACM 63 (3): 27:1–27:32.

Grigoriev, Dima. 2001. “Linear Lower Bound on Degrees of Positivstellensatz Calculus Proofs for the Parity.” Theoret. Comput. Sci. 259 (1-2): 613–22. doi:10.1016/S0304-3975(00)00157-2.

Lee, James R., Prasad Raghavendra, and David Steurer. 2015. “Lower Bounds on the Size of Semidefinite Programming Relaxations.” In STOC, 567–76. ACM.

Lovász, László, and Alexander Schrijver. 1991. “Cones of Matrices and Set-Functions and 0-1 Optimization.” SIAM Journal on Optimization 1 (2): 166–90.

Raghavendra, Prasad. 2008. “Optimal Algorithms and Inapproximability Results for Every CSP?” In STOC, 245–54. ACM.

Schoenebeck, Grant. 2008. “Linear Level Lasserre Lower Bounds for Certain k-CSPs.” In FOCS, 593–602. IEEE Computer Society.

Sherali, Hanif D., and Warren P. Adams. 1990. “A Hierarchy of Relaxations Between the Continuous and Convex Hull Representations for Zero-One Programming Problems.” SIAM J. Discrete Math. 3 (3): 411–30. doi:10.1137/0403036.