Sum-of-squares: proofs, beliefs, and algorithms — Boaz Barak and David Steurer



Cheeger’s inequality

Let \(G\) be a \(d\)-regular graph with vertex set \(V=[n]\). For a vertex subset \(S\subseteq V\), we define its expansion \(\varphi_G(S)\) as \[ \varphi_G(S) = \frac{\lvert E(S,V\setminus S)\rvert}{\tfrac d n\cdot\lvert S\rvert\cdot \lvert V \setminus S\rvert}\,. \label{eq:expansion} \] In words, the expansion of a set \(S\) is the number of edges between \(S\) and its complement in \(G\), as a fraction of the expected number of such edges in a random graph with average degree \(d\). Up to a constant factor, this is the same as the probability that we leave the set \(S\) if we start at a random vertex in \(S\) and move to one of its \(d\) neighbors at random. Can you see why?
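To make the definition concrete, here is a minimal Python sketch that evaluates \(\varphi_G(S)\) directly from the definition. (The adjacency-matrix representation and the helper name `expansion` are our own choices, not part of the text.)

```python
import numpy as np

def expansion(A, S):
    """Expansion phi_G(S) of a vertex set S in a d-regular graph,
    given as a 0/1 symmetric adjacency matrix A."""
    n = A.shape[0]
    d = A[0].sum()                          # regularity: every row sums to d
    S = np.array(sorted(S))
    comp = np.setdiff1d(np.arange(n), S)
    cut = A[np.ix_(S, comp)].sum()          # edges leaving S
    return cut / (d / n * len(S) * len(comp))

# Example: the 4-cycle 0-1-2-3-0 (2-regular).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(expansion(A, [0, 1]))   # adjacent pair: 2 / ((2/4)*2*2) = 1.0
print(expansion(A, [0, 2]))   # a side of the bipartition: expansion 2.0
```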

It is not difficult to check that the expansion of any set \(S\) is a number between \(0\) and \(2\). For example, in a bipartite \(d\)-regular graph each side of the bipartition has expansion \(2\). Most sets in a graph have expansion close to \(1\); concretely, the expected expansion of a random vertex subset is close to \(1\). Therefore, an interesting question about a graph is whether it contains exceptional sets with expansion close to \(0\), or whether all sets have expansion bounded away from \(0\).

The expansion of a graph \(G\), denoted \(\varphi(G)\), is the minimum expansion \(\varphi_G(S)\) over all nonempty proper subsets \(S\subseteq[n]\). (The literature on graph expansion defines several closely related quantities, such as sparsest cut, expansion, and conductance, that are all equivalent up to constant factors; we do not distinguish between these notions here.) The problem of computing the expansion of a graph (and finding the corresponding set) is a fundamental graph problem, with a wide variety of applications to network design, analyzing Markov chains, and more. It is also widely used as a tool in many “divide and conquer” algorithms.

Given a regular graph \(G\), find a vertex set \(S\subseteq V(G)\) so as to minimize \(\varphi_G(S)\).

For every \(\varepsilon>0\), in a random regular graph of sufficiently large degree, \(\varphi(G)\) will be at least \(1-\varepsilon\) with high probability. On the other hand, if we “plant” a non-expanding set in a random graph by selecting a set \(S\) of half the vertices and conditioning each random edge touching \(S\) to stay inside it with probability \(1-\varepsilon\), it is not a priori clear how one can detect this set. For this reason, as in the max cut case, it is not a priori clear how one can certify that a highly expanding graph (such as a random \(d\)-regular graph) contains no set of expansion much smaller than \(1\), nor is it clear how to find a set with \(\varphi_G(S) \ll 1\) even when \(\varphi(G) = o(1)\). Nevertheless, as in the case of max cut, it turns out that one can in fact beat the “combinatorial” (or linear-programming based) algorithms.

Bounding rational functions using sum of squares

A priori it might not be clear how to apply the sum-of-squares algorithm to Min Expansion. So far we have talked about the problem of minimizing polynomials over the hypercube, but the expansion of a set \(S\) is a rational function of the characteristic vector of \(S\). In particular, if we let \(f_G(x)=\sum_{\{i,j\}\in E(G)}(x_i-x_j)^2\) and \(\lvert x\rvert=\sum_{i=1}^n x_i\), then \[ \varphi(G) = \min_{x\in \{0,1\}^n} \frac{f_G(x)}{\tfrac dn \cdot \lvert x\rvert\cdot (n-\lvert x\rvert)}\,, \] where the minimum is over \(x\notin\{0^n,1^n\}\), so that the denominator is nonzero. The following observation allows us to apply sum-of-squares also to minimizing rational functions: in order to certify that a rational function of the form \(P(x)/Q(x)\), with \(Q\) positive, is at least \(\varepsilon>0\) for every \(x\in\{0,1\}^n\), it is enough to show that the polynomial \(P - \varepsilon \cdot Q\) is non-negative on \(\{0,1\}^n\).
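For intuition (and only for toy instances), this certification target can be checked by brute force: the sketch below, with helper names of our own, verifies that \(f_G(x) - \varepsilon\cdot\tfrac dn\lvert x\rvert(n-\lvert x\rvert)\) is non-negative on all of \(\{0,1\}^n\), which certifies \(\varphi(G)\ge\varepsilon\). The point of sum-of-squares is of course to replace this exponential search by a convex program.

```python
import itertools
import numpy as np

def f_G(A, x):
    """f_G(x) = sum over edges {i,j} of (x_i - x_j)^2; each edge appears
    twice in the double sum over the adjacency matrix, hence the /2."""
    n = len(x)
    return sum(A[i, j] * (x[i] - x[j]) ** 2
               for i in range(n) for j in range(n)) / 2

def certifies(A, eps):
    """Brute-force check that f_G - eps*(d/n)|x|(n-|x|) >= 0 on {0,1}^n."""
    n = A.shape[0]
    d = A[0].sum()
    for x in itertools.product([0, 1], repeat=n):
        s = sum(x)
        if f_G(A, x) - eps * (d / n) * s * (n - s) < -1e-9:
            return False
    return True

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])      # the 4-cycle; phi(G) = 1
print(certifies(A, 1.0))          # True
print(certifies(A, 1.01))         # False: adjacent pairs have expansion 1
```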

The following theorem, known as the discrete Cheeger inequality (obtained by Dodziuk (1984), and independently by Alon and Milman (1985) and Alon (1986), as a discrete version of (Cheeger 1970)), shows that degree-2 sum-of-squares does provide such a certificate; in particular, it shows that we can efficiently certify that \(\varphi(G)\ge 0.001\) for every graph that satisfies \(\varphi(G)\ge 0.1\).

For every \(d\)-regular graph \(G\) with vertex set \([n]\), the following polynomial has a degree-\(2\) sos certificate: \[ f_G(x) - \tfrac12 \varphi(G)^2 \cdot \tfrac dn \lvert x\rvert(n-\lvert x\rvert)\,. \]

The proof of the above theorem also shows that there is a polynomial-time algorithm to find \(S\) with \(\varphi_G(S) = O(\sqrt{\varphi(G)})\). Leighton and Rao (1988) gave a polynomial-time algorithm based on linear programming to find \(S\) with \(\varphi_G(S) = O(\log n)\cdot\varphi(G)\); that is, the algorithm achieves approximation ratio \(O(\log n)\). It is instructive to verify that the approximation guarantees of the algorithms based on degree-2 sum-of-squares and on linear programming are incomparable: for small values of \(\varphi(G)\), the linear programming approach has stronger guarantees, while for larger values of \(\varphi(G)\) (say \(\varphi(G)\ge 1/\log n\)) the guarantees of degree-2 sum-of-squares are stronger. In a breakthrough work, Arora, Rao, and Vazirani (2004) improved this approximation ratio to \(O(\sqrt{\log n})\). Their algorithm uses the degree-\(4\) SOS algorithm, and we will see it later in this course. Shortly thereafter, Agarwal et al. (2005) gave the analogous result for Max Cut, namely an algorithm that, given \(G\) with \(\mathrm{maxcut}(G)=1-\varepsilon\), outputs a cut \(S\) cutting at least a \(1 - O(\sqrt{\log n})\cdot\varepsilon\) fraction of the edges.

Rounding pseudo-distributions for Min Expansion

We now show how the theorem above (degree-2 sos certificates for expansion) follows from the standard formulation of the discrete Cheeger inequality.

For any \(d\)-regular \(n\)-vertex graph \(G\) with adjacency matrix \(A_G\), there exists a set \(S\) of at most \(n/2\) vertices such that \(\varphi_G(S) \leq \sqrt{2\lambda}\), where \(\lambda\) is the second smallest eigenvalue of the normalized Laplacian \(L_G = \mathrm{Id} - \tfrac{1}{d}A_G\).

The proof, which we omit here, is not extremely complicated and can be found in several sources (e.g., see handouts 3 and 4 in Luca Trevisan’s course).

For every vector \(x\in \mathbb{R}^n\), \[ \langle x,L_G x\rangle=\sum_{i=1}^n x_i^2 - \tfrac 2 d \sum_{\{i,j\}\in E(G)} x_i x_j = \tfrac{1}{d}f_G(x)\,. \] Moreover, the eigenvector of \(L_G\) corresponding to the smallest eigenvalue (which is \(0\)) is the all-ones vector \(\mathbf{1}\). If the second smallest eigenvalue of \(L_G\) is \(\lambda\), then for every vector \(x\in\mathbb{R}^n\), its projection \(y = x - \tfrac{1}{n}\langle \mathbf{1},x\rangle\mathbf{1}\) onto the subspace orthogonal to \(\mathbf{1}\) satisfies \(f_G(y) \geq \lambda d \cdot \lVert y\rVert^2\).
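These facts are easy to check numerically. The following sketch (continuing the 4-cycle toy example from above; the variable names are ours) verifies the identity \(\langle x, L_G x\rangle = \tfrac1d f_G(x)\) on a random vector and computes \(\lambda\):

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # 4-cycle, d = 2
n, d = A.shape[0], A[0].sum()
L = np.eye(n) - A / d                       # normalized Laplacian

x = np.random.randn(n)
f = sum(A[i, j] * (x[i] - x[j]) ** 2 for i in range(n) for j in range(n)) / 2
assert np.isclose(x @ L @ x, f / d)         # <x, L x> = f_G(x)/d

eigvals = np.linalg.eigvalsh(L)             # ascending order
print(eigvals)        # [0., 1., 1., 2.]: smallest is 0 (all-ones vector),
                      # lambda = 1, so Cheeger gives phi(G) <= sqrt(2)
```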

The above facts, together with the observation that \(\lvert x\rvert = \sum_{i=1}^n x_i^2\) for \(x\in\{0,1\}^n\), are enough to derive the degree-2 sos certificate theorem from Cheeger's inequality. We leave the details as an exercise.

The following exercise asks you to prove the corresponding statement about pseudo-distributions.

Let \(G\) be a \(d\)-regular graph on \(n\) vertices, let \(\varepsilon>0\), and let \(\mu\) be a degree-\(2\) pseudo-distribution over \(\{0,1\}^n\) such that \[ \tilde{\mathbb{E}}_{\mu} f_G \le \varepsilon \cdot \tilde{\mathbb{E}}_{\mu} \tfrac d n \lvert x\rvert(n-\lvert x\rvert)\,. \] Prove that there exists a set \(S\subseteq V(G)\) with \(\varphi_G(S)\le \sqrt{2\varepsilon}\).

It turns out that the proof of Cheeger's inequality is constructive, and this can be used to obtain an efficient rounding algorithm that takes any pseudo-distribution \(\mu\) satisfying \(\tilde{\mathbb{E}}_{\mu} f_G \leq \varepsilon \cdot \tilde{\mathbb{E}}_{\mu} \tfrac dn \lvert x\rvert(n-\lvert x\rvert)\) and obtains from it an actual set \(S\) with \(\varphi_G(S) \leq O(\sqrt{\varepsilon})\).
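The constructive step is the classic “sweep cut”: sort the vertices by their coordinates in the eigenvector for the second smallest eigenvalue of \(L_G\) and return the best prefix set. In the pseudo-distribution setting the sweep would be applied to a vector extracted from \(\mu\)'s second moments; the minimal sketch below (helper names ours, same conventions as the earlier snippets) illustrates the special case used in the proof of Cheeger's inequality.

```python
import numpy as np

def sweep_cut(A):
    """Cheeger rounding: sweep over prefixes of the vertices sorted by the
    second eigenvector of the normalized Laplacian; return the best set."""
    n = A.shape[0]
    d = A[0].sum()
    L = np.eye(n) - A / d
    _, vecs = np.linalg.eigh(L)             # columns sorted by eigenvalue
    order = np.argsort(vecs[:, 1])          # sort by 2nd eigenvector entries
    best_S, best_phi = None, np.inf
    for k in range(1, n):                   # nonempty proper prefixes
        S, comp = order[:k], order[k:]
        cut = A[np.ix_(S, comp)].sum()
        phi = cut / (d / n * k * (n - k))
        if phi < best_phi:
            best_S, best_phi = sorted(S.tolist()), phi
    return best_S, best_phi

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(sweep_cut(A))   # a set with expansion <= sqrt(2*lambda) = sqrt(2)
```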

References

Agarwal, Amit, Moses Charikar, Konstantin Makarychev, and Yury Makarychev. 2005. “\(O(\sqrt{\log n})\) Approximation Algorithms for Min UnCut, Min 2CNF Deletion, and Directed Cut Problems.” In STOC, 573–81. ACM.

Alon, Noga. 1986. “Eigenvalues and Expanders.” Combinatorica 6 (2): 83–96.

Alon, Noga, and V. D. Milman. 1985. “\(\lambda_1\), Isoperimetric Inequalities for Graphs, and Superconcentrators.” J. Comb. Theory, Ser. B 38 (1): 73–88.

Arora, Sanjeev, Satish Rao, and Umesh V. Vazirani. 2004. “Expander Flows, Geometric Embeddings and Graph Partitioning.” In STOC, 222–31. ACM.

Cheeger, Jeff. 1970. “A Lower Bound for the Smallest Eigenvalue of the Laplacian.” In Problems in Analysis (Papers Dedicated to Salomon Bochner, 1969), 195–99. Princeton Univ. Press, Princeton, N. J.

Dodziuk, Jozef. 1984. “Difference Equations, Isoperimetric Inequality and Transience of Certain Random Walks.” Trans. Amer. Math. Soc. 284 (2): 787–94. doi:10.2307/1999107.

Leighton, Frank Thomson, and Satish Rao. 1988. “An Approximate Max-Flow Min-Cut Theorem for Uniform Multicommodity Flow Problems with Applications to Approximation Algorithms.” In FOCS, 422–31. IEEE Computer Society.