$$
\begin{aligned}
& \text{minimize} && \mathrm{tr}(\Lambda^+M^+) - \mathrm{tr}(\Lambda^-M^-)\\
& \text{subject to} && r^Tx \ge r_\mathrm{min}\\
&&& xx^T \preceq \Lambda^+ - \Lambda^-\\
&&& 1^Tx = 1\\
&&& 0 \le x \le 1\\
&&& \Lambda^+_{ij}, \Lambda^-_{ij} \ge 0, \quad i,j = 1, \dots, n.
\end{aligned}
$$

The (extra!) variables $\Lambda^+, \Lambda^- \in \mathbb{R}^{n\times n}$ are included along with the original variable $x \in \mathbb{R}^n$ and the same problem data as before. By applying the Schur complement to the semidefinite inequality, this problem is easily recast in standard SDP form and can be solved by most standard convex optimization solvers, such as SCS or Mosek.
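To make the Schur complement step concrete: since the right-hand side has a rank-one left-hand side and the bottom-right scalar is $1 > 0$, we have

$$
xx^T \preceq \Lambda^+ - \Lambda^- \quad \Longleftrightarrow \quad
\begin{bmatrix} \Lambda^+ - \Lambda^- & x \\ x^T & 1 \end{bmatrix} \succeq 0,
$$

which is a linear matrix inequality in all of the variables. Here is a minimal CVXPY sketch of the resulting SDP, solved with SCS. All of the data below ($r$, $r_\mathrm{min}$, and the matrices $M^+ \ge M^-$) are made up purely for illustration; swap in the actual problem data from earlier in the post.

```python
import cvxpy as cp
import numpy as np

# Hypothetical problem data, purely for illustration.
rng = np.random.default_rng(0)
n = 5
r = rng.normal(size=n)            # expected returns
r_min = r.mean()                  # required return (uniform x attains it, so feasible)
A = rng.normal(size=(n, n))
M = A @ A.T                       # a nominal symmetric matrix
E = np.abs(rng.normal(size=(n, n)))
E = (E + E.T) / 2
M_plus, M_minus = M + E, M - E    # stand-ins for the problem data M^+, M^-

# Variables: the portfolio x and the elementwise-nonnegative matrices Λ+ and Λ-.
x = cp.Variable(n)
Lam_plus = cp.Variable((n, n), nonneg=True)
Lam_minus = cp.Variable((n, n), nonneg=True)

# Schur complement: xx^T ⪯ Λ+ - Λ-  ⟺  [[Λ+ - Λ-, x], [x^T, 1]] ⪰ 0.
# Encode the block matrix as one symmetric PSD variable, tied to the original
# variables by equality constraints. (This forces Λ+ - Λ- to be symmetric,
# which costs nothing here since M^+ and M^- are symmetric.)
S = cp.Variable((n + 1, n + 1), symmetric=True)
constraints = [
    S >> 0,                           # the linear matrix inequality
    S[:n, :n] == Lam_plus - Lam_minus,
    S[:n, n] == x,
    S[n, n] == 1,
    r @ x >= r_min,                   # minimum return
    cp.sum(x) == 1,                   # budget constraint
    x >= 0,
    x <= 1,
]
objective = cp.Minimize(cp.trace(Lam_plus @ M_plus) - cp.trace(Lam_minus @ M_minus))
prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.SCS)
print("optimal value:", prob.value)
print("x:", np.round(x.value, 4))
```

Modeling the Schur complement block as its own symmetric variable, with equality constraints pinning its blocks to $x$ and $\Lambda^+ - \Lambda^-$, keeps the PSD constraint on an explicitly symmetric expression, which CVXPY handles cleanly.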
Quick edit (3/18/22): Thanks to bodonoghue85 for finding a typo!
[1] See, e.g., page 429 of this paper, where they seem to re-prove basic things like "the supremum of a bunch of convex functions is convex" and "partial minimization of a convex function is convex," but I'm honestly not 100% sure.
[3] This is often referred to as strong duality; cf. section 5.3.2 of Convex Optimization. There are some additional conditions needed for equality to hold, but these are almost always met in practice.