Calculus rules

The calculus rules described in the following allow one to modify and combine functions, obtaining new ones with an efficiently computable proximal mapping.

Duality

ProximalOperators.Conjugate (Type)
Conjugate(f)

Return the convex conjugate (also known as Fenchel conjugate, or Fenchel-Legendre transform) of function f, that is

\[f^*(x) = \sup_y \{ \langle y, x \rangle - f(y) \}.\]

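For instance, the conjugate of the L1 norm is the indicator of the unit L∞ ball. A minimal sketch (the choice of NormL1, the point, and the stepsize are only illustrative):

using ProximalOperators

f = NormL1()
g = Conjugate(f)           # g = f^*, here the indicator of the unit L∞ ball
x = randn(5)
# prox of the conjugate at x with stepsize 0.5;
# by the Moreau decomposition, prox_{γ f^*}(x) = x - γ prox_{γ⁻¹ f}(x/γ)
y, gy = prox(g, x, 0.5)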

Combination of functions

ProximalOperators.PointwiseMinimum (Type)
PointwiseMinimum(f_1, ..., f_k)

Given functions f_1 to f_k, return their pointwise minimum, that is, the function

\[g(x) = \min\{f_1(x), ..., f_k(x)\}\]

Note that g is a nonconvex function in general.

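A small sketch (the two terms are chosen arbitrarily); since the prox problem for g is nonconvex, a prox point is the prox of whichever term attains the smaller Moreau envelope value at x:

using ProximalOperators

f = PointwiseMinimum(NormL1(), SqrNormL2())   # g(x) = min(norm(x, 1), 0.5 * norm(x)^2)
x = randn(5)
gx = f(x)                 # pointwise minimum of the two terms at x
y, gy = prox(f, x, 1.0)   # prox of the term with the smaller Moreau envelope value at x
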
ProximalOperators.SeparableSum (Type)
SeparableSum(f_1, ..., f_k)

Given functions f_1 to f_k, return their separable sum, that is

\[g(x_1, ..., x_k) = \sum_{i=1}^k f_i(x_i).\]

The object g constructed in this way can be evaluated at Tuples of length k. Likewise, the prox and prox! methods for g operate with (input and output) Tuples of length k.

Example:

f = SeparableSum(NormL1(), NuclearNorm()); # separable sum of two functions
x = randn(10); # some random vector
Y = randn(20, 30); # some random matrix
f_xY = f((x, Y)); # evaluates f at (x, Y)
(u, V), f_uV = prox(f, (x, Y), 1.3); # computes prox at (x, Y)
ProximalOperators.SlicedSeparableSum (Type)
SlicedSeparableSum((f_1, ..., f_k), (J_1, ..., J_k))

Return the function

\[g(x) = \sum_{i=1}^k f_i(x_{J_i}).\]

SlicedSeparableSum(f, (J_1, ..., J_k))

Analogous to the previous one, but applies the same function f to all slices of the variable x:

\[g(x) = \sum_{i=1}^k f(x_{J_i}).\]

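A small sketch, assuming each index collection J_i is passed as a tuple of index ranges into x (the functions and slices below are only illustrative):

using ProximalOperators

# apply SqrNormL2 to x[1:5] and NormL1 to x[6:10];
# the ((1:5,), (6:10,)) index format is an assumption based on the signature above
f = SlicedSeparableSum((SqrNormL2(), NormL1()), ((1:5,), (6:10,)))
x = randn(10)
fx = f(x)
y, fy = prox(f, x, 1.0)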

Regularization of functions

ProximalOperators.MoreauEnvelope (Type)
MoreauEnvelope(f, γ=1)

Return the Moreau envelope (also known as Moreau-Yosida regularization) of function f with parameter γ (positive), that is

\[f^γ(x) = \min_z \left\{ f(z) + \tfrac{1}{2γ}\|z-x\|^2 \right\}.\]

If $f$ is convex, then $f^γ$ is a smooth, convex, lower approximation to $f$, having the same minima as the original function.

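A small sketch, assuming the envelope object can be evaluated by calling it like the other functions on this page; since the minimizer in the definition is the proximal point of f at x, the value can be cross-checked against prox (NormL1 and γ = 0.5 are arbitrary choices):

using ProximalOperators
using LinearAlgebra

f = NormL1()
γ = 0.5
g = MoreauEnvelope(f, γ)
x = randn(5)
gx = g(x)                              # evaluates f^γ(x)
z, fz = prox(f, x, γ)                  # z is the minimizer in the definition above
gx_check = fz + norm(z - x)^2 / (2γ)   # should agree with gx up to rounding
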
ProximalOperators.Regularize (Type)
Regularize(f, ρ=1.0, a=0.0)

Given function f, and optional parameters ρ (positive) and a, return

\[g(x) = f(x) + \tfrac{ρ}{2}\|x-a\|^2.\]

Parameter a can be either an array of the same shape as x, or a scalar, in which case it is subtracted from every component of x in the above expression.

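For example (the choices of f, ρ = 2, and a = 0.5 are arbitrary):

using ProximalOperators

f = NormL1()
g = Regularize(f, 2.0, 0.5)    # g(x) = norm(x, 1) + (2/2) * norm(x .- 0.5)^2
x = randn(5)
gx = g(x)
y, gy = prox(g, x, 1.0)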

Pre- and post-transformations

ProximalOperators.Precompose (Type)
Precompose(f, L, μ, b)

Return the function

\[g(x) = f(Lx + b)\]

where $f$ is a convex function and $L$ is a linear mapping satisfying $LL^* = μI$ for some $μ > 0$. Furthermore, for the prox of $g$ to be computable, either $f$ must be separable or μ must be a scalar.

Parameter L defines the mapping $L$ through the mul! method. Therefore L can be, for example, an AbstractMatrix, but it does not need to be.

In this case, prox and prox! are computed according to Prop. 24.14 in Bauschke, Combettes "Convex Analysis and Monotone Operator Theory in Hilbert Spaces", 2nd edition, 2016. The same result is Prop. 23.32 in the 1st edition of the same book.

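As a sketch, take L = [I I], a tight frame with LL* = 2I, so μ = 2 (the choice of f = NormL2 and the sizes are only illustrative):

using ProximalOperators
using LinearAlgebra

n = 4
L = [Matrix(1.0I, n, n) Matrix(1.0I, n, n)]   # n × 2n matrix with L * L' == 2I
f = NormL2()
g = Precompose(f, L, 2.0)      # g(x) = norm(L * x), with μ = 2 and b = 0
x = randn(2n)
gx = g(x)
y, gy = prox(g, x, 1.0)
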
ProximalOperators.PrecomposeDiagonal (Type)
PrecomposeDiagonal(f, a, b)

Return the function

\[g(x) = f(\mathrm{diag}(a)x + b)\]

For the prox of $g$ to be computable, function $f$ must be convex and separable, or a must be a scalar. Parameters a and b can be arrays of multiple dimensions, matching the shape/size of the input x that will be provided to the function: the above expression for $g$ should be read as g(x) = f(a.*x + b).

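For example, with scalar a and b (the values are arbitrary):

using ProximalOperators

f = NormL1()
g = PrecomposeDiagonal(f, 2.0, 1.0)   # g(x) = norm(2 .* x .+ 1, 1)
x = randn(5)
gx = g(x)
y, gy = prox(g, x, 1.0)
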
ProximalOperators.Tilt (Type)
Tilt(f, a, b=0.0)

Given function f, an array a, and an optional constant b, return the function

\[g(x) = f(x) + \langle a, x \rangle + b.\]

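For example (f, a, and b chosen arbitrarily):

using ProximalOperators

f = NormL1()
a = randn(5)
g = Tilt(f, a, 1.0)    # g(x) = norm(x, 1) + dot(a, x) + 1
x = randn(5)
gx = g(x)
y, gy = prox(g, x, 1.0)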