Functions and constraints

Once an expression is created, it is possible to create the terms defining the optimization problem. These can consist of smooth functions, nonsmooth functions, inequality constraints, or equality constraints.

Smooth functions

StructuredOptimization.ls — Function
ls(x::AbstractExpression)

Returns the squared norm (least squares) of x:

\[f (\mathbf{x}) = \frac{1}{2} \| \mathbf{x} \|^2\]

(shorthand for 1/2*norm(x)^2).

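As a sanity check, the least-squares value can be computed directly in plain Julia (this sketch only illustrates the formula; it does not use StructuredOptimization itself):

```julia
using LinearAlgebra

# f(x) = 1/2 * ||x||^2, the least-squares penalty
ls_value(x) = 0.5 * norm(x)^2

ls_value([3.0, 4.0])  # 0.5 * 5^2 = 12.5
```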
StructuredOptimization.huberloss — Function
huberloss(x::AbstractExpression, ρ=1.0)

Applies the Huber loss function:

\[f(\mathbf{x}) = \begin{cases} \tfrac{1}{2}\| \mathbf{x} \|^2 & \text{if} \ \| \mathbf{x} \| \leq \rho \\ \rho (\| \mathbf{x} \| - \tfrac{\rho}{2}) & \text{otherwise}. \end{cases}\]
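The piecewise definition above can be sketched in plain Julia (an illustration of the formula, not the package implementation):

```julia
using LinearAlgebra

# Huber loss on the norm of x: quadratic for small ||x||, linear beyond ρ
function huber(x, ρ=1.0)
    r = norm(x)
    r <= ρ ? 0.5 * r^2 : ρ * (r - ρ / 2)
end

huber([0.3, 0.4])  # ||x|| = 0.5 ≤ 1, quadratic branch: 0.125
huber([3.0, 4.0])  # ||x|| = 5 > 1, linear branch: 1 * (5 - 0.5) = 4.5
```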
StructuredOptimization.sqrhingeloss — Function
sqrhingeloss(x::AbstractExpression, y::Array)

Applies the squared Hinge loss function

\[f( \mathbf{x} ) = \sum_{i} \max\{0, 1 - y_i x_i \}^2,\]

where y is an array containing $y_i$.

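Written out in plain Julia, the formula reads (a sketch of the math, not the package code):

```julia
# Squared hinge loss: quadratic penalty on margins y_i * x_i below 1
sqrhinge(x, y) = sum(max.(0, 1 .- y .* x) .^ 2)

sqrhinge([0.5, -2.0, 3.0], [1.0, -1.0, 1.0])  # only the first margin is violated: 0.5^2 = 0.25
```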
StructuredOptimization.crossentropy — Function
crossentropy(x::AbstractExpression, y::Array)

Applies the cross entropy loss function:

\[f(\mathbf{x}) = -\frac{1}{N} \sum_{i=1}^{N} \left( y_i \log (x_i)+(1-y_i) \log (1-x_i) \right),\]

where y is an array of length $N$ containing $y_i$, with $0 \leq y_i \leq 1$.

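The averaged binary cross entropy can be checked numerically (a plain-Julia sketch of the formula, where the $x_i$ are predicted probabilities):

```julia
# Binary cross entropy, averaged over the N samples
crossent(x, y) = -sum(y .* log.(x) .+ (1 .- y) .* log.(1 .- x)) / length(x)

crossent([0.9, 0.1], [1.0, 0.0])  # -(log(0.9) + log(0.9)) / 2 ≈ 0.10536
```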
StructuredOptimization.logisticloss — Function
logisticloss(x::AbstractExpression, y::AbstractArray)

Applies the logistic loss function:

\[f(\mathbf{x}) = \sum_{i} \log(1+ \exp(-y_i x_i)),\]

where y is an array containing $y_i$.

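In plain Julia the formula amounts to (an illustrative sketch, not the package implementation):

```julia
# Logistic loss: sum of log(1 + exp(-y_i * x_i))
logloss(x, y) = sum(log.(1 .+ exp.(-y .* x)))

logloss([0.0], [1.0])  # log(2) ≈ 0.6931
```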
LinearAlgebra.dot — Function
dot(c::AbstractVector, x::AbstractExpression)

Applies the function:

\[f(\mathbf{x}) = \mathbf{c}^{T}\mathbf{x}.\]

Nonsmooth functions

LinearAlgebra.norm — Function
norm(x::AbstractExpression, p=2, [q,] [dim=1])

Returns the norm of x.

Supported norms:

  • p = 0 $l_0$-pseudo-norm

  • p = 1 $l_1$-norm

  • p = 2 $l_2$-norm

  • p = Inf $l_{\infty}$-norm

  • p = * nuclear norm

  • p = 2, q = 1 $l_{2,1}$ mixed norm (aka Sum-of-$l_2$-norms)

\[f(\mathbf{X}) = \sum_i \| \mathbf{x}_i \|\]

where $\mathbf{x}_i$ is the $i$-th column if dim == 1 (or row if dim == 2) of $\mathbf{X}$.

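The $l_{2,1}$ mixed norm can be spelled out directly (a plain-Julia sketch of the formula for dim == 1, i.e. summing column-wise $l_2$ norms):

```julia
using LinearAlgebra

# l_{2,1} mixed norm: sum of the l2 norms of the columns of X
norm21(X) = sum(norm(X[:, j]) for j in 1:size(X, 2))

X = [3.0 0.0;
     4.0 1.0]
norm21(X)  # norm([3, 4]) + norm([0, 1]) = 5 + 1 = 6
```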
Base.maximum — Function
maximum(x::AbstractExpression)

Applies the function:

\[f(\mathbf{x}) = \max \{x_i : i = 1,\ldots, n \}.\]
StructuredOptimization.hingeloss — Function
hingeloss(x::AbstractExpression, y::Array)

Applies the Hinge loss function

\[f( \mathbf{x} ) = \sum_{i} \max\{0, 1 - y_i x_i \},\]

where y is an array containing $y_i$.

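As with the squared variant above, the hinge loss can be written out in plain Julia (a sketch of the formula only):

```julia
# Hinge loss: linear penalty on margins y_i * x_i below 1
hinge(x, y) = sum(max.(0, 1 .- y .* x))

hinge([0.5, -2.0, 3.0], [1.0, -1.0, 1.0])  # 0.5 + 0 + 0 = 0.5
```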

Inequality constraints

Base.:<= — Function

Inequality constraints

Norm inequality constraints

  • norm(x::AbstractExpression, 0) <= n::Integer

    $\mathrm{nnz}(\mathbf{x}) \leq n$

  • norm(x::AbstractExpression, 1) <= r::Number

$\sum_i | x_i | \leq r$

  • norm(x::AbstractExpression, 2) <= r::Number

    $\| \mathbf{x} \| \leq r$

  • norm(x::AbstractExpression, Inf) <= r::Number

$\max \{ |x_1|, |x_2|, \dots \} \leq r$

Box inequality constraints

  • x::AbstractExpression <= u::Union{AbstractArray, Real}

    $x_i \leq u_i$

  • x::AbstractExpression >= l::Union{AbstractArray, Real}

    $x_i \geq l_i$

    Notice that the notation x in [l,u] is also possible.

Rank inequality constraints

  • rank(X::AbstractExpression) <= n::Integer

$\mathrm{rank}(\mathbf{X}) \leq n$

    Notice that the expression X must have a codomain with dimension equal to 2.

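Putting the syntax above together, constraints might be built as follows (a sketch assuming StructuredOptimization is loaded; it only constructs constraint terms, it does not solve anything):

```julia
using StructuredOptimization

x = Variable(5)

c1 = norm(x, 0) <= 2      # at most 2 nonzero entries
c2 = norm(x, 2) <= 1.0    # Euclidean ball of radius 1
c3 = x <= ones(5)         # elementwise upper bound
```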

Equality constraints

Base.:== — Function

Equality constraints

Affine space constraint

  • ex == b::Union{Real,AbstractArray}

Requires the expression to be affine.

    Example

    julia> A,b  = randn(10,5), randn(10);
    
    julia> x = Variable(5);
    
    julia> A*x == b

Norm equality constraint

  • norm(x::AbstractExpression) == r::Number

    $\| \mathbf{x} \| = r$

Binary constraint

  • x::AbstractExpression == (l, u)

    $\mathbf{x} = \mathbf{l}$ or $\mathbf{x} = \mathbf{u}$


Smoothing

Sometimes the optimization problem might involve nonsmooth terms whose proximal mappings cannot be computed efficiently. Such terms can be smoothed by means of the Moreau envelope.

StructuredOptimization.smooth — Function
smooth(t::Term, gamma = 1.0)

Smooths the nonsmooth term t using Moreau envelope:

\[f^{\gamma}(\mathbf{x}) = \min_{\mathbf{z}} \left\{ f(\mathbf{z}) + \tfrac{1}{2\gamma}\|\mathbf{z}-\mathbf{x}\|^2 \right\}.\]

Example

julia> x = Variable(4)
Variable(Float64, (4,))

julia> t = smooth(norm(x,1))
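To see the envelope in action without the package, it can be evaluated by brute force for $f = |\cdot|$ in one dimension; with $\gamma = 1$ it reproduces the scalar Huber function (a numerical sketch, not how smooth is implemented):

```julia
# Moreau envelope of f(z) = |z|, approximated by a fine grid search over z
moreau_abs(x, γ=1.0) = minimum(abs(z) + (z - x)^2 / (2γ) for z in -10:1e-3:10)

# Closed form: the Moreau envelope of |.| is the scalar Huber function
huber_scalar(x, γ=1.0) = abs(x) <= γ ? x^2 / (2γ) : abs(x) - γ / 2

moreau_abs(2.0)    # ≈ 1.5 (up to grid resolution)
huber_scalar(2.0)  # 1.5
```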

Duality

In some cases it is more convenient to solve the dual problem instead of the primal problem. It is possible to convert a problem into its dual by means of the convex conjugate.

See the Total Variation demo for an example of this procedure.

Base.conj — Function
conj(t::Term)

Returns the convex conjugate transform of t:

\[f^*(\mathbf{x}) = \sup_{\mathbf{y}} \{ \langle \mathbf{y}, \mathbf{x} \rangle - f(\mathbf{y}) \}.\]

Example

julia> x = Variable(4)
Variable(Float64, (4,))

julia> t = conj(norm(x,1))
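The conjugate can also be checked numerically for $f = |\cdot|$ in one dimension: its conjugate is the indicator of the interval $[-1, 1]$ (zero inside, $+\infty$ outside). A brute-force sketch of the definition, not how conj is implemented:

```julia
# f*(x) = sup_y { x*y - |y| }, approximated on a truncated grid
conj_abs(x) = maximum(x * y - abs(y) for y in -100.0:0.01:100.0)

conj_abs(0.5)  # 0.0: x lies inside the unit interval
conj_abs(2.0)  # 100.0 on this grid; the true supremum is +Inf
```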