Solvers
Minimizing a function
StructuredOptimization.@minimize — Macro
@minimize cost [st ctr] [with slv_opt]
Minimize a given problem with cost function cost, constraints ctr, and solver options slv_opt.
Example
julia> using StructuredOptimization
julia> A, b, x = randn(10,4), randn(10), Variable(4);
julia> @minimize ls(A*x-b) + 0.5*norm(x);
julia> ~x # access array with solution
julia> @minimize ls(A*x-b) st x >= 0.;
julia> ~x # access array with solution
julia> @minimize ls(A*x-b) st norm(x) == 2.0 with ForwardBackward(fast=true);
julia> ~x # access array with solution
Returns as output a tuple containing the optimization variables and the number of iterations spent by the solver algorithm.
By default, warm-starting is always enabled. For example, if two problems involving the same variables are solved consecutively, the second one is automatically warm-started with the solution of the first, because the variables are always linked to their respective data vectors. To avoid this, the optimization variables need to be manually re-initialized before solving the second problem, e.g. to a vector of zeros: ~x .= 0.0.
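For instance, a second solve can be forced to start from scratch by zeroing the variable first. A minimal sketch, reusing A, b, and x from the example above:
julia> @minimize ls(A*x - b) + 0.5*norm(x);
julia> ~x .= 0.0; # re-initialize the variable to disable warm-starting
julia> @minimize ls(A*x - b) st x >= 0.; # this solve now starts from zero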
Specifying solver and options
You can pick the algorithm to use as a Solver object from the ProximalAlgorithms.jl package. Currently, the following algorithms are supported.
ProximalAlgorithms.ForwardBackward — Type
ForwardBackward([gamma, adaptive, fast, maxit, tol, verbose, freq])
Instantiate the Forward-Backward splitting algorithm (see [1, 2]) for solving optimization problems of the form
minimize f(Ax) + g(x),
where f is smooth and A is a linear mapping (for example, a matrix). If solver = ForwardBackward(args...), then the above problem is solved with
solver(x0, [f, A, g, L])
Optional keyword arguments:
gamma::Real (default: nothing), the stepsize to use; defaults to 1/L if not set (but L is).
adaptive::Bool (default: false), if true, forces the method stepsize to be adaptively adjusted.
fast::Bool (default: false), if true, uses Nesterov acceleration.
maxit::Integer (default: 10000), maximum number of iterations to perform.
tol::Real (default: 1e-8), absolute tolerance on the fixed-point residual.
verbose::Bool (default: true), whether or not to print information during the iterations.
freq::Integer (default: 10), frequency of verbosity.
If gamma is not specified at construction time, the following keyword argument can be used to set the stepsize parameter:
L::Real (default: nothing), the Lipschitz constant of the gradient of x ↦ f(Ax).
References:
[1] Tseng, "On Accelerated Proximal Gradient Methods for Convex-Concave Optimization" (2008).
[2] Beck, Teboulle, "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems", SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202 (2009).
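As a usage sketch, the keyword options above can be set at construction time and the resulting solver passed to @minimize through the with keyword (reusing A, b, and x from the earlier example; the option values are arbitrary illustrative choices):
julia> @minimize ls(A*x - b) + 0.5*norm(x) with ForwardBackward(fast = true, tol = 1e-6, verbose = false); # illustrative option values
julia> ~x # access array with solution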
ProximalAlgorithms.ZeroFPR — Type
ZeroFPR([gamma, adaptive, memory, maxit, tol, verbose, freq, alpha, beta])
Instantiate the ZeroFPR algorithm (see [1]) for solving optimization problems of the form
minimize f(Ax) + g(x),
where f is smooth and A is a linear mapping (for example, a matrix). If solver = ZeroFPR(args...), then the above problem is solved with
solver(x0, [f, A, g, L])
Optional keyword arguments:
gamma::Real (default: nothing), the stepsize to use; defaults to alpha/L if not set (but L is).
adaptive::Bool (default: false), if true, forces the method stepsize to be adaptively adjusted even if L is provided (this behaviour is always enforced if L is not provided).
memory::Integer (default: 5), memory parameter for L-BFGS.
maxit::Integer (default: 1000), maximum number of iterations to perform.
tol::Real (default: 1e-8), absolute tolerance on the fixed-point residual.
verbose::Bool (default: true), whether or not to print information during the iterations.
freq::Integer (default: 10), frequency of verbosity.
alpha::Real (default: 0.95), stepsize to inverse-Lipschitz-constant ratio; should be in (0, 1).
beta::Real (default: 0.5), sufficient decrease parameter; should be in (0, 1).
If gamma is not specified at construction time, the following keyword argument can be used to set the stepsize parameter:
L::Real (default: nothing), the Lipschitz constant of the gradient of x ↦ f(Ax).
References:
[1] Themelis, Stella, Patrinos, "Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone line-search algorithms", SIAM Journal on Optimization, vol. 28, no. 3, pp. 2274–2303 (2018).
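A usage sketch analogous to the one above (reusing A, b, and x; the memory and tolerance values are arbitrary illustrative choices):
julia> @minimize ls(A*x - b) + 0.5*norm(x) with ZeroFPR(memory = 10, tol = 1e-6); # illustrative option values
julia> ~x # access array with solution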
ProximalAlgorithms.PANOC — Type
PANOC([gamma, adaptive, memory, maxit, tol, verbose, freq, alpha, beta])
Instantiate the PANOC algorithm (see [1]) for solving optimization problems of the form
minimize f(Ax) + g(x),
where f is smooth and A is a linear mapping (for example, a matrix). If solver = PANOC(args...), then the above problem is solved with
solver(x0, [f, A, g, L])
Optional keyword arguments:
gamma::Real (default: nothing), the stepsize to use; defaults to alpha/L if not set (but L is).
adaptive::Bool (default: false), if true, forces the method stepsize to be adaptively adjusted even if L is provided (this behaviour is always enforced if L is not provided).
memory::Integer (default: 5), memory parameter for L-BFGS.
maxit::Integer (default: 1000), maximum number of iterations to perform.
tol::Real (default: 1e-8), absolute tolerance on the fixed-point residual.
verbose::Bool (default: true), whether or not to print information during the iterations.
freq::Integer (default: 10), frequency of verbosity.
alpha::Real (default: 0.95), stepsize to inverse-Lipschitz-constant ratio; should be in (0, 1).
beta::Real (default: 0.5), sufficient decrease parameter; should be in (0, 1).
If gamma is not specified at construction time, the following keyword argument can be used to set the stepsize parameter:
L::Real (default: nothing), the Lipschitz constant of the gradient of x ↦ f(Ax).
References:
[1] Stella, Themelis, Sopasakis, Patrinos, "A simple and efficient algorithm for nonlinear model predictive control", 56th IEEE Conference on Decision and Control (2017).
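A usage sketch analogous to the ones above (reusing A, b, and x; the option values are arbitrary illustrative choices):
julia> @minimize ls(A*x - b) + 0.5*norm(x) with PANOC(alpha = 0.9, maxit = 2000); # illustrative option values
julia> ~x # access array with solution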
Parse and solve
The macro @minimize automatically parses and solves the problem. An alternative syntax is given by the functions problem and solve.
StructuredOptimization.problem — Function
problem(terms...)
Constructs a problem.
Example
julia> x = Variable(4)
Variable(Float64, (4,))
julia> A, b = randn(10,4), randn(10);
julia> p = problem(ls(A*x-b), norm(x) <= 1)
StructuredOptimization.solve — Function
solve(terms::Tuple, solver::ForwardBackwardSolver)
Takes as input a tuple containing the terms defining the problem and the solver options.
Solves the problem, returning a tuple containing the iterations taken and the built solver.
Example
julia> x = Variable(4)
Variable(Float64, (4,))
julia> A, b = randn(10,4), randn(10);
julia> p = problem(ls(A*x - b), norm(x) <= 1);
julia> solve(p, ForwardBackward());
julia> ~x
Once again, the Solver object is to be picked from the ProximalAlgorithms.jl package.
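For example, the same parsed problem can be handed to a different algorithm simply by swapping the solver object. A sketch reusing the problem p and variable x constructed above (the tolerance is an arbitrary illustrative choice):
julia> solve(p, ZeroFPR(tol = 1e-6)); # any supported solver object works here
julia> ~x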
References
[1] Tseng, "On Accelerated Proximal Gradient Methods for Convex-Concave Optimization" (2008).
[2] Beck, Teboulle, "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems", SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202 (2009).
[3] Themelis, Stella, Patrinos, "Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone line-search algorithms", arXiv:1606.06256 (2016).
[4] Stella, Themelis, Sopasakis, Patrinos, "A simple and efficient algorithm for nonlinear model predictive control", 56th IEEE Conference on Decision and Control (2017).