Problem types and algorithms

Warning

This page is under construction, and may be incomplete.

Depending on the structure a problem can be reduced to, different types of algorithms apply. The major distinctions are the number of objective terms, whether any of them is differentiable, and whether any of them is composed with a linear mapping (which in general complicates evaluating the proximal mapping). Based on this, problems, and the algorithms that apply to them, can be split into three categories, covered in the sections below.

For each category, the list of available algorithms is given, with links to the documentation of their constructors and of their underlying iterator types.

Two-terms: $f + g$

This is the most popular model, and by far the most thoroughly studied: a wide range of algorithms exists to solve problems in this form. A minimal usage sketch is given after the table below.

| Algorithm | Assumptions | Oracle | Implementation | References |
| --- | --- | --- | --- | --- |
| Proximal gradient | $f$ smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | ForwardBackward | Pierre-Louis Lions, Bertrand Mercier (1979) |
| Douglas-Rachford | | $\operatorname{prox}_{\gamma f}$, $\operatorname{prox}_{\gamma g}$ | DouglasRachford | Jonathan Eckstein, Dimitri P Bertsekas (1992) |
| Fast proximal gradient | $f$ convex, smooth, $g$ convex | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | FastForwardBackward | Paul Tseng (2008), Amir Beck, Marc Teboulle (2009) |
| PANOC | $f$ smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | PANOC | Lorenzo Stella, Andreas Themelis, Pantelis Sopasakis, Panagiotis Patrinos (2017) |
| ZeroFPR | $f$ smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | ZeroFPR | Andreas Themelis, Lorenzo Stella, Panagiotis Patrinos (2018) |
| Douglas-Rachford line-search | $f$ smooth | $\operatorname{prox}_{\gamma f}$, $\operatorname{prox}_{\gamma g}$ | DRLS | Andreas Themelis, Lorenzo Stella, Panagiotis Patrinos (2020) |
| PANOC+ | $f$ locally smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | PANOCplus | Alberto De Marchi, Andreas Themelis (2021) |
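The following is a minimal usage sketch (not taken from the package documentation) for this class of problems: a lasso problem solved with FastForwardBackward. It assumes the objective terms LeastSquares and NormL1 from ProximalOperators.jl, and that calling the constructed algorithm returns the identified solution together with the iteration count.

```julia
using ProximalAlgorithms, ProximalOperators
using Random

Random.seed!(0)
A, b = randn(30, 50), randn(30)       # random, underdetermined least-squares data
f = LeastSquares(A, b)                # smooth term: 0.5*||A*x - b||^2 (gradient available)
g = NormL1(1.0)                       # nonsmooth term: ||x||_1 (proximal mapping available)

# assumption: calling the algorithm returns (solution, number of iterations)
ffb = ProximalAlgorithms.FastForwardBackward(maxit=5_000, tol=1e-6, verbose=false)
x, it = ffb(x0=zeros(50), f=f, g=g)
println("stopped after $it iterations, nonzeros in x: ", count(!iszero, x))
```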
ProximalAlgorithms.ForwardBackwardFunction
ForwardBackward(; <keyword-arguments>)

Constructs the forward-backward splitting algorithm [1].

This algorithm solves optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

The returned object has type IterativeAlgorithm{ForwardBackwardIteration}, and can be called with the problem's arguments to trigger its solution.

See also: ForwardBackwardIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when the iterations should stop
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: number of iterations between displays of the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the ForwardBackwardIteration constructor upon call

References

  1. Lions, Mercier, “Splitting algorithms for the sum of two nonlinear operators,” SIAM Journal on Numerical Analysis, vol. 16, pp. 964–979 (1979).
source
ProximalAlgorithms.ForwardBackwardIterationType
ForwardBackwardIteration(; <keyword-arguments>)

Iterator implementing the forward-backward splitting algorithm [1].

This iterator solves optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

See also: ForwardBackward.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of f.
  • gamma=nothing: stepsize to use; if not set, it defaults to 1/Lf whenever Lf is set.
  • adaptive=false: if set to true, the stepsize gamma is adjusted adaptively during the iterations.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.

References

  1. Lions, Mercier, “Splitting algorithms for the sum of two nonlinear operators,” SIAM Journal on Numerical Analysis, vol. 16, pp. 964–979 (1979).
source
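Since additional keyword arguments to the constructed algorithm are forwarded to ForwardBackwardIteration, iteration-level options such as Lf or adaptive can be supplied when the algorithm is invoked. A hedged sketch, reusing the ProximalOperators.jl terms from the example above:

```julia
using ProximalAlgorithms, ProximalOperators, LinearAlgebra

A, b = randn(20, 40), randn(20)
f, g = LeastSquares(A, b), NormL1(0.5)

# Supplying Lf (here ||A||^2, the Lipschitz constant of ∇f) lets gamma default to 1/Lf ...
x1, it1 = ProximalAlgorithms.ForwardBackward(tol=1e-6)(x0=zeros(40), f=f, g=g, Lf=opnorm(A)^2)

# ... while adaptive=true lets the stepsize be adjusted when Lf is unknown (assumption:
# these keywords are forwarded to ForwardBackwardIteration, as described above).
x2, it2 = ProximalAlgorithms.ForwardBackward(tol=1e-6)(x0=zeros(40), f=f, g=g, adaptive=true)
```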
ProximalAlgorithms.DouglasRachfordFunction
DouglasRachford(; <keyword-arguments>)

Constructs the Douglas-Rachford splitting algorithm [1].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x).

The returned object has type IterativeAlgorithm{DouglasRachfordIteration}, and can be called with the problem's arguments to trigger its solution.

See also: DouglasRachfordIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when the iterations should stop
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: number of iterations between displays of the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the DouglasRachfordIteration constructor upon call

References

  1. Eckstein, Bertsekas, "On the Douglas-Rachford Splitting Method and the Proximal Point Algorithm for Maximal Monotone Operators", Mathematical Programming, vol. 55, no. 1, pp. 293-318 (1992).
source
ProximalAlgorithms.DouglasRachfordIterationType
DouglasRachfordIteration(; <keyword-arguments>)

Iterator implementing the Douglas-Rachford splitting algorithm [1].

This iterator solves convex optimization problems of the form

minimize f(x) + g(x).

See also: DouglasRachford.

Arguments

  • x0: initial point.
  • f=Zero(): proximable objective term.
  • g=Zero(): proximable objective term.
  • gamma: stepsize to use.

References

  1. Eckstein, Bertsekas, "On the Douglas-Rachford Splitting Method and the Proximal Point Algorithm for Maximal Monotone Operators", Mathematical Programming, vol. 55, no. 1, pp. 293-318 (1992).
source
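A hedged sketch for Douglas-Rachford: both terms are accessed only through their proximal mappings, and, per the iteration's arguments above, the stepsize gamma has no default and must be supplied. LeastSquares and NormL1 (ProximalOperators.jl) both implement a proximal mapping:

```julia
using ProximalAlgorithms, ProximalOperators

A, b = randn(30, 50), randn(30)
f = LeastSquares(A, b)   # accessed through prox_{γf}, not through its gradient
g = NormL1(1.0)

# gamma is required by DouglasRachfordIteration; 1.0 is an arbitrary illustrative value
x, it = ProximalAlgorithms.DouglasRachford(maxit=2_000, tol=1e-6)(x0=zeros(50), f=f, g=g, gamma=1.0)
```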
ProximalAlgorithms.FastForwardBackwardFunction
FastForwardBackward(; <keyword-arguments>)

Constructs the accelerated forward-backward splitting algorithm [1, 2].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

The returned object has type IterativeAlgorithm{FastForwardBackwardIteration}, and can be called with the problem's arguments to trigger its solution.

See also: FastForwardBackwardIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when the iterations should stop
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: number of iterations between displays of the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the FastForwardBackwardIteration constructor upon call

References

  1. Tseng, "On Accelerated Proximal Gradient Methods for Convex-Concave Optimization" (2008).
  2. Beck, Teboulle, "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems", SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202 (2009).
source
ProximalAlgorithms.FastForwardBackwardIterationType
FastForwardBackwardIteration(; <keyword-arguments>)

Iterator implementing the accelerated forward-backward splitting algorithm [1, 2].

This iterator solves convex optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

See also: FastForwardBackward.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • mf=0: convexity modulus of f.
  • Lf=nothing: Lipschitz constant of the gradient of f.
  • gamma=nothing: stepsize, defaults to 1/Lf if Lf is set, and nothing otherwise.
  • adaptive=true: makes gamma adjust adaptively during the iterations; this is the default behavior whenever gamma === nothing.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.
  • extrapolation_sequence=nothing: sequence (iterator) of extrapolation coefficients to use for acceleration.

References

  1. Tseng, "On Accelerated Proximal Gradient Methods for Convex-Concave Optimization" (2008).
  2. Beck, Teboulle, "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems", SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202 (2009).
source
ProximalAlgorithms.PANOCFunction
PANOC(; <keyword-arguments>)

Constructs the PANOC algorithm [1].

This algorithm solves optimization problems of the form

minimize f(Ax) + g(x),

where f is smooth and A is a linear mapping (for example, a matrix).

The returned object has type IterativeAlgorithm{PANOCIteration}, and can be called with the problem's arguments to trigger its solution.

See also: PANOCIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when the iterations should stop
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=10: number of iterations between displays of the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the PANOCIteration constructor upon call

References

  1. Stella, Themelis, Sopasakis, Patrinos, "A simple and efficient algorithm for nonlinear model predictive control", 56th IEEE Conference on Decision and Control (2017).
source
ProximalAlgorithms.PANOCIterationType
PANOCIteration(; <keyword-arguments>)

Iterator implementing the PANOC algorithm [1].

This iterator solves optimization problems of the form

minimize f(Ax) + g(x),

where f is smooth and A is a linear mapping (for example, a matrix).

See also: PANOC.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • A=I: linear operator (e.g. a matrix).
  • g=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of x ↦ f(Ax).
  • gamma=nothing: stepsize to use; if not set, it defaults to 1/Lf whenever Lf is set.
  • adaptive=false: if set to true, the stepsize gamma is adjusted adaptively during the iterations.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.
  • max_backtracks=20: maximum number of line-search backtracks.
  • directions=LBFGS(5): strategy to use to compute line-search directions.

References

  1. Stella, Themelis, Sopasakis, Patrinos, "A simple and efficient algorithm for nonlinear model predictive control", 56th IEEE Conference on Decision and Control (2017).
source
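A hedged sketch of PANOC on a composite smooth term f(Ax) + g(x), passing the linear operator through the A keyword. It assumes Translate(SqrNormL2(), -b) from ProximalOperators.jl represents x ↦ 0.5‖x − b‖², so that f(Ax) = 0.5‖Ax − b‖²:

```julia
using ProximalAlgorithms, ProximalOperators

A, b = randn(40, 80), randn(40)
f = Translate(SqrNormL2(), -b)   # assumption: x ↦ 0.5*||x - b||^2, so f(A*x) = 0.5*||A*x - b||^2
g = NormL1(0.1)

x, it = ProximalAlgorithms.PANOC(tol=1e-6)(x0=zeros(80), f=f, A=A, g=g)
```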
ProximalAlgorithms.ZeroFPRFunction
ZeroFPR(; <keyword-arguments>)

Constructs the ZeroFPR algorithm [1].

This algorithm solves optimization problems of the form

minimize f(Ax) + g(x),

where f is smooth and A is a linear mapping (for example, a matrix).

The returned object has type IterativeAlgorithm{ZeroFPRIteration}, and can be called with the problem's arguments to trigger its solution.

See also: ZeroFPRIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when the iterations should stop
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=10: number of iterations between displays of the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the ZeroFPRIteration constructor upon call

References

  1. Themelis, Stella, Patrinos, "Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone line-search algorithms", SIAM Journal on Optimization, vol. 28, no. 3, pp. 2274-2303 (2018).
source
ProximalAlgorithms.ZeroFPRIterationType
ZeroFPRIteration(; <keyword-arguments>)

Iterator implementing the ZeroFPR algorithm [1].

This iterator solves optimization problems of the form

minimize f(Ax) + g(x),

where f is smooth and A is a linear mapping (for example, a matrix).

See also: ZeroFPR.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • A=I: linear operator (e.g. a matrix).
  • g=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of x ↦ f(Ax).
  • gamma=nothing: stepsize to use; if not set, it defaults to 1/Lf whenever Lf is set.
  • adaptive=false: if set to true, the stepsize gamma is adjusted adaptively during the iterations.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.
  • max_backtracks=20: maximum number of line-search backtracks.
  • directions=LBFGS(5): strategy to use to compute line-search directions.

References

  1. Themelis, Stella, Patrinos, "Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone line-search algorithms", SIAM Journal on Optimization, vol. 28, no. 3, pp. 2274-2303 (2018).
source
ProximalAlgorithms.DRLSFunction
DRLS(; <keyword-arguments>)

Constructs the Douglas-Rachford line-search algorithm [1].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

The returned object has type IterativeAlgorithm{DRLSIteration}, and can be called with the problem's arguments to trigger its solution.

See also: DRLSIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when the iterations should stop
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=10: number of iterations between displays of the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the DRLSIteration constructor upon call

References

  1. Themelis, Stella, Patrinos, "Douglas-Rachford splitting and ADMM for nonconvex optimization: Accelerated and Newton-type linesearch algorithms", arXiv:2005.10230, 2020.
source
ProximalAlgorithms.DRLSIterationType
DRLSIteration(; <keyword-arguments>)

Iterator implementing the Douglas-Rachford line-search algorithm [1].

This iterator solves optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

See also: DRLS.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • mf=nothing: convexity modulus of f.
  • Lf=nothing: Lipschitz constant of the gradient of f.
  • gamma: stepsize to use; by default, it is chosen appropriately based on Lf and mf.
  • max_backtracks=20: maximum number of line-search backtracks.
  • directions=LBFGS(5): strategy to use to compute line-search directions.

References

  1. Themelis, Stella, Patrinos, "Douglas-Rachford splitting and ADMM for nonconvex optimization: Accelerated and Newton-type linesearch algorithms", arXiv:2005.10230, 2020.
source
ProximalAlgorithms.PANOCplusFunction
PANOCplus(; <keyword-arguments>)

Constructs the PANOCplus algorithm [1].

This algorithm solves optimization problems of the form

minimize f(Ax) + g(x),

where f is locally smooth and A is a linear mapping (for example, a matrix).

The returned object has type IterativeAlgorithm{PANOCplusIteration}, and can be called with the problem's arguments to trigger its solution.

See also: PANOCplusIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when the iterations should stop
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=10: number of iterations between displays of the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the PANOCplusIteration constructor upon call

References

  1. De Marchi, Themelis, "Proximal gradient algorithms under local Lipschitz gradient continuity: a convergence and robustness analysis of PANOC", arXiv:2112.13000 (2021).
source
ProximalAlgorithms.PANOCplusIterationType
PANOCplusIteration(; <keyword-arguments>)

Iterator implementing the PANOCplus algorithm [1].

This iterator solves optimization problems of the form

minimize f(Ax) + g(x),

where f is locally smooth and A is a linear mapping (for example, a matrix).

See also: PANOCplus.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • A=I: linear operator (e.g. a matrix).
  • g=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of x ↦ f(Ax).
  • gamma=nothing: stepsize to use; if not set, it defaults to 1/Lf whenever Lf is set.
  • adaptive=false: if set to true, the stepsize gamma is adjusted adaptively during the iterations.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.
  • max_backtracks=20: maximum number of line-search backtracks.
  • directions=LBFGS(5): strategy to use to compute line-search directions.

References

  1. De Marchi, Themelis, "Proximal gradient algorithms under local Lipschitz gradient continuity: a convergence and robustness analysis of PANOC", arXiv:2112.13000 (2021).
source

Three-terms: $f + g + h$

When the objective contains more than one non-differentiable term, the algorithms from the previous section do not in general apply out of the box, since $\operatorname{prox}_{\gamma (g + h)}$ has a closed form only in particular cases. Therefore, ad-hoc iteration schemes have been studied.

| Algorithm | Assumptions | Oracle | Implementation | References |
| --- | --- | --- | --- | --- |
| Davis-Yin | $f$ convex and smooth, $g, h$ convex | $\nabla f$, $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$ | DavisYin | Damek Davis, Wotao Yin (2017) |
ProximalAlgorithms.DavisYinFunction
DavisYin(; <keyword-arguments>)

Constructs the Davis-Yin splitting algorithm [1].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x) + h(x),

where f is smooth.

The returned object has type IterativeAlgorithm{DavisYinIteration}, and can be called with the problem's arguments to trigger its solution.

See also: DavisYinIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when the iterations should stop
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: number of iterations between displays of the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the DavisYinIteration constructor upon call

References

  1. Davis, Yin. "A Three-Operator Splitting Scheme and its Optimization Applications", Set-Valued and Variational Analysis, vol. 25, no. 4, pp. 829–858 (2017).
source
ProximalAlgorithms.DavisYinIterationType
DavisYinIteration(; <keyword-arguments>)

Iterator implementing the Davis-Yin splitting algorithm [1].

This iterator solves convex optimization problems of the form

minimize f(x) + g(x) + h(x),

where f is smooth.

See also: DavisYin.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • h=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of f.
  • gamma=nothing: stepsize to use; if not set, it defaults to 1/Lf whenever Lf is set.

References

  1. Davis, Yin. "A Three-Operator Splitting Scheme and its Optimization Applications", Set-Valued and Variational Analysis, vol. 25, no. 4, pp. 829-858 (2017).
source
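A hedged sketch of a three-term problem solved with Davis-Yin: nonnegative sparse least squares, where the two nonsmooth terms are handled through their separate proximal mappings. NormL1 and IndNonnegative are assumed from ProximalOperators.jl:

```julia
using ProximalAlgorithms, ProximalOperators, LinearAlgebra

A, b = randn(30, 60), randn(30)
f = LeastSquares(A, b)   # smooth term
g = NormL1(0.5)          # proximable term
h = IndNonnegative()     # proximable term: indicator of the nonnegative orthant

# Lf = ||A||^2 so that gamma can default to 1/Lf
x, it = ProximalAlgorithms.DavisYin(tol=1e-6)(x0=zeros(60), f=f, g=g, h=h, Lf=opnorm(A)^2)
```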

Primal-dual: $f + g + h \circ L$

When a function $h$ is composed with a linear operator $L$, the proximal operator of $h \circ L$ does not have a closed form in general. For this reason, specific algorithms, known as "primal-dual" splitting schemes, are often applied to this model.

| Algorithm | Assumptions | Oracle | Implementation | References |
| --- | --- | --- | --- | --- |
| Chambolle-Pock | $f\equiv 0$, $g, h$ convex, $L$ linear operator | $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$, $L$, $L^*$ | ChambollePock | Antonin Chambolle, Thomas Pock (2011) |
| Vu-Condat | $f$ convex and smooth, $g, h$ convex, $L$ linear operator | $\nabla f$, $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$, $L$, $L^*$ | VuCondat | Bằng Công Vũ (2013), Laurent Condat (2013) |
| AFBA | $f$ convex and smooth, $g, h$ convex, $L$ linear operator | $\nabla f$, $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$, $L$, $L^*$ | AFBA | Puya Latafat, Panagiotis Patrinos (2017) |
ProximalAlgorithms.ChambollePockFunction
ChambollePock(; <keyword-arguments>)

Constructs the Chambolle-Pock primal-dual algorithm [1].

This algorithm solves convex optimization problems of the form

minimize g(x) + h(L x),

where g and h are possibly nonsmooth, and L is a linear mapping.

The returned object has type IterativeAlgorithm{AFBAIteration}, and can be called with the problem's arguments to trigger its solution.

See also: ChambollePockIteration, AFBAIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol=1e-5: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when the iterations should stop
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: number of iterations between displays of the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the AFBAIteration constructor upon call

References

  1. Chambolle, Pock, "A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging", Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120-145 (2011).
source
ProximalAlgorithms.ChambollePockIterationFunction
ChambollePockIteration(; <keyword-arguments>)

Iterator implementing the Chambolle-Pock primal-dual algorithm [1].

This iterator solves convex optimization problems of the form

minimize g(x) + h(L x),

where g and h are possibly nonsmooth, and L is a linear mapping.

See also: AFBAIteration, ChambollePock.

This iteration is equivalent to AFBAIteration with theta=2, f=Zero(), l=IndZero(); for all other arguments see AFBAIteration.

References

  1. Chambolle, Pock, "A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging", Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120-145 (2011).
source
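A hedged sketch of the Chambolle-Pock model g(x) + h(Lx): one-dimensional total-variation denoising, with the data term in g (accessed through its prox only) and the regularizer composed with a finite-difference operator L. Translate(SqrNormL2(), -b) is assumed (ProximalOperators.jl) to represent x ↦ 0.5‖x − b‖²; stepsizes are left at their defaults and can be overridden through gamma1/gamma2 if needed:

```julia
using ProximalAlgorithms, ProximalOperators

n = 100
b = sign.(sin.(range(0, 4π, length=n))) .+ 0.1 .* randn(n)              # noisy piecewise-constant signal
L = [j == i ? -1.0 : (j == i + 1 ? 1.0 : 0.0) for i in 1:n-1, j in 1:n]  # forward-difference operator

g = Translate(SqrNormL2(), -b)   # assumption: x ↦ 0.5*||x - b||^2, handled through its prox
h = NormL1(1.0)                  # total-variation penalty, composed with L

# x0/y0: initial primal and dual points (the dual variable lives in the range of L)
sol, it = ProximalAlgorithms.ChambollePock(tol=1e-5)(x0=zeros(n), y0=zeros(n - 1), g=g, h=h, L=L)
```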
ProximalAlgorithms.VuCondatFunction
VuCondat(; <keyword-arguments>)

Constructs the Vũ-Condat primal-dual algorithm [1, 2].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x) + (h □ l)(L x),

where f is smooth, g and h are possibly nonsmooth, and l is strongly convex. The symbol □ denotes infimal convolution, and L is a linear mapping.

The returned object has type IterativeAlgorithm{AFBAIteration}, and can be called with the problem's arguments to trigger its solution.

See also: VuCondatIteration, AFBAIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol=1e-5: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when the iterations should stop
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: number of iterations between displays of the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the AFBAIteration constructor upon call

References

  1. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", Journal of Optimization Theory and Applications, vol. 158, no. 2, pp. 460-479 (2013).
  2. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators", Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681 (2013).
source
ProximalAlgorithms.VuCondatIterationFunction
VuCondatIteration(; <keyword-arguments>)

Iterator implementing the Vũ-Condat primal-dual algorithm [1, 2].

This iterator solves convex optimization problems of the form

minimize f(x) + g(x) + (h □ l)(L x),

where f is smooth, g and h are possibly nonsmooth, and l is strongly convex. The symbol □ denotes infimal convolution, and L is a linear mapping.

This iteration is equivalent to AFBAIteration with theta=2; for all other arguments see AFBAIteration.

See also: AFBAIteration, VuCondat.

References

  1. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", Journal of Optimization Theory and Applications, vol. 158, no. 2, pp. 460-479 (2013).
  2. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators", Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681 (2013).
source
ProximalAlgorithms.AFBAFunction
AFBA(; <keyword-arguments>)

Constructs the asymmetric forward-backward-adjoint algorithm (AFBA, see [1]).

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x) + (h □ l)(L x),

where f is smooth, g and h are possibly nonsmooth, and l is strongly convex. The symbol □ denotes infimal convolution, and L is a linear mapping.

The returned object has type IterativeAlgorithm{AFBAIteration}, and can be called with the problem's arguments to trigger its solution.

See also: AFBAIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol=1e-5: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when the iterations should stop
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: number of iterations between displays of the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the AFBAIteration constructor upon call

References

  1. Latafat, Patrinos, "Asymmetric forward-backward-adjoint splitting for solving monotone inclusions involving three operators", Computational Optimization and Applications, vol. 68, no. 1, pp. 57-93 (2017).
  2. Latafat, Patrinos, "Primal-dual proximal algorithms for structured convex optimization: a unifying framework", In Large-Scale and Distributed Optimization, Giselsson and Rantzer, Eds. Springer International Publishing, pp. 97-120 (2018).
source
ProximalAlgorithms.AFBAIterationType
AFBAIteration(; <keyword-arguments>)

Iterator implementing the asymmetric forward-backward-adjoint algorithm (AFBA, see [1]).

This iterator solves convex optimization problems of the form

minimize f(x) + g(x) + (h □ l)(L x),

where f is smooth, g and h are possibly nonsmooth, and l is strongly convex. The symbol □ denotes infimal convolution, and L is a linear mapping.

Points x0 and y0 are the initial primal and dual iterates, respectively. If unspecified, functions f, g, and h default to the identically zero function, l defaults to the indicator of the set {0}, and L defaults to the identity. Important keyword arguments, in case f and l are set, are the Lipschitz constants beta_f and beta_l (see below).

The iterator implements Algorithm 3 of [1] with constant stepsize (α_n=λ) for several prominent special cases:

  1. θ = 2: corresponds to the Vũ-Condat algorithm [3, 4].
  2. θ = 1, μ=1
  3. θ = 0, μ=1
  4. θ ∈ [0,∞), μ=0

See [2, Section 5.2] and [1, Figure 1] for stepsize conditions, special cases, and relation to other algorithms.

See also: AFBA.

Arguments

  • x0: initial primal point.
  • y0: initial dual point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • h=Zero(): proximable objective term.
  • l=IndZero(): strongly convex function.
  • L=I: linear operator (e.g. a matrix).
  • beta_f=0: Lipschitz constant of the gradient of f.
  • beta_l=0: Lipschitz constant of the gradient of the conjugate of l.
  • theta=1: nonnegative algorithm parameter.
  • mu=1: algorithm parameter in the range [0,1].
  • gamma1: primal stepsize (see [1] for the default choice).
  • gamma2: dual stepsize (see [1] for the default choice).

References

  1. Latafat, Patrinos, "Asymmetric forward-backward-adjoint splitting for solving monotone inclusions involving three operators", Computational Optimization and Applications, vol. 68, no. 1, pp. 57-93 (2017).
  2. Latafat, Patrinos, "Primal-dual proximal algorithms for structured convex optimization: a unifying framework", In Large-Scale and Distributed Optimization, Giselsson and Rantzer, Eds. Springer International Publishing, pp. 97-120 (2018).
  3. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", Journal of Optimization Theory and Applications, vol. 158, no. 2, pp. 460-479 (2013).
  4. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators", Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681 (2013).
source
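A hedged sketch of the full AFBA model f(x) + g(x) + (h □ l)(Lx), with l left at its default IndZero() so that h □ l reduces to h: a box-constrained least-squares fit with a total-variation penalty. beta_f is the Lipschitz constant of ∇f, as requested above when f is set; LeastSquares, IndBox and NormL1 are assumed from ProximalOperators.jl:

```julia
using ProximalAlgorithms, ProximalOperators, LinearAlgebra

m, n = 40, 100
A, b = randn(m, n), randn(m)
D = [j == i ? -1.0 : (j == i + 1 ? 1.0 : 0.0) for i in 1:n-1, j in 1:n]  # finite differences

f = LeastSquares(A, b)   # smooth term, with beta_f = ||A||^2 below
g = IndBox(0.0, 1.0)     # proximable term: box constraint
h = NormL1(0.1)          # proximable term, composed with the linear operator D

sol, it = ProximalAlgorithms.AFBA(tol=1e-5)(
    x0=zeros(n), y0=zeros(n - 1), f=f, g=g, h=h, L=D, beta_f=opnorm(A)^2,
)
```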