Problem types and algorithms
This page is under construction, and may be incomplete.
Depending on the structure a problem can be reduced to, different types of algorithms apply. The major distinctions are the number of objective terms, whether any of them is differentiable, and whether any of them is composed with a linear mapping (which in general complicates evaluating the proximal mapping). Based on this, problems, and the algorithms that apply to them, can be split into the three categories described below.
For each category, the list of available algorithms is given, with links to the documentation of their constructors and of their underlying iterator types.
Two-terms: $f + g$
This is the most popular model, and by far the most thoroughly studied; an abundance of algorithms exists to solve problems in this form.
| Algorithm | Assumptions | Oracle | Implementation | References |
| --- | --- | --- | --- | --- |
| Proximal gradient | $f$ smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | ForwardBackward | Pierre-Louis Lions, Bertrand Mercier (1979) |
| Douglas-Rachford | | $\operatorname{prox}_{\gamma f}$, $\operatorname{prox}_{\gamma g}$ | DouglasRachford | Jonathan Eckstein, Dimitri P. Bertsekas (1992) |
| Fast proximal gradient | $f$ convex, smooth, $g$ convex | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | FastForwardBackward | Paul Tseng (2008), Amir Beck, Marc Teboulle (2009) |
| PANOC | $f$ smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | PANOC | Lorenzo Stella, Andreas Themelis, Pantelis Sopasakis, Panagiotis Patrinos (2017) |
| ZeroFPR | $f$ smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | ZeroFPR | Andreas Themelis, Lorenzo Stella, Panagiotis Patrinos (2018) |
| Douglas-Rachford line-search | $f$ smooth | $\operatorname{prox}_{\gamma f}$, $\operatorname{prox}_{\gamma g}$ | DRLS | Andreas Themelis, Lorenzo Stella, Panagiotis Patrinos (2020) |
| PANOC+ | $f$ locally smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | PANOCplus | Alberto De Marchi, Andreas Themelis (2021) |
ProximalAlgorithms.ForwardBackward — Function

    ForwardBackward(; <keyword-arguments>)

Constructs the forward-backward splitting algorithm [1].
This algorithm solves optimization problems of the form

    minimize f(x) + g(x),

where `f` is smooth.
The returned object has type `IterativeAlgorithm{ForwardBackwardIteration}`, and can be called with the problem's arguments to trigger its solution.
See also: `ForwardBackwardIteration`, `IterativeAlgorithm`.
Arguments
- `maxit::Int=10_000`: maximum number of iterations
- `tol::1e-8`: tolerance for the default stopping criterion
- `stop::Function`: termination condition: `stop(::T, state)` should return `true` when to stop the iterations
- `solution::Function`: solution mapping: `solution(::T, state)` should return the identified solution
- `verbose::Bool=false`: whether the algorithm state should be displayed
- `freq::Int=100`: every how many iterations to display the algorithm state
- `display::Function`: display function: `display(::Int, ::T, state)` should display a summary of the iteration state
- `kwargs...`: additional keyword arguments to pass on to the `ForwardBackwardIteration` constructor upon call
References
- Lions, Mercier, “Splitting algorithms for the sum of two nonlinear operators,” SIAM Journal on Numerical Analysis, vol. 16, pp. 964–979 (1979).
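A usage sketch (not taken from the package documentation): the constructed algorithm is called with the problem's terms as keyword arguments matching the `ForwardBackwardIteration` constructor below. The objective terms `LeastSquares` and `NormL1` are assumed to come from the separate ProximalOperators.jl package, and the data is made up.

```julia
using LinearAlgebra: opnorm
using ProximalAlgorithms
using ProximalOperators: LeastSquares, NormL1

A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
b = [1.0, 2.0, 3.0]

f = LeastSquares(A, b)   # smooth term: 0.5 * ||A*x - b||^2
g = NormL1(1.0)          # nonsmooth, prox-friendly term: ||x||_1

fb = ProximalAlgorithms.ForwardBackward(maxit=5_000, tol=1e-6)
solution, iterations = fb(x0=zeros(2), f=f, g=g, Lf=opnorm(A)^2)
```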
ProximalAlgorithms.ForwardBackwardIteration — Type

    ForwardBackwardIteration(; <keyword-arguments>)

Iterator implementing the forward-backward splitting algorithm [1].
This iterator solves optimization problems of the form

    minimize f(x) + g(x),

where `f` is smooth.
See also: `ForwardBackward`.
Arguments
- `x0`: initial point.
- `f=Zero()`: smooth objective term.
- `g=Zero()`: proximable objective term.
- `Lf=nothing`: Lipschitz constant of the gradient of `f`.
- `gamma=nothing`: stepsize to use, defaults to `1/Lf` if not set (but `Lf` is).
- `adaptive=false`: forces the method stepsize to be adaptively adjusted.
- `minimum_gamma=1e-7`: lower bound to `gamma` in case `adaptive == true`.
References
- Lions, Mercier, “Splitting algorithms for the sum of two nonlinear operators,” SIAM Journal on Numerical Analysis, vol. 16, pp. 964–979 (1979).
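A minimal sketch of driving the iterator directly, rather than through the `ForwardBackward` wrapper; stopping logic and state inspection are left out, and `LeastSquares`/`NormL1` from ProximalOperators.jl as well as the data are assumptions.

```julia
using LinearAlgebra: opnorm
using ProximalAlgorithms
using ProximalOperators: LeastSquares, NormL1

A = [1.0 2.0; 3.0 4.0]
b = [1.0, 2.0]

iter = ProximalAlgorithms.ForwardBackwardIteration(
    x0=zeros(2), f=LeastSquares(A, b), g=NormL1(0.1), Lf=opnorm(A)^2,
)

# Run a fixed number of iterations; each yielded element is an iteration state.
states = collect(Iterators.take(iter, 100))
last_state = states[end]
```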
ProximalAlgorithms.DouglasRachford — Function

    DouglasRachford(; <keyword-arguments>)

Constructs the Douglas-Rachford splitting algorithm [1].
This algorithm solves convex optimization problems of the form

    minimize f(x) + g(x).

The returned object has type `IterativeAlgorithm{DouglasRachfordIteration}`, and can be called with the problem's arguments to trigger its solution.
See also: `DouglasRachfordIteration`, `IterativeAlgorithm`.
Arguments
- `maxit::Int=1_000`: maximum number of iterations
- `tol::1e-8`: tolerance for the default stopping criterion
- `stop::Function`: termination condition: `stop(::T, state)` should return `true` when to stop the iterations
- `solution::Function`: solution mapping: `solution(::T, state)` should return the identified solution
- `verbose::Bool=false`: whether the algorithm state should be displayed
- `freq::Int=100`: every how many iterations to display the algorithm state
- `display::Function`: display function: `display(::Int, ::T, state)` should display a summary of the iteration state
- `kwargs...`: additional keyword arguments to pass on to the `DouglasRachfordIteration` constructor upon call
References
- Eckstein, Bertsekas, "On the Douglas-Rachford Splitting Method and the Proximal Point Algorithm for Maximal Monotone Operators", Mathematical Programming, vol. 55, no. 1, pp. 293-318 (1992).
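A usage sketch (not from the package documentation): both terms are accessed only through their proximal mappings, and `gamma` is passed explicitly since the iteration below requires it; `LeastSquares` and `NormL1` from ProximalOperators.jl and the data are assumptions.

```julia
using ProximalAlgorithms
using ProximalOperators: LeastSquares, NormL1

A = [1.0 0.0; 0.0 2.0; 1.0 1.0]
b = [1.0, 1.0, 1.0]

dr = ProximalAlgorithms.DouglasRachford()
solution, iterations = dr(
    x0=zeros(2),
    f=LeastSquares(A, b),  # prox-friendly quadratic term
    g=NormL1(0.5),         # prox-friendly l1 term
    gamma=1.0,
)
```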
ProximalAlgorithms.DouglasRachfordIteration — Type

    DouglasRachfordIteration(; <keyword-arguments>)

Iterator implementing the Douglas-Rachford splitting algorithm [1].
This iterator solves convex optimization problems of the form

    minimize f(x) + g(x).

See also: `DouglasRachford`.
Arguments
- `x0`: initial point.
- `f=Zero()`: proximable objective term.
- `g=Zero()`: proximable objective term.
- `gamma`: stepsize to use.
References
- Eckstein, Bertsekas, "On the Douglas-Rachford Splitting Method and the Proximal Point Algorithm for Maximal Monotone Operators", Mathematical Programming, vol. 55, no. 1, pp. 293-318 (1992).
ProximalAlgorithms.FastForwardBackward — Function

    FastForwardBackward(; <keyword-arguments>)

Constructs the accelerated forward-backward splitting algorithm [1, 2].
This algorithm solves convex optimization problems of the form

    minimize f(x) + g(x),

where `f` is smooth.
The returned object has type `IterativeAlgorithm{FastForwardBackwardIteration}`, and can be called with the problem's arguments to trigger its solution.
See also: `FastForwardBackwardIteration`, `IterativeAlgorithm`.
Arguments
- `maxit::Int=10_000`: maximum number of iterations
- `tol::1e-8`: tolerance for the default stopping criterion
- `stop::Function`: termination condition: `stop(::T, state)` should return `true` when to stop the iterations
- `solution::Function`: solution mapping: `solution(::T, state)` should return the identified solution
- `verbose::Bool=false`: whether the algorithm state should be displayed
- `freq::Int=100`: every how many iterations to display the algorithm state
- `display::Function`: display function: `display(::Int, ::T, state)` should display a summary of the iteration state
- `kwargs...`: additional keyword arguments to pass on to the `FastForwardBackwardIteration` constructor upon call
References
- Tseng, "On Accelerated Proximal Gradient Methods for Convex-Concave Optimization" (2008).
- Beck, Teboulle, "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems", SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202 (2009).
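A usage sketch on a small lasso problem (assumptions: the keyword-based call interface described above, `LeastSquares` and `NormL1` from ProximalOperators.jl, made-up data); passing `Lf` fixes the stepsize to `1/Lf`.

```julia
using LinearAlgebra: opnorm
using ProximalAlgorithms
using ProximalOperators: LeastSquares, NormL1

A = [1.0 -1.0; 2.0 0.5; 0.0 3.0]
b = [1.0, 0.0, -1.0]

ffb = ProximalAlgorithms.FastForwardBackward(tol=1e-6)
solution, iterations = ffb(
    x0=zeros(2), f=LeastSquares(A, b), g=NormL1(1.0), Lf=opnorm(A)^2,
)
```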
ProximalAlgorithms.FastForwardBackwardIteration — Type

    FastForwardBackwardIteration(; <keyword-arguments>)

Iterator implementing the accelerated forward-backward splitting algorithm [1, 2].
This iterator solves convex optimization problems of the form

    minimize f(x) + g(x),

where `f` is smooth.
See also: `FastForwardBackward`.
Arguments
- `x0`: initial point.
- `f=Zero()`: smooth objective term.
- `g=Zero()`: proximable objective term.
- `mf=0`: convexity modulus of `f`.
- `Lf=nothing`: Lipschitz constant of the gradient of `f`.
- `gamma=nothing`: stepsize, defaults to `1/Lf` if `Lf` is set, and `nothing` otherwise.
- `adaptive=true`: makes `gamma` adaptively adjust during the iterations; its default value is `gamma === nothing`.
- `minimum_gamma=1e-7`: lower bound to `gamma` in case `adaptive == true`.
- `extrapolation_sequence=nothing`: sequence (iterator) of extrapolation coefficients to use for acceleration.
References
- Tseng, "On Accelerated Proximal Gradient Methods for Convex-Concave Optimization" (2008).
- Beck, Teboulle, "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems", SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202 (2009).
ProximalAlgorithms.PANOC — Function

    PANOC(; <keyword-arguments>)

Constructs the PANOC algorithm [1].
This algorithm solves optimization problems of the form

    minimize f(Ax) + g(x),

where `f` is smooth and `A` is a linear mapping (for example, a matrix).
The returned object has type `IterativeAlgorithm{PANOCIteration}`, and can be called with the problem's arguments to trigger its solution.
See also: `PANOCIteration`, `IterativeAlgorithm`.
Arguments
- `maxit::Int=1_000`: maximum number of iterations
- `tol::1e-8`: tolerance for the default stopping criterion
- `stop::Function`: termination condition: `stop(::T, state)` should return `true` when to stop the iterations
- `solution::Function`: solution mapping: `solution(::T, state)` should return the identified solution
- `verbose::Bool=false`: whether the algorithm state should be displayed
- `freq::Int=10`: every how many iterations to display the algorithm state
- `display::Function`: display function: `display(::Int, ::T, state)` should display a summary of the iteration state
- `kwargs...`: additional keyword arguments to pass on to the `PANOCIteration` constructor upon call
References
- Stella, Themelis, Sopasakis, Patrinos, "A simple and efficient algorithm for nonlinear model predictive control", 56th IEEE Conference on Decision and Control (2017).
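A usage sketch where the smooth term is composed with a linear map passed through the `A` keyword, so that the method evaluates the gradient of `x ↦ f(Ax)` itself; `SqrNormL2` and `NormL1` from ProximalOperators.jl and the data are assumptions.

```julia
using LinearAlgebra: opnorm
using ProximalAlgorithms
using ProximalOperators: SqrNormL2, NormL1

A = [1.0 2.0 0.0; 0.0 1.0 -1.0]

panoc = ProximalAlgorithms.PANOC()
solution, iterations = panoc(
    x0=zeros(3),
    f=SqrNormL2(),   # smooth outer function: 0.5 * ||u||^2
    A=A,
    g=NormL1(0.1),
    Lf=opnorm(A)^2,  # Lipschitz constant of the gradient of x ↦ f(A*x)
)
```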
ProximalAlgorithms.PANOCIteration — Type

    PANOCIteration(; <keyword-arguments>)

Iterator implementing the PANOC algorithm [1].
This iterator solves optimization problems of the form

    minimize f(Ax) + g(x),

where `f` is smooth and `A` is a linear mapping (for example, a matrix).
See also: `PANOC`.
Arguments
- `x0`: initial point.
- `f=Zero()`: smooth objective term.
- `A=I`: linear operator (e.g. a matrix).
- `g=Zero()`: proximable objective term.
- `Lf=nothing`: Lipschitz constant of the gradient of `x ↦ f(Ax)`.
- `gamma=nothing`: stepsize to use, defaults to `1/Lf` if not set (but `Lf` is).
- `adaptive=false`: forces the method stepsize to be adaptively adjusted.
- `minimum_gamma=1e-7`: lower bound to `gamma` in case `adaptive == true`.
- `max_backtracks=20`: maximum number of line-search backtracks.
- `directions=LBFGS(5)`: strategy to use to compute line-search directions.
References
- Stella, Themelis, Sopasakis, Patrinos, "A simple and efficient algorithm for nonlinear model predictive control", 56th IEEE Conference on Decision and Control (2017).
ProximalAlgorithms.ZeroFPR — Function

    ZeroFPR(; <keyword-arguments>)

Constructs the ZeroFPR algorithm [1].
This algorithm solves optimization problems of the form

    minimize f(Ax) + g(x),

where `f` is smooth and `A` is a linear mapping (for example, a matrix).
The returned object has type `IterativeAlgorithm{ZeroFPRIteration}`, and can be called with the problem's arguments to trigger its solution.
See also: `ZeroFPRIteration`, `IterativeAlgorithm`.
Arguments
- `maxit::Int=1_000`: maximum number of iterations
- `tol::1e-8`: tolerance for the default stopping criterion
- `stop::Function`: termination condition: `stop(::T, state)` should return `true` when to stop the iterations
- `solution::Function`: solution mapping: `solution(::T, state)` should return the identified solution
- `verbose::Bool=false`: whether the algorithm state should be displayed
- `freq::Int=10`: every how many iterations to display the algorithm state
- `display::Function`: display function: `display(::Int, ::T, state)` should display a summary of the iteration state
- `kwargs...`: additional keyword arguments to pass on to the `ZeroFPRIteration` constructor upon call
References
- Themelis, Stella, Patrinos, "Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone line-search algorithms", SIAM Journal on Optimization, vol. 28, no. 3, pp. 2274-2303 (2018).
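A usage sketch with `A` left at its default (the identity), so that ZeroFPR addresses a plain two-term problem with L-BFGS-accelerated steps; `LeastSquares`, `NormL1` (ProximalOperators.jl) and the data are assumptions.

```julia
using LinearAlgebra: opnorm
using ProximalAlgorithms
using ProximalOperators: LeastSquares, NormL1

M = [3.0 1.0; 1.0 2.0]
b = [1.0, -1.0]

zerofpr = ProximalAlgorithms.ZeroFPR()
solution, iterations = zerofpr(
    x0=zeros(2), f=LeastSquares(M, b), g=NormL1(0.5), Lf=opnorm(M)^2,
)
```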
ProximalAlgorithms.ZeroFPRIteration — Type

    ZeroFPRIteration(; <keyword-arguments>)

Iterator implementing the ZeroFPR algorithm [1].
This iterator solves optimization problems of the form

    minimize f(Ax) + g(x),

where `f` is smooth and `A` is a linear mapping (for example, a matrix).
See also: `ZeroFPR`.
Arguments
- `x0`: initial point.
- `f=Zero()`: smooth objective term.
- `A=I`: linear operator (e.g. a matrix).
- `g=Zero()`: proximable objective term.
- `Lf=nothing`: Lipschitz constant of the gradient of `x ↦ f(Ax)`.
- `gamma=nothing`: stepsize to use, defaults to `1/Lf` if not set (but `Lf` is).
- `adaptive=false`: forces the method stepsize to be adaptively adjusted.
- `minimum_gamma=1e-7`: lower bound to `gamma` in case `adaptive == true`.
- `max_backtracks=20`: maximum number of line-search backtracks.
- `directions=LBFGS(5)`: strategy to use to compute line-search directions.
References
- Themelis, Stella, Patrinos, "Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone line-search algorithms", SIAM Journal on Optimization, vol. 28, no. 3, pp. 2274-2303 (2018).
ProximalAlgorithms.DRLS — Function

    DRLS(; <keyword-arguments>)

Constructs the Douglas-Rachford line-search algorithm [1].
This algorithm solves convex optimization problems of the form

    minimize f(x) + g(x),

where `f` is smooth.
The returned object has type `IterativeAlgorithm{DRLSIteration}`, and can be called with the problem's arguments to trigger its solution.
See also: `DRLSIteration`, `IterativeAlgorithm`.
Arguments
- `maxit::Int=1_000`: maximum number of iterations
- `tol::1e-8`: tolerance for the default stopping criterion
- `stop::Function`: termination condition: `stop(::T, state)` should return `true` when to stop the iterations
- `solution::Function`: solution mapping: `solution(::T, state)` should return the identified solution
- `verbose::Bool=false`: whether the algorithm state should be displayed
- `freq::Int=10`: every how many iterations to display the algorithm state
- `display::Function`: display function: `display(::Int, ::T, state)` should display a summary of the iteration state
- `kwargs...`: additional keyword arguments to pass on to the `DRLSIteration` constructor upon call
References
- Themelis, Stella, Patrinos, "Douglas-Rachford splitting and ADMM for nonconvex optimization: Accelerated and Newton-type linesearch algorithms", arXiv:2005.10230, 2020.
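A usage sketch: `f` must be smooth, but DRLS accesses it through its proximal mapping, and `Lf` is assumed here to drive the default stepsize choice; `LeastSquares`, `NormL1` (ProximalOperators.jl) and the data are assumptions.

```julia
using LinearAlgebra: opnorm
using ProximalAlgorithms
using ProximalOperators: LeastSquares, NormL1

M = [1.0 2.0; 2.0 1.0; 0.0 1.0]
b = [1.0, 1.0, 0.0]

drls = ProximalAlgorithms.DRLS()
solution, iterations = drls(
    x0=zeros(2), f=LeastSquares(M, b), g=NormL1(0.2), Lf=opnorm(M)^2,
)
```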
ProximalAlgorithms.DRLSIteration — Type

    DRLSIteration(; <keyword-arguments>)

Iterator implementing the Douglas-Rachford line-search algorithm [1].
This iterator solves optimization problems of the form

    minimize f(x) + g(x),

where `f` is smooth.
See also: `DRLS`.
Arguments
- `x0`: initial point.
- `f=Zero()`: smooth objective term.
- `g=Zero()`: proximable objective term.
- `mf=nothing`: convexity modulus of `f`.
- `Lf=nothing`: Lipschitz constant of the gradient of `f`.
- `gamma`: stepsize to use, chosen appropriately based on `Lf` and `mf` by default.
- `max_backtracks=20`: maximum number of line-search backtracks.
- `directions=LBFGS(5)`: strategy to use to compute line-search directions.
References
- Themelis, Stella, Patrinos, "Douglas-Rachford splitting and ADMM for nonconvex optimization: Accelerated and Newton-type linesearch algorithms", arXiv:2005.10230, 2020.
ProximalAlgorithms.PANOCplus — Function

    PANOCplus(; <keyword-arguments>)

Constructs the PANOCplus algorithm [1].
This algorithm solves optimization problems of the form

    minimize f(Ax) + g(x),

where `f` is locally smooth and `A` is a linear mapping (for example, a matrix).
The returned object has type `IterativeAlgorithm{PANOCplusIteration}`, and can be called with the problem's arguments to trigger its solution.
See also: `PANOCplusIteration`, `IterativeAlgorithm`.
Arguments
- `maxit::Int=1_000`: maximum number of iterations
- `tol::1e-8`: tolerance for the default stopping criterion
- `stop::Function`: termination condition: `stop(::T, state)` should return `true` when to stop the iterations
- `solution::Function`: solution mapping: `solution(::T, state)` should return the identified solution
- `verbose::Bool=false`: whether the algorithm state should be displayed
- `freq::Int=10`: every how many iterations to display the algorithm state
- `display::Function`: display function: `display(::Int, ::T, state)` should display a summary of the iteration state
- `kwargs...`: additional keyword arguments to pass on to the `PANOCplusIteration` constructor upon call
References
- De Marchi, Themelis, "Proximal gradient algorithms under local Lipschitz gradient continuity: a convergence and robustness analysis of PANOC", arXiv:2112.13000 (2021).
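A usage sketch relying on the adaptive stepsize, since PANOCplus only requires local smoothness of `f`; it is assumed here that the iteration can initialize the stepsize on its own when neither `gamma` nor `Lf` is given, and `LeastSquares`, `NormL1` (ProximalOperators.jl) and the data are assumptions.

```julia
using ProximalAlgorithms
using ProximalOperators: LeastSquares, NormL1

M = [1.0 -2.0; 0.5 1.0; 2.0 0.0]
b = [0.0, 1.0, -1.0]

panocplus = ProximalAlgorithms.PANOCplus()
solution, iterations = panocplus(
    x0=zeros(2), f=LeastSquares(M, b), g=NormL1(0.3), adaptive=true,
)
```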
ProximalAlgorithms.PANOCplusIteration — Type

    PANOCplusIteration(; <keyword-arguments>)

Iterator implementing the PANOCplus algorithm [1].
This iterator solves optimization problems of the form

    minimize f(Ax) + g(x),

where `f` is locally smooth and `A` is a linear mapping (for example, a matrix).
See also: `PANOCplus`.
Arguments
- `x0`: initial point.
- `f=Zero()`: smooth objective term.
- `A=I`: linear operator (e.g. a matrix).
- `g=Zero()`: proximable objective term.
- `Lf=nothing`: Lipschitz constant of the gradient of `x ↦ f(Ax)`.
- `gamma=nothing`: stepsize to use, defaults to `1/Lf` if not set (but `Lf` is).
- `adaptive=false`: forces the method stepsize to be adaptively adjusted.
- `minimum_gamma=1e-7`: lower bound to `gamma` in case `adaptive == true`.
- `max_backtracks=20`: maximum number of line-search backtracks.
- `directions=LBFGS(5)`: strategy to use to compute line-search directions.
References
- De Marchi, Themelis, "Proximal gradient algorithms under local Lipschitz gradient continuity: a convergence and robustness analysis of PANOC", arXiv:2112.13000 (2021).
Three-terms: $f + g + h$
When the objective contains more than one non-differentiable term, the algorithms from the previous section do not in general apply out of the box, since $\operatorname{prox}_{\gamma (g + h)}$ does not have a closed form except in particular cases. Therefore, ad-hoc iteration schemes have been studied.
| Algorithm | Assumptions | Oracle | Implementation | References |
| --- | --- | --- | --- | --- |
| Davis-Yin | $f$ convex and smooth, $g, h$ convex | $\nabla f$, $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$ | DavisYin | Damek Davis, Wotao Yin (2017) |
ProximalAlgorithms.DavisYin — Function

    DavisYin(; <keyword-arguments>)

Constructs the Davis-Yin splitting algorithm [1].
This algorithm solves convex optimization problems of the form

    minimize f(x) + g(x) + h(x),

where `f` is smooth.
The returned object has type `IterativeAlgorithm{DavisYinIteration}`, and can be called with the problem's arguments to trigger its solution.
See also: `DavisYinIteration`, `IterativeAlgorithm`.
Arguments
- `maxit::Int=10_000`: maximum number of iterations
- `tol::1e-8`: tolerance for the default stopping criterion
- `stop::Function`: termination condition: `stop(::T, state)` should return `true` when to stop the iterations
- `solution::Function`: solution mapping: `solution(::T, state)` should return the identified solution
- `verbose::Bool=false`: whether the algorithm state should be displayed
- `freq::Int=100`: every how many iterations to display the algorithm state
- `display::Function`: display function: `display(::Int, ::T, state)` should display a summary of the iteration state
- `kwargs...`: additional keyword arguments to pass on to the `DavisYinIteration` constructor upon call
References
- Davis, Yin. "A Three-Operator Splitting Scheme and its Optimization Applications", Set-Valued and Variational Analysis, vol. 25, no. 4, pp. 829–858 (2017).
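A usage sketch splitting a nonnegativity-constrained lasso into one smooth and two prox-friendly terms; `LeastSquares`, `NormL1`, `IndNonnegative` (ProximalOperators.jl) and the data are assumptions.

```julia
using LinearAlgebra: opnorm
using ProximalAlgorithms
using ProximalOperators: LeastSquares, NormL1, IndNonnegative

M = [1.0 2.0 1.0; 0.0 1.0 3.0]
b = [1.0, 2.0]

dy = ProximalAlgorithms.DavisYin()
solution, iterations = dy(
    x0=zeros(3),
    f=LeastSquares(M, b),   # smooth term
    g=NormL1(0.5),          # prox-friendly term
    h=IndNonnegative(),     # prox-friendly term
    Lf=opnorm(M)^2,
)
```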
ProximalAlgorithms.DavisYinIteration — Type

    DavisYinIteration(; <keyword-arguments>)

Iterator implementing the Davis-Yin splitting algorithm [1].
This iterator solves convex optimization problems of the form

    minimize f(x) + g(x) + h(x),

where `f` is smooth.
See also: `DavisYin`.
Arguments
- `x0`: initial point.
- `f=Zero()`: smooth objective term.
- `g=Zero()`: proximable objective term.
- `h=Zero()`: proximable objective term.
- `Lf=nothing`: Lipschitz constant of the gradient of `f`.
- `gamma=nothing`: stepsize to use, defaults to `1/Lf` if not set (but `Lf` is).
References
- Davis, Yin. "A Three-Operator Splitting Scheme and its Optimization Applications", Set-Valued and Variational Analysis, vol. 25, no. 4, pp. 829-858 (2017).
Primal-dual: $f + g + h \circ L$
When a function $h$ is composed with a linear operator $L$, the proximal operator of $h \circ L$ does not have a closed form in general. For this reason, specific algorithms, known as "primal-dual" splitting schemes, are often applied to this model.
| Algorithm | Assumptions | Oracle | Implementation | References |
| --- | --- | --- | --- | --- |
| Chambolle-Pock | $f\equiv 0$, $g, h$ convex, $L$ linear operator | $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$, $L$, $L^*$ | ChambollePock | Antonin Chambolle, Thomas Pock (2011) |
| Vu-Condat | $f$ convex and smooth, $g, h$ convex, $L$ linear operator | $\nabla f$, $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$, $L$, $L^*$ | VuCondat | Bằng Công Vũ (2013), Laurent Condat (2013) |
| AFBA | $f$ convex and smooth, $g, h$ convex, $L$ linear operator | $\nabla f$, $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$, $L$, $L^*$ | AFBA | Puya Latafat, Panagiotis Patrinos (2017) |
ProximalAlgorithms.ChambollePock — Function

    ChambollePock(; <keyword-arguments>)

Constructs the Chambolle-Pock primal-dual algorithm [1].
This algorithm solves convex optimization problems of the form

    minimize g(x) + h(L x),

where `g` and `h` are possibly nonsmooth, and `L` is a linear mapping.
The returned object has type `IterativeAlgorithm{AFBAIteration}`, and can be called with the problem's arguments to trigger its solution.
See also: `ChambollePockIteration`, `AFBAIteration`, `IterativeAlgorithm`.
Arguments
- `maxit::Int=10_000`: maximum number of iterations
- `tol::1e-5`: tolerance for the default stopping criterion
- `stop::Function`: termination condition: `stop(::T, state)` should return `true` when to stop the iterations
- `solution::Function`: solution mapping: `solution(::T, state)` should return the identified solution
- `verbose::Bool=false`: whether the algorithm state should be displayed
- `freq::Int=100`: every how many iterations to display the algorithm state
- `display::Function`: display function: `display(::Int, ::T, state)` should display a summary of the iteration state
- `kwargs...`: additional keyword arguments to pass on to the `AFBAIteration` constructor upon call
References
- Chambolle, Pock, "A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging", Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120-145 (2011).
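A usage sketch with an l1 penalty on first-order differences handled through the dual variable; the forwarded keywords match the `AFBAIteration` arguments listed further below, `LeastSquares` and `NormL1` come from ProximalOperators.jl, and the data is made up. The exact content of the returned solution (primal point, possibly paired with the dual one) follows the algorithm's solution mapping.

```julia
using LinearAlgebra: I
using ProximalAlgorithms
using ProximalOperators: LeastSquares, NormL1

n = 5
b = [1.0, 1.0, 3.0, 3.0, 2.0]
D = [j == i ? -1.0 : (j == i + 1 ? 1.0 : 0.0) for i in 1:n-1, j in 1:n]  # difference operator

cp = ProximalAlgorithms.ChambollePock()
solution, iterations = cp(
    x0=zeros(n), y0=zeros(n - 1),
    g=LeastSquares(Matrix(1.0I, n, n), b),  # prox-friendly data term: 0.5 * ||x - b||^2
    h=NormL1(0.5),                          # composed with L below
    L=D,
)
```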
ProximalAlgorithms.ChambollePockIteration — Function

    ChambollePockIteration(; <keyword-arguments>)

Iterator implementing the Chambolle-Pock primal-dual algorithm [1].
This iterator solves convex optimization problems of the form

    minimize g(x) + h(L x),

where `g` and `h` are possibly nonsmooth, and `L` is a linear mapping.
See also: `AFBAIteration`, `ChambollePock`.
This iteration is equivalent to `AFBAIteration` with `theta=2`, `f=Zero()`, `l=IndZero()`; for all other arguments see `AFBAIteration`.
References
- Chambolle, Pock, "A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging", Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120-145 (2011).
ProximalAlgorithms.VuCondat — Function

    VuCondat(; <keyword-arguments>)

Constructs the Vũ-Condat primal-dual algorithm [1, 2].
This algorithm solves convex optimization problems of the form

    minimize f(x) + g(x) + (h □ l)(L x),

where `f` is smooth, `g` and `h` are possibly nonsmooth, and `l` is strongly convex. The symbol `□` denotes infimal convolution, and `L` is a linear mapping.
The returned object has type `IterativeAlgorithm{AFBAIteration}`, and can be called with the problem's arguments to trigger its solution.
See also: `VuCondatIteration`, `AFBAIteration`, `IterativeAlgorithm`.
Arguments
- `maxit::Int=10_000`: maximum number of iterations
- `tol::1e-5`: tolerance for the default stopping criterion
- `stop::Function`: termination condition: `stop(::T, state)` should return `true` when to stop the iterations
- `solution::Function`: solution mapping: `solution(::T, state)` should return the identified solution
- `verbose::Bool=false`: whether the algorithm state should be displayed
- `freq::Int=100`: every how many iterations to display the algorithm state
- `display::Function`: display function: `display(::Int, ::T, state)` should display a summary of the iteration state
- `kwargs...`: additional keyword arguments to pass on to the `AFBAIteration` constructor upon call
References
- Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", Journal of Optimization Theory and Applications, vol. 158, no. 2, pp 460-479 (2013).
- Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators", Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681 (2013).
ProximalAlgorithms.VuCondatIteration — Function

    VuCondatIteration(; <keyword-arguments>)

Iterator implementing the Vũ-Condat primal-dual algorithm [1, 2].
This iterator solves convex optimization problems of the form

    minimize f(x) + g(x) + (h □ l)(L x),

where `f` is smooth, `g` and `h` are possibly nonsmooth, and `l` is strongly convex. The symbol `□` denotes infimal convolution, and `L` is a linear mapping.
This iteration is equivalent to `AFBAIteration` with `theta=2`; for all other arguments see `AFBAIteration`.
See also: `AFBAIteration`, `VuCondat`.
References
- Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", Journal of Optimization Theory and Applications, vol. 158, no. 2, pp 460-479 (2013).
- Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators", Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681 (2013).
ProximalAlgorithms.AFBA — Function

    AFBA(; <keyword-arguments>)

Constructs the asymmetric forward-backward-adjoint algorithm (AFBA, see [1]).
This algorithm solves convex optimization problems of the form

    minimize f(x) + g(x) + (h □ l)(L x),

where `f` is smooth, `g` and `h` are possibly nonsmooth, and `l` is strongly convex. The symbol `□` denotes infimal convolution, and `L` is a linear mapping.
The returned object has type `IterativeAlgorithm{AFBAIteration}`, and can be called with the problem's arguments to trigger its solution.
See also: `AFBAIteration`, `IterativeAlgorithm`.
Arguments
- `maxit::Int=10_000`: maximum number of iterations
- `tol::1e-5`: tolerance for the default stopping criterion
- `stop::Function`: termination condition: `stop(::T, state)` should return `true` when to stop the iterations
- `solution::Function`: solution mapping: `solution(::T, state)` should return the identified solution
- `verbose::Bool=false`: whether the algorithm state should be displayed
- `freq::Int=100`: every how many iterations to display the algorithm state
- `display::Function`: display function: `display(::Int, ::T, state)` should display a summary of the iteration state
- `kwargs...`: additional keyword arguments to pass on to the `AFBAIteration` constructor upon call
References
- Latafat, Patrinos, "Asymmetric forward-backward-adjoint splitting for solving monotone inclusions involving three operators", Computational Optimization and Applications, vol. 68, no. 1, pp. 57-93 (2017).
- Latafat, Patrinos, "Primal-dual proximal algorithms for structured convex optimization: a unifying framework", In Large-Scale and Distributed Optimization, Giselsson and Rantzer, Eds. Springer International Publishing, pp. 97-120 (2018).
ProximalAlgorithms.AFBAIteration — Type

    AFBAIteration(; <keyword-arguments>)

Iterator implementing the asymmetric forward-backward-adjoint algorithm (AFBA, see [1]).
This iterator solves convex optimization problems of the form

    minimize f(x) + g(x) + (h □ l)(L x),

where `f` is smooth, `g` and `h` are possibly nonsmooth, and `l` is strongly convex. The symbol `□` denotes infimal convolution, and `L` is a linear mapping.
Points `x0` and `y0` are the initial primal and dual iterates, respectively. If unspecified, functions `f`, `g`, and `h` default to the identically zero function, `l` defaults to the indicator of the set `{0}`, and `L` defaults to the identity. Important keyword arguments, in case `f` and `l` are set, are the Lipschitz constants `beta_f` and `beta_l` (see below).
The iterator implements Algorithm 3 of [1] with constant stepsize (α_n = λ) for several prominent special cases:
- `θ = 2`: corresponds to the Vũ-Condat algorithm [3, 4].
- `θ = 1`, `μ = 1`
- `θ = 0`, `μ = 1`
- `θ ∈ [0,∞)`, `μ = 0`
See [2, Section 5.2] and [1, Figure 1] for stepsize conditions, special cases, and relation to other algorithms.
See also: `AFBA`.
Arguments
- `x0`: initial primal point.
- `y0`: initial dual point.
- `f=Zero()`: smooth objective term.
- `g=Zero()`: proximable objective term.
- `h=Zero()`: proximable objective term.
- `l=IndZero()`: strongly convex function.
- `L=I`: linear operator (e.g. a matrix).
- `beta_f=0`: Lipschitz constant of the gradient of `f`.
- `beta_l=0`: Lipschitz constant of the gradient of the conjugate of `l`.
- `theta=1`: nonnegative algorithm parameter.
- `mu=1`: algorithm parameter in the range [0,1].
- `gamma1`: primal stepsize (see [1] for the default choice).
- `gamma2`: dual stepsize (see [1] for the default choice).
References
- Latafat, Patrinos, "Asymmetric forward-backward-adjoint splitting for solving monotone inclusions involving three operators", Computational Optimization and Applications, vol. 68, no. 1, pp. 57-93 (2017).
- Latafat, Patrinos, "Primal-dual proximal algorithms for structured convex optimization: a unifying framework", In Large-Scale and Distributed Optimization, Giselsson and Rantzer, Eds. Springer International Publishing, pp. 97-120 (2018).
- Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", Journal of Optimization Theory and Applications, vol. 158, no. 2, pp 460-479 (2013).
- Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators", Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681 (2013).