copt.minimize_proximal_gradient

copt.minimize_proximal_gradient(f_grad, x0, prox=None, tol=1e-06, max_iter=500, verbose=0, callback=None, step_size='adaptive', accelerated=False, max_iter_backtracking=1000, backtracking_factor=0.6)

Proximal gradient descent.
Solves problems of the form
minimize_x f(x) + g(x)
where we have access to the gradient of f and the proximal operator of g.
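For example, taking f(x) = 0.5 * ||Ax - b||^2 and g(x) = alpha * ||x||_1 gives the lasso problem

minimize_x 0.5 * ||Ax - b||^2 + alpha * ||x||_1

where the gradient of f is A^T (Ax - b) and the proximal operator of g is coordinatewise soft-thresholding.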
Parameters
f_grad – callable. Value and gradient of f: f_grad(x) -> (float, array-like). See the sketch after this parameter list.
x0 – array-like of size n_features. Initial guess of the solution.
prox – callable, optional. Proximal operator of g.
tol – float. Tolerance of the stopping criterion.
max_iter – int, optional. Maximum number of iterations.
verbose – int, optional. Verbosity level, from 0 (no output) to 2 (output on each iteration).
callback – callable, optional. Called at each iteration with a single argument, the current iterate x. The algorithm exits if the callback returns False.
step_size – float or “adaptive” or (float, “adaptive”). Step-size value and/or strategy.
accelerated – boolean. Whether to use the accelerated variant of the algorithm.
max_iter_backtracking – int, optional. Maximum number of backtracking line-search iterations per step.
backtracking_factor – float. Factor (between 0 and 1) by which the step size is shrunk at each backtracking iteration.
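A minimal sketch of these callables. The function bodies are illustrative assumptions, and it is assumed here (as with copt's built-in penalty objects) that the proximal operator is called with the current point and the step size, prox(x, step_size):

import numpy as np

def f_grad(x):
    # value and gradient of a simple smooth term f(x) = 0.5 * ||x||^2
    return 0.5 * x.dot(x), x

def prox(x, step_size):
    # proximal operator of g(x) = alpha * ||x||_1, i.e. soft-thresholding
    alpha = 0.1  # hypothetical regularization strength
    return np.sign(x) * np.maximum(np.abs(x) - alpha * step_size, 0.0)

def callback(x):
    # receives the current coefficients; returning False stops the algorithm
    return np.all(np.isfinite(x))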
Returns
res – The optimization result represented as a scipy.optimize.OptimizeResult object. Important attributes are: x, the solution array; success, a Boolean flag indicating if the optimizer exited successfully; and message, which describes the cause of the termination. See scipy.optimize.OptimizeResult for a description of other attributes.
Return type
scipy.optimize.OptimizeResult
References
Beck, Amir, and Marc Teboulle. “Gradient-based algorithms with applications to signal recovery.” Convex Optimization in Signal Processing and Communications (2009).
Examples
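A minimal usage sketch on a synthetic lasso instance, as introduced above. The data, alpha, and the accelerated setting are illustrative choices, not prescriptions:

import numpy as np
import copt

# synthetic data for an illustrative lasso problem
rng = np.random.RandomState(0)
A = rng.randn(50, 20)
b = rng.randn(50)
alpha = 0.1

def f_grad(x):
    # value and gradient of the smooth term f(x) = 0.5 * ||Ax - b||^2
    residual = A.dot(x) - b
    return 0.5 * residual.dot(residual), A.T.dot(residual)

def prox(x, step_size):
    # soft-thresholding, the proximal operator of g(x) = alpha * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - alpha * step_size, 0.0)

result = copt.minimize_proximal_gradient(
    f_grad, np.zeros(20), prox=prox, tol=1e-6, accelerated=True)

print(result.success)  # Boolean flag: did the optimizer exit successfully?
print(result.message)  # cause of the termination
print(result.x)        # the solution array

Passing accelerated=True selects the accelerated variant of the algorithm discussed in the reference above.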