Linearly Constrained Optimization
Mathwrist White Paper Series
Abstract
This document presents a technical overview of the linearly constrained (LC) optimization feature in Mathwrist's C++ Numerical Programming Library (NPL). The objective is to minimize a general n-dimensional smooth function subject to linear and simple bound constraints. In Mathwrist NPL, we provide several linearly constrained optimization solvers that combine the active-set algorithm with line search methods.
1 Introduction
Let $f(x): \mathbb{R}^n \to \mathbb{R}$ be a twice continuously differentiable function with gradient $g(x)$ and Hessian $H(x)$. We want to solve the following optimization problem subject to a set of general linear constraints and simple bound constraints,

$$\min_{x \in \mathbb{R}^n} f(x), \quad \text{s.t. } b_l \le Ax \le b_u, \quad x_l \le x \le x_u. \tag{1}$$
As we discussed in our Linear Programming (LP) and Quadratic Programming (QP) white papers, we rely on an active-set based framework to handle general linear constraints and simple bounds. The formulation (1) of linearly constrained (LC) optimization problems differs from a LP or QP problem only in the form of objective functions.
NPL LC optimization solvers naturally extend the active-set framework with specific customization of computing descent directions using line search methods. Same as the LP and QP solvers, the LC solvers directly accept constraint specification in formulation (1). For brevity and without loss of generality, the discussion hereinafter assumes a canonical formulation,
$$\min_{x \in \mathbb{R}^n} f(x), \quad \text{s.t. } a_i^T x = b_i,\ i \in \mathcal{E}; \quad a_i^T x \ge b_i,\ i \in \mathcal{I}. \tag{2}$$
Note that our implementation doesn't convert formulation (1) to the canonical form (2). Please refer to our LP and QP white papers for the details of the active-set method, i.e. how simple bounds and upper bounded linear constraints are handled. For the details of the line search method, users can refer to our unconstrained optimization (UC) white paper.
We will use the same notation as our LP and QP white papers. In particular, $\mathcal{W}$ and $\mathcal{N}$ denote the index sets of working constraints and non-working constraints respectively. $\mathcal{E}$ denotes the subset of equality constraints in LC problem (2). By construction, $\mathcal{E} \subseteq \mathcal{W}$. $Y$ and $Z$ represent the range space and null space matrices wrt the active constraints $A_{\mathcal{W}}$.
2 Optimality Conditions
The solution $x^*$ of a LC problem satisfies the following necessary and sufficient conditions:

1. $x^*$ is feasible: $a_i^T x^* = b_i,\ \forall i \in \mathcal{E}$ and $a_i^T x^* \ge b_i,\ \forall i \in \mathcal{I}$;
2. $Z^T g(x^*) = 0$;
3. $Z^T H(x^*) Z$ is positive semi-definite (necessary) or positive definite (sufficient);
4. Lagrange multiplier $\lambda_i \ge 0$ (necessary) or $\lambda_i > 0$ (sufficient), $\forall i \in \mathcal{W} \setminus \mathcal{E}$.
Once an initial feasible point is found, the active-set method automatically ensures all subsequent points are feasible as well. The second condition is the stationary condition; effectively it requires the reduced gradient to vanish,

$$Z^T g(x^*) = 0.$$

The third condition requires the reduced Hessian

$$Z^T H(x^*) Z$$

to be at least positive semi-definite, which is also called the curvature condition. The stationary and curvature conditions together determine whether we can find a descent feasible direction wrt the current working set $\mathcal{W}$.
The last condition is the Lagrange multiplier condition. If $\lambda_i < 0$ for some working constraint $i \in \mathcal{W} \setminus \mathcal{E}$, one can show that we can delete constraint $i$ from the current working set and obtain a descent feasible direction wrt the updated working set.
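To make the conditions concrete, the following sketch checks the stationary and Lagrange multiplier conditions on a tiny 2-dimensional instance: minimize $f(x) = x_1^2 + x_2^2$ subject to the single working constraint $x_1 + x_2 = 1$. The helper names and the hard-coded null-space basis are purely illustrative; they are not part of the NPL API.

```cpp
#include <cassert>
#include <cmath>

// Illustrative example (not the NPL API): minimize f(x) = x1^2 + x2^2
// subject to the working constraint x1 + x2 = 1. The working-set matrix
// is A_W = [1 1], a null-space basis is Z = (1, -1)/sqrt(2), and the
// gradient is g(x) = (2*x1, 2*x2).

// Reduced gradient Z^T g(x); it vanishes at a stationary point.
double reduced_gradient(double x1, double x2) {
    const double s = 1.0 / std::sqrt(2.0);
    return s * (2.0 * x1) - s * (2.0 * x2);
}

// Lagrange multiplier solving A_W^T lambda = g(x); exact at a stationary
// point, where g lies in the range space of A_W^T. Here lambda = (g1 + g2)/2.
double lagrange_multiplier(double x1, double x2) {
    return (2.0 * x1 + 2.0 * x2) / 2.0;
}
```

At the constrained minimizer $x^* = (0.5, 0.5)$ the reduced gradient is zero and the multiplier is positive, so both the stationary and the multiplier conditions hold.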
3 Review of Active-set Workflow
Given an initial guess $x_0$, we use the "crash start" procedure in [2] to find an initial feasible point and meanwhile identify an initial working set of constraints. This is the phase-I feasibility stage. Next, the algorithm enters iterations to repeatedly move along a descent feasible direction $p$, update the working set and test optimality conditions.
At the $k$-th iteration, if $x_k$ is not a stationary point wrt the current working set, or if the reduced Hessian doesn't have positive curvature, a descent direction is computed as a null space direction $p_k = Z p_z$ for some $p_z$. By moving along $p_k$ with a step length $\alpha_k$, we first hit some non-working constraint $j \in \mathcal{N}$. Now equality $a_j^T x = b_j$ holds at $x_k + \alpha_k p_k$. Constraint $j$ is then added to the working set $\mathcal{W}$. We update $x_{k+1} = x_k + \alpha_k p_k$.
If $x_k$ is a stationary point with positive curvature, optimality conditions are tested. A working constraint that violates the Lagrange multiplier condition is moved from $\mathcal{W}$ to $\mathcal{N}$. At this iteration, only the working set is updated. We then start the next iteration with $x_{k+1} = x_k$. Whenever the working set changes, i.e. constraints are added to or removed from $\mathcal{W}$, the matrices $Y$ and $Z$ are efficiently updated by a low-rank matrix update scheme [4, 5], instead of being refactored from scratch.
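The step to the nearest blocking constraint can be sketched as a ratio test. The code below is a hypothetical helper, not an NPL routine; it assumes canonical inequality constraints $a_i^T x \ge b_i$ stored as dense rows.

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Ratio test: the largest step alpha such that x + alpha * p stays
// feasible wrt all non-working constraints a_i^T x >= b_i. On return,
// *blocking is the index of the first constraint hit, or -1 if the step
// is unbounded along p.
double max_feasible_step(const std::vector<std::vector<double>>& A,
                         const std::vector<double>& b,
                         const std::vector<double>& x,
                         const std::vector<double>& p,
                         int* blocking) {
    double alpha = std::numeric_limits<double>::infinity();
    *blocking = -1;
    for (std::size_t i = 0; i < A.size(); ++i) {
        double ap = 0.0, ax = 0.0;
        for (std::size_t j = 0; j < x.size(); ++j) {
            ap += A[i][j] * p[j];
            ax += A[i][j] * x[j];
        }
        if (ap < 0.0) {                      // moving toward this constraint
            double step = (b[i] - ax) / ap;  // equality a_i^T (x + step p) = b_i
            if (step < alpha) { alpha = step; *blocking = static_cast<int>(i); }
        }
    }
    return alpha;
}
```

For example, with constraints $x_1 \ge 0$, $x_2 \ge 0$, current point $(2, 1)$ and direction $(-1, -1)$, the test returns step 1 and reports the second constraint as blocking; that constraint would then be added to the working set.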
4 Direction from Line Search
For any search direction $p$ and step length $\alpha$, by the Taylor expansion of $f$ around $x_k$,

$$f(x_k + \alpha p) = f(x_k) + \alpha\, g_k^T p + \tfrac{1}{2}\alpha^2\, p^T H_k\, p + o(\alpha^2), \tag{3}$$

where $g_k = g(x_k)$ and $H_k = H(x_k)$. We always move in a null space direction, $p = Z p_z$. The linear and quadratic terms of the above expansion are then written in reduced form,

$$g_k^T p = (Z^T g_k)^T p_z, \tag{4}$$
$$p^T H_k\, p = p_z^T (Z^T H_k Z)\, p_z. \tag{5}$$
At a non-stationary point, we are able to compute a descent direction $p$ such that the linear term $g_k^T p < 0$. However, when the reduced Hessian has positive curvature, the quadratic term in equation (3) will dominate if $\alpha$ is too large. We use a line search method to compute an acceptable step length $\bar{\alpha}$, which behaves as the upper bound of the actual step length $\alpha_k$.
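A minimal backtracking sketch of such a line search, using the Armijo sufficient decrease test, is shown below. This illustrates the idea only; it is not the line search implemented in NPL (see the UC white paper for the actual method).

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Backtracking line search: shrink alpha from alpha_bar until the Armijo
// sufficient decrease condition phi(alpha) <= phi(0) + c * alpha * phi'(0)
// holds, where phi(a) = f(x_k + a * p).
double armijo_backtrack(const std::function<double(double)>& phi,
                        double phi0,       // phi(0) = f(x_k)
                        double dphi0,      // phi'(0) = g_k^T p, must be < 0
                        double alpha_bar,  // initial (maximum) trial step
                        double c = 1e-4,
                        double shrink = 0.5) {
    double alpha = alpha_bar;
    for (int i = 0; i < 50; ++i) {  // cap the number of backtracks
        if (phi(alpha) <= phi0 + c * alpha * dphi0)
            break;                  // sufficient decrease achieved
        alpha *= shrink;
    }
    return alpha;
}
```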
Mathwrist NPL offers two LC solver classes,

1. class GS_ActiveSetNewton
2. class GS_ActiveSetQuasiNewton

Both solvers are hierarchically derived from the active-set framework base class. They both use the line search method to compute the step length $\alpha$ and differ from each other in how the step direction $p$ is computed.
4.1 Modified Newton Method
In LC solver class GS_ActiveSetNewton, we compute a null space direction component $p_z$ by solving

$$Z^T H_k Z\, p_z = -Z^T g_k. \tag{6}$$
A modified Cholesky decomposition is applied to the reduced Hessian $Z^T H_k Z$ to solve the above linear equation. Whenever possible, we use an efficient low-rank update scheme to update the Cholesky factor instead of performing a full factorization [3].
As a by-product, the modified Cholesky decomposition also identifies whether the reduced Hessian is positive definite. At a stationary point, if the reduced Hessian has negative curvature, the method allows us to compute a negative curvature direction. This is an advantage over the Quasi-Newton method.
On the other hand, the disadvantage of using the modified Newton method is that we need to explicitly compute the Hessian matrix $H(x)$. NPL users have an option to let the LC solver numerically approximate $H(x)$ by finite difference. This option control is provided as the last parameter in the constructor of class GS_ActiveSetNewton. When enabled, $H(x)$ is approximated by function FunctionND::hess_fwd_diff_grad().
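The flavor of a modified factorization can be shown with a simplified diagonal-boosting Cholesky: pivots below a threshold are raised so the factorization always succeeds, and the boost also flags indefiniteness. This is a toy sketch, not the Gill-Murray algorithm [3] used by NPL.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Simplified modified Cholesky: factor A + E = L L^T in-place on the
// lower triangle of a, where E adds to the diagonal only when a pivot
// falls below delta. Returns true iff no boost was needed, i.e. A was
// (numerically) positive definite.
bool modified_cholesky(std::vector<std::vector<double>>& a, double delta = 1e-8) {
    bool positive_definite = true;
    const std::size_t n = a.size();
    for (std::size_t k = 0; k < n; ++k) {
        double d = a[k][k];
        for (std::size_t j = 0; j < k; ++j) d -= a[k][j] * a[k][j];
        if (d < delta) { d = delta; positive_definite = false; }  // boost pivot
        a[k][k] = std::sqrt(d);
        for (std::size_t i = k + 1; i < n; ++i) {
            double s = a[i][k];
            for (std::size_t j = 0; j < k; ++j) s -= a[i][j] * a[k][j];
            a[i][k] = s / a[k][k];
        }
    }
    return positive_definite;
}
```

The returned flag plays the role described above: it tells the solver whether the reduced Hessian passes the curvature condition.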
4.2 Quasi-Newton Method
In LC solver class GS_ActiveSetQuasiNewton, we compute a null space direction component $p_z$ by solving

$$Z^T B_k Z\, p_z = -Z^T g_k, \tag{7}$$

where $B_k$ is a Quasi-Newton approximation to the real Hessian matrix $H(x_k)$. $B_{k+1}$ is obtained by BFGS update from $B_k$. We can also define the reduced Hessian approximation $Z^T B_k Z$. A reduced Cholesky decomposition is applied to $Z^T B_k Z$ when we solve for $p_z$.
5 Box Constrained Problems
A special case of the linearly constrained optimization problem is when there are only simple bounds imposed on the unknown variable $x$,

$$\min_{x \in \mathbb{R}^n} f(x), \quad \text{s.t. } x_l \le x \le x_u, \tag{8}$$

which is also called a box-constrained optimization problem. The overall logic to solve box-constrained problems still follows an active-set based strategy. But in this situation the working set $\mathcal{W}$ just holds the subset of variables that are fixed at their bounds. The non-working set $\mathcal{N}$ holds the free variables.
Now searching in a null space direction simply means searching in the subspace of free variables. We use two separate solver classes GS_SimpleBoundNewton and GS_SimpleBoundQuasiNewton to handle this special case more efficiently. Once a descent search direction is computed, we still use a line search method to compute the upper bound of the step length $\alpha$.
The special case solver class GS_SimpleBoundNewton computes the null space direction component $p_z$ the same as the general case in equation (6). Here the reduced gradient and reduced Hessian are just the free-variable parts of the gradient $g(x)$ and Hessian $H(x)$. Modified Cholesky is used to solve the linear equation.
The special case solver class GS_SimpleBoundQuasiNewton computes $p_z$ as a Quasi-Newton direction (7). Again, $B_k$ is a Hessian approximation by BFGS update. The reduced Hessian approximation is simply the free-variable part of $B_k$.
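For box constraints, the working-set bookkeeping reduces to classifying variables. The sketch below (a hypothetical helper, not an NPL routine) marks a variable as fixed when it sits at a bound and the gradient pushes it out of the box; the remaining indices span the subspace in which the search direction is computed.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Return the indices of free variables for bounds l <= x <= u. A variable
// is fixed (in the working set) when it is at a bound and moving off that
// bound cannot decrease f, i.e. the gradient points out of the box.
std::vector<int> free_variables(const std::vector<double>& x,
                                const std::vector<double>& g,
                                const std::vector<double>& l,
                                const std::vector<double>& u,
                                double tol = 1e-12) {
    std::vector<int> free_idx;
    for (std::size_t i = 0; i < x.size(); ++i) {
        const bool fixed_lower = std::fabs(x[i] - l[i]) <= tol && g[i] > 0.0;
        const bool fixed_upper = std::fabs(x[i] - u[i]) <= tol && g[i] < 0.0;
        if (!fixed_lower && !fixed_upper) free_idx.push_back(static_cast<int>(i));
    }
    return free_idx;
}
```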
6 Termination Control
All four LC solvers

- GS_ActiveSetNewton
- GS_ActiveSetQuasiNewton
- GS_SimpleBoundNewton
- GS_SimpleBoundQuasiNewton

are indirectly derived from the IterativeMethod base class. They all share the maximum number of iterations control and the convergence tolerance control defined in the IterativeMethod base class. Let $\epsilon$ be the convergence tolerance. At a stationary point, we test the Lagrange multiplier condition by $\lambda_i > \epsilon$.
In addition, all four solvers use a stationary tolerance control $\tau$ to determine whether the current iterate is at a stationary point. If the max-norm of the reduced gradient satisfies $\|Z^T g(x_k)\|_\infty \le \tau$, or if $\|Z^T g(x_k)\|_\infty \le \tau\,(1 + |f(x_k)|)$, we consider $x_k$ to be at a stationary point wrt the current working set of constraints.
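The two termination tests can be sketched as follows; the function names and the exact tolerance handling are illustrative assumptions, not the NPL controls.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Stationarity test: max-norm of the reduced gradient within tolerance.
bool is_stationary(const std::vector<double>& reduced_grad, double tol) {
    double norm = 0.0;
    for (double v : reduced_grad) norm = std::max(norm, std::fabs(v));
    return norm <= tol;
}

// Lagrange multiplier test at a stationary point: every multiplier of an
// inequality working constraint must be strictly positive beyond tolerance.
bool multipliers_optimal(const std::vector<double>& lambda, double tol) {
    for (double v : lambda)
        if (v <= tol) return false;
    return true;
}
```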
References
- [1] Jorge Nocedal and Stephen J. Wright: Numerical Optimization, Springer, 1999
- [2] Philip E. Gill, Walter Murray and Margaret H. Wright: Practical Optimization, Academic Press, 1981
- [3] Philip E. Gill and Walter Murray: Newton-Type Methods for Unconstrained and Linearly Constrained Optimization. Mathematical Programming 7 (1974), pp. 311-350
- [4] Philip E. Gill, G. H. Golub, Walter Murray and Michael A. Saunders: Methods for Modifying Matrix Factorizations, Mathematics of Computation, Volume 28, Number 126, April 1974, pp. 505-535
- [5] Philip E. Gill, Walter Murray and Michael A. Saunders: Methods for Computing and Modifying the LDV Factors of a Matrix, Mathematics of Computation, Volume 29, Number 132, October 1975, pp. 1051-1077
- [6] Anders Forsgren, Philip E. Gill and Walter Murray: Computing Modified Newton Directions Using a Partial Cholesky Factorization, SIAM Journal on Scientific Computing, Vol. 16, No. 1, pp. 139-150