QPB#

purpose#

The qpb package uses a primal-dual interior-point method to solve a given non-convex quadratic program. The aim is to minimize the quadratic objective function

\[q(x) = f + g^T x + \frac{1}{2} x^T H x,\]
subject to the general linear constraints and simple bounds
\[c_l \leq A x \leq c_u \;\;\mbox{and} \;\; x_l \leq x \leq x_u,\]
where \(H\) and \(A\) are, respectively, given \(n\) by \(n\) symmetric and \(m\) by \(n\) rectangular matrices, \(g\) is a vector, \(f\) is a scalar, and any of the components of the vectors \(c_l\), \(c_u\), \(x_l\) or \(x_u\) may be infinite. The method offers the choice of direct and iterative solution of the key regularization subproblems, and is most suitable for problems involving a large number of unknowns \(x\).
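
As a concrete illustration, the objective and constraint data above can be checked directly in NumPy; the small problem below is hypothetical and not part of the qpb interface.

import numpy as np

# hypothetical data: H is n by n and symmetric, A is m by n
H = np.array([[2.0, 0.0], [0.0, 4.0]])
g = np.array([1.0, -1.0])
f = 0.5
A = np.array([[1.0, 1.0]])
c_l, c_u = np.array([0.0]), np.array([2.0])
x_l, x_u = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

x = np.array([0.5, 0.25])
q = f + g @ x + 0.5 * x @ H @ x                      # objective q(x)
c = A @ x                                            # constraint values A x
feasible = (np.all(c_l <= c) and np.all(c <= c_u) and
            np.all(x_l <= x) and np.all(x <= x_u))   # bound and constraint check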

If the matrix \(H\) is positive semi-definite, a global solution is found. However, if \(H\) is indefinite, the procedure may find a (weak second-order) critical point that is not the global solution to the given problem.

See Section 4 of $GALAHAD/doc/qpb.pdf for additional details.

terminology#

Any required solution \(x\) necessarily satisfies the primal optimality conditions

\[A x = c\;\;\mbox{(1)}\]
and
\[c_l \leq c \leq c_u, \;\; x_l \leq x \leq x_u,\;\;\mbox{(2)}\]
the dual optimality conditions
\[H x + g = A^{T} y + z,\;\; y = y_l + y_u \;\;\mbox{and}\;\; z = z_l + z_u,\;\;\mbox{(3)}\]
and
\[y_l \geq 0, \;\; y_u \leq 0, \;\; z_l \geq 0 \;\;\mbox{and}\;\; z_u \leq 0,\;\;\mbox{(4)}\]
and the complementary slackness conditions
\[( A x - c_l )^{T} y_l = 0,\;\; ( A x - c_u )^{T} y_u = 0,\;\; (x -x_l )^{T} z_l = 0 \;\;\mbox{and}\;\;(x -x_u )^{T} z_u = 0,\;\;\mbox{(5)}\]
where the vectors \(y\) and \(z\) are known as the Lagrange multipliers for the general linear constraints, and the dual variables for the bounds, respectively, and where the vector inequalities hold component-wise.
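
A minimal sketch of these conditions, assuming all bounds are finite (terms for infinite bounds would be skipped) and using a hypothetical helper name, measures how far given estimates \(x\), \(y\) and \(z\) are from satisfying (1), (3) and (5):

import numpy as np

def kkt_residuals(H, g, A, c_l, c_u, x_l, x_u, x, y, z):
    # primal infeasibility: violation of c_l <= A x <= c_u and x_l <= x <= x_u
    c = A @ x
    primal = max(np.max(np.maximum(c_l - c, 0.0)), np.max(np.maximum(c - c_u, 0.0)),
                 np.max(np.maximum(x_l - x, 0.0)), np.max(np.maximum(x - x_u, 0.0)))
    # dual infeasibility: residual of H x + g = A^T y + z from (3)
    dual = np.linalg.norm(H @ x + g - A.T @ y - z, ord=np.inf)
    # complementary slackness (5): multiplier size times distance to the nearest bound
    comp = max(np.max(np.abs(y) * np.minimum(np.abs(c - c_l), np.abs(c_u - c))),
               np.max(np.abs(z) * np.minimum(np.abs(x - x_l), np.abs(x_u - x))))
    return primal, dual, comp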

method#

Primal-dual interior point methods iterate towards a point that satisfies these conditions by ultimately aiming to satisfy (1), (3) and (5), while ensuring that (2) and (4) are satisfied as strict inequalities at each stage. Appropriate norms of the amounts by which (1), (3) and (5) fail to be satisfied are known as the primal and dual infeasibility, and the violation of complementary slackness, respectively. The fact that (2) and (4) are satisfied as strict inequalities gives such methods their other title, namely interior-point methods.

The problem is solved in two phases. The goal of the first “initial feasible point” phase is to find a strictly interior point which is primal feasible, that is that (1) is satisfied. The package LSQP is used for this purpose, and offers the options of either accepting the first strictly feasible point found, or preferably of aiming for the so-called “analytic center” of the feasible region. Having found such a suitable initial feasible point, the second “optimality” phase ensures that (1) remains satisfied while iterating to satisfy dual feasibility (3) and complementary slackness (5). The optimality phase proceeds by approximately minimizing a sequence of barrier functions

\[\frac{1}{2} x^T H x + g^T x + f - \mu \left[ \sum_{i=1}^{m} \log ( c_{i} - c_{i}^{l} ) + \sum_{i=1}^{m} \log ( c_{i}^{u} - c_{i} ) + \sum_{j=1}^{n} \log ( x_{j} - x_{j}^{l} ) + \sum_{j=1}^{n} \log ( x_{j}^{u} - x_{j} ) \right] ,\]
for an appropriate sequence of positive barrier parameters \(\mu\) converging to zero, while ensuring that (1) remains satisfied and that \(x\) and \(c\) are strictly interior points for (2). Note that terms in the above summations corresponding to infinite bounds are ignored, and that equality constraints are treated specially.
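
For illustration only, a sketch of this barrier function, assuming all bounds are finite and that \((x,c)\) is strictly interior (terms for infinite bounds would simply be dropped):

import numpy as np

def barrier_value(f, g, H, A, c_l, c_u, x_l, x_u, x, mu):
    # quadratic objective plus the weighted logarithmic barrier terms
    c = A @ x
    quadratic = f + g @ x + 0.5 * x @ H @ x
    logs = (np.sum(np.log(c - c_l)) + np.sum(np.log(c_u - c)) +
            np.sum(np.log(x - x_l)) + np.sum(np.log(x_u - x)))
    return quadratic - mu * logs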

Each of the barrier subproblems is solved using a trust-region method. Such a method generates a trial correction step \(\Delta (x, c)\) to the current iterate \((x, c)\) by replacing the nonlinear barrier function locally by a suitable quadratic model, and approximately minimizing this model in the intersection of (1) and a trust region \(\|\Delta (x, c)\| \leq \Delta\) for some appropriate strictly positive trust-region radius \(\Delta\) and norm \(\| \cdot \|\). The step is accepted or rejected, and the radius adjusted, on the basis of how accurately the model reproduces the value of the barrier function at the trial step. If the step proves to be unacceptable, a linesearch is performed along the step to obtain an acceptable new iterate. In practice, the natural primal “Newton” model of the barrier function is frequently less successful than an alternative primal-dual model, and consequently the primal-dual model is usually to be preferred.

Once a barrier subproblem has been solved, extrapolation based on values and derivatives encountered on the central path is optionally used to determine a good starting point for the next subproblem. Traditional Taylor-series extrapolation has been superseded by more accurate Puiseux-series methods, as these are particularly suited to deal with degeneracy.

The trust-region subproblem is approximately solved using the combined conjugate-gradient/Lanczos method implemented in the package GLTR. Such a method requires a suitable preconditioner, and in our case, the only flexibility we have is in approximating the model of the Hessian. Although using a fixed form of preconditioning is sometimes effective, we have provided the option of an automatic choice that aims to balance the cost of applying the preconditioner against the need for an accurate solution of the trust-region subproblem. The preconditioner is applied using the matrix factorization package SBLS, but options at this stage are to factorize the preconditioner as a whole (the so-called “augmented system” approach), or to perform a block elimination first (the “Schur-complement” approach). The latter is usually to be preferred when a (non-singular) diagonal preconditioner is used, but may be inefficient if any of the columns of \(A\) is too dense.
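
The linear algebra behind the two approaches can be sketched for a hypothetical diagonal approximation \(M\) of the model Hessian; this illustrates the distinction only, and is not the SBLS interface itself.

import numpy as np

def precondition_schur(M_diag, A, r):
    # Schur-complement approach: eliminate the leading block of
    #   [ M  A^T ] [v]   [r]
    #   [ A   0  ] [w] = [0],
    # which requires the diagonal M to be non-singular
    S = A @ (A / M_diag).T                     # Schur complement A M^{-1} A^T
    w = np.linalg.solve(S, A @ (r / M_diag))
    return (r - A.T @ w) / M_diag

def precondition_augmented(M_diag, A, r):
    # augmented-system approach: solve the whole block system instead
    m, n = A.shape
    K = np.block([[np.diag(M_diag), A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([r, np.zeros(m)]))
    return sol[:n]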

In order to make the solution as efficient as possible, the variables and constraints are reordered internally by the package QPP prior to solution. In particular, fixed variables, and free (unbounded on both sides) constraints are temporarily removed.

reference#

The method is described in detail in

A. R. Conn, N. I. M. Gould, D. Orban and Ph. L. Toint, “A primal-dual trust-region algorithm for minimizing a non-convex function subject to general inequality and linear equality constraints”. Mathematical Programming 87 (1999) 215-249.

matrix storage#

The unsymmetric \(m\) by \(n\) matrix \(A\) may be presented and stored in a variety of convenient input formats.

Dense storage format: The matrix \(A\) is stored as a compact dense matrix by rows, that is, the values of the entries of each row in turn are stored in order within an appropriate real one-dimensional array. In this case, component \(n \ast i + j\) of the storage array A_val will hold the value \(A_{ij}\) for \(0 \leq i \leq m-1\), \(0 \leq j \leq n-1\). The string A_type = ‘dense’ should be specified.

Dense by columns storage format: The matrix \(A\) is stored as a compact dense matrix by columns, that is, the values of the entries of each column in turn are stored in order within an appropriate real one-dimensional array. In this case, component \(m \ast j + i\) of the storage array A_val will hold the value \(A_{ij}\) for \(0 \leq i \leq m-1\), \(0 \leq j \leq n-1\). The string A_type = ‘dense_by_columns’ should be specified.

Sparse co-ordinate storage format: Only the nonzero entries of the matrices are stored. For the \(l\)-th entry, \(0 \leq l \leq ne-1\), of \(A\), its row index i, column index j and value \(A_{ij}\), \(0 \leq i \leq m-1\), \(0 \leq j \leq n-1\), are stored as the \(l\)-th components of the integer arrays A_row and A_col and real array A_val, respectively, while the number of nonzeros is recorded as A_ne = \(ne\). The string A_type = ‘coordinate’ should be specified.

Sparse row-wise storage format: Again only the nonzero entries are stored, but this time they are ordered so that those in row i appear directly before those in row i+1. For the i-th row of \(A\) the i-th component of the integer array A_ptr holds the position of the first entry in this row, while A_ptr(m) holds the total number of entries. The column indices j, \(0 \leq j \leq n-1\), and values \(A_{ij}\) of the nonzero entries in the i-th row are stored in components l = A_ptr(i), \(\ldots\), A_ptr(i+1)-1, \(0 \leq i \leq m-1\), of the integer array A_col, and real array A_val, respectively. For sparse matrices, this scheme almost always requires less storage than its predecessor. The string A_type = ‘sparse_by_rows’ should be specified.

Sparse column-wise storage format: Once again only the nonzero entries are stored, but this time they are ordered so that those in column j appear directly before those in column j+1. For the j-th column of \(A\) the j-th component of the integer array A_ptr holds the position of the first entry in this column, while A_ptr(n) holds the total number of entries. The row indices i, \(0 \leq i \leq m-1\), and values \(A_{ij}\) of the nonzero entries in the j-th column are stored in components l = A_ptr(j), \(\ldots\), A_ptr(j+1)-1, \(0 \leq j \leq n-1\), of the integer array A_row, and real array A_val, respectively. As before, for sparse matrices, this scheme almost always requires less storage than the co-ordinate format. The string A_type = ‘sparse_by_columns’ should be specified.
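
For example, the 2 by 3 matrix \(A\) used in the example code at the end of this section could be supplied in the three most common schemes as follows (a sketch; only one scheme would be used in practice):

import numpy as np

#  A = ( 2  1  0 )
#      ( 0  1  1 )

#  dense (by rows): all m * n entries
A_type = 'dense'
A_val = np.array([2.0, 1.0, 0.0, 0.0, 1.0, 1.0])

#  sparse co-ordinate: row index, column index and value of each nonzero
A_type = 'coordinate'
A_ne = 4
A_row = np.array([0, 0, 1, 1])
A_col = np.array([0, 1, 1, 2])
A_val = np.array([2.0, 1.0, 1.0, 1.0])

#  sparse by rows: pointers to the start of each row, plus column indices and values
A_type = 'sparse_by_rows'
A_ptr = np.array([0, 2, 4])
A_col = np.array([0, 1, 1, 2])
A_val = np.array([2.0, 1.0, 1.0, 1.0])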

The symmetric \(n\) by \(n\) matrix \(H\) may also be presented and stored in a variety of formats. Crucially, however, symmetry is exploited by storing only values from the lower triangular part (i.e., those entries that lie on or below the leading diagonal).

Dense storage format: The matrix \(H\) is stored as a compact dense matrix by rows, that is, the values of the entries of each row in turn are stored in order within an appropriate real one-dimensional array. Since \(H\) is symmetric, only the lower triangular part (that is the part \(H_{ij}\) for \(0 \leq j \leq i \leq n-1\)) need be held. In this case the lower triangle should be stored by rows, that is component \(i \ast (i+1)/2 + j\) of the storage array H_val will hold the value \(H_{ij}\) (and, by symmetry, \(H_{ji}\)) for \(0 \leq j \leq i \leq n-1\). The string H_type = ‘dense’ should be specified.

Sparse co-ordinate storage format: Only the nonzero entries of the matrices are stored. For the \(l\)-th entry, \(0 \leq l \leq ne-1\), of \(H\), its row index i, column index j and value \(H_{ij}\), \(0 \leq j \leq i \leq n-1\), are stored as the \(l\)-th components of the integer arrays H_row and H_col and real array H_val, respectively, while the number of nonzeros is recorded as H_ne = \(ne\). Note that only the entries in the lower triangle should be stored. The string H_type = ‘coordinate’ should be specified.

Sparse row-wise storage format: Again only the nonzero entries are stored, but this time they are ordered so that those in row i appear directly before those in row i+1. For the i-th row of \(H\) the i-th component of the integer array H_ptr holds the position of the first entry in this row, while H_ptr(n) holds the total number of entries. The column indices j, \(0 \leq j \leq i\), and values \(H_{ij}\) of the entries in the i-th row are stored in components l = H_ptr(i), …, H_ptr(i+1)-1 of the integer array H_col, and real array H_val, respectively. Note that as before only the entries in the lower triangle should be stored. For sparse matrices, this scheme almost always requires less storage than its predecessor. The string H_type = ‘sparse_by_rows’ should be specified.

Diagonal storage format: If \(H\) is diagonal (i.e., \(H_{ij} = 0\) for all \(0 \leq i \neq j \leq n-1\)) only the diagonal entries \(H_{ii}\), \(0 \leq i \leq n-1\), need be stored, and the first n components of the array H_val may be used for the purpose. The string H_type = ‘diagonal’ should be specified.

Multiples of the identity storage format: If \(H\) is a multiple of the identity matrix, (i.e., \(H = \alpha I\) where \(I\) is the n by n identity matrix and \(\alpha\) is a scalar), it suffices to store \(\alpha\) as the first component of H_val. The string H_type = ‘scaled_identity’ should be specified.

The identity matrix format: If \(H\) is the identity matrix, no values need be stored. The string H_type = ‘identity’ should be specified.

The zero matrix format: The same is true if \(H\) is the zero matrix, but now the string H_type = ‘zero’ or ‘none’ should be specified.
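
Similarly, the 3 by 3 Hessian \(H\) from the example code at the end of this section could be supplied in lower-triangular form as follows (again a sketch; one scheme suffices):

import numpy as np

#       ( 1  0  0 )
#   H = ( 0  2  1 )   only the lower triangle is supplied
#       ( 0  1  3 )

#  sparse co-ordinate: lower-triangular nonzeros only
H_type = 'coordinate'
H_ne = 4
H_row = np.array([0, 1, 2, 2])
H_col = np.array([0, 1, 1, 2])
H_val = np.array([1.0, 2.0, 1.0, 3.0])

#  sparse by rows: row pointers plus column indices and values of the lower triangle
H_type = 'sparse_by_rows'
H_ptr = np.array([0, 1, 2, 4])
H_col = np.array([0, 1, 1, 2])
H_val = np.array([1.0, 2.0, 1.0, 3.0])

#  dense (lower triangle by rows): n * (n + 1) / 2 entries
H_type = 'dense'
H_val = np.array([1.0, 0.0, 2.0, 0.0, 1.0, 3.0])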

functions#

qpb.initialize()#

Set default option values and initialize private data

Returns:

optionsdict
dictionary containing default control options:
errorint

error and warning diagnostics occur on stream error.

outint

general output occurs on stream out.

print_levelint

the level of output required is specified by print_level. Possible values are

  • <=0

    gives no output,

  • 1

    gives a one-line summary for every iteration.

  • 2

    gives a summary of the inner iteration for each iteration.

  • >=3

    gives increasingly verbose (debugging) output.

start_printint

any printing will start on this iteration.

stop_printint

any printing will stop on this iteration.

maxitint

at most maxit inner iterations are allowed.

itref_maxint

the maximum number of iterative refinements allowed.

cg_maxitint

the maximum number of CG iterations allowed. If cg_maxit < 0, this number will be reset to the dimension of the system + 1.

indicator_typeint

specifies the type of indicator function used. Possible values are

  • 1

    primal indicator: the constraint is active if and only if the distance to the nearest bound <= indicator_tol_p.

  • 2

    primal-dual indicator: the constraint is active if and only if the distance to the nearest bound <= indicator_tol_pd times the size of the corresponding multiplier.

  • 3

    primal-dual indicator: the constraint is active if and only if the distance to the nearest bound <= indicator_tol_tapia times the distance to the same bound at the previous iteration.

restore_problemint

indicate whether and how much of the input problem should be restored on output. Possible values are

  • 0

    nothing restored.

  • 1

    scalar and vector parameters.

  • 2

    all parameters.

extrapolateint

should extrapolation be used to track the central path? Possible values

  • 0

    never.

  • 1

    after the final major iteration.

  • 2

    at each major iteration.

path_historyint

the maximum number of previous path points to use when fitting the data.

factorint

the factorization to be used. Possible values are

  • 0

    automatic.

  • 1

    Schur-complement factorization.

  • 2

    augmented-system factorization.

max_colint

the maximum number of nonzeros in a column of A which is permitted with the Schur-complement factorization.

indminint

an initial guess as to the integer workspace required by SBLS.

valminint

an initial guess as to the real workspace required by SBLS.

infeas_maxint

the number of iterations for which the overall infeasibility of the problem is not reduced by at least a factor reduce_infeas before the problem is flagged as infeasible (see reduce_infeas).

preconint

the preconditioner to be used for the CG is defined by precon. Possible values are

  • 0

    automatic.

  • 1

    no preconditioner, i.e., the identity within full factorization.

  • 2

    full factorization.

  • 3

    band within full factorization.

  • 4

    diagonal using the barrier terms within full factorization.

nsemibint

the semi-bandwidth of a band preconditioner, if appropriate.

path_derivativesint

the maximum order of path derivative to use.

fit_orderint

the order of (Puiseux) series to fit to the path data: <=0 to fit all data.

sif_file_deviceint

specifies the unit number to write generated SIF file describing the current problem.

infinityfloat

any bound larger than infinity in modulus will be regarded as infinite.

stop_pfloat

the required accuracy for the primal infeasibility.

stop_dfloat

the required accuracy for the dual infeasibility.

stop_cfloat

the required accuracy for the complementarity.

theta_dfloat

tolerances used to terminate the inner iteration (for given mu): dual feasibility <= MAX( theta_d * mu ** beta, 0.99 * stop_d ) and complementarity <= MAX( theta_c * mu ** beta, 0.99 * stop_d ).

theta_cfloat

see theta_d.

betafloat

see theta_d.

prfeasfloat

initial primal variables will not be closer than prfeas from their bound.

dufeasfloat

initial dual variables will not be closer than dufeas from their bounds.

muzerofloat

the initial value of the barrier parameter. If muzero is not positive, it will be reset to an appropriate value.

reduce_infeasfloat

if the overall infeasibility of the problem is not reduced by at least a factor reduce_infeas over infeas_max iterations, the problem is flagged as infeasible (see infeas_max).

obj_unboundedfloat

if the objective function value is smaller than obj_unbounded, it will be flagged as unbounded from below.

pivot_tolfloat

the threshold pivot used by the matrix factorization. See the documentation for SBLS for details.

pivot_tol_for_dependenciesfloat

the threshold pivot used by the matrix factorization when attempting to detect linearly dependent constraints. See the documentation for FDC for details.

zero_pivotfloat

any pivots smaller than zero_pivot in absolute value will be regarded as zero when attempting to detect linearly dependent constraints.

identical_bounds_tolfloat

any pair of constraint bounds (c_l,c_u) or (x_l,x_u) that are closer than identical_bounds_tol will be reset to the average of their values.

inner_stop_relativefloat

the search direction is considered as an acceptable approximation to the minimizer of the model if the gradient of the model in the preconditioning (inverse) norm is less than max( inner_stop_relative * initial preconditioning (inverse) gradient norm, inner_stop_absolute ).

inner_stop_absolutefloat

see inner_stop_relative.

initial_radiusfloat

the initial trust-region radius.

mu_minfloat

start terminal extrapolation when mu reaches mu_min.

inner_fraction_optfloat

a search direction which gives at least inner_fraction_opt times the optimal model decrease will be found.

indicator_tol_pfloat

if indicator_type = 1, a constraint/bound will be deemed to be active <=> distance to nearest bound <= indicator_tol_p.

indicator_tol_pdfloat

if indicator_type = 2, a constraint/bound will be deemed to be active <=> distance to nearest bound <= indicator_tol_pd * size of corresponding multiplier.

indicator_tol_tapiafloat

if indicator_type = 3, a constraint/bound will be deemed to be active <=> distance to nearest bound <= indicator_tol_tapia * distance to same bound at previous iteration.

cpu_time_limitfloat

the maximum CPU time allowed (-ve means infinite).

clock_time_limitfloat

the maximum elapsed clock time allowed (-ve means infinite).

remove_dependenciesbool

the equality constraints will be preprocessed to remove any linear dependencies if True.

treat_zero_bounds_as_generalbool

any problem bound with the value zero will be treated as if it were a general value if True.

centerbool

if center is True, the algorithm will use the analytic center of the feasible set as its initial feasible point. Otherwise, a feasible point as close as possible to the initial point will be used. We recommend using the analytic center.

primalbool

if primal is True, a primal barrier method will be used in place of the primal-dual method.

puiseuxbool

if extrapolation is to be used, a Puiseux series will be fitted when puiseux is True, and a Taylor series otherwise.

feasolbool

if feasol is True, the final solution obtained will be perturbed so that variables close to their bounds are moved onto these bounds.

array_syntax_worse_than_do_loopbool

if array_syntax_worse_than_do_loop is True, f77-style do loops will be used rather than f90-style array syntax for vector operations.

space_criticalbool

if space_critical is True, every effort will be made to use as little space as possible. This may result in longer computation time.

deallocate_error_fatalbool

if deallocate_error_fatal is True, any array/pointer deallocation error will terminate execution. Otherwise, computation will continue.

generate_sif_filebool

if generate_sif_file is True, a SIF file describing the current problem is to be generated.

sif_file_namestr

name of generated SIF file containing input problem.

prefixstr

all output lines will be prefixed by the string contained in quotes within prefix, e.g. ‘word’ (note the quotes) will result in the prefix word.

lsqp_optionsdict

default control options for LSQP (see lsqp.initialize).

fdc_optionsdict

default control options for FDC (see fdc.initialize).

sbls_optionsdict

default control options for SBLS (see sbls.initialize).

fit_optionsdict

default control options for FIT (see fit.initialize).

gltr_optionsdict

default control options for GLTR (see gltr.initialize).
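
A typical usage pattern, sketched below, is to obtain the defaults and override a few entries before passing the dictionary to qpb.load; the particular values chosen here are illustrative only.

from galahad import qpb

options = qpb.initialize()          # default control options
options['print_level'] = 1          # one-line summary per iteration
options['maxit'] = 200              # limit the number of inner iterations
options['stop_p'] = 1.0e-8          # tighten the primal infeasibility tolerance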

qpb.load(n, m, H_type, H_ne, H_row, H_col, H_ptr, A_type, A_ne, A_row, A_col, A_ptr, options=None)#

Import problem data into internal storage prior to solution.

Parameters:

nint

holds the number of variables.

mint

holds the number of constraints.

H_typestring

specifies the symmetric storage scheme used for the Hessian \(H\). It should be one of ‘coordinate’, ‘sparse_by_rows’, ‘dense’, ‘diagonal’, ‘scaled_identity’, ‘identity’, ‘zero’ or ‘none’; lower or upper case variants are allowed.

H_neint

holds the number of entries in the lower triangular part of \(H\) in the sparse co-ordinate storage scheme. It need not be set for any of the other schemes.

H_rowndarray(H_ne)

holds the row indices of the lower triangular part of \(H\) in the sparse co-ordinate storage scheme. It need not be set for any of the other schemes, and in this case can be None.

H_colndarray(H_ne)

holds the column indices of the lower triangular part of \(H\) in either the sparse co-ordinate, or the sparse row-wise storage scheme. It need not be set when the other storage schemes are used, and in this case can be None.

H_ptrndarray(n+1)

holds the starting position of each row of the lower triangular part of \(H\), as well as the total number of entries, in the sparse row-wise storage scheme. It need not be set when the other schemes are used, and in this case can be None.

A_typestring

specifies the unsymmetric storage scheme used for the constraints Jacobian \(A\). It should be one of ‘coordinate’, ‘sparse_by_rows’ or ‘dense’; lower or upper case variants are allowed.

A_neint

holds the number of entries in \(A\) in the sparse co-ordinate storage scheme. It need not be set for either of the other two schemes.

A_rowndarray(A_ne)

holds the row indices of \(A\) in the sparse co-ordinate storage scheme. It need not be set for either of the other two schemes, and in this case can be None.

A_colndarray(A_ne)

holds the column indices of \(A\) in either the sparse co-ordinate, or the sparse row-wise storage scheme. It need not be set when the dense storage scheme is used, and in this case can be None.

A_ptrndarray(m+1)

holds the starting position of each row of \(A\), as well as the total number of entries, in the sparse row-wise storage scheme. It need not be set when the other schemes are used, and in this case can be None.

optionsdict, optional

dictionary of control options (see qpb.initialize).
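
As a sketch, using the sparse co-ordinate data from the example code at the end of this section, a call to qpb.load might look as follows.

from galahad import qpb
import numpy as np

n, m = 3, 2
options = qpb.initialize()

#  lower triangle of H and all of A in sparse co-ordinate form
H_ne, H_row, H_col = 4, np.array([0, 1, 2, 2]), np.array([0, 1, 1, 2])
A_ne, A_row, A_col = 4, np.array([0, 0, 1, 1]), np.array([0, 1, 1, 2])

#  the pointer arrays are not used by the co-ordinate scheme, so may be None
qpb.load(n, m, 'coordinate', H_ne, H_row, H_col, None,
         'coordinate', A_ne, A_row, A_col, None, options)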

qpb.solve_qp(n, m, f, g, H_ne, H_val, A_ne, A_val, c_l, c_u, x_l, x_u, x, y, z)#

Find a local solution of the non-convex quadratic program involving the quadratic objective function \(q(x)\).

Parameters:

nint

holds the number of variables.

mint

holds the number of general linear constraints.

ffloat

holds the constant term \(f\) in the objective function.

gndarray(n)

holds the values of the linear term \(g\) in the objective function.

H_neint

holds the number of entries in the lower triangular part of the Hessian \(H\).

H_valndarray(H_ne)

holds the values of the nonzeros in the lower triangle of the Hessian \(H\) in the same order as specified in the sparsity pattern in qpb.load.

A_neint

holds the number of entries in the constraint Jacobian \(A\).

A_valndarray(A_ne)

holds the values of the nonzeros in the constraint Jacobian \(A\) in the same order as specified in the sparsity pattern in qpb.load.

c_lndarray(m)

holds the values of the lower bounds \(c_l\) on the constraints. The lower bound on any component of \(A x\) that is unbounded from below should be set no larger than minus options.infinity.

c_undarray(m)

holds the values of the upper bounds \(c_u\) on the constraints. The upper bound on any component of \(A x\) that is unbounded from above should be set no smaller than options.infinity.

x_lndarray(n)

holds the values of the lower bounds \(x_l\) on the variables. The lower bound on any component of \(x\) that is unbounded from below should be set no larger than minus options.infinity.

x_undarray(n)

holds the values of the upper bounds \(x_u\) on the variables. The upper bound on any component of \(x\) that is unbounded from above should be set no smaller than options.infinity.

xndarray(n)

holds the initial estimate of the minimizer \(x\), if known. This is not crucial, and if no suitable value is known, then any value, such as \(x=0\), suffices and will be adjusted accordingly.

yndarray(m)

holds the initial estimate of the Lagrange multipliers \(y\) associated with the general constraints, if known. This is not crucial, and if no suitable value is known, then any value, such as \(y=0\), suffices and will be adjusted accordingly.

zndarray(n)

holds the initial estimate of the dual variables \(z\) associated with the simple bound constraints, if known. This is not crucial, and if no suitable value is known, then any value, such as \(z=0\), suffices and will be adjusted accordingly.

Returns:

xndarray(n)

holds the values of the approximate minimizer \(x\) after a successful call.

cndarray(m)

holds the values of the residuals \(c(x) = Ax\).

yndarray(m)

holds the values of the Lagrange multipliers associated with the general linear constraints.

zndarray(n)

holds the values of the dual variables associated with the simple bound constraints.

c_statndarray(m)

holds the return status for each constraint. The i-th component will be negative if the value of the \(i\)-th constraint \((Ax)_i\) lies on its lower bound, positive if it lies on its upper bound, and zero if it lies between bounds.

x_statndarray(n)

holds the return status for each variable. The i-th component will be negative if the \(i\)-th variable lies on its lower bound, positive if it lies on its upper bound, and zero if it lies between bounds.
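
A small sketch (the helper name is hypothetical) of interpreting the returned status arrays:

import numpy as np

def describe_status(stat):
    # negative: on its lower bound, positive: on its upper bound, zero: strictly between
    labels = {-1: 'lower bound', 0: 'between bounds', 1: 'upper bound'}
    return [labels[int(np.sign(s))] for s in stat]

#  e.g., following x, c, y, z, x_stat, c_stat = qpb.solve_qp(...)
#  print(describe_status(x_stat), describe_status(c_stat))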

[optional] qpb.information()#

Provide optional output information

Returns:

informdict
dictionary containing output information:
statusint

return status. Possible values are:

  • 0

    The run was successful.

  • -1

    An allocation error occurred. A message indicating the offending array is written on unit options[‘error’], and the returned allocation status and a string containing the name of the offending array are held in inform[‘alloc_status’] and inform[‘bad_alloc’] respectively.

  • -2

    A deallocation error occurred. A message indicating the offending array is written on unit options[‘error’] and the returned allocation status and a string containing the name of the offending array are held in inform[‘alloc_status’] and inform[‘bad_alloc’] respectively.

  • -3

    The restriction n > 0 or m > 0 or requirement that type contains its relevant string ‘dense’, ‘coordinate’, ‘sparse_by_rows’, ‘diagonal’, ‘scaled_identity’, ‘identity’, ‘zero’ or ‘none’ has been violated.

  • -4

    The bound constraints are inconsistent.

  • -5

    The constraints appear to have no feasible point.

  • -7

    The objective function appears to be unbounded from below on the feasible set.

  • -9

    The analysis phase of the factorization failed; the return status from the factorization package is given by inform[‘factor_status’].

  • -10

    The factorization failed; the return status from the factorization package is given by inform[‘factor_status’].

  • -11

    The solution of a set of linear equations using factors from the factorization package failed; the return status from the factorization package is given by inform[‘factor_status’].

  • -16

    The problem is so ill-conditioned that further progress is impossible.

  • -17

    The step is too small to make further progress.

  • -18

    Too many iterations have been performed. This may happen if options[‘maxit’] is too small, but may also be symptomatic of a badly scaled problem.

  • -19

    The CPU time limit has been reached. This may happen if options[‘cpu_time_limit’] is too small, but may also be symptomatic of a badly scaled problem.

  • -23

    An entry from the strict upper triangle of \(H\) has been specified.

alloc_statusint

the status of the last attempted allocation/deallocation.

bad_allocstr

the name of the array for which an allocation/deallocation error occurred.

iterint

the total number of iterations required.

cg_iterint

the total number of conjugate gradient iterations required.

factorization_statusint

the return status from the factorization.

factorization_integerlong

the total integer workspace required for the factorization.

factorization_reallong

the total real workspace required for the factorization.

nfactsint

the total number of factorizations performed.

nbactsint

the total number of “wasted” function evaluations during the linesearch.

nmodsint

the total number of factorizations which were modified to ensure that the matrix was an appropriate preconditioner.

objfloat

the value of the objective function at the best estimate of the solution determined by QPB_solve.

non_negligible_pivotfloat

the smallest pivot which was not judged to be zero when detecting linearly dependent constraints.

feasiblebool

is the returned “solution” feasible?

timedict
dictionary containing timing information:
totalfloat

the total CPU time spent in the package.

preprocessfloat

the CPU time spent preprocessing the problem.

find_dependentfloat

the CPU time spent detecting linear dependencies.

analysefloat

the CPU time spent analysing the required matrices prior to factorization.

factorizefloat

the CPU time spent factorizing the required matrices.

solvefloat

the CPU time spent computing the search direction.

phase1_totalfloat

the total CPU time spent in the initial-point phase of the package.

phase1_analysefloat

the CPU time spent analysing the required matrices prior to factorization in the initial-point phase.

phase1_factorizefloat

the CPU time spent factorizing the required matrices in the initial-point phase.

phase1_solvefloat

the CPU time spent computing the search direction in the initial-point phase.

clock_totalfloat

the total clock time spent in the package.

clock_preprocessfloat

the clock time spent preprocessing the problem.

clock_find_dependentfloat

the clock time spent detecting linear dependencies.

clock_analysefloat

the clock time spent analysing the required matrices prior to factorization.

clock_factorizefloat

the clock time spent factorizing the required matrices.

clock_solvefloat

the clock time spent computing the search direction.

clock_phase1_totalfloat

the total clock time spent in the initial-point phase of the package.

clock_phase1_analysefloat

the clock time spent analysing the required matrices prior to factorization in the initial-point phase.

clock_phase1_factorizefloat

the clock time spent factorizing the required matrices in the initial-point phase.

clock_phase1_solvefloat

the clock time spent computing the search direction in the initial-point phase.

lsqp_informdict

inform parameters for LSQP (see lsqp.information).

fdc_informdict

inform parameters for FDC (see fdc.information).

sbls_informdict

inform parameters for SBLS (see sbls.information).

fit_informdict

return information from FIT (see fit.information).

gltr_informdict

return information from GLTR (see gltr.information).
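
For instance, continuing after a call to qpb.solve_qp, the returned dictionary might be inspected as follows (a sketch):

from galahad import qpb

inform = qpb.information()
if inform['status'] == 0:
    print('objective value:', inform['obj'])
    print('iterations:', inform['iter'], 'cg iterations:', inform['cg_iter'])
    print('total CPU time:', inform['time']['total'])
else:
    print('qpb terminated with status', inform['status'])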

qpb.terminate()#

Deallocate all internal private storage.

example code#

from galahad import qpb
import numpy as np
np.set_printoptions(precision=4,suppress=True,floatmode='fixed')
print("\n** python test: qpb")

# set parameters
n = 3
m = 2
infinity = float("inf")

#  describe objective function

f = 1.0
g = np.array([0.0,2.0,0.0])
H_type = 'coordinate'
H_ne = 4
H_row = np.array([0,1,2,2])
H_col = np.array([0,1,1,2])
H_ptr = None
H_val = np.array([1.0,2.0,1.0,3.0])

#  describe constraints

A_type = 'coordinate'
A_ne = 4
A_row = np.array([0,0,1,1])
A_col = np.array([0,1,1,2])
A_ptr = None
A_val = np.array([2.0,1.0,1.0,1.0])
c_l = np.array([1.0,2.0])
c_u = np.array([2.0,2.0])
x_l = np.array([-1.0,-infinity,-infinity])
x_u = np.array([1.0,infinity,2.0])

# allocate internal data and set default options
options = qpb.initialize()

# set some non-default options
options['print_level'] = 0
#print("options:", options)

# load data (and optionally non-default options)
qpb.load(n, m, H_type, H_ne, H_row, H_col, H_ptr,
         A_type, A_ne, A_row, A_col, A_ptr, options)

#  provide starting values (not crucial)

x = np.array([0.0,0.0,0.0])
y = np.array([0.0,0.0])
z = np.array([0.0,0.0,0.0])

# find optimum of qp
#print("solve qp")
x, c, y, z, x_stat, c_stat \
  = qpb.solve_qp(n, m, f, g, H_ne, H_val, A_ne, A_val,
                 c_l, c_u, x_l, x_u, x, y, z)
print(" x:",x)
print(" c:",c)
print(" y:",y)
print(" z:",z)
print(" x_stat:",x_stat)
print(" c_stat:",c_stat)

# get information
inform = qpb.information()
print(" f: %.4f" % inform['obj'])
print('** qpb exit status:', inform['status'])

# deallocate internal data

qpb.terminate()

This example code is available in $GALAHAD/src/qpb/Python/test_qpb.py .