BLLS#

purpose#

The blls package uses a preconditioned projected-gradient method to solve a given bound-constrained linear least-squares problem. The aim is to minimize the (regularized) least-squares objective function

\[q(x) = \frac{1}{2} \| A_o x - b\|_W^2 + \frac{1}{2}\sigma \|x\|^2\]
subject to the simple bounds
\[x_l \leq x \leq x_u,\]
where the norms \(\|r\|_W = \sqrt{\sum_{i=1}^o w_i r_i^2}\) and \(\|x\| = \sqrt{\sum_{i=1}^n x_i^2}\), \(A_o\) is a given \(o\) by \(n\) matrix, \(b\) and \(w\) are vectors, \(\sigma \geq 0\) is a scalar, and any of the components of the vectors \(x_l\) or \(x_u\) may be infinite. The method offers the choice of direct and iterative solution of the key regularization subproblems, and is most suitable for problems involving a large number of unknowns \(x\).

See Section 4 of $GALAHAD/doc/blls.pdf for additional details.
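
As a concrete illustration of the objective, the short sketch below (illustrative only, and no part of the package) evaluates \(q(x)\) for small, made-up dense data using NumPy.

import numpy as np

def q(Ao, b, w, sigma, x):
    # q(x) = 1/2 ||A_o x - b||_W^2 + 1/2 sigma ||x||^2
    r = Ao @ x - b
    return 0.5 * np.sum(w * r**2) + 0.5 * sigma * np.dot(x, x)

Ao = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # o = 3, n = 2
b = np.array([1.0, 2.0, 3.0])
w = np.ones(3)
print(q(Ao, b, w, 0.0, np.zeros(2)))                 # 0.5*(1 + 4 + 9) = 7.0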

terminology#

Any required solution \(x\) necessarily satisfies the primal optimality conditions

\[x_l \leq x \leq x_u,\]
the dual optimality conditions
\[A_o^T W( A_o x - b) + \sigma x = z, \;\; z = z_l + z_u, z_l \geq 0 \;\;\mbox{and}\;\; z_u \leq 0,\]
and the complementary slackness conditions
\[(x -x_l )^{T} z_l = 0 \;\;\mbox{and}\;\;(x -x_u )^{T} z_u = 0,\]
where the vector \(z\) is known as the vector of dual variables for the bounds, and where the vector inequalities hold component-wise.
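
The sketch below (an assumed helper, not supplied by the package) computes these three optimality measures for a candidate point \(x\) and multipliers \(z\) when all bounds are finite, and may be useful for checking a returned solution.

import numpy as np

def optimality_measures(Ao, b, w, sigma, x_l, x_u, x, z):
    # dual infeasibility: || A_o^T W (A_o x - b) + sigma x - z ||
    g = Ao.T @ (w * (Ao @ x - b)) + sigma * x
    dual_infeasibility = np.linalg.norm(g - z)
    # primal infeasibility: violation of x_l <= x <= x_u
    primal_infeasibility = np.linalg.norm(np.maximum(x_l - x, 0.0) +
                                          np.maximum(x - x_u, 0.0))
    # complementary slackness, with z split as z = z_l + z_u by sign
    z_l, z_u = np.maximum(z, 0.0), np.minimum(z, 0.0)
    complementarity = abs((x - x_l) @ z_l) + abs((x - x_u) @ z_u)
    return primal_infeasibility, dual_infeasibility, complementarity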

method#

Projected-gradient methods iterate towards a point that satisfies these optimality conditions by ultimately aiming to satisfy \(A_o^T W ( A_o x - b) + \sigma x = z\), while ensuring that the remaining conditions are satisfied at each stage. Appropriate norms of the amounts by which the optimality conditions fail to be satisfied are known as the primal and dual infeasibility, and the violation of complementary slackness, respectively.

The method is iterative. Each iteration proceeds in two stages. Firstly, a search direction \(s\) from the current estimate of the solution \(x\) is computed. This may be in a scaled steepest-descent direction, or, if the working set of variables on bounds has not changed dramatically, in a direction that provides an approximate minimizer of the objective over a subspace comprising the currently free variables. The latter is computed either using an appropriate sparse factorization by the package SBLS, or by the conjugate-gradient least-squares (CGLS) method; it may be necessary to regularize the subproblem very slightly to avoid ill-posedness. Thereafter, a piecewise linesearch (arc search) is carried out along the arc \(x(\alpha) = P( x + \alpha s)\) for \(\alpha > 0\), where the projection operator is defined component-wise at any point \(v\) to be

\[P_j(v) = \min( \max( v_j, x_j^{l}), x_j^{u});\]
thus this arc bends the search direction into the feasible region. The arc search is performed either exactly, by passing through a set of increasing breakpoints at which the arc changes direction, or inexactly, by evaluating a sequence of different values of \(\alpha\) along the arc. All computation is designed to exploit sparsity in \(A_o\).
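
A minimal sketch of this projection and of the arc it generates (illustrative only; the search direction s below is made up) is:

import numpy as np

def project(v, x_l, x_u):
    # component-wise projection onto the box [x_l, x_u]
    return np.minimum(np.maximum(v, x_l), x_u)

def arc(x, s, alpha, x_l, x_u):
    # the piecewise-linear arc x(alpha) = P(x + alpha s)
    return project(x + alpha * s, x_l, x_u)

x_l = np.array([0.0, -np.inf])
x_u = np.array([1.0, 2.0])
x = np.array([0.5, 0.0])
s = np.array([2.0, 5.0])
print(arc(x, s, 1.0, x_l, x_u))  # [1. 2.]: both components are bent onto their bounds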

reference#

Full details are provided in

N. I. M. Gould, "A projection method for bound-constrained linear least-squares". STFC-Rutherford Appleton Laboratory Computational Mathematics Group Internal Report 2023-1 (2023).

matrix storage#

The unsymmetric \(o\) by \(n\) matrix \(A_o\) may be presented and stored in a variety of convenient input formats.

Dense storage format: The matrix \(A_o\) is stored as a compact dense matrix by rows, that is, the values of the entries of each row in turn are stored in order within an appropriate real one-dimensional array. In this case, component \(n \ast i + j\) of the storage array Ao_val will hold the value \(A_{o\,ij}\) for \(0 \leq i \leq o-1\), \(0 \leq j \leq n-1\). The string Ao_type = ‘dense’ should be specified.

Dense by columns storage format: The matrix \(A_o\) is stored as a compact dense matrix by columns, that is, the values of the entries of each column in turn are stored in order within an appropriate real one-dimensional array. In this case, component \(o \ast j + i\) of the storage array Ao_val will hold the value \(A_{o\,ij}\) for \(0 \leq i \leq o-1\), \(0 \leq j \leq n-1\). The string Ao_type = ‘dense_by_columns’ should be specified.

Sparse co-ordinate storage format: Only the nonzero entries of the matrices are stored. For the \(l\)-th entry, \(0 \leq l \leq ne-1\), of \(A_o\), its row index i, column index j and value \(A_{o\,ij}\), \(0 \leq i \leq o-1\), \(0 \leq j \leq n-1\), are stored as the \(l\)-th components of the integer arrays Ao_row and Ao_col and real array Ao_val, respectively, while the number of nonzeros is recorded as Ao_ne = \(ne\). The string Ao_type = ‘coordinate’ should be specified.

Sparse row-wise storage format: Again only the nonzero entries are stored, but this time they are ordered so that those in row i appear directly before those in row i+1. For the i-th row of \(A_o\), the i-th component of the integer array Ao_ptr holds the position of the first entry in this row, while Ao_ptr(o) holds the total number of entries. The column indices j, \(0 \leq j \leq n-1\), and values \(A_{o\,ij}\) of the nonzero entries in the i-th row are stored in components l = Ao_ptr(i), \(\ldots\), Ao_ptr(i+1)-1, \(0 \leq i \leq o-1,\) of the integer array Ao_col, and real array Ao_val, respectively. For sparse matrices, this scheme almost always requires less storage than its predecessor. The string Ao_type = ‘sparse_by_rows’ should be specified.

Sparse column-wise storage format: Once again only the nonzero entries are stored, but this time they are ordered so that those in column j appear directly before those in column j+1. For the j-th column of \(A_o\), the j-th component of the integer array Ao_ptr holds the position of the first entry in this column, while Ao_ptr(n) holds the total number of entries. The row indices i, \(0 \leq i \leq o-1\), and values \(A_{o\,ij}\) of the nonzero entries in the j-th column are stored in components l = Ao_ptr(j), \(\ldots\), Ao_ptr(j+1)-1, \(0 \leq j \leq n-1\), of the integer array Ao_row, and real array Ao_val, respectively. As before, for sparse matrices, this scheme almost always requires less storage than the co-ordinate format. The string Ao_type = ‘sparse_by_columns’ should be specified.
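
For illustration, consider a hypothetical 2 by 3 matrix whose only nonzeros are 1.0 in position (0,0), 2.0 in position (0,2) and 3.0 in position (1,1) (0-based indices, as used throughout this interface). A sketch of how its data might be laid out in the sparse co-ordinate and sparse row-wise schemes is:

import numpy as np

# sparse co-ordinate scheme: explicit (row, column, value) triples
Ao_ne  = 3
Ao_row = np.array([0, 0, 1])
Ao_col = np.array([0, 2, 1])
Ao_val = np.array([1.0, 2.0, 3.0])

# sparse row-wise scheme: Ao_ptr(i) gives the start of row i, and Ao_ptr(o) = Ao_ne
Ao_ptr = np.array([0, 2, 3])      # o + 1 = 3 entries
Ao_col = np.array([0, 2, 1])      # column indices, grouped by row
Ao_val = np.array([1.0, 2.0, 3.0])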

functions#

blls.initialize()#

Set default option values and initialize private data

Returns:

optionsdict
dictionary containing default control options:
errorint

error and warning diagnostics occur on stream error.

outint

general output occurs on stream out.

print_levelint

the level of output required is specified by print_level. Possible values are

  • <=0

    gives no output,

  • 1

    gives a one-line summary for every iteration.

  • 2

    gives a summary of the inner iteration for each iteration.

  • >=3

    gives increasingly verbose (debugging) output.

start_printint

on which iteration to start printing.

stop_printint

on which iteration to stop printing.

print_gapint

how many iterations between printing.

maxitint

how many iterations to perform (-ve reverts to HUGE(1)-1).

cold_startint

cold_start should be set to 0 if a warm start is required (with variables assigned according to x_stat, see below), and to any other value if the initial values given in x suffice.

preconditionerint

the preconditioner (scaling) used. Possible values are:

  • 0

    no preconditioner.

  • 1

    a diagonal preconditioner that normalizes the rows of \(A_o\).

  • anything else

    a preconditioner supplied by the user, either via a subroutine call of eval_prec or via reverse communication.

ratio_cg_vs_sdint

the ratio of how many iterations use CGLS rather than steepest descent.

change_maxint

the maximum number of per-iteration changes in the working set permitted when allowing CGLS rather than steepest descent.

cg_maxitint

how many CG iterations to perform per BLLS iteration (-ve reverts to n+1).

arcsearch_max_stepsint

the maximum number of steps allowed in a piecewise arcsearch (-ve = infinite).

sif_file_deviceint

the unit number to write generated SIF file describing the current problem.

weightfloat

the value of the non-negative regularization weight \(\sigma\), i.e., the quadratic objective function \(q(x)\) will be regularized by adding \(1/2 \sigma \|x\|_2^2\); any value of weight smaller than zero will be regarded as zero.

infinityfloat

any bound larger than infinity in modulus will be regarded as infinite.

stop_dfloat

the required accuracy for the dual infeasibility.

identical_bounds_tolfloat

any pair of constraint bounds \((x_l,x_u)\) that are closer than identical_bounds_tol will be reset to the average of their values.

stop_cg_relativefloat

the CG iteration will be stopped as soon as the current norm of the preconditioned gradient is smaller than max( stop_cg_relative * initial preconditioned gradient, stop_cg_absolute).

stop_cg_absolutefloat

see stop_cg_relative.

alpha_maxfloat

the largest permitted arc length during the piecewise line search.

alpha_initialfloat

the initial arc length during the inexact piecewise line search.

alpha_reductionfloat

the arc length reduction factor for the inexact piecewise line search.

arcsearch_acceptance_tolfloat

the required relative reduction during the inexact piecewise line search.

stabilisation_weightfloat

the stabilisation weight added to the search-direction subproblem.

cpu_time_limitfloat

the maximum CPU time allowed (-ve = no limit).

direct_subproblem_solvebool

direct_subproblem_solve is True if the least-squares subproblem is to be solved using a matrix factorization, and False if conjugate gradients are to be preferred.

exact_arc_searchbool

exact_arc_search is True if an exact arc_search is required, and False if an approximation suffices.

advancebool

advance is True if an inexact arc_search can increase steps as well as decrease them.

space_criticalbool

if space_critical is True, every effort will be made to use as little space as possible. This may result in longer computation times.

deallocate_error_fatalbool

if deallocate_error_fatal is True, any array/pointer deallocation error will terminate execution. Otherwise, computation will continue.

generate_sif_filebool

if generate_sif_file is True, a SIF file describing the current problem will be generated.

sif_file_namestr

name (max 30 characters) of generated SIF file containing input problem.

prefixstr

all output lines will be prefixed by the string contained in quotes within prefix, e.g. ‘word’ (note the quotes) will result in the prefix word.

sbls_optionsdict

default control options for SBLS (see sbls.initialize).

convert_optionsdict

default control options for CONVERT (see convert.initialize).

blls.load(n, o, Ao_type, Ao_ne, Ao_row, Ao_col, Ao_ptr_ne, Ao_ptr, options=None)#

Import problem data into internal storage prior to solution.

Parameters:

nint

holds the number of variables.

oint

holds the number of residuals.

Ao_typestring

specifies the unsymmetric storage scheme used for the objective design matrix \(A_o\). It should be one of ‘coordinate’, ‘sparse_by_rows’, ‘sparse_by_columns’, ‘dense’ or ‘dense_by_columns’; lower or upper case variants are allowed.

Ao_neint

holds the number of entries in \(A_o\) in the sparse co-ordinate storage scheme. It need not be set for any of the other schemes.

Ao_rowndarray(Ao_ne)

holds the row indices of \(A_o\) in the sparse co-ordinate and sparse column-wise storage schemes. It need not be set for any of the other schemes, and in this case can be None.

Ao_colndarray(Ao_ne)

holds the column indices of \(A_o\) in either the sparse co-ordinate, or the sparse row-wise storage scheme. It need not be set for any of the other schemes, and in this case can be None.

Ao_ptr_neint

holds the length of the pointer array if the sparse row-wise or column-wise storage scheme is used for \(A_o\). For the sparse row-wise scheme, Ao_ptr_ne should be at least o+1, while for the sparse column-wise scheme, it should be at least n+1. It need not be set when the other schemes are used.

Ao_ptrndarray(Ao_ptr_ne)

holds the starting position of each row of \(A_o\), as well as the total number of entries, in the sparse row-wise storage scheme. By contrast, it holds the starting position of each column of \(A_o\), as well as the total number of entries, in the sparse column-wise storage scheme. It need not be set when the other schemes are used, and in this case can be None.

optionsdict, optional

dictionary of control options (see blls.initialize).
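
For example, the following hedged sketch (based solely on the signature documented above, and using the small illustrative matrix from the storage section) loads data held in the sparse row-wise scheme; Ao_row is not needed here and is passed as None, and Ao_ne is supplied as the entry count even though this scheme does not require it.

import numpy as np
from galahad import blls

n, o = 3, 2
Ao_ptr = np.array([0, 2, 3])      # o + 1 pointer entries
Ao_col = np.array([0, 2, 1])      # column indices, grouped by row
options = blls.initialize()
blls.load(n, o, 'sparse_by_rows', 3, None, Ao_col, o + 1, Ao_ptr, options)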

blls.solve_ls(n, o, w, Ao_ne, Ao_val, b, x_l, x_u, x, z)#

Find a solution to the bound-constrained regularized linear least-squares problem involving the least-squares objective function \(q(x)\).

Parameters:

nint

holds the number of variables.

oint

holds the number of residuals.

wndarray(o)

holds the weights \(w\) in the objective function.

Ao_neint

holds the number of entries in the objective design matrix \(A_o\).

Ao_valndarray(Ao_ne)

holds the values of the nonzeros in the objective design matrix \(A_o\) in the same order as specified in the sparsity pattern in blls.load.

bndarray(o)

holds the values of the observation vector \(b\) in the objective function.

x_lndarray(n)

holds the values of the lower bounds \(x_l\) on the variables. The lower bound on any component of \(x\) that is unbounded from below should be set no larger than minus options.infinity.

x_undarray(n)

holds the values of the upper bounds \(x_u\) on the variables. The upper bound on any component of \(x\) that is unbounded from above should be set no smaller than options.infinity.

xndarray(n)

holds the initial estimate of the minimizer \(x\), if known. This is not crucial, and if no suitable value is known, then any value, such as \(x=0\), suffices and will be adjusted accordingly.

zndarray(n)

holds the initial estimate of the dual variables \(z\) associated with the simple bound constraints, if known. This is not crucial, and if no suitable value is known, then any value, such as \(z=0\), suffices and will be adjusted accordingly.

Returns:

xndarray(n)

holds the values of the approximate minimizer \(x\) after a successful call.

zndarray(n)

holds the values of the dual variables associated with the simple bound constraints.

rndarray(o)

holds the values of the residuals \(r(x) = A_o x - b\) at \(x\).

gndarray(n)

holds the values of the gradient \(g(x) = A_o^T W r(x)\) at \(x\).

x_statndarray(n)

holds the return status for each variable. The i-th component will be negative if the \(i\)-th variable lies on its lower bound, positive if it lies on its upper bound, and zero if it lies between bounds.
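
As an illustration, the returned x_stat can be split into active and free sets with a simple post-processing helper such as the sketch below (not part of the package).

import numpy as np

def split_working_set(x_stat):
    lower = np.where(x_stat < 0)[0]   # variables on their lower bound
    upper = np.where(x_stat > 0)[0]   # variables on their upper bound
    free  = np.where(x_stat == 0)[0]  # variables strictly between their bounds
    return lower, upper, free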

[optional] blls.information()

Provide optional output information

Returns:

informdict
dictionary containing output information:
statusint

return status. Possible values are:

  • 0

    The run was successful.

  • -1

    An allocation error occurred. A message indicating the offending array is written on unit options[‘error’], and the returned allocation status and a string containing the name of the offending array are held in inform[‘alloc_status’] and inform[‘bad_alloc’] respectively.

  • -2

    A deallocation error occurred. A message indicating the offending array is written on unit options[‘error’] and the returned allocation status and a string containing the name of the offending array are held in inform[‘alloc_status’] and inform[‘bad_alloc’] respectively.

  • -3

    The restriction n > 0 or o > 0, or the requirement that Ao_type contains its relevant string ‘dense’, ‘coordinate’ or ‘sparse_by_rows’, has been violated.

  • -4

    The bound constraints are inconsistent.

  • -9

    The analysis phase of the factorization failed; the return status from the factorization package is given by inform[‘factor_status’].

  • -10

    The factorization failed; the return status from the factorization package is given by inform[‘factor_status’].

  • -11

    The solution of a set of linear equations using factors from the factorization package failed; the return status from the factorization package is given by inform[‘factor_status’].

  • -18

    Too many iterations have been performed. This may happen if options[‘maxit’] is too small, but may also be symptomatic of a badly scaled problem.

  • -19

    The CPU time limit has been reached. This may happen if options[‘cpu_time_limit’] is too small, but may also be symptomatic of a badly scaled problem.

alloc_statusint

the status of the last attempted allocation/deallocation.

bad_allocstr

the name of the array for which an allocation/deallocation error occurred.

iterint

number of iterations required.

cg_iterint

number of CG iterations required.

objfloat

the current value of the objective function \(q(x)\).

norm_pgfloat

the current value of the Euclidean norm of the projected gradient of \(q(x)\).

timedict
dictionary containing timing information:
totalfloat

the total CPU time spent in the package.

analysefloat

the CPU time spent analysing the required matrices prior to factorization.

factorizefloat

the CPU time spent factorizing the required matrices.

solvefloat

the CPU time spent computing the search direction.

sbls_informdict

inform parameters for SBLS (see sbls.information).

convert_informdict

return information from CONVERT (see convert.information).

blls.terminate()#

Deallocate all internal private storage.

example code#

from galahad import blls
import numpy as np
np.set_printoptions(precision=2,suppress=True,floatmode='fixed')
print("\n** python test: blls")

# set parameters
n = 10
o = n + 1
infinity = float("inf")

#  describe A = (  I  ) and b = ( i * e )
#               ( e^T )         ( n + 1 )

Ao_type = 'coordinate'
Ao_ne = 2 * n
Ao_row = np.empty(Ao_ne, int)
Ao_col = np.empty(Ao_ne, int)
Ao_val = np.empty(Ao_ne)
Ao_ptr = None
b = np.empty(o)
b[n] = o
for i in range(n):
  Ao_row[i] = i
  Ao_row[n+i] = o - 1
  Ao_col[i] = i
  Ao_col[n+i] = i
  Ao_val[i] = 1.0
  Ao_val[n+i] = 1.0
  b[i] = i + 1

#  set the weights

w = np.empty(o)
w[0] = 2.0
for i in range(1,o):
  w[i] = 1.0

#  specify the bounds on the variables

x_l = np.empty(n)
x_u = np.empty(n)
x_l[0] = - 1.0
x_u[0] = 1.0
x_l[1] = - infinity
x_u[1] = infinity
for i in range(2,n):
  x_l[i] = - infinity
  x_u[i] = 2.0

# allocate internal data and set default options
options = blls.initialize()

# set some non-default options
options['print_level'] = 0
#print("options:", options)

# load data (and optionally non-default options)
blls.load(n, o, Ao_type, Ao_ne, Ao_row, Ao_col, 0, Ao_ptr, options)

#  provide starting values (not crucial)

x = np.empty(n)
z = np.empty(n)
for i in range(n):
  x[i] = 0.0
  z[i] = 0.0

# find minimizer
#print("\n solve blls")
x, r, z, g, x_stat \
  = blls.solve_ls(n, o, w, Ao_ne, Ao_val, b, x_l, x_u, x, z)
print(" x:",x)
print(" r:",r)
print(" z:",z)
print(" g:",g)
print(" x_stat:",x_stat)

# get information
inform = blls.information()
print(" r: %.4f" % inform['obj'])
print('** blls exit status:', inform['status'])

# deallocate internal data

blls.terminate()

This example code is available in $GALAHAD/src/blls/Python/test_blls.py .