SNLS#

purpose#

The snls package uses a regularization method to solve a given simplex-constrained nonlinear least-squares problem. The aim is to minimize the least-squares objective function

\[f(x) := \frac{1}{2} \sum_{i=0}^{m_r-1} w_i r_i^2(x) \equiv \frac{1}{2} \|r(x)\|^2_W\]
where the variables \(x\) are required to lie within the regular simplex
\[e^T x = 1 \;\;\mbox{and}\;\; x \geq 0,\]
or an intersection of multiple non-overlapping regular simplices
\[e_{\cal C_i}^T x_{\cal C_i}^{} = 1 \;\;\mbox{and}\;\; x_{\cal C_i}^{} \geq 0 \;\;\mbox{for}\;\; i = 0,\ldots,m_c-1, \hspace{10mm} \mbox{(1)}\]
where the non-negative weights \(w\) are given, \(e\) is the vector of ones, the vector \(v_{\cal C}\) is made up of those entries of \(v\) indexed by the set \(\cal C\), the index sets of cohorts \(\cal C_i \subseteq \{0,\ldots,n-1\}\) satisfy \(\cal C_i \cap \cal C_j = \emptyset\) for \(0 \leq i \neq j \leq m_c-1\), and where the weighted-Euclidean norm is given by \(\|v\|_W^2 = v^T W v\) with \(W = \mbox{diag}(w)\). The method offers the choice of projected-gradient and interior-point solution of the key regularization subproblems, and is most suitable for problems involving a large number of unknowns \(x\). First derivatives of the residual function \(r(x)\) are required, and if second derivatives of the \(r_i(x)\) can be calculated, they may be exploited.

See Section 4 of $GALAHAD/doc/snls.pdf for additional details.
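As a concrete illustration (not part of the snls API), the weighted objective and the simplex-membership test can be written in a few lines of NumPy; the residual function r and the weights w below are invented toy data:

```python
import numpy as np

# Toy sketch (not the snls API): evaluate f(x) = (1/2) ||r(x)||_W^2
# and test membership of the regular simplex e^T x = 1, x >= 0.
def objective(r, x, w):
    return 0.5 * np.dot(w, r(x) ** 2)   # (1/2) sum_i w_i r_i(x)^2

def in_simplex(x, tol=1e-10):
    return abs(np.sum(x) - 1.0) <= tol and np.all(x >= -tol)

r = lambda x: np.array([x[0] * x[1] - 1.0, x[1] - x[0]])  # invented residuals
x = np.array([0.5, 0.5])
w = np.array([1.0, 2.0])
print(objective(r, x, w))   # 0.28125
print(in_simplex(x))        # True
```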

terminology#

The primal optimality conditions (1) and dual optimality conditions

\[J_r^T(x) W r(x) = \sum_{i=0}^{m_c-1} e_{\cal C_i}^{} y_i + z\]
necessarily hold at an optimal point \(x\) for some Lagrange multipliers \(y\) and dual variables \(z \geq 0\), where additionally \(x\) and \(z\) satisfy the complementarity conditions \(x_i z_i = 0\) for \(i=0,\ldots,n-1\).
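For a single simplex (\(m_c = 1\)) these conditions are easy to verify numerically. The sketch below uses invented quantities J, r, w, y and z at a candidate point x:

```python
import numpy as np

# Hedged illustration: residuals of the dual optimality condition
# J_r^T(x) W r(x) = e y + z and of complementarity x_i z_i = 0 (m_c = 1).
def kkt_residuals(J, r, w, x, y, z):
    g = J.T @ (w * r)                     # gradient J_r^T(x) W r(x)
    dual = g - y * np.ones_like(x) - z    # should vanish at optimality
    comp = x * z                          # should vanish componentwise
    return np.linalg.norm(dual, np.inf), np.linalg.norm(comp, np.inf)

# A consistent toy point: gradient (2, 3), x = (1, 0), y = 2, z = (0, 1)
J = np.array([[2.0, 3.0]]); r = np.array([1.0]); w = np.array([1.0])
x = np.array([1.0, 0.0]); y = 2.0; z = np.array([0.0, 1.0])
print(kkt_residuals(J, r, w, x, y, z))   # (0.0, 0.0)
```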

The algorithm used by the package is iterative. From the current best estimate of the minimizer \(x_k\), a trial improved point \(x_k + s_k\) is sought. The correction \(s_k\) is chosen to improve a model \(m_k(s)\) of the objective function \(f(x_k+s)\) built around \(x_k\). The model is the sum of two basic components, a suitable approximation \(t_k(s)\) of \(f(x_k+s)\), and a regularization term \(\frac{1}{2} \sigma_k \|s\|_2^2\) involving a weight \(\sigma_k\). The weight is adjusted as the algorithm progresses to ensure convergence.

The model \(t_k(s)\) is a truncated Taylor-series approximation, and this relies on being able to compute or estimate derivatives of \(r(x)\). Various models are provided, and each has different derivative requirements. We denote the \(m_r\) by \(n\) residual Jacobian \(J(x) \equiv \nabla_x r(x)\) as the matrix whose \(i,j\)-th component

\[J(x)_{i,j} := \partial r_i(x) / \partial x_j \;\; \mbox{for $i=0,\ldots,m_r-1$ and $j=0,\ldots,n-1$.}\]
For a given \(m_r\)-vector \(y\), the weighted-residual Hessian is the sum
\[H(x,y) := \sum_{\ell=0}^{m_r-1} y_{\ell} H_{\ell}(x), \;\; \mbox{where}\;\; H_{\ell}(x)_{i,j} := \partial^2 r_{\ell}(x) / \partial x_i \partial x_j \;\; \mbox{for $i,j=0,\ldots,n-1$}\]
is the Hessian of \(r_\ell(x)\). The models \(t_k(s)\) provided are:

  1. the Gauss-Newton approximation \(\frac{1}{2} \| r(x_k) + J(x_k) s\|^2_W\),

  2. the Newton (second-order Taylor) approximation

    \(f(x_k) + g(x_k)^T s + \frac{1}{2} s^T [ J^T(x_k) W J(x_k) + H(x_k,W r(x_k))] s\)

(although the latter has yet to be implemented).
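The two model values above can be sketched directly from their formulas (illustrative only; here J, r, w and H stand for \(J(x_k)\), \(r(x_k)\), the weights, and \(H(x_k, W r(x_k))\)):

```python
import numpy as np

# Gauss-Newton model: (1/2) ||r + J s||_W^2
def gauss_newton_model(r, J, w, s):
    v = r + J @ s
    return 0.5 * np.dot(w, v * v)

# Newton model: f + g^T s + (1/2) s^T [J^T W J + H] s
# (not yet available inside snls itself; shown here only for comparison)
def newton_model(f, g, J, w, H, s):
    B = J.T @ (w[:, None] * J) + H
    return f + g @ s + 0.5 * s @ B @ s

r = np.array([1.0, -1.0]); J = np.eye(2); w = np.array([1.0, 1.0])
s = np.array([1.0, 1.0])
print(gauss_newton_model(r, J, w, s))   # 0.5 * (2^2 + 0^2) = 2.0
```

When H = 0, f = \(\frac{1}{2}\|r\|_W^2\) and g = \(J^T W r\), the two models agree, as expected for a Gauss-Newton approximation.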

method#

An adaptive regularization method is used. In this, an improvement to a current estimate of the required minimizer, \(x_k\), is sought by computing a step \(s_k\). The step is chosen to approximately minimize a model \(t_k(s)\) of \(f(x_k+s)\) that includes a weighted regularization term \(\frac{\sigma_k}{p} \|s\|_{S_k}^p\) for some specified positive weight \(\sigma_k\). The quality of the resulting step \(s_k\) is assessed by computing the “ratio” \((f(x_k) - f(x_k + s_k))/(t_k(0) - t_k(s_k))\). The step is deemed to have succeeded if the ratio exceeds a given \(\eta_s > 0\), and in this case \(x_{k+1} = x_k + s_k\). Otherwise \(x_{k+1} = x_k\), and the weight is increased by powers of a given increase factor up to a given limit. If the ratio is larger than \(\eta_v \geq \eta_s\), the weight will be decreased by powers of a given decrease factor, again up to a given limit. The method will terminate as soon as \(f(x_k)\) or \(\|\nabla_x f(x_k)\|\) is smaller than a specified value.
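The acceptance test and weight update just described can be sketched schematically (the names eta_s, eta_v and the factors below are stand-ins for the package's eta_successful, eta_very_successful, weight_increase and weight_decrease options, not its internals):

```python
# Schematic acceptance/weight-update step of the adaptive
# regularization loop (illustrative pseudocode only).
def accept_and_update(ratio, weight, eta_s=0.01, eta_v=0.9,
                      increase=2.0, decrease=0.5, weight_min=1e-8):
    if ratio >= eta_v:                # very successful: accept, relax weight
        return True, max(weight_min, decrease * weight)
    if ratio >= eta_s:                # successful: accept, keep weight
        return True, weight
    return False, increase * weight   # unsuccessful: reject, regularize more

print(accept_and_update(0.95, 1.0))  # (True, 0.5)
print(accept_and_update(-0.1, 1.0))  # (False, 2.0)
```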

The step \(s_k\) may be computed either by employing a projected-gradient method to minimize the model within the simplex constraint set (1) using the GALAHAD module slls, or by applying the interior-point method available in the module sllsb to the same subproblem. Experience has shown that it can be beneficial to use the latter method during early iterations, but to switch to the former as the iterates approach convergence.

references#

The generic adaptive cubic regularization method is described in detail in

C. Cartis, N. I. M. Gould and Ph. L. Toint, “Evaluation complexity of algorithms for nonconvex optimization”, SIAM-MOS Series on Optimization (2022),

and uses “tricks” as suggested in

N. I. M. Gould, M. Porcelli and Ph. L. Toint, “Updating the regularization parameter in the adaptive cubic regularization algorithm”, Computational Optimization and Applications 53(1) (2012) 1–22.

The specific methods employed here are discussed in

N. I. M. Gould et al., “Nonlinear least-squares over unit simplices”, Rutherford Appleton Laboratory, Oxfordshire, England (2026), in preparation.

matrix storage#

The unsymmetric \(m_r\) by \(n\) Jacobian matrix \(J_r = J_r(x)\) may be presented and stored in a variety of convenient input formats.

Dense storage format: The matrix \(J_r\) is stored as a compact dense matrix by rows, that is, the values of the entries of each row in turn are stored in order within an appropriate real one-dimensional array. In this case, component \(n \ast i + j\) of the storage array Jr_val will hold the value \(J_{ij}\) for \(0 \leq i \leq m_r-1\), \(0 \leq j \leq n-1\).
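For instance (made-up values), a 2 by 3 Jacobian stored densely by rows:

```python
import numpy as np

# Dense (by rows) storage: component n*i + j of Jr_val holds J[i,j].
J = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 4.0]])        # m_r = 2, n = 3
m_r, n = J.shape
Jr_val = J.reshape(-1)                 # rows concatenated in order
i, j = 1, 2
print(Jr_val[n * i + j] == J[i, j])    # True
```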

Dense by columns storage format: The matrix \(J_r\) is stored as a compact dense matrix by columns, that is, the values of the entries of each column in turn are stored in order within an appropriate real one-dimensional array. In this case, component \(m_r \ast j + i\) of the storage array Jr_val will hold the value \(J_{ij}\) for \(0 \leq i \leq m_r-1\), \(0 \leq j \leq n-1\).

Sparse co-ordinate storage format: Only the nonzero entries of the matrices are stored. For the \(l\)-th entry, \(0 \leq l \leq ne-1\), of \(J_r\), its row index i, column index j and value \(J_{ij}\), \(0 \leq i \leq m_r-1\), \(0 \leq j \leq n-1\), are stored as the \(l\)-th components of the integer arrays Jr_row and Jr_col and real array Jr_val, respectively, while the number of nonzeros is recorded as Jr_ne = \(ne\).

Sparse row-wise storage format: Again only the nonzero entries are stored, but this time they are ordered so that those in row i appear directly before those in row i+1. For the i-th row of \(J_r\) the i-th component of the integer array Jr_ptr holds the position of the first entry in this row, while Jr_ptr(m_r) holds the total number of entries. The column indices j, \(0 \leq j \leq n-1\), and values \(J_{ij}\) of the nonzero entries in the i-th row are stored in components l = Jr_ptr(i), \(\ldots\), Jr_ptr(i+1)-1, \(0 \leq i \leq m_r-1\), of the integer array Jr_col, and real array Jr_val, respectively. For sparse matrices, this scheme almost always requires less storage than its predecessor.
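The same toy 2 by 3 matrix in the sparse row-wise scheme (a sketch; SciPy's CSR format uses the identical ptr/col/val convention):

```python
import numpy as np

# Sparse row-wise (CSR-like) storage of the 2x3 matrix
#   [ 1 2 0 ]
#   [ 0 3 4 ]
Jr_ptr = np.array([0, 2, 4])            # Jr_ptr[m_r] = total entries
Jr_col = np.array([0, 1, 1, 2])
Jr_val = np.array([1.0, 2.0, 3.0, 4.0])

# recover row i from entries l = Jr_ptr[i], ..., Jr_ptr[i+1]-1
for i in range(2):
    for l in range(Jr_ptr[i], Jr_ptr[i + 1]):
        print(i, Jr_col[l], Jr_val[l])
```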

Sparse column-wise storage format: Once again only the nonzero entries are stored, but this time they are ordered so that those in column j appear directly before those in column j+1. For the j-th column of \(J_r\) the j-th component of the integer array Jr_ptr holds the position of the first entry in this column, while Jr_ptr(n) holds the total number of entries. The row indices i, \(0 \leq i \leq m_r-1\), and values \(J_{ij}\) of the nonzero entries in the j-th column are stored in components l = Jr_ptr(j), \(\ldots\), Jr_ptr(j+1)-1, \(0 \leq j \leq n-1\), of the integer array Jr_row, and real array Jr_val, respectively. As before, for sparse matrices, this scheme almost always requires less storage than the co-ordinate format.

The symmetric \(n\) by \(n\) matrix \(H = H(x,y)\) may be presented and stored in a variety of formats. But crucially symmetry is exploited by only storing values from the lower triangular part (i.e., those entries that lie on or below the leading diagonal).

Dense storage format: The matrix \(H\) is stored as a compact dense matrix by rows, that is, the values of the entries of each row in turn are stored in order within an appropriate real one-dimensional array. Since \(H\) is symmetric, only the lower triangular part (that is the part \(H_{ij}\) for \(0 \leq j \leq i \leq n-1\)) need be held. In this case the lower triangle should be stored by rows, that is component \(i \ast (i+1)/2 + j\) of the storage array H_val will hold the value \(H_{ij}\) (and, by symmetry, \(H_{ji}\)) for \(0 \leq j \leq i \leq n-1\).
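The lower-triangle-by-rows index formula can be checked on a small symmetric matrix (illustrative values):

```python
import numpy as np

# Dense symmetric storage: component i*(i+1)/2 + j of H_val holds
# H[i,j] for 0 <= j <= i <= n-1 (lower triangle packed by rows).
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
n = 3
H_val = np.array([H[i, j] for i in range(n) for j in range(i + 1)])
i, j = 2, 1
print(H_val[i * (i + 1) // 2 + j] == H[i, j])   # True
```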

Sparse co-ordinate storage format: Only the nonzero entries of the matrices are stored. For the \(l\)-th entry, \(0 \leq l \leq ne-1\), of \(H\), its row index i, column index j and value \(H_{ij}\), \(0 \leq j \leq i \leq n-1\), are stored as the \(l\)-th components of the integer arrays H_row and H_col and real array H_val, respectively, while the number of nonzeros is recorded as H_ne = \(ne\). Note that only the entries in the lower triangle should be stored.

Sparse row-wise storage format: Again only the nonzero entries are stored, but this time they are ordered so that those in row i appear directly before those in row i+1. For the i-th row of \(H\) the i-th component of the integer array H_ptr holds the position of the first entry in this row, while H_ptr(n) holds the total number of entries. The column indices j, \(0 \leq j \leq i\), and values \(H_{ij}\) of the entries in the i-th row are stored in components l = H_ptr(i), …, H_ptr(i+1)-1 of the integer array H_col, and real array H_val, respectively. Note that as before only the entries in the lower triangle should be stored. For sparse matrices, this scheme almost always requires less storage than its predecessor.

Diagonal storage format: If \(H\) is diagonal (i.e., \(H_{ij} = 0\) for all \(0 \leq i \neq j \leq n-1\)) only the diagonal entries \(H_{ii}\), \(0 \leq i \leq n-1\), need be stored, and the first n components of the array H_val may be used for the purpose.

Multiples of the identity storage format: If \(H\) is a multiple of the identity matrix, (i.e., \(H = \alpha I\) where \(I\) is the n by n identity matrix and \(\alpha\) is a scalar), it suffices to store \(\alpha\) as the first component of H_val.

The identity matrix format: If \(H\) is the identity matrix, no values need be stored.

The zero matrix format: The same is true if \(H\) is the zero matrix.

functions#

snls.initialize()#

Set default option values and initialize private data

Returns:

optionsdict
dictionary containing default control options:
errorint

error and warning diagnostics occur on stream error.

outint

general output occurs on stream out.

print_levelint

the level of output required. Possible values are

  • <= 0

    gives no output.

  • 1

    gives a one-line summary for every iteration.

  • 2

    gives a summary of the inner iteration for each iteration.

  • >=3

    gives increasingly verbose (debugging) output.

start_printint

any printing will start on this iteration.

stop_printint

any printing will stop on this iteration.

print_gapint

the number of iterations between printing.

maxitint

the maximum number of iterations performed.

alive_unitint

removal of the file alive_file from unit alive_unit terminates execution.

alive_filestr

see alive_unit.

jacobian_availableint

is the Jacobian matrix of first derivatives available (\(\geq\) 2), is it accessible only via matrix-vector products (= 1), or is it not available (\(\leq\) 0)?

subproblem_solverint

the solver used to compute the step. Possible values are

  • 1

    use a projected-gradient method from the GALAHAD module slls.

  • 2

    use an interior-point method from the GALAHAD module sllsb.

  • 3

    use an interior-point method, but switch to a projected-gradient method when sufficient progress has been made, see stop_pg_switch below.

non_monotoneint

if non_monotone <= 0, a monotone strategy is used; otherwise, a non-monotone strategy with this history length is used.

weight_update_strategyint

define the weight-update strategy: 1 (basic), 2 (reset to zero when very successful), 3 (imitate TR), 4 (increase lower bound), 5 (GPT).

stop_r_absolutefloat

overall convergence tolerances. The iteration will terminate when \(\|r(x)\|_2 \leq \max(\) stop_r_absolute, stop_r_relative * \(\|r(x_0)\|_2)\), or when the norm of the projected gradient, with \(g(x) = J_r^T(x) W r(x)\), satisfies \(\|P[x-g(x)]-x\|_2 \leq \max(\) stop_pg_absolute, stop_pg_relative * \(\|P[x_0 - g(x_0)] - x_0\|_2)\), or if the norm of the step is less than stop_s, where \(x_0\) is the initial point.

stop_r_relativefloat

see stop_r_absolute.

stop_pg_absolutefloat

see stop_r_absolute.

stop_pg_relativefloat

see stop_r_absolute.

stop_sfloat

see stop_r_absolute.

stop_pg_switchfloat

the step-computation solver will switch from an interior-point method to a projected-gradient one if subproblem_solver = 3 (see above) and \(\|P[x-g(x)]-x\|_2 \leq \max(\) stop_pg_absolute, stop_pg_switch * \(\|P[x_0 - g(x_0)] - x_0\|_2)\).

initial_weightfloat

initial value for the regularization weight (-ve means \(1/\|g_0\|\)).

minimum_weightfloat

minimum permitted regularization weight.

eta_successfulfloat

a potential iterate will only be accepted if the actual decrease f - f(x_new) is larger than eta_successful times that predicted by a quadratic model of the decrease. The regularization weight will be decreased if this relative decrease is greater than eta_very_successful but smaller than eta_too_successful.

eta_very_successfulfloat

see eta_successful.

eta_too_successfulfloat

see eta_successful.

weight_decrease_minfloat

on very successful iterations, the regularization weight will be reduced by the factor weight_decrease, but by no more than weight_decrease_min, while if the iteration is unsuccessful, the weight will be increased by a factor weight_increase, but by no more than weight_increase_max (these are delta_1, delta_2, delta_3 and delta_max in Gould, Porcelli and Toint, 2012).

weight_decreasefloat

see weight_decrease_min.

weight_increasefloat

see weight_decrease_min.

weight_increase_maxfloat

see weight_decrease_min.

switch_to_newtonfloat

if newton_acceleration (see below) is true, the Gauss-Newton model will switch to the Newton one as soon as the norm of the projected gradient is smaller than switch_to_newton. This option is not yet implemented.

cpu_time_limitfloat

the maximum CPU time allowed (-ve means infinite).

clock_time_limitfloat

the maximum elapsed clock time allowed (-ve means infinite).

newton_accelerationbool

should available second derivatives be used to accelerate the iteration when possible? not yet implemented

magic_stepbool

allow the user to perform a “magic” step to improve the objective.

print_objbool

print values of the objective/gradient rather than \(\|r\|\) and its gradient.

space_criticalbool

if space_critical is True, every effort will be made to use as little space as possible. This may result in longer computation time.

deallocate_error_fatalbool

if deallocate_error_fatal is True, any array/pointer deallocation error will terminate execution. Otherwise, computation will continue.

prefixstr

all output lines will be prefixed by the string contained in quotes within prefix, e.g. ‘word’ (note the quotes) will result in the prefix word.

slls_optionsdict

default control options for slls (see slls.initialize).

sllsb_optionsdict

default control options for sllsb (see sllsb.initialize).

snls.load(n, m_r, m_c, Jr_type, Jr_ne, Jr_row, Jr_col, Jr_ptr_ne,
Jr_ptr, cohort, options=None)#

Import problem data into internal storage prior to solution.

Parameters:

nint

holds the number of variables.

m_rint

holds the number of residuals.

m_cint

holds the number of cohorts.

Jr_typestring

specifies the unsymmetric storage scheme used for the Jacobian \(J_r = J_r(x)\). It should be one of ‘coordinate’, ‘sparse_by_rows’, ‘sparse_by_columns’, ‘dense’ or ‘dense_by_columns’; lower or upper case variants are allowed.

Jr_neint

holds the number of entries in \(J_r\) in the sparse co-ordinate storage scheme. It need not be set for any of the other schemes.

Jr_rowndarray(Jr_ne)

holds the row indices of \(J_r\) in the sparse co-ordinate and sparse column-wise storage schemes. It need not be set for any of the other schemes, and in this case can be None.

Jr_colndarray(Jr_ne)

holds the column indices of \(J_r\) in either the sparse co-ordinate, or the sparse row-wise storage scheme. It need not be set when the other schemes are used, and in this case can be None.

Jr_ptr_neint

holds the dimension of Jr_ptr. This should be at least m_r+1 if \(J_r\) is stored in the sparse_by_rows scheme, n+1 for the sparse_by_columns scheme, and 0 otherwise.

Jr_ptrndarray(Jr_ptr_ne)

holds the starting position of each row of \(J_r\), as well as the total number of entries, in the sparse row-wise storage scheme, or the starting position of each column of \(J_r\), as well as the total number of entries, in the sparse column-wise storage scheme. It need not be set when the other schemes are used, and in this case can be None.

cohortndarray(n), optional

specifies which cohort, \(i=0,\ldots,m_c-1\), each variable \(x_j\), \(j=0,\ldots,n-1\), lies in; a negative value indicates that \(x_j\) is not constrained. If all variables lie in a single cohort, it may be set to None.

optionsdict, optional

dictionary of control options (see snls.initialize).

snls.solve(n, m_r, m_c, x, eval_r, Jr_ne, eval_jr, w)#

Find an approximate local minimizer of a given least-squares objective function subject to non-overlapping simplex constraints using a regularization method.

Parameters:

nint

holds the number of variables.

m_rint

holds the number of residuals.

m_cint

holds the number of cohorts.

xndarray(n)

holds the values of optimization variables \(x\).

eval_rcallable

a user-defined function that must have the signature:

r = eval_r(x)

The components of the residual \(r(x)\) evaluated at \(x\) must be assigned to r.

Jr_neint

holds the number of entries in the Jacobian \(J_r = J_r(x)\).

eval_jrcallable

a user-defined function that must have the signature:

jr = eval_jr(x)

The components of the nonzeros in the Jacobian \(J_r(x)\) of the residual function evaluated at \(x\) must be assigned to jr in the same order as specified in the sparsity pattern in snls.load.

wndarray(m_r), optional

holds the vector of weights \(w\). If w is None, weights of one will be presumed.

Returns:

xndarray(n)

holds the value of the approximate minimizer \(x\) after a successful call.

yndarray(m_c)

holds the value of the Lagrange multipliers \(y\).

zndarray(n)

holds the value of the dual variables \(z\).

rndarray(m_r)

holds the value of the residuals \(r(x)\).

gndarray(n)

holds the gradient \(\nabla_x f(x)\) of the objective function.

x_statndarray(n)

holds the return status for each variable \(x_i\).

[optional] snls.information()#

Provide optional output information

Returns:

informdict
dictionary containing output information:
statusint

return status. Possible values are:

  • 0

    The run was successful.

  • -1

    An allocation error occurred. A message indicating the offending array is written on unit options[‘error’], and the returned allocation status and a string containing the name of the offending array are held in inform[‘alloc_status’] and inform[‘bad_alloc’] respectively.

  • -2

    A deallocation error occurred. A message indicating the offending array is written on unit options[‘error’] and the returned allocation status and a string containing the name of the offending array are held in inform[‘alloc_status’] and inform[‘bad_alloc’] respectively.

  • -3

The restriction n > 0 or m_r > 0, or the requirement that the storage type contains its relevant string ‘dense’, ‘dense_by_rows’, ‘dense_by_columns’, ‘sparse_by_rows’, ‘sparse_by_columns’, ‘coordinate’ or ‘diagonal’, has been violated.

  • -9

    The analysis phase of the factorization failed; the return status from the factorization package is given by inform[‘factor_status’].

  • -10

    The factorization failed; the return status from the factorization package is given by inform[‘factor_status’].

  • -11

    The solution of a set of linear equations using factors from the factorization package failed; the return status from the factorization package is given by inform[‘factor_status’].

  • -16

    The problem is so ill-conditioned that further progress is impossible.

  • -18

    Too many iterations have been performed. This may happen if options[‘maxit’] is too small, but may also be symptomatic of a badly scaled problem.

  • -19

    The CPU time limit has been reached. This may happen if options[‘cpu_time_limit’] is too small, but may also be symptomatic of a badly scaled problem.

  • -82

    The user has forced termination of the solver by removing the file named options[‘alive_file’] from unit options[‘alive_unit’].

alloc_statusint

the status of the last attempted allocation/deallocation.

bad_allocstr

the name of the array for which an allocation/deallocation error occurred.

bad_evalstr

the name of the user-supplied evaluation routine for which an error occurred.

iterint

the total number of iterations performed.

inner_iterint

the total number of inner iterations performed when finding the step.

r_evalint

the total number of evaluations of the residual function \(r(x)\).

j_evalint

the total number of evaluations of the Jacobian \(J_r(x)\) of \(r(x)\).

objfloat

the value of the objective function \(\frac{1}{2}\|r(x)\|^2_W\) at the best estimate of the solution, x, determined by SNLS_solve.

norm_cfloat

the norm of the residual \(\|r(x)\|_W\) at the best estimate of the solution, x, determined by SNLS_solve.

norm_gfloat

the norm of the gradient of the objective function \(\frac{1}{2}\|r(x)\|^2_W\) at the best estimate, x, of the solution determined by SNLS_solve.

norm_pgfloat

the norm of the projected gradient \(P[x - J_r^T(x) W r(x)] - x\) at the best estimate, x, of the solution determined by SNLS_solve.

weightfloat

the final regularization weight used.

timedict
dictionary containing timing information:
totalfloat

the total CPU time spent in the package.

sllsfloat

the CPU time spent in the slls package.

sllsbfloat

the CPU time spent in the sllsb package.

clock_totalfloat

the total clock time spent in the package.

clock_sllsfloat

the clock time spent in the slls package.

clock_sllsbfloat

the clock time spent in the sllsb package.

slls_informdict

inform parameters for slls (see slls.information).

sllsb_informdict

inform parameters for sllsb (see sllsb.information).

snls.terminate()#

Deallocate all internal private storage.

example code#

from galahad import snls
import numpy as np
np.set_printoptions(precision=4,suppress=True,floatmode='fixed')
print("\n** python test: snls")

# set parameters
p = 4.0
n = 5
m_r = 4

# set Jacobian sparsity
Jr_type = 'coordinate'
Jr_ne = 8
Jr_row = np.array([0,0,1,1,2,2,3,3])
Jr_col = np.array([0,1,1,2,2,3,3,4])
Jr_ptr = None

# set cohorts
m_c = 2
cohort = np.array([0,1,-1,0,1])

# set weights
w = np.array([1.0,1.0,1.0,1.0])

# allocate internal data and set default options
options = snls.initialize()

# set some non-default options
options['print_level'] = 0
options['jacobian_available'] = 2
options['stop_pg_absolute'] =  0.00001
options['slls_options']['sbls_options']['symmetric_linear_solver'] = 'sytr '
options['slls_options']['sbls_options']['definite_linear_solver'] = 'potr '
#print("options:", options)

# load data (and optionally non-default options)
snls.load(n, m_r, m_c, Jr_type, Jr_ne, Jr_row, Jr_col, 0, Jr_ptr,
          cohort, options)

# define residual function and its Jacobian
def eval_r(x):
    return np.array([x[0]*x[1] - p, x[1]*x[2] - 1.0,
                     x[2]*x[3] - 1.0, x[3]*x[4] - 1.0])
def eval_jr(x):
    return np.array([x[1], x[0], x[2], x[1], x[3], x[2], x[4], x[3]])

# set starting point
x = np.array([0.5,0.5,0.5,0.5,0.5])

# find minimizer
print("\n sparse solve snls")
x, y, z, r, g, x_stat = snls.solve(n, m_r, m_c, x, eval_r, Jr_ne, eval_jr, w)
print(" x:",x)
print(" y:",y)
print(" z:",z)
print(" r:",r)
print(" g:",g)
print(" x_stat:",x_stat)

# get information
inform = snls.information()
#print("inform:", inform)
print(" f: %.4f" % inform['obj'])
print('** snls exit status:', inform['status'])

# deallocate internal data
snls.terminate()

# repeat problem using dense format

# set Jacobian sparsity
Jr_type = 'dense'
Jr_ne = n * m_r
Jr_row = None
Jr_col = None
Jr_ptr = None

# allocate internal data and set default options
options = snls.initialize()

# set some non-default options
options['print_level'] = 0
options['jacobian_available'] = 2
options['stop_pg_absolute'] =  0.00001
options['slls_options']['sbls_options']['symmetric_linear_solver'] = 'sytr '
options['slls_options']['sbls_options']['definite_linear_solver'] = 'potr '
#print("options:", options)

# load data (and optionally non-default options)
snls.load(n, m_r, m_c, Jr_type, Jr_ne, Jr_row, Jr_col, 0, Jr_ptr,
          cohort, options)

# define the dense Jacobian
def eval_jr_dense(x):
    return np.array([x[1], x[0], 0.0,  0.0,  0.0,
                     0.0,  x[2], x[1], 0.0,  0.0,
                     0.0,  0.0,  x[3], x[2], 0.0,
                     0.0,  0.0,  0.0,  x[4], x[3]])

# set starting point
x = np.array([0.5,0.5,0.5,0.5,0.5])

# find minimizer
print("\n dense solve snls")
x, y, z, r, g, x_stat = snls.solve(n, m_r, m_c, x, eval_r, Jr_ne, eval_jr_dense, w)
print(" x:",x)
print(" y:",y)
print(" z:",z)
print(" r:",r)
print(" g:",g)
print(" x_stat:",x_stat)

# get information
inform = snls.information()
#print("inform:", inform)
print(" f: %.4f" % inform['obj'])
print('** snls exit status:', inform['status'])

# deallocate internal data
snls.terminate()

This example code is available in $GALAHAD/src/snls/Python/test_snls.py .