nls_control_type structure#
struct nls_control_type{T}
  f_indexing::Bool
  error::Int32
  out::Int32
  print_level::Int32
  start_print::Int32
  stop_print::Int32
  print_gap::Int32
  maxit::Int32
  alive_unit::Int32
  alive_file::NTuple{31,Cchar}
  jacobian_available::Int32
  hessian_available::Int32
  model::Int32
  norm::Int32
  non_monotone::Int32
  weight_update_strategy::Int32
  stop_c_absolute::T
  stop_c_relative::T
  stop_g_absolute::T
  stop_g_relative::T
  stop_s::T
  power::T
  initial_weight::T
  minimum_weight::T
  initial_inner_weight::T
  eta_successful::T
  eta_very_successful::T
  eta_too_successful::T
  weight_decrease_min::T
  weight_decrease::T
  weight_increase::T
  weight_increase_max::T
  reduce_gap::T
  tiny_gap::T
  large_root::T
  switch_to_newton::T
  cpu_time_limit::T
  clock_time_limit::T
  subproblem_direct::Bool
  renormalize_weight::Bool
  magic_step::Bool
  print_obj::Bool
  space_critical::Bool
  deallocate_error_fatal::Bool
  prefix::NTuple{31,Cchar}
  rqs_control::rqs_control_type{T}
  glrt_control::glrt_control_type{T}
  psls_control::psls_control_type{T}
  bsc_control::bsc_control_type
  roots_control::roots_control_type{T}
  subproblem_control::nls_subproblem_control_type{T}
end
detailed documentation#
control derived type as a Julia structure
components#
Bool f_indexing
use C or Fortran sparse matrix indexing
Int32 error
error and warning diagnostics occur on stream error
Int32 out
general output occurs on stream out
Int32 print_level
the level of output required.
\(\leq\) 0 gives no output,
= 1 gives a one-line summary for every iteration,
= 2 gives a summary of the inner iteration for each iteration,
\(\geq\) 3 gives increasingly verbose (debugging) output
Int32 start_print
any printing will start on this iteration
Int32 stop_print
any printing will stop on this iteration
Int32 print_gap
the number of iterations between printing
Int32 maxit
the maximum number of iterations performed
Int32 alive_unit
removal of the file alive_file from unit alive_unit terminates execution
NTuple{31,Cchar} alive_file
see alive_unit
Int32 jacobian_available
is the Jacobian matrix of first derivatives available (\(\geq\) 2), is access only via matrix-vector products (=1) or is it not available (\(\leq\) 0) ?
Int32 hessian_available
is the Hessian matrix of second derivatives available (\(\geq\) 2), is access only via matrix-vector products (=1) or is it not available (\(\leq\) 0) ?
Int32 model
the model used.
Possible values are
0 dynamic (not yet implemented)
1 first-order (no Hessian)
2 barely second-order (identity Hessian)
3 Gauss-Newton (\(J^T J\) Hessian)
4 second-order (exact Hessian)
5 Gauss-Newton to Newton transition
6 tensor Gauss-Newton treated as a least-squares model
7 tensor Gauss-Newton treated as a general model
8 tensor Gauss-Newton transition from a least-squares to a general model
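As an illustration of the Gauss-Newton choice (model = 3), here is a minimal sketch (not GALAHAD code; the residual and Jacobian values are hypothetical) of the quadratic model it builds for \(\frac{1}{2}\|c(x+s)\|_2^2\) from the residuals \(c(x)\) and Jacobian \(J(x)\):

```julia
using LinearAlgebra

# hypothetical residuals and Jacobian at the current iterate x
c = [1.0, -2.0, 0.5]
J = [1.0 0.0; 0.0 2.0; 1.0 1.0]

# Gauss-Newton model of 0.5*||c(x+s)||^2: gradient J'c and Hessian J'J;
# the exact second-order model (model = 4) would add residual curvature terms
g = J' * c
H = J' * J

# unregularized Gauss-Newton step and the decrease the model predicts for it
s = -(H \ g)
pred = -(dot(g, s) + 0.5 * dot(s, H * s))
```

Since \(J^T J\) is positive semi-definite, the predicted decrease `pred` is nonnegative whenever the step solves the model, which is what makes this model attractive for small-residual problems.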
Int32 norm
the regularization norm used.
The norm is defined via \(\|v\|^2 = v^T S v\), and will define the preconditioner used for iterative methods. Possible values for \(S\) are
-3 user’s own regularization norm
-2 \(S\) = limited-memory BFGS matrix (with .PSLS_control.lbfgs_vectors history) (not yet implemented)
-1 identity (= Euclidean two-norm)
0 automatic (not yet implemented)
1 diagonal, \(S\) = diag( max(\(J^TJ\) Hessian, .PSLS_control.min_diagonal ) )
2 diagonal, \(S\) = diag( max( Hessian, .PSLS_control.min_diagonal ) )
3 banded, \(S\) = band( Hessian ) with semi-bandwidth .PSLS_control.semi_bandwidth
4 re-ordered band, \(S\) = band( order( Hessian ) ) with semi-bandwidth .PSLS_control.semi_bandwidth
5 full factorization, \(S\) = Hessian, Schnabel-Eskow modification
6 full factorization, \(S\) = Hessian, GMPS modification (not yet implemented)
7 incomplete factorization of Hessian, Lin-Moré
8 incomplete factorization of Hessian, HSL_MI28
9 incomplete factorization of Hessian, Munksgaard (not yet implemented)
10 expanding band of Hessian (not yet implemented)
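For example, the diagonal choice norm = 1 can be sketched in a few lines of plain Julia (hypothetical values; `min_diagonal` stands in for .PSLS_control.min_diagonal):

```julia
using LinearAlgebra

J = [1.0 0.0; 0.0 2.0; 1.0 1.0]   # hypothetical Jacobian
min_diagonal = 1e-5                # plays the role of .PSLS_control.min_diagonal

# norm = 1: S = diag( max( diagonal of the J'J Hessian, min_diagonal ) )
d = max.(diag(J' * J), min_diagonal)
S = Diagonal(d)

# the regularization norm ||v||_S = sqrt(v' S v) defined above
v = [1.0, -1.0]
normS = sqrt(dot(v, S * v))
```

The `max` with `min_diagonal` guards against zero (or tiny) diagonal entries, which would otherwise make \(S\) singular and the norm degenerate.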
Int32 non_monotone
if \(\leq\) 0, a monotone strategy is used; otherwise, a non-monotone strategy with this history length is used
Int32 weight_update_strategy
define the weight-update strategy: 1 (basic), 2 (reset to zero when very successful), 3 (imitate TR), 4 (increase lower bound), 5 (GPT)
T stop_c_absolute
overall convergence tolerances. The iteration will terminate when \(\|c(x)\|_2 \leq\) MAX( .stop_c_absolute, .stop_c_relative \(* \|c(x_{\mbox{initial}})\|_2\) ), when the norm of the gradient of \(\|c(x)\|_2\), \(g = J^T(x) c(x) / \|c(x)\|_2\), satisfies \(\|g\|_2 \leq\) MAX( .stop_g_absolute, .stop_g_relative \(* \|g_{\mbox{initial}}\|_2\) ), or if the step is less than .stop_s
T stop_c_relative
see stop_c_absolute
T stop_g_absolute
see stop_c_absolute
T stop_g_relative
see stop_c_absolute
T stop_s
see stop_c_absolute
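A sketch of this termination test in plain Julia (the tolerance values are hypothetical, not the library's defaults):

```julia
# illustrative tolerances (hypothetical values, not GALAHAD defaults)
stop_c_absolute, stop_c_relative = 1e-8, 1e-8
stop_g_absolute, stop_g_relative = 1e-8, 1e-8

# norms at the initial point, ||c(x_initial)||_2 and ||g_initial||_2
c0norm, g0norm = 3.0, 10.0

# terminate when either the residual norm or the gradient norm test holds
function converged(cnorm, gnorm)
    cnorm <= max(stop_c_absolute, stop_c_relative * c0norm) ||
    gnorm <= max(stop_g_absolute, stop_g_relative * g0norm)
end
```

Taking the MAX of the absolute and scaled-relative tolerances means the relative test dominates on badly scaled problems, while the absolute test provides a floor when the initial norms are already small.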
T power
the regularization power (<2 => chosen according to the model)
T initial_weight
initial value for the regularization weight (-ve => \(1/\|g_0\|\))
T minimum_weight
minimum permitted regularization weight
T initial_inner_weight
initial value for the inner regularization weight for tensor GN (-ve => 0)
T eta_successful
a potential iterate will only be accepted if the actual decrease f - f(x_new) is larger than .eta_successful times that predicted by a quadratic model of the decrease. The regularization weight will be decreased if this relative decrease is greater than .eta_very_successful but smaller than .eta_too_successful
T eta_very_successful
see eta_successful
T eta_too_successful
see eta_successful
T weight_decrease_min
on very successful iterations, the regularization weight will be reduced by the factor .weight_decrease but no more than .weight_decrease_min, while if the iteration is unsuccessful, the weight will be increased by a factor .weight_increase but no more than .weight_increase_max (these are delta_1, delta_2, delta_3 and delta_max in Gould, Porcelli and Toint, 2011)
T weight_decrease
see weight_decrease_min
T weight_increase
see weight_decrease_min
T weight_increase_max
see weight_decrease_min
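The weight-update scheme described above can be sketched as follows (a simplified illustration with hypothetical factor values, where `rho` denotes the ratio of actual to predicted decrease):

```julia
# hypothetical acceptance thresholds and update factors
eta_successful, eta_very_successful, eta_too_successful = 0.01, 0.9, 2.0
weight_decrease_min, weight_decrease = 0.1, 0.5
weight_increase, weight_increase_max = 2.0, 100.0

function update_weight(weight, rho)
    if eta_very_successful <= rho < eta_too_successful
        # very successful: reduce by weight_decrease, but by no more
        # than the factor weight_decrease_min
        max(weight_decrease * weight, weight_decrease_min * weight)
    elseif rho < eta_successful
        # unsuccessful: increase by weight_increase, but by no more
        # than the factor weight_increase_max
        min(weight_increase * weight, weight_increase_max * weight)
    else
        weight   # merely successful (or too successful): leave unchanged
    end
end
```

For example, a very successful step (`rho = 0.95`) halves the weight, while an unsuccessful one (`rho = 0.0`) doubles it.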
T reduce_gap
expert parameters as suggested in Gould, Porcelli and Toint, “Updating the regularization parameter in the adaptive cubic regularization algorithm”, RAL-TR-2011-007, Rutherford Appleton Laboratory, England (2011), http://epubs.stfc.ac.uk/bitstream/6181/RAL-TR-2011-007.pdf (these are denoted beta, epsilon_chi and alpha_max in the paper)
T tiny_gap
see reduce_gap
T large_root
see reduce_gap
T switch_to_newton
if the Gauss-Newton to Newton model is specified, switch to Newton as soon as the norm of the gradient g is smaller than .switch_to_newton
T cpu_time_limit
the maximum CPU time allowed (-ve means infinite)
T clock_time_limit
the maximum elapsed clock time allowed (-ve means infinite)
Bool subproblem_direct
use a direct (factorization) or (preconditioned) iterative method to find the search direction
Bool renormalize_weight
should the weight be renormalized to account for a change in scaling?
Bool magic_step
allow the user to perform a “magic” step to improve the objective
Bool print_obj
print values of the objective/gradient rather than ||c|| and its gradient
Bool space_critical
if .space_critical true, every effort will be made to use as little space as possible. This may result in longer computation time
Bool deallocate_error_fatal
if .deallocate_error_fatal is true, any array/pointer deallocation error will terminate execution. Otherwise, computation will continue
NTuple{31,Cchar} prefix
all output lines will be prefixed by .prefix(2:LEN(TRIM(.prefix))-1) where .prefix contains the required string enclosed in quotes, e.g. “string” or ‘string’
rqs_control_type{T} rqs_control
control parameters for RQS
glrt_control_type{T} glrt_control
control parameters for GLRT
psls_control_type{T} psls_control
control parameters for PSLS
bsc_control_type bsc_control
control parameters for BSC
roots_control_type{T} roots_control
control parameters for ROOTS
nls_subproblem_control_type{T} subproblem_control
control parameters for the step-finding subproblem