::tclopt

Classes

DE

Class implements the Differential Evolution (DE) algorithm to solve global optimization problems over continuous parameter spaces. Typically, it is used to minimize a user-supplied objective function by evolving a population of candidate solutions through mutation, crossover, and selection.

Differential Evolution is a stochastic, population-based optimizer that works well for non-linear, non-differentiable, and multi-modal objective functions. It does not require gradient information and is effective in high-dimensional or rugged search spaces. The user-supplied objective function should take a vector of parameters as input and return a scalar value to be minimized. For example, the objective function might compute the volume of material used in a structure given its geometric parameters, the error rate of a machine learning model, or the energy of a physical system. DE begins by initializing a population of random candidate solutions within given parameter bounds and iteratively refines them by combining members of the population and selecting better solutions over generations.

Simple constraints are placed on parameter values by adding objects of class ::tclopt::Parameter to DE with the method ::tclopt::Optimization::addPars. For details of how to specify constraints, see the description of the ::tclopt::Parameter class. Please note that the order in which parameter objects are attached is the order in which values are supplied to the minimized function, and the order in which results are written to the x property of the class.

General advice

  • f is usually between 0.5 and 1 (in rare cases > 1)

  • cr is between 0 and 1, with 0.0, 0.3, 0.7 and 1.0 being worth trying first

  • To start off, np = 10*d is a reasonable choice. Increase np if misconvergence happens.

  • If you increase np, f usually has to be decreased

  • When the DE/best… schemes fail, DE/rand… usually works, and vice versa
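The control parameters above can be seen together in a minimal sketch of the classic DE/rand/1/bin scheme. Python is used here purely for illustration; tclopt itself is a Tcl package, and all names below are illustrative, not tclopt API:

```python
import random

def de_rand_1_bin(f, bounds, np_=20, fw=0.9, cr=0.9, genmax=200, seed=0):
    """Textbook DE/rand/1/bin: mutate, binomial crossover, greedy selection."""
    rng = random.Random(seed)
    d = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(genmax):
        for i in range(np_):
            r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(d)            # forced mutant component
            trial = []
            for j in range(d):
                if rng.random() < cr or j == jrand:
                    v = pop[r1][j] + fw * (pop[r2][j] - pop[r3][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)     # keep the trial inside the box
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:                   # greedy selection
                pop[i], cost[i] = trial, fc
    ibest = min(range(np_), key=cost.__getitem__)
    return pop[ibest], cost[ibest]
```

With fw and cr near their 0.9 defaults, this loop reliably minimizes smooth test functions such as the sphere function.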

Strategies overview

Naming convention for strategies: x/y/z, where:

  • x - a string which denotes the vector to be perturbed (mutated)

  • y - number of difference vectors taken for perturbation (mutation) of x

  • z - crossover method (exp = exponential, bin = binomial)

Mutation

The combination of x and y gives the following mutation functions:

best/1:

→    →         →     →
u  = x  + f ⋅ ⎛x   - x  ⎞
 i    b       ⎝ r2    r3⎠

rand/1:

→    →         →     →
u  = x  + f ⋅ ⎛x   - x  ⎞
 i    r1      ⎝ r2    r3⎠

rand-to-best/1 (custom variant):

→    →         →     →           →     →
u  = x  + f ⋅ ⎛x   - x  ⎞ + f ⋅ ⎛x   - x  ⎞
 i    i       ⎝ b     i ⎠       ⎝ r1    r2⎠

best/2:

→    →         →     →     →     →
u  = x  + f ⋅ ⎛x   + x   - x   - x  ⎞
 i    b       ⎝ r1    r2    r3    r4⎠

rand/2:

→    →         →     →     →     →
u  = x  + f ⋅ ⎛x   + x   - x   - x  ⎞
 i    r5      ⎝ r1    r2    r3    r4⎠

x_i - the target vector, x_b - the best vector of the current population, x_rn - randomly selected distinct individuals from the population; u_i denotes the resulting mutant vector (called v_i in the crossover description below).
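The five mutation formulas can be collected into one helper. This is an illustrative Python sketch, not tclopt code; r holds the random population indices r1…r5, all distinct from i:

```python
def mutate(pop, best, i, f, strategy, r):
    """Mutant vector u_i for the five schemes above; r lists random
    population indices (r[0] = r1, r[1] = r2, ...), all distinct from i."""
    x = lambda k: pop[r[k - 1]]
    d = len(pop[i])
    if strategy == "best/1":
        return [best[j] + f * (x(2)[j] - x(3)[j]) for j in range(d)]
    if strategy == "rand/1":
        return [x(1)[j] + f * (x(2)[j] - x(3)[j]) for j in range(d)]
    if strategy == "rand-to-best/1":
        return [pop[i][j] + f * (best[j] - pop[i][j]) + f * (x(1)[j] - x(2)[j])
                for j in range(d)]
    if strategy == "best/2":
        return [best[j] + f * (x(1)[j] + x(2)[j] - x(3)[j] - x(4)[j])
                for j in range(d)]
    if strategy == "rand/2":
        return [x(5)[j] + f * (x(1)[j] + x(2)[j] - x(3)[j] - x(4)[j])
                for j in range(d)]
    raise ValueError(strategy)
```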

A crossover operation between the newly generated mutant vector v_i and the target vector x_i is used to further increase the diversity of the new candidate solution.

Exponential crossover

In exponential crossover, a contiguous block of dimensions is modified, starting from a random index and continuing as long as random draws are less than CR. The mutation happens inline during crossover, and wrapping around the end of the vector is supported.

Example (D = 10, n = 3, L = 4):

Parent x_i:         [x0 x1 x2 x3 x4 x5 x6 x7 x8 x9]
Exponential mask:             →  →  →  →
                              n n+1 n+2 n+3

Trial u_i:          [x0 x1 x2 v3 v4 v5 v6 x7 x8 x9]
                              ↑ mutated by the DE strategy
  • Starts from a random index n ∈ [0, D)

  • Replaces a contiguous block of components (dimension-wise)

  • Continues as long as rand() < CR, up to D components

  • Mutation and crossover are applied together in the code, not as separate stages.
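The exponential scheme described above can be sketched as follows (illustrative Python, not tclopt code):

```python
import random

def exponential_crossover(target, mutant, cr, rng):
    """Copy a contiguous, circular block of components from the mutant,
    starting at a random index n and continuing while rand() < cr."""
    d = len(target)
    trial = list(target)
    j = rng.randrange(d)              # random start index n in [0, D)
    copied = 0
    while True:
        trial[j] = mutant[j]          # take this component from the mutant
        j = (j + 1) % d               # wrap around the end of the vector
        copied += 1
        if copied >= d or rng.random() >= cr:
            break
    return trial
```

With cr = 0 exactly one component is taken from the mutant; with cr = 1 the whole vector is.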

Binomial crossover

In binomial crossover, each dimension has an independent probability CR of being replaced by the mutant vector. At least one dimension is guaranteed to be copied from the mutant (typically by forcing one fixed index to be included).

Example (D = 10):

Parent x_i:         [x0 x1 x2 x3 x4 x5 x6 x7 x8 x9]
Random mask:         ✗  ✓  ✗  ✗  ✓  ✗  ✓  ✗  ✗  ✓
Mutant values:      [v0 v1 v2 v3 v4 v5 v6 v7 v8 v9]

Trial u_i:          [x0 v1 x2 x3 v4 x5 v6 x7 x8 v9]
                        ↑        ↑     ↑        ↑
                    replaced where rand() < CR
  • Applies independent crossover decision for each dimension

  • Starts at a random dimension n, iterates D steps circularly

  • Each dimension is replaced with probability CR

  • Ensures at least one dimension is modified (usually last)

  • Mutation and crossover are applied together in the code, not as separate stages.
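The binomial scheme described above fits in a few lines (illustrative Python, not tclopt code):

```python
import random

def binomial_crossover(target, mutant, cr, rng):
    """Independent per-dimension decision; one forced index guarantees
    the trial differs from the target in at least one component."""
    d = len(target)
    jrand = rng.randrange(d)                      # forced mutant component
    return [mutant[j] if (rng.random() < cr or j == jrand) else target[j]
            for j in range(d)]
```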

Summary of strategies

ID  Base  Difference                XOV  Description
1   best  r2 - r3                   exp  Exploitative, may misconverge
2   r1    r2 - r3                   exp  Balanced, exploratory. Try e.g. F=0.7 and CR=0.5 first
3   x_i   (best - x_i) + (r1 - r2)  exp  Hybrid: pull + variation. Try e.g. F=0.85 and CR=1 first
4   best  (r1 + r2) - (r3 + r4)     exp  Exploratory, best-guided
5   r5    (r1 + r2) - (r3 + r4)     exp  Fully random, robust
6   best  r2 - r3                   bin  Same as 1, binomial crossover
7   r1    r2 - r3                   bin  Same as 2, binomial crossover
8   x_i   (best - x_i) + (r1 - r2)  bin  Same as 3, binomial crossover
9   best  (r1 + r2) - (r3 + r4)    bin  Same as 4, binomial crossover
10  r5    (r1 + r2) - (r3 + r4)     bin  Same as 5, binomial crossover

Method summary

constructor

Constructor for the class.

[configure]:

Configure properties. See ::oo::configuresupport::configurable.

DuplChecker.duplListCheck

Inherited from DuplChecker.

Optimization.addPars

Inherited from Optimization.

Optimization.getAllPars

Inherited from Optimization.

Optimization.getAllParsNames

Inherited from Optimization.

run

Runs optimization.

Properties

-abstol:

Readable, writable. Absolute tolerance.

-cr:

Readable, writable. Crossing over factor (crossover rate).

-d:

Readable, writable. Number of parameters (dimensionality of the problem).

-debug:

Readable, writable. Flag enabling printing of debug messages during optimization.

-f:

Readable, writable. Weight factor (mutation rate).

-funct:

Readable, writable. Inherited.

-genmax:

Readable, writable. Maximum number of generations.

-histfreq:

Readable, writable. Period of history saving, saves each N iterations.

-history:

Readable, writable. Flag enabling collecting scalar history and best trajectory.

-initpop:

Readable, writable. Initial population.

-initype:

Readable, writable.

-np:

Readable, writable. Population size.

-pdata:

Readable, writable.

-refresh:

Readable, writable. Output refresh cycle.

-reltol:

Readable, writable. Relative tolerance.

-results:

Readable, writable. Inherited.

-savepop:

Readable, writable. Flag enabling including population snapshots in history.

-seed:

Readable, writable. Random seed.

-strategy:

Readable, writable. Strategy used by the optimizer.

-threshold:

Readable, writable. Objective function threshold that stops optimization.

Superclasses

Optimization

constructor

Creates optimization object that runs optimization using modified Differential Evolution algorithm.

OBJECT constructor -funct value -strategy value -pdata value ?-genmax value? ?-refresh value? ?-np value? ?-f value? ?-cr value? ?-seed value? ?-abstol value? ?-reltol value? ?-debug? ?-random|specified -initpop value? ?-history? ?-histfreq value? ?-savepop?

Parameters

-abstol value:

Absolute tolerance. Controls termination of optimization. Default 1e-6.

-cr value:

Crossing over factor (crossover rate). Controls the probability of mixing components from the target vector and the mutant vector to form a trial vector. It determines how much of the trial vector inherits its components from the mutant vector versus the target vector. A high crossover rate means that more components will come from the mutant vector, promoting exploration of new solutions. Conversely, a low crossover rate results in more components being taken from the target vector, which can help maintain existing solutions and refine them. The typical range for CR is between 0.0 and 1.0. Default is 0.9.

-debug:

Print debug messages during optimization.

-f value:

Weight factor (mutation rate). Controls the amplification of the differential variation between individuals. It is a scaling factor applied to the difference between two randomly selected population vectors before adding the result to a third vector to create a mutant vector (exact mechanism is dependent on selected strategy). The mutation rate influences the algorithm’s ability to explore the search space; a higher value of f increases the diversity of the mutant vectors, leading to broader exploration, while a lower value encourages convergence by making smaller adjustments. The typical range for f is between 0.4 and 1.0, though values outside this range can be used depending on the problem characteristics. Default is 0.9.

-funct value:

Name of the procedure that should be minimized.

-genmax value:

Maximum number of generations. Controls termination of optimization. Default 3000.

-histfreq value:

Save history every N generations. Default is 1.

-history:

Enables collecting scalar history and best trajectory.

-initpop value:

List of lists (matrix) with size np x d, requires -specified.

-np value:

Population size. Represents the number of parameter vectors per generation. As a first guess, it is recommended to set it to 5 to 10 times the number of parameters. Default is 20.

-pdata value:

List or dictionary that provides private data to funct needed to evaluate the objective (cost) function. Usually it contains lists of x and y values, but you can provide any data necessary for function evaluation. It is passed to each function evaluation without modification.

-random:

Select population initialization with random values over the individual parameters ranges.

-refresh value:

Output refresh cycle. Represents the frequency of printing debug information to stdout.

-reltol value:

Relative tolerance. Controls termination of optimization. Default 1e-2.

-savepop:

Enables including population snapshots in history (every -histfreq generations), requires -history.

-seed value:

Random seed.

-specified:

Select population initialization with specified population values, requires -initpop.

-strategy value:

Choice of strategy. Possible strategies: best/1/exp, rand/1/exp, rand-to-best/1/exp, best/2/exp, rand/2/exp, best/1/bin, rand/1/bin, rand-to-best/1/bin, best/2/bin, rand/2/bin.

-threshold value:

Objective function threshold that stops optimization.

Return value

object of class

run

Runs optimization.

DEOBJ run

Return value

dictionary containing the resulting data

DuplChecker

Method summary

[configure]:

Configure properties. See ::oo::configuresupport::configurable.

duplListCheck

Checks if list contains duplicates.

Subclasses

Optimization

duplListCheck

Checks if list contains duplicates.

DUPLCHECKEROBJ duplListCheck list

Parameters

list:

List to check.

Return value

0 if there are no duplicates and 1 if there are.

GSA

Class implements the Generalized Simulated Annealing (GSA) algorithm to solve global optimization problems over continuous parameter spaces.

Generalized Simulated Annealing (GSA) is an enhanced version of the classical simulated annealing algorithm, rooted in Tsallis statistics. It replaces traditional temperature schedules and perturbation distributions with generalized forms: specifically, it uses a distorted Cauchy–Lorentz visiting distribution controlled by a parameter qv, allowing for more flexible exploration of the solution space. The algorithm introduces artificial “temperatures” that gradually cool, injecting stochasticity to help the search process escape local minima and eventually converge within the basin of a global minimum.

The main source of information is this article.

Simple constraints are placed on parameter values by adding objects of class ::tclopt::Parameter to GSA with the method ::tclopt::Optimization::addPars. For details of how to specify constraints, see the description of the ::tclopt::Parameter class. Please note that the order in which parameter objects are attached is the order in which values are supplied to the minimized function, and the order in which results are written to the x property of the class.

General steps of algorithm

1. Inputs & setup

  • Provide: objective proc name, parameter objects, and algorithm control parameters.

  • Initialize RNG state

2. Choose initial parameter vector x_0

  • If -specified, take each parameter’s -initval.

  • If -random, sample uniformly within bounds: x_i = Unif[low_i​, up_i], i - i’th parameter

3. Estimate initial temperature temp0 (if not provided)

  • Draw -ntrial random vectors uniformly within the box; evaluate objective at each.

  • If -random, sample uniformly within bounds: x_i = Unif[low_i, up_i] for each parameter i.
  • Let d be the number of parameters. Compute sample mean and std. dev. of objective values; set:

temp  = stddev({f(x)})
    0

4. Initialize loop state

  • Current point/value:

→       →   →      →  →
x     = x , f    = f ⎛x ⎞
 curr    0   curr    ⎝ 0⎠
  • Best-so-far within the current temperature: copy current to “best”.

5. Outer loop over temperatures (cooling)

  • For outer iteration k=0,1,2,…, the temperature follows Tsallis cooling (with t = k + 1 in the formula below, so that T_0 = temp0):

             ⎛ (qv - 1)    ⎞
     temp0 ⋅ ⎝2         - 1⎠
T  = ───────────────────────
 k            (qv - 1)
       (1 + t)         - 1

6. Choose inner-iterations at this temperature

  • Inner iteration budget at T_k:

         ⎛                ⎛                  ⎛         ⎛  -d  ⎞⎞⎞⎞
         ⎜                ⎜                  ⎜         ⎜──────⎟⎟⎟⎟
         ⎜                ⎜                  ⎜         ⎝3 - qv⎠⎟⎟⎟
n  = min ⎜maxinniter, max ⎜mininniter, floor ⎜nbase ⋅ T        ⎟⎟⎟
 t       ⎝                ⎝                  ⎝         k       ⎠⎠⎠

where d is the number of parameters.
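Steps 5 and 6 can be sketched together. This is an illustrative Python sketch, not tclopt code; it assumes the t in the cooling formula equals k + 1, which makes T_0 = temp0:

```python
import math

def gsa_schedule(temp0, qv, k, d, nbase=30, mininniter=10, maxinniter=1000):
    """Tsallis cooling temperature T_k (step 5) and the inner-iteration
    budget n_t at that temperature (step 6)."""
    t = k + 1                         # assumption: t = k + 1, so T_0 = temp0
    tk = temp0 * (2.0 ** (qv - 1.0) - 1.0) / ((1.0 + t) ** (qv - 1.0) - 1.0)
    nt = min(maxinniter,
             max(mininniter, math.floor(nbase * tk ** (-d / (3.0 - qv)))))
    return tk, nt
```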

7. Inner loop: propose, clamp, evaluate, accept. For t=1,…, n_t:

  • Visit/perturb each coordinate (distorted Cauchy–Lorentz visiting distribution controlled by qv). Draw u ~ Unif(0,1); if u >= 0.5 then sign = 1, else sign = -1. The step is then:

                               ____________________
                              ╱        (qv - 1)
             ⎛   1  ⎞        ╱ ⎛   1  ⎞
             ⎜──────⎟       ╱  ⎜──────⎟         - 1
             ⎝3 - qv⎠      ╱   ⎝|2u−1|⎠
Δx = sign ⋅ T         ⋅   ╱    ────────────────────
             k          ╲╱            qv - 1

Apply per coordinate, then clamp with modulo reflection into [low, up].

  • Evaluate candidate and calculate the difference:

   →               →
f ⎛x    ⎞; Δf = f ⎛x    ⎞ - f
  ⎝ cand⎠         ⎝ cand⎠    curr
  • Acceptance rule (generalized qa-Metropolis): if Δf<=0 - accept, else accept with probability:

If qa=1:
        ⎛-Δf ⋅ k⎞
p = exp ⎜───────⎟
        ⎜  T    ⎟
        ⎝   k   ⎠
If qa < 1:
            (1 - qa) ⋅ Δf ⋅ k
    z = 1 - ─────────────────
                   T
                    k

    If z<=0 then p=0, else:

         ⎛   1  ⎞
         ⎜──────⎟
         ⎝1 - qa⎠
    p = z

If qa > 1:
                           ⎛  -1  ⎞
                           ⎜──────⎟
                           ⎝qa - 1⎠
    ⎛    (qa - 1) ⋅ Δf ⋅ k⎞
p = ⎜1 + ─────────────────⎟
    ⎜           T         ⎟
    ⎝            k        ⎠

Accept with probability p.

8. Best-of-temperature recentering

  • Track (x_best, f_best) during inner loop.

  • After finishing the n_t inner iterations, set:

→       →
x     = x
 curr    best
→       →
f     = f
 curr    best
  • Count attempted/accepted moves for diagnostics.

9. Stopping conditions (checked each outer step)

  • If -threshold is set and the best value is lower than or equal to the threshold, stop.

  • If k >= maxiter, stop.

  • If T_k <= tmin, stop.

  • If -maxfev is set and the total number of function evaluations is greater than or equal to -maxfev, stop.

10. Advance temperature or finish

  • If none of the stops triggered, increment k and repeat.

  • On exit, return: the best objective value, best x, total function evaluations, temp0, the last temperature (final T_k), and a human-readable info message.

Description of keys and data in returned dictionary (not including history mode):

objfunc:

Final value of the objective (cost) function funct.

x:

Final vector of parameters.

nfev:

Number of function evaluations.

temp0:

Initial temperature.

tempend:

End temperature.

info:

Convergence information.

niter:

Number of temperature iterations.

History mode

When the -history flag is provided, result also includes the following keys:

Key history - a dictionary with keys (one per -histfreq temperature and after the last iteration):

iter:

Temperature iteration index.

temp:

Current temperature value.

bestf:

Best-so-far (global best) objective value after this iteration.

currf:

Current objective value.

nt:

Number of inner iterations within the current temperature step.

accratio:

Acceptance ratio within the current temperature step.

nfev:

Cumulative number of function evaluations at the end of this iteration.

Key besttraj - a dictionary with keys (one per -histfreq temperature and after the last iteration):

iter:

Temperature iteration index.

x:

Parameter vector achieving the best-so-far (global best) objective value after this iteration.

If the -savemoves switch is provided as well, the result additionally contains the key histmoves with a dictionary with keys (one per -histfreq temperature and after the last iteration):

iter:

Temperature iteration index.

moves:

List of dictionaries that contain the moves accepted at this temperature.

Each move in the list of moves is a dictionary with keys:

tstep:

Index of the step inside the current temperature iteration.

x:

Accepted parameter vector.

fx:

Value of objective function for that accepted parameter vector.

Method summary

constructor

Constructor for the class.

[configure]:

Configure properties. See ::oo::configuresupport::configurable.

DuplChecker.duplListCheck

Inherited from DuplChecker.

Optimization.addPars

Inherited from Optimization.

Optimization.getAllPars

Inherited from Optimization.

Optimization.getAllParsNames

Inherited from Optimization.

run

Runs optimization.

Properties

-debug:

Readable, writable. Flag enabling debug information printing.

-funct:

Readable, writable. Inherited.

-histfreq:

Readable, writable. Period of history saving, saves each N iterations.

-history:

Readable, writable. Flag enabling collecting scalar history and best trajectory.

-initype:

Readable, writable.

-maxfev:

Readable, writable. Maximum number of objective function evaluations.

-maxinniter:

Readable, writable. Maximum number of iterations per temperature.

-maxiter:

Readable, writable. Maximum number of temperature steps.

-mininniter:

Readable, writable. Minimum number of iterations per temperature.

-nbase:

Readable, writable. Base number of iterations within single temperature.

-ntrial:

Readable, writable. Initial number of samples to determine initial temperature temp0 (if not provided).

-pdata:

Readable, writable.

-qa:

Readable, writable. Acceptance distribution parameter.

-qv:

Readable, writable. Visiting distribution parameter.

-refresh:

Readable, writable. Output refresh cycle.

-results:

Readable, writable. Inherited.

-savemoves:

Readable, writable. Flag enabling including accepted moves snapshots in history.

-seed:

Readable, writable. Random seed.

-temp0:

Readable, writable. Initial temperature value.

-threshold:

Readable, writable. Stopping criterion: stop when the best objective function value is lower than this threshold.

-tmin:

Readable, writable. Stopping criterion: stop when the temperature is lower than this value.

Superclasses

Optimization

constructor

Creates optimization object that runs optimization using the modified Generalized Simulated Annealing algorithm.

OBJECT constructor -funct value -pdata value ?-maxiter value? ?-mininniter value? ?-maxfev value? ?-seed value? ?-ntrial value? ?-nbase value? ?-qv value? ?-qa value? ?-tmin value? ?-temp0 value? ?-debug? ?-threshold value? ?-random|specified -initpop value? ?-history? ?-histfreq value? ?-savemoves?

Parameters

-debug:

Enables debug information printing.

-funct value:

Name of the procedure that should be minimized.

-histfreq value:

Save history every N generations. Default is 1.

-history:

Enables collecting scalar history and best trajectory.

-maxfev value:

Maximum number of objective function evaluations. Controls termination of optimization if provided.

-maxinniter value:

Maximum number of iterations per temperature, default is 1000.

-maxiter value:

Maximum number of temperature steps. Controls termination of optimization. Default is 5000.

-mininniter value:

Minimum number of iterations per temperature, default is 10.

-nbase value:

Base number of iterations within single temperature, default is 30.

-ntrial value:

Initial number of samples to determine initial temperature temp0 (if not provided), default is 20.

-pdata value:

List or dictionary that provides private data to funct needed to evaluate the objective (cost) function. Usually it contains lists of x and y values, but you can provide any data necessary for function evaluation. It is passed to each function evaluation without modification.

-qa value:

Acceptance distribution parameter (qa ≠ 1, can be negative). Default -5.0.

-qv value:

Visiting distribution parameter, must satisfy 1 < qv < 3. Default 2.62.

-random:

Random parameter vector initialization.

-refresh value:

Output refresh cycle. Represents the frequency of printing debug information to stdout.

-savemoves:

Enables including accepted moves snapshots in history (every -histfreq generations), requires -history.

-seed value:

Random seed, default is 0.

-specified:

Parameter vector initialization with specified values (each parameter's -initval).

-temp0 value:

Initial temperature value; if not given, it is estimated from -ntrial samples.

-threshold value:

Stop when best objective ≤ threshold (optional).

-tmin value:

Stop when temperature ≤ tmin. Default 1e-5.

Return value

object of class

run

Runs optimization.

GSAOBJ run

Return value

dictionary containing the resulting data

LBFGS

Class represents optimization object that runs optimization using modified Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method written by Jorge Nocedal.

Method summary

constructor

Constructor for the class.

[configure]:

Configure properties. See ::oo::configuresupport::configurable.

DuplChecker.duplListCheck

Inherited from DuplChecker.

Optimization.addPars

Inherited from Optimization.

Optimization.getAllPars

Inherited from Optimization.

Optimization.getAllParsNames

Inherited from Optimization.

run

Runs optimization.

Properties

-condition:

Readable, writable. Condition type to satisfy in backtracking algorithm.

-delta:

Readable, writable. Delta for convergence test.

-dstepmin:

Readable, writable. Minimum absolute step for finite differences.

-dstepscale:

Readable, writable. Multiplier for finite-difference step size.

-epsilon:

Readable, writable. Epsilon for convergence test.

-ftol:

Readable, writable. A parameter to control the accuracy of the linesearch routine.

-funct:

Readable, writable. Inherited.

-gradient:

Readable, writable. Type of gradient calculation algorithm.

-gtol:

Readable, writable. A parameter to control the accuracy of the linesearch routine.

-histfreq:

Readable, writable. Period of history saving, saves each N iterations.

-history:

Readable, writable. Flag enabling collecting scalar history.

-linesearch:

Readable, writable. The linesearch algorithm.

-m:

Readable, writable.

-maxiter:

Readable, writable. The maximum number of iterations.

-maxlinesearch:

Readable, writable. The maximum number of trials for the linesearch.

-maxstep:

Readable, writable. The maximum step of the linesearch routine.

-minstep:

Readable, writable. The minimum step of the linesearch routine.

-orthantwisec:

Readable, writable. Coefficient for the L1 norm of variables.

-orthantwiseend:

Readable, writable. End index for computing L1 norm of the variables.

-orthantwisestart:

Readable, writable. Start index for computing L1 norm of the variables.

-past:

Readable, writable. Distance for delta-based convergence test.

-pdata:

Readable, writable.

-results:

Readable, writable. Inherited.

-wolfe:

Readable, writable. A coefficient for the Wolfe condition.

-xtol:

Readable, writable. The machine precision for floating-point values.

Superclasses

Optimization

constructor

Creates optimization object that runs optimization using modified Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method written by Jorge Nocedal.

OBJECT constructor -funct value -pdata value ?-gradient value? ?-dstepmin value? ?-dstepscale value? ?-epsilon value? ?-past value? ?-delta value? ?-maxiter value? ?-linesearch value? ?-maxlinesearch value? ?-minstep value? ?-maxstep value? ?-ftol value? ?-wolfe value? ?-gtol value? ?-xtol value? ?-condition value? ?-orthantwisec value ?-orthantwisestart value? ?-orthantwiseend value?? ?-history? ?-histfreq value?

Parameters

-condition:

Condition type to satisfy in the backtracking algorithm, default is wolfe, available values: armijo, wolfe, strongwolfe.

-delta:

Delta for convergence test, default is 1e-5.

-dstepmin:

Minimum absolute step for finite differences, default is 1e-12.

-dstepscale:

Multiplier for finite-difference step size, default is 1.0.

-epsilon:

Epsilon for convergence test, default is 1e-5.

-ftol:

A parameter to control the accuracy of the linesearch routine, default is 1e-4.

-funct value:

Name of the procedure that should be minimized.

-gradient value:

Type of gradient calculation algorithm. Possible values: analytic - objective function provides the gradient itself, forward - numerical forward difference, central - numerical central difference. Default is analytic.

-gtol:

A parameter to control the accuracy of the linesearch routine, default is 0.9.

-histfreq:

Save history every N iterations, default is 1.

-history:

Collect scalar history.

-linesearch:

The linesearch algorithm, default is morethuente, possible values: morethuente and backtracking.

-m value:

The number of corrections used to approximate the inverse Hessian matrix, default is 6.

-maxiter:

The maximum number of iterations; used as a termination criterion if a value is provided.

-maxlinesearch:

The maximum number of trials for the linesearch routine, default is 40.

-maxstep:

The maximum step of the linesearch routine, default is 1e20.

-minstep:

The minimum step of the linesearch routine, default is 1e-20.

-orthantwisec:

Coefficient for the L1 norm of variables; providing a value enables the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method. Only the backtracking -linesearch algorithm is allowed in this mode.

-orthantwiseend:

End index for computing L1 norm of the variables, requires -orthantwisec.

-orthantwisestart:

Start index for computing L1 norm of the variables, requires -orthantwisec.

-past:

Distance for delta-based convergence test, default is 0.

-pdata value:

List or dictionary that provides private data to funct that is needed to evaluate object (cost) function. Usually it contains x and y values lists, but you can provide any data necessary for function evaluation. Will be passed upon each function evaluation without modification.

-wolfe:

A coefficient for the Wolfe condition, default is 0.9, value must be within [ftol, 1.0).

-xtol:

The machine precision for floating-point values, default is 1e-16.

Return value

object of class

run

Runs optimization.

LBFGSOBJ run

Return value

dictionary containing resulted data

Mpfit

Class represents the Levenberg-Marquardt optimizer - a least-squares optimization algorithm. The class uses the Levenberg-Marquardt technique to solve the least-squares problem. In its typical use, it fits a user-supplied function (the “model”) to user-supplied data points (the “data”) by adjusting a set of parameters. mpfit is based upon MINPACK-1 (LMDIF.F) by Moré and collaborators. The user-supplied function should compute an array of weighted deviations between model and data. In a typical scientific problem the residuals should be weighted so that each deviate has a Gaussian sigma of 1.0. If x represents values of the independent variable, y represents a measurement for each value of x, and err represents the error in the measurements, then the deviates could be calculated as follows:

for {set i 0} {$i<$m} {incr i} {
    lset deviates $i [expr {([lindex $y $i] - [f [lindex $x $i]])/[lindex $err $i]}]
}

where m is the number of data points, and f is the function representing the model evaluated at x. If err contains the 1-sigma uncertainties in y, then the sum of squared deviates will be the total chi-squared value, which mpfit seeks to minimize. Simple constraints are placed on parameter values by adding objects of class ::tclopt::ParameterMpfit to mpfit with the method ::tclopt::Optimization::addPars, where other parameter-specific options can be set. For details of how to specify constraints, see the description of the ::tclopt::ParameterMpfit class. Please note that the order in which parameter objects are attached is the order in which values are supplied to the minimized function, and the order in which results are written to the x property of the class. Example of a user-defined function (using the linear equation t=a+b*x):

proc f {xall pdata args} {
    set x [dget $pdata x]
    set y [dget $pdata y]
    set ey [dget $pdata ey]
    foreach xVal $x yVal $y eyVal $ey {
        set f [= {[@ $xall 0]+[@ $xall 1]*$xVal}]
        lappend fval [= {($yVal-$f)/$eyVal}]
    }
    return [dcreate fvec $fval]
}

where xall is the list of parameter values and pdata is a dictionary that contains the x, y and ey lists of length m. The function returns a dictionary with residual values. An alternative form of the function (quadfunc below, fitting a quadratic model) could also provide analytical derivatives:

proc quadfunc {xall pdata args} {
    set x [dget $pdata x]
    set y [dget $pdata y]
    set ey [dget $pdata ey]
    foreach xVal $x yVal $y eyVal $ey {
        lappend fvec [= {($yVal-[@ $xall 0]-[@ $xall 1]*$xVal-[@ $xall 2]*$xVal*$xVal)/$eyVal}]
    }
    if {[@ $args 0]!=""} {
        set derivs [@ $args 0]
        foreach deriv $derivs {
            if {$deriv==0} {
                foreach xVal $x yVal $y eyVal $ey {
                    lappend dvec [= {-1/$eyVal}]
                }
            }
            if {$deriv==1} {
                foreach xVal $x yVal $y eyVal $ey {
                    lappend dvec [= {(-$xVal)/$eyVal}]
                }
            }
            if {$deriv==2} {
                foreach xVal $x yVal $y eyVal $ey {
                    lappend dvec [= {(-$xVal*$xVal)/$eyVal}]
                }
            }
        }
        return [dcreate fvec $fvec dvec $dvec]
    } else {
        return [dcreate fvec $fvec]
    }
}

The first element of the args list is a list specifying the ordinal numbers of the parameters for which we need to calculate the analytical derivative. In this case, the returned dvec list contains the derivative at each x point for each specified parameter, following the same order as in the input list. For example, if the input list is {0, 2} and the number m of x points is 3, the dvec list will look like this:

⎛⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞  ⎞
⎜⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟  ⎟
⎜⎝dp0⎠   ⎝dp0⎠   ⎝dp0⎠   ⎝dp2⎠   ⎝dp2⎠   ⎝dp2⎠  ⎟
⎝     x0      x1      x2      x0      x1      x2⎠
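The flattened ordering can be reproduced with a short sketch (illustrative Python; build_dvec is a hypothetical helper, not part of tclopt, written for the quadratic model above):

```python
def build_dvec(derivs, x, ey):
    """Flattened dvec for the residual (y - p0 - p1*x - p2*x^2)/ey:
    all m derivative values for the first requested parameter, then all
    m values for the next, in the order given by derivs."""
    partial = {0: lambda xv: -1.0,        # d/dp0 of (y - model), before /ey
               1: lambda xv: -xv,         # d/dp1 of (y - model), before /ey
               2: lambda xv: -xv * xv}    # d/dp2 of (y - model), before /ey
    return [partial[p](xv) / e for p in derivs for xv, e in zip(x, ey)]
```

For the input list {0, 2} and m = 3 points this yields the df/dp0 values at x0..x2 followed by the df/dp2 values at x0..x2, matching the matrix above.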

Description of keys and data in returned dictionary:

bestnorm:

Final chi^2.

orignorm:

Starting value of chi^2.

status:

Fitting status code.

niter:

Number of iterations.

nfev:

Number of function evaluations.

npar:

Total number of parameters.

nfree:

Number of free parameters.

npegged:

Number of pegged parameters.

nfunc:

Number of residuals (equal to the number of data points).

resid:

List of final residuals.

xerror:

Final parameter uncertainties (1-sigma), in the order of elements in the Pars property dictionary.

x:

List of final parameter values, in the order of elements in the Pars property dictionary.

debug:

String with derivative debugging output and general debug messages, present if the -debug switch is provided.

covar:

Final parameter covariance matrix. You can also access the result dictionary with [my configure -results].

Method summary

constructor

Constructor for the class.

[configure]:

Configure properties. See ::oo::configuresupport::configurable.

DuplChecker.duplListCheck

Inherited from DuplChecker.

Optimization.addPars

Inherited from Optimization.

Optimization.getAllPars

Inherited from Optimization.

Optimization.getAllParsNames

Inherited from Optimization.

run

Runs optimization.

Properties

-covtol:

Readable, writable. Range tolerance for covariance calculation.

-debug:

Readable, writable. Flag enabling printing of debug messages during optimization into stdout.

-epsfcn:

Readable, writable. Finite derivative step size.

-ftol:

Readable, writable. Algorithm terminating value, measures the relative error desired in the sum of squares.

-funct:

Readable, writable. Inherited.

-gtol:

Readable, writable. Algorithm terminating value, measures the orthogonality desired between the function vector and the columns of the Jacobian.

-histfreq:

Readable, writable. Period of history saving, saves each N iterations.

-history:

Readable, writable. Flag enabling collecting scalar history.

-m:

Readable, writable. Number of data points.

-maxfev:

Readable, writable. Algorithm terminating value, represents the maximum number of objective function evaluations.

-maxiter:

Readable, writable. Maximum number of iterations.

-nofinitecheck:

Readable, writable. Flag disabling the sanity check for infinite quantities.

-pdata:

Readable, writable. Private data passed to the objective function on each evaluation.

-refresh:

Readable, writable. Output refresh cycle.

-results:

Readable, writable. Inherited.

-stepfactor:

Readable, writable. Value determining the initial step bound.

-xtol:

Readable, writable. Algorithm terminating value, measures the relative error desired in the approximate solution.

Superclasses

Optimization

constructor

Creates an optimization object that performs least-squares fitting using the modified Levenberg-Marquardt algorithm.

OBJECT constructor -funct value -m value -pdata value ?-ftol value? ?-xtol value? ?-gtol value? ?-stepfactor value? ?-covtol value? ?-maxiter value? ?-maxfev value? ?-epsfcn value? ?-nofinitecheck? ?-refresh value? ?-debug? ?-history? ?-histfreq value?

Parameters

-covtol value:

Range tolerance for covariance calculation. Value must be a float greater than zero; default is 1e-14.

-debug:

Print debug messages during optimization.

-epsfcn value:

Finite derivative step size. Value must be a float greater than zero; default is 2.2204460e-16.

-ftol value:

Control termination of mpfit. Termination occurs when both the actual and predicted relative reductions in the sum of squares are at most ftol. Therefore, ftol measures the relative error desired in the sum of squares. Value must be a float greater than zero; default is 1e-10.

-funct value:

Name of the procedure that should be minimized.

-gtol value:

Control termination of mpfit. Termination occurs when the cosine of the angle between fvec and any column of the Jacobian is at most gtol in absolute value. Therefore, gtol measures the orthogonality desired between the function vector and the columns of the Jacobian. Value must be a float greater than zero; default is 1e-10.

-histfreq value:

Save history every N iterations, default is 1.

-history:

Collect scalar history.

-m value:

Number of data points.

-maxfev value:

Control termination of mpfit. Termination occurs when the number of calls to funct is at least maxfev by the end of an iteration. Value must be a non-negative integer; default is 0. If it equals 0, the number of evaluations is not restricted.

-maxiter value:

Maximum number of iterations. If maxiter equals 0, basic error checking is done and parameter errors/covariances are estimated based on the input parameter values, but no fitting iterations are performed. Value must be a non-negative integer; default is 200.

-nofinitecheck:

Disables the sanity check for infinite quantities; by default the check is performed.

-pdata value:

List or dictionary that provides private data to funct needed to evaluate the residuals. Usually it contains lists of x and y values, but you can provide any data necessary for evaluating the residuals. It is passed to each function evaluation without modification.

-refresh value:

Output refresh cycle. Represents the frequency of printing debug information to stdout.

-stepfactor value:

Used in determining the initial step bound. This bound is set to the product of factor and the Euclidean norm of diag*x if nonzero, or else to factor itself. In most cases factor should lie in the interval (0.1, 100); 100 is a generally recommended value. Value must be a float greater than zero; default is 100.

-xtol value:

Control termination of mpfit. Termination occurs when the relative error between two consecutive iterates is at most xtol. Therefore, xtol measures the relative error desired in the approximate solution. Value must be a float greater than zero; default is 1e-10.

Return value

object of class ::tclopt::Mpfit

run

Runs optimization.

MPFITOBJ run

Return value

dictionary containing the resulting data
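A minimal end-to-end sketch, assuming the quadfunc procedure defined earlier in this document, the tclopt package installed, and that addPars accepts several parameter objects at once (the data values and parameter names here are illustrative):

```tcl
package require tclopt

# Illustrative data for the quadratic model y = p0 + p1*x + p2*x^2
set x {-1.0 0.0 1.0 2.0}
set y {3.1 1.0 0.9 3.2}
set ey {0.1 0.1 0.1 0.1}
set pdata [dict create x $x y $y ey $ey]

# One parameter object per model coefficient; the attachment order defines
# the order of values in xall and in the resulting x list.
set optimizer [::tclopt::Mpfit new -funct quadfunc -m [llength $x] -pdata $pdata]
$optimizer addPars [::tclopt::ParameterMpfit new a 1.0] \
        [::tclopt::ParameterMpfit new b 1.0] [::tclopt::ParameterMpfit new c 1.0]

set result [$optimizer run]
puts "chi^2: [dict get $result bestnorm]"
puts "fitted parameters: [dict get $result x]"
```

The same result dictionary remains available afterwards through the -results property.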

Optimization

Abstract class representing any optimizer of the package.

Method summary

[configure]:

Configure properties. See ::oo::configuresupport::configurable.

addPars

Attaches parameter objects to the optimizer.

DuplChecker.duplListCheck

Inherited from DuplChecker.

getAllPars

Gets references to all parameter objects.

getAllParsNames

Gets names of all parameters.

Properties

-funct:

Readable, writable. Name of the cost (objective) function called by the optimizer.

-pdata:

Readable, writable. Data passed to the cost function, containing any auxiliary information needed for its evaluation.

-results:

Readable, writable. Dictionary with the results of optimization.

Mixins

DuplChecker

Subclasses

Mpfit, DE, GSA, LBFGS

addPars

OPTIMIZATIONOBJ addPars ?args?

Parameters

args:

Parameter objects of class ::tclopt::Parameter (or its subclasses) to attach to the optimizer. The attachment order is the order in which values are supplied to the minimized function.

getAllPars

Gets references to all parameter objects.

OPTIMIZATIONOBJ getAllPars ?args?

Parameters

Return value

list of parameter object references

getAllParsNames

Gets names of all parameters.

OPTIMIZATIONOBJ getAllParsNames ?args?

Parameters

Return value

list of parameter names

Parameter

Class represents a basic optimization parameter.

Example of building 4 parameters with different constraints:

set par0 [::tclopt::Parameter new a 1.0 -lowlim 0.0]
set par1 [::tclopt::Parameter new b 2.0]
set par2 [::tclopt::Parameter new c 0.0]
set par3 [::tclopt::Parameter new d 0.1 -lowlim -0.3 -uplim 0.2]

Method summary

constructor

Constructor for the class.

[configure]:

Configure properties. See ::oo::configuresupport::configurable.

Properties

-initval:

Readable, writable. Initial value of the parameter.

-lowlim:

Readable, writable. Lower limit of the parameter.

-name:

Readable, writable. Name of the parameter.

-uplim:

Readable, writable. Upper limit of the parameter.

Subclasses

ParameterMpfit

constructor

Creates parameter object.

OBJECT constructor name initval ?-lowlim value? ?-uplim value?

Parameters

-lowlim value:

Specify lower limit for parameter, must be lower than upper limit if upper limit is provided, optional.

-uplim value:

Specify upper limit for parameter, must be higher than lower limit if lower limit is provided, optional.

initval:

Initial value of parameter.

name:

Name of the parameter.

ParameterMpfit

Class represents a basic parameter used by the optimizer class ::tclopt::Mpfit.

Example of building 4 parameters with different constraints:

set par0 [ParameterMpfit new a 1.0 -fixed -side both]
set par1 [ParameterMpfit new b 2.0]
set par2 [ParameterMpfit new c 0.0 -fixed]
set par3 [ParameterMpfit new d 0.1 -lowlim -0.3 -uplim 0.2]
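The derivative-related options can be combined per parameter; a hedged sketch (assumes the tclopt package is installed, parameter names are illustrative):

```tcl
package require tclopt

# Two-sided numerical derivative with an explicit step for parameter a,
# and user-supplied analytical derivatives for parameter b.
set parA [::tclopt::ParameterMpfit new a 1.0 -side both -step 1e-6]
set parB [::tclopt::ParameterMpfit new b 2.0 -side an]

# Properties stay readable and writable after construction:
$parA configure -relstep 1e-4   ;# a relative step supersedes the -step setting
puts [$parA configure -side]    ;# reads the property back
```

A parameter with -side an is only meaningful when the objective function actually returns a dvec list for it, as shown in the quadfunc example above.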

Method summary

constructor

Constructor for the class.

[configure]:

Configure properties. See ::oo::configuresupport::configurable.

Properties

-debugder:

Readable, writable. Flag enabling console debug logging of user-computed derivatives.

-derivabstol:

Readable, writable.

-derivreltol:

Readable, writable.

-fixed:

Readable, writable. Flag for fixing parameter value during the optimization.

-initval:

Readable, writable. Inherited.

-lowlim:

Readable, writable. Inherited.

-name:

Readable, writable. Inherited.

-relstep:

Readable, writable. The relative step size to be used in calculating the numerical derivatives.

-side:

Readable, writable. The sidedness of the finite difference when computing numerical derivatives.

-step:

Readable, writable. The step size to be used in calculating the numerical derivatives.

-uplim:

Readable, writable. Inherited.

Superclasses

Parameter

constructor

Creates parameter object for ::tclopt::Mpfit class.

OBJECT constructor name initval ?-fixed? ?-lowlim value? ?-uplim value? ?-step value? ?-relstep value? ?-side value? ?-debugder -debugreltol value -debugabstol value?

Parameters

-debugabstol value:

Absolute error threshold: the derivatives comparison is printed if the absolute error exceeds this value. Requires -debugder and -debugreltol.

-debugder:

Switch to enable console debug logging of user-computed derivatives, as described above. Note that when debugging is enabled, then -side should be set to auto, right, left or both, depending on which numerical derivative you wish to compare to. Requires -debugreltol and -debugabstol values.

-debugreltol value:

Relative error threshold: the derivatives comparison is printed if the relative error exceeds this value. Requires -debugder and -debugabstol.

-fixed:

Specify that parameter is fixed during optimization, optional.

-lowlim value:

Specify lower limit for parameter, must be lower than upper limit if upper limit is provided, optional.

-relstep value:

The relative step size to be used in calculating the numerical derivatives. This number is the fractional size of the step, compared to the parameter value. This value supersedes the -step setting. If the parameter is zero, then a default step size is chosen.

-side value:

The sidedness of the finite difference when computing numerical derivatives. This field can take five values, where h is the -step parameter described above:

auto : one-sided derivative computed automatically
right : one-sided derivative (f(x+h)-f(x))/h
left : one-sided derivative (f(x)-f(x-h))/h
both : two-sided derivative (f(x+h)-f(x-h))/(2*h)
an : user-computed explicit derivatives

The "automatic" one-sided derivative method chooses a direction for the finite difference which does not violate any constraints. The other methods do not perform this check. The two-sided method is in principle more precise, but requires twice as many function evaluations. Default is auto.

-step value:

The step size to be used in calculating the numerical derivatives. If set to zero, then the step size is computed automatically, optional.

-uplim value:

Specify upper limit for parameter, must be higher than lower limit if lower limit is provided, optional.

initval:

Initial value of parameter.

name:

Name of the parameter.


Copyright (c) George Yashin