Tcl wrapper for C optimization procedures (v0.21)

::tclopt

Classes

DE [::tclopt]

Method summary
constructor - Constructor for the class.
addPars - See Optimization.addPars.
configure - Configure properties.
duplListCheck - See DuplChecker.duplListCheck.
getAllPars - See Optimization.getAllPars.
getAllParsNames - See Optimization.getAllParsNames.
run - Runs optimization.
Properties
-abstol - Readable, writable.
-cr - Readable, writable.
-d - Readable, writable.
-debug - Readable, writable.
-f - Readable, writable.
-funct - Readable, writable.
-genmax - Readable, writable.
-histfreq - Readable, writable.
-history - Readable, writable.
-initpop - Readable, writable.
-initype - Readable, writable.
-np - Readable, writable.
-pdata - Readable, writable.
-refresh - Readable, writable.
-reltol - Readable, writable.
-results - Readable, writable.
-savepop - Readable, writable.
-seed - Readable, writable.
-strategy - Readable, writable.
-threshold - Readable, writable.
Superclasses

Optimization

constructor [::tclopt::DE]

Creates an optimization object that runs optimization using a modified Differential Evolution algorithm.

DE create OBJNAME ?args?
DE new ?args?
Parameters
-abstol value - Absolute tolerance. Controls termination of optimization. Default is 1e-6.
-cr value - Crossing over factor (crossover rate). Controls the probability of mixing components from the target vector and the mutant vector to form a trial vector. It determines how much of the trial vector inherits its components from the mutant vector versus the target vector. A high crossover rate means that more components will come from the mutant vector, promoting exploration of new solutions. Conversely, a low crossover rate results in more components being taken from the target vector, which can help maintain existing solutions and refine them. The typical range for CR is between 0.0 and 1.0. Default is 0.9.
-debug - Print debug messages during optimization.
-f value - Weight factor (mutation rate). Controls the amplification of the differential variation between individuals. It is a scaling factor applied to the difference between two randomly selected population vectors before adding the result to a third vector to create a mutant vector (the exact mechanism depends on the selected strategy). The mutation rate influences the algorithm's ability to explore the search space; a higher value of f increases the diversity of the mutant vectors, leading to broader exploration, while a lower value encourages convergence by making smaller adjustments. The typical range for f is between 0.4 and 1.0, though values outside this range can be used depending on the problem characteristics. Default is 0.9.
-funct value - Name of the procedure that should be minimized.
-genmax value - Maximum number of generations. Controls termination of optimization. Default is 3000.
-histfreq value - Save history every N generations. Default is 1.
-history - Enables collecting scalar history and best trajectory.
-initpop value - List of lists (matrix) of size np x d; requires -specified.
-np value - Population size. Represents the number of random parameter vectors per generation. As a first guess it is recommended to set it to 5 to 10 times the number of parameters. Default is 20.
-pdata value - List or dictionary that provides private data to funct that is needed to evaluate the objective (cost) function. Usually it contains x and y values lists, but you can provide any data necessary for function evaluation. Will be passed upon each function evaluation without modification.
-random - Select population initialization with random values over the individual parameter ranges.
-refresh value - Output refresh cycle. Represents the frequency of printing debug information to stdout.
-reltol value - Relative tolerance. Controls termination of optimization. Default is 1e-2.
-savepop - Enables including population snapshots in history (every -histfreq generations); requires -history.
-seed value - Random seed.
-specified - Select population initialization with specified population values; requires -initpop.
-strategy value - Choice of strategy. Possible strategies: best/1/exp rand/1/exp rand-to-best/1/exp best/2/exp rand/2/exp best/1/bin rand/1/bin rand-to-best/1/bin best/2/bin rand/2/bin.
-threshold value - Objective function threshold that stops optimization.
Description

Class implements the Differential Evolution (DE) algorithm to solve global optimization problems over continuous parameter spaces. Typically, it is used to minimize a user-supplied objective function by evolving a population of candidate solutions through mutation, crossover, and selection.

Differential Evolution is a stochastic, population-based optimizer that works well for non-linear, non-differentiable, and multi-modal objective functions. It does not require gradient information and is effective in high-dimensional or rugged search spaces. The user-supplied objective function should take a vector of parameters as input and return a scalar value to be minimized. For example, the objective function might compute the volume of material used in a structure given its geometric parameters, the error rate of a machine learning model, or the energy of a physical system. DE begins by initializing a population of random candidate solutions within given parameter bounds and iteratively refines them by combining members of the population and selecting better solutions over generations.

Simple constraints are placed on parameter values by adding objects of class ::tclopt::Parameter to DE with the method ::tclopt::Optimization::addPars. For details of how to specify constraints, please see the description of the ::tclopt::Parameter class. Please note that the order in which we attach parameter objects is the order in which values will be supplied to the minimized function, and the order in which results will be written to the x property of the class.
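
For illustration, here is a minimal usage sketch. The ::tclopt::Parameter constructor arguments shown (name, initial value, -lowlim/-uplim) are assumptions for illustration; consult the ::tclopt::Parameter documentation for the exact signature.

package require tclopt

# Objective: 2-D sphere function; pdata is not used here
proc sphere {x pdata} {
    lassign $x a b
    return [expr {$a*$a + $b*$b}]
}

set optimizer [::tclopt::DE new -funct sphere -strategy rand/1/bin -np 40 -seed 1]
# parameter arguments below are assumed, not the verified API
$optimizer addPars [::tclopt::Parameter new a 5.0 -lowlim -10.0 -uplim 10.0]
$optimizer addPars [::tclopt::Parameter new b 5.0 -lowlim -10.0 -uplim 10.0]
set result [$optimizer run]
puts "best x: [dict get $result x]"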

General advice

- f is usually between 0.5 and 1 (in rare cases > 1)
- cr is between 0 and 1 with 0., 0.3, 0.7 and 1. being worth trying first
- To start off, np = 10*d is a reasonable choice. Increase np if misconvergence happens.
- If you increase np, f usually has to be decreased
- When the DE/best... schemes fail, DE/rand... usually works, and vice versa

Strategies overview

Naming convention for strategies: x/y/z, where:

- x - a string which denotes the vector to be perturbed (mutated)
- y - number of difference vectors taken for perturbation (mutation) of x
- z - crossover method (exp = exponential, bin = binomial)

Mutation

Combination of x and y gives the following mutation function:

best/1:

 →    →         →     →
 u  = x  + f ⋅ ⎛x   - x  ⎞
  i    b       ⎝ r2    r3⎠

rand/1:

 →    →         →     →
 u  = x  + f ⋅ ⎛x   - x  ⎞
  i    r1      ⎝ r2    r3⎠

rand-to-best/1 (custom variant):

 →    →         →     →           →     →
 u  = x  + f ⋅ ⎛x   - x  ⎞ + f ⋅ ⎛x   - x  ⎞
  i    i       ⎝ b     i ⎠       ⎝ r1    r2⎠

best/2:

 →    →         →     →     →     →
 u  = x  + f ⋅ ⎛x   + x   - x   - x  ⎞
  i    b       ⎝ r1    r2    r3    r4⎠

rand/2:

 →    →         →     →     →     →
 u  = x  + f ⋅ ⎛x   + x   - x   - x  ⎞
  i    r5      ⎝ r1    r2    r3    r4⎠
x_i - trial vector, x_b - best vector, x_rn - randomly selected individuals from the population.
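
For illustration, a minimal Tcl sketch (not the package internals) of how a single rand/1 mutant component could be computed:

# Sketch: rand/1 mutant value for coordinate n; pop is a list of
# parameter-vector lists, r1, r2 and r3 are distinct random indices
# different from the target index
proc rand1Mutant {pop r1 r2 r3 n f} {
    set x1 [lindex $pop $r1 $n]
    set x2 [lindex $pop $r2 $n]
    set x3 [lindex $pop $r3 $n]
    return [expr {$x1 + $f*($x2 - $x3)}]
}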

A crossover operation between the newly generated mutant vector v_i and the target vector x_i is used to further increase the diversity of the new candidate solution.

Exponential crossover

In exponential crossover, a contiguous block of dimensions is modified, starting from a random index, and continues as long as random values are less than CR. The mutation happens inline during crossover, and wrapping around is supported.

Example (D = 10, n = 3, L = 4):

Parent x_i:         [x0 x1 x2 x3 x4 x5 x6 x7 x8 x9]
Exponential mask:             →  →  →  →
                             n  n+1 n+2 n+3

Trial u_i:          [x0 x1 x2 v3 v4 v5 v6 x7 x8 x9]
                            ↑ mutated from DE strategy
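
A hedged Tcl sketch of this scheme (mutantValue is a hypothetical command prefix returning the mutant component for index n; not the package internals):

# Sketch of exponential crossover: copy mutant components starting at a
# random index, wrapping around, while rand() < CR and fewer than d copied
proc expCrossover {target d cr mutantValue} {
    set trial $target
    set n [expr {int(rand()*$d)}]
    set l 0
    while {1} {
        lset trial $n [{*}$mutantValue $n] ;# at least one component is copied
        set n [expr {($n + 1) % $d}]
        incr l
        if {rand() >= $cr || $l >= $d} break
    }
    return $trial
}
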
Binomial crossover

In binomial crossover, each dimension has an independent probability CR of being replaced by the mutant vector. At least one dimension is guaranteed to be copied from the mutant (typically by forcing one fixed index to be included).

Example (D = 10):

Parent x_i:         [x0 x1 x2 x3 x4 x5 x6 x7 x8 x9]
Random mask:         ✗  ✓  ✗  ✗  ✓  ✗  ✓  ✗  ✗  ✓
Mutant values:      [v0 v1 v2 v3 v4 v5 v6 v7 v8 v9]

Trial u_i:          [x0 v1 x2 x3 v4 x5 v6 x7 x8 v9]
                         ↑       ↑       ↑     ↑
                   replaced where rand() < CR
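
A hedged Tcl sketch of this scheme (not the package internals):

# Sketch of binomial crossover: each coordinate is taken from the mutant with
# probability CR; the last visited coordinate always comes from the mutant
proc binCrossover {target mutant d cr} {
    set trial $target
    set n [expr {int(rand()*$d)}]
    for {set l 0} {$l < $d} {incr l} {
        if {rand() < $cr || $l == $d - 1} {
            lset trial $n [lindex $mutant $n]
        }
        set n [expr {($n + 1) % $d}]
    }
    return $trial
}
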
Summary of strategies

ID  Base  Difference          XOV  Description
1   best  r2-r3               exp  Exploitative, may misconverge
2   r1    r2-r3               exp  Balanced, exploratory. Try e.g. F=0.7 and CR=0.5 first
3   x_i   (best-x_i)+(r1-r2)  exp  Hybrid: pull + variation. Try e.g. F=0.85 and CR=1 first
4   best  (r1+r2)-(r3+r4)     exp  Exploratory, best-guided
5   r5    (r1+r2)-(r3+r4)     exp  Fully random, robust
6   best  r2-r3               bin  Same as 1, binomial crossover
7   r1    r2-r3               bin  Same as 2, binomial crossover
8   x_i   (best-x_i)+(r1-r2)  bin  Same as 3, binomial crossover
9   best  (r1+r2)-(r3+r4)     bin  Same as 4, binomial crossover
10  r5    (r1+r2)-(r3+r4)     bin  Same as 5, binomial crossover

See more information in the techreport: http://mirror.krakadikt.com/2004-11-13-genetic-algorithms/www.icsi.berkeley.edu/%257Estorn/deshort1.ps

Description of keys and data in returned dictionary:

objfunc - Final value of the objective (cost) function funct.
x - Final vector of parameters.
generation - Number of generations.
nfev - Total number of function evaluations.
strategy - Strategy used for optimization.
std - Standard deviation of the final population.
info - Message describing why optimization stopped.

You can also access the result dictionary with [my configure -results].
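
For example (assuming optimizer holds a configured DE object with attached parameters):

set result [$optimizer run]
puts "objfunc = [dict get $result objfunc]"
puts "x = [dict get $result x]"
puts "stopped after [dict get $result generation] generations"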

History mode

When the -history flag is provided, result also includes the following keys:

Key history - a list of dictionaries (one per -histfreq generation), each with keys:

gen - Generation index.
bestf - Best-so-far objective value after this generation.
mean - Mean objective value across the current population.
std - Standard deviation of objective values in the current population.
nfev - Cumulative number of function evaluations at the end of this generation.

Key besttraj - a list of dictionaries (one per -histfreq generation), each with keys:

gen - Generation index.
x - Parameter vector achieving the best-so-far objective value after this generation.

If the -savepop switch is provided as well, the result additionally contains the key pophistory, a list of dictionaries (one per -histfreq generation) with keys:

gen - Generation index.
pop - List of population vectors for this generation (length = -np; each vector length = d).
cost - List of objective values aligned with pop (length = -np).
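
For example, a short sketch that collects data for a convergence plot (assuming the object was created with -history):

set result [$optimizer run]
set gens {}
set bestf {}
foreach rec [dict get $result history] {
    lappend gens [dict get $rec gen]
    lappend bestf [dict get $rec bestf]
}
# gens and bestf can now be passed to a plotting tool
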
Return value

Object of class ::tclopt::DE.

method constructor {args} {

    # Creates optimization object that runs optimization using a modified Differential Evolution algorithm.
    #  -funct value - name of the procedure that should be minimized
    #  -strategy value - choice of strategy. Possible strategies: best/1/exp rand/1/exp rand-to-best/1/exp
    #    best/2/exp rand/2/exp best/1/bin rand/1/bin rand-to-best/1/bin best/2/bin rand/2/bin.
    #  -pdata value - list or dictionary that provides private data to funct that is needed to evaluate the objective
    #    (cost) function. Usually it contains x and y values lists, but you can provide any data necessary for
    #    function evaluation. Will be passed upon each function evaluation without modification.
    #  -genmax value - maximum number of generations. Controls termination of optimization. Default 3000.
    #  -refresh value - output refresh cycle. Represents the frequency of printing debug information to stdout.
    #  -np value - population size. Represents the number of random parameter vectors per generation. As a first
    #    guess it is recommended to set it to 5 to 10 times the number of parameters. Default is 20.
    #  -f value - weight factor (mutation rate). Controls the amplification of the differential variation between
    #    individuals. It is a scaling factor applied to the difference between two randomly selected population
    #    vectors before adding the result to a third vector to create a mutant vector (exact mechanism is dependent
    #    on selected strategy). The mutation rate influences the algorithm's ability to explore the search space; a
    #    higher value of `f` increases the diversity of the mutant vectors, leading to broader exploration, while a
    #    lower value encourages convergence by making smaller adjustments. The typical range for `f` is between 0.4
    #    and 1.0, though values outside this range can be used depending on the problem characteristics. Default is
    #    0.9.
    #  -cr value - crossing over factor (crossover rate). Controls the probability of mixing components from the
    #    target vector and the mutant vector to form a trial vector. It determines how much of the trial vector
    #    inherits its components from the mutant vector versus the target vector. A high crossover rate means that
    #    more components will come from the mutant vector, promoting exploration of new solutions. Conversely, a low
    #    crossover rate results in more components being taken from the target vector, which can help maintain
    #    existing solutions and refine them. The typical range for CR is between 0.0 and 1.0. Default is 0.9.
    #  -seed value - random seed.
    #  -abstol value - absolute tolerance. Controls termination of optimization. Default 1e-6.
    #  -reltol value - relative tolerance. Controls termination of optimization. Default 1e-2.
    #  -debug - print debug messages during optimization.
    #  -threshold value - objective function threshold that stops optimization
    #  -random - select population initialization with random values over the individual parameters ranges.
    #  -specified - select population initialization with specified population values, requires `-initpop`.
    #  -initpop value - list of lists (matrix) with size np x d, requires `-specified`.
    #  -history - enables collecting scalar history and best trajectory
    #  -histfreq value - save history every N generations. Default is 1.
    #  -savepop - enables including population snapshots in history (every `-histfreq` generations), requires
    #    `-history`.
    # Returns: object of class
    #
    # Class implements the Differential Evolution (DE) algorithm to solve global optimization problems over
    # continuous parameter spaces. Typically, it is used to minimize a user-supplied objective function by evolving
    # a population of candidate solutions through mutation, crossover, and selection.
    #
    # Differential Evolution is a stochastic, population-based optimizer that works well for non-linear,
    # non-differentiable, and multi-modal objective functions. It does not require gradient information and is
    # effective in high-dimensional or rugged search spaces. The user-supplied objective function should take a
    # vector of parameters as input and return a scalar value to be minimized. For example, the objective function
    # might compute the volume of material used in a structure given its geometric parameters, the error rate of a
    # machine learning model, or the energy of a physical system. DE begins by initializing a population of random
    # candidate solutions within given parameter bounds and iteratively refines them by combining members of the
    # population and selecting better solutions over generations.
    #
    # Simple constraints are placed on parameter values by adding objects of class [::tclopt::Parameter] to DE with
    # method [::tclopt::Optimization::addPars]. For details of how to specify constraints, please look at the
    # description of [::tclopt::Parameter] class. Please note that the order in which we attach parameter objects is
    # the order in which values will be supplied to the minimized function, and the order in which results will be
    # written to the `x` property of the class.
    #
    # #### General advice
    # - f is usually between 0.5 and 1 (in rare cases > 1)
    # - cr is between 0 and 1 with 0., 0.3, 0.7 and 1. being worth trying first
    # - To start off, np = 10*d is a reasonable choice. Increase np if misconvergence happens.
    # - If you increase np, f usually has to be decreased
    # - When the DE/best... schemes fail, DE/rand... usually works, and vice versa
    #
    # #### Strategies overview
    # Naming convention for strategies: x/y/z, where:
    #  - x - a string which denotes the vector to be perturbed (mutated)
    #  - y - number of difference vectors taken for perturbation (mutation) of x
    #  - z - crossover method (exp = exponential, bin = binomial)
    #
    # ##### Mutation
    # Combination of x and y gives the following mutation function:
    #
    # best/1:
    #```
    #  →    →         →     →
    #  u  = x  + f ⋅ ⎛x   - x  ⎞
    #   i    b       ⎝ r2    r3⎠
    #```
    #
    # rand/1:
    #```
    #  →    →         →     →
    #  u  = x  + f ⋅ ⎛x   - x  ⎞
    #   i    r1      ⎝ r2    r3⎠
    #```
    #
    # rand-to-best/1 (custom variant):
    #```
    #  →    →         →     →           →     →
    #  u  = x  + f ⋅ ⎛x   - x  ⎞ + f ⋅ ⎛x   - x  ⎞
    #   i    i       ⎝ b     i ⎠       ⎝ r1    r2⎠
    #```
    #
    # best/2:
    #```
    #  →    →         →     →     →     →
    #  u  = x  + f ⋅ ⎛x   + x   - x   - x  ⎞
    #   i    b       ⎝ r1    r2    r3    r4⎠
    #```
    #
    # rand/2:
    #```
    #  →    →         →     →     →     →
    #  u  = x  + f ⋅ ⎛x   + x   - x   - x  ⎞
    #   i    r5      ⎝ r1    r2    r3    r4⎠
    #```
    #
    # x_i - trial vector, x_b - best vector, x_rn - randomly selected individuals from population.
    #
    # A crossover operation between the newly generated mutant vector v_i and the target vector x_i is used to
    # further increase the diversity of the new candidate solution.
    #
    # ##### Exponential crossover
    # In exponential crossover, a contiguous block of dimensions is modified, starting from a random index, and
    # continues as long as random values are less than CR. The mutation happens inline during crossover, and
    # wrapping around is supported.
    #
    #```
    #Example (D = 10, n = 3, L = 4):
    #
    #Parent x_i:         [x0 x1 x2 x3 x4 x5 x6 x7 x8 x9]
    #Exponential mask:             →  →  →  →
    #                              n  n+1 n+2 n+3
    #
    #Trial u_i:          [x0 x1 x2 v3 v4 v5 v6 x7 x8 x9]
    #                             ↑ mutated from DE strategy
    #```
    #
    # - Starts from a random index n ∈ \[0, D)
    # - Replaces a contiguous block of components (dimension-wise)
    # - Continues as long as rand() < CR, up to D components
    # - Mutation and crossover are applied together in the code, not as separate stages.
    #
    # ##### Binomial crossover
    # In binomial crossover, each dimension has an independent probability CR of being replaced by the mutant
    # vector. At least one dimension is guaranteed to be copied from the mutant (typically by forcing one fixed
    # index to be included).
    #
    #```
    #Example (D = 10):
    #
    #Parent x_i:         [x0 x1 x2 x3 x4 x5 x6 x7 x8 x9]
    #Random mask:         ✗  ✓  ✗  ✗  ✓  ✗  ✓  ✗  ✗  ✓
    #Mutant values:      [v0 v1 v2 v3 v4 v5 v6 v7 v8 v9]
    #
    #Trial u_i:          [x0 v1 x2 x3 v4 x5 v6 x7 x8 v9]
    #                          ↑       ↑       ↑     ↑
    #                    replaced where rand() < CR
    #```
    #
    # - Applies independent crossover decision for each dimension
    # - Starts at a random dimension n, iterates D steps circularly
    # - Each dimension is replaced with probability CR
    # - Ensures at least one dimension is modified (usually last)
    # - Mutation and crossover are applied together in the code, not as separate stages.
    #
    # ##### Summary of strategies

    #ruff include html
    # <div style="ruff_bd"> <table class="ruff_deflist"> <tbody>
    # <tr><th>ID</th><th>Base</th><th>Difference</th><th>XOV</th><th>Description</th></tr>
    # <tr><td>1</td><td>best</td><td>r2-r3</td><td>exp</td><td>Exploitative, may misconv</td></tr>
    # <tr><td>2</td><td>r1</td><td>r2-r3</td><td>exp</td><td>Balanced, exploratory<br>Try e.g. F=0.7 and CR=0.5
    # first</td></tr> <tr><td>3</td><td>x_i</td><td>(best-x_i)+(r1-r2)</td><td>exp</td><td>Hybrid: pull +
    # variation<br>Try e.g. F=0.85 and CR=1 first</td></tr>
    # <tr><td>4</td><td>best</td><td>(r1+r2)-(r3+r4)</td><td>exp</td><td>Exploratory, best-guided</td></tr>
    # <tr><td>5</td><td>r5</td><td>(r1+r2)-(r3+r4)</td><td>exp</td><td>Fully random, robust</td></tr>
    # <tr><td>6</td><td>best</td><td>r2-r3</td><td>bin</td><td>Same as 1, binomial crossover</td></tr>
    # <tr><td>7</td><td>r1</td><td>r2-r3</td><td>bin</td><td>Same as 2, binomial crossover</td></tr>
    # <tr><td>8</td><td>x_i</td><td>(best-x_i)+(r1-r2)</td><td>bin</td><td>Same as 3, binomial crossover</td></tr>
    # <tr><td>9</td><td>best</td><td>(r1+r2)-(r3+r4)</td><td>bin</td><td>Same as 4, binomial crossover</td></tr>
    # <tr><td>10</td><td>r5</td><td>(r1+r2)-(r3+r4)</td><td>bin</td><td>Same as 5, binomial crossover</td></tr>
    # </tbody> </table> </div>

    #ruff include nroff
    #```
    # ┌────┬──────────┬─────────────────────────────┬─────┬──────────────────────────────────┐
    # │ ID │  Base    │ Difference                  │ XOV │ Description                      │
    # ├────┼──────────┼─────────────────────────────┼─────┼──────────────────────────────────┤
    # │ 1  │ best     │ r2 - r3                     │ exp │ Exploitative, may misconverge    │
    # │ 2  │ r1       │ r2 - r3                     │ exp │ Balanced, exploratory            │
    # │    │          │                             │     │ Try e.g. F=0.7 and CR=0.5 first  │
    # │ 3  │ x_i      │ (best - x_i) + (r1 - r2)    │ exp │ Hybrid: pull + variation         │
    # │    │          │                             │     │ Try e.g. F=0.85 and CR=1 first   │
    # │ 4  │ best     │ (r1 + r2) - (r3 + r4)       │ exp │ Exploratory, best-guided         │
    # │ 5  │ r5       │ (r1 + r2) - (r3 + r4)       │ exp │ Fully random, robust             │
    # │ 6  │ best     │ r2 - r3                     │ bin │ Same as 1, binomial crossover    │
    # │ 7  │ r1       │ r2 - r3                     │ bin │ Same as 2, binomial crossover    │
    # │ 8  │ x_i      │ (best - x_i) + (r1 - r2)    │ bin │ Same as 3, binomial crossover    │
    # │ 9  │ best     │ (r1 + r2) - (r3 + r4)       │ bin │ Same as 4, binomial crossover    │
    # │ 10 │ r5       │ (r1 + r2) - (r3 + r4)       │ bin │ Same as 5, binomial crossover    │
    # └────┴──────────┴─────────────────────────────┴─────┴──────────────────────────────────┘
    #```

    #ruff include markdown
    # | ID | Base  | Difference                         | XOV | Description                         |
    # |----|-------|------------------------------------|-----|-------------------------------------|
    # | 1  | `best`| `r2 - r3`                          | exp | Exploitative, may misconverge       |
    # | 2  | `r1`  | `r2 - r3`                          | exp | Balanced, exploratory<br>Try e.g. F=0.7 and CR=0.5 first |
    # | 3  | `x_i` | `(best - x_i) + (r1 - r2)`         | exp | Hybrid: pull + variation<br>Try e.g. F=0.85 and CR=1 first |
    # | 4  | `best`| `(r1 + r2) - (r3 + r4)`            | exp | Exploratory, best-guided            |
    # | 5  | `r5`  | `(r1 + r2) - (r3 + r4)`            | exp | Fully random, robust                |
    # | 6  | `best`| `r2 - r3`                          | bin | Same as 1, binomial crossover       |
    # | 7  | `r1`  | `r2 - r3`                          | bin | Same as 2, binomial crossover       |
    # | 8  | `x_i` | `(best - x_i) + (r1 - r2)`         | bin | Same as 3, binomial crossover       |
    # | 9  | `best`| `(r1 + r2) - (r3 + r4)`            | bin | Same as 4, binomial crossover       |
    # | 10 | `r5`  | `(r1 + r2) - (r3 + r4)`            | bin | Same as 5, binomial crossover       |

    #ruff
    # See more information in [techreport](http://mirror.krakadikt.com/2004-11-13-genetic-algorithms/www.icsi.berkeley.edu/%257Estorn/deshort1.ps)
    #
    # Description of keys and data in returned dictionary:
    #   objfunc - final value of the objective (cost) function `funct`
    #   x - final vector of parameters
    #   generation - number of generations
    #   nfev - total number of function evaluations
    #   strategy - strategy used for optimization
    #   std - standard deviation of final population
    #   info - message describing why optimization stopped
    # You can also access result dictionary with `[my configure -results]`.
    #
    # #### History mode
    #
    # When the `-history` flag is provided, `result` also includes the following keys:
    #
    # Key `history` \- a list of dictionaries (one per `-histfreq` generation), each with keys:
    #   gen - generation index
    #   bestf - best-so-far objective value after this generation
    #   mean - mean objective value across the current population
    #   std - standard deviation of objective values in the current population
    #   nfev - cumulative number of function evaluations at the end of this generation
    #
    # Key `besttraj` \- a list of dictionaries (one per `-histfreq` generation), each with keys:
    #   gen - generation index
    #   x - parameter vector achieving the best-so-far objective value after this generation
    #
    # If the `-savepop` switch is provided as well, `result` additionally contains key `pophistory`, a list of
    # dictionaries (one per `-histfreq` generation) with keys:
    #   gen - generation index
    #   pop - list of population vectors for this generation (length = `-np`; each vector length = d)
    #   cost - list of objective values aligned with `pop` (length = `-np`)

    # Synopsis: -funct value -strategy value -pdata value ?-genmax value? ?-refresh value? ?-np value?
    #   ?-f value? ?-cr value? ?-seed value? ?-abstol value? ?-reltol value? ?-debug?
    #   ?-random|specified -initpop value? ?-history? ?-histfreq value? ?-savepop?
    set arguments [argparse -inline -help {Creates optimization object that does Differential Evolution optimization. For more detailed description please see documentation} {
        {-funct= -required -help {Name of the procedure that should be minimized}}
        {-strategy= -required -help {Choice of strategy}}
        {-pdata= -default {} -help {List or dictionary that provides private data to funct that is needed to evaluate residuals. Usually it contains x and y values lists, but you can provide any data necessary for function residuals evaluation. Will be passed upon each function evaluation without modification}}
        {-genmax= -default 3000 -help {Maximum number of generations}}
        {-refresh= -default 100 -help {Output refresh cycle}}
        {-np= -default 20 -help {Population size}}
        {-f= -default 0.9 -help {Weight factor (mutation rate)}}
        {-cr= -default 0.9 -help {Crossing over factor (crossover rate)}}
        {-seed= -default 1 -help {Random seed}}
        {-abstol= -default 1e-6 -help {Absolute tolerance}}
        {-reltol= -default 0.01 -help {Relative tolerance}}
        {-debug -boolean -help {Print debug information}}
        {-threshold= -help {Objective function threshold that stops optimization}}
        {-random -key initype -default random -help {Random population initialization}}
        {-specified -key initype -value specified -help {Specified points population initialization}}
        {-initpop= -require specified -reciprocal -help {Specified initial population}}
        {-history -boolean -help {Collect scalar history and best trajectory}}
        {-histfreq= -default 1 -help {Save history every N generations}}
        {-savepop -boolean -require history -help {Include population snapshots in history (every -histfreq generations)}}
    }]
    dict for {elName elValue} $arguments {
        if {$elName ni {threshold}} {
            my configure -$elName $elValue
        }
    }
    if {[dict exists $arguments threshold]} {
        set threshold [dict get $arguments threshold]
    }
}

run [::tclopt::DE]

Runs optimization.

DEOBJ run
Return value

dictionary containing the resulting data

method run {} {

            # Runs optimization.
            # Returns: dictionary containing the resulting data

    ### Initialize random number generator
            ::tclopt::NewPointers idum int
            ::tclopt::intp_assign $idum [= {-$seed}]
            set nfeval 0 ;# reset number of function evaluations
    ### Initialization
            set pars [dvalues [my getAllPars]]
            set d [llength $pars]
            for {set j 0} {$j<$d} {incr j} {
                set par [@ $pars $j]
                lappend lowlims [$par configure -lowlim]
                lappend uplims [$par configure -uplim]
            }
            if {$initype eq {specified}} {
                if {[llength $initpop]!=$np} {
                    return -code error "Number of specified initial parameter populations should be equal to population size '$np', but '[llength $initpop]' was provided"
                } else {
                    foreach paramSet $initpop {
                        if {[llength $paramSet]!=$d} {
                            return -code error "Number of specified initial parameter values per population should be equal to number of parameters '$d', but '[llength $paramSet]' was provided"
                        }
                    }
                }
            }
            for {set i 0} {$i<$np} {incr i} {
                # spread initial population members
                for {set j 0} {$j<$d} {incr j} {
                    set par [@ $pars $j]
                    if {$initype eq {specified}} {
                        set value [@ $initpop $i $j]
                        if {($value<[$par configure -lowlim]) || ($value>[$par configure -uplim])} {
                            return -code error "Value '$value' from specified initial population is outside the provided lower limit '[$par configure -lowlim]' or upper limit '[$par configure -uplim]' for parameter '[$par configure -name]'"
                        }
                        lappend cj [@ $initpop $i $j]
                    } elseif {$initype eq {random}} {
                        lappend cj [= {[$par configure -lowlim]+[::tclopt::rnd_uni $idum]* ([$par configure -uplim]-[$par configure -lowlim])}]
                    }
                }
                lappend c $cj
                lappend cost [$funct $cj $pdata]
                incr nfeval
                unset cj
            }
            set cmin [@ $cost 0]
            set imin 0
            for {set i 1} {$i<$np} {incr i} {
                if {[@ $cost $i]<$cmin} {
                    set cmin [@ $cost $i]
                    set imin $i
                }
            }
            set best [@ $c $imin] ;# save best member ever
            set bestit [@ $c $imin] ;# save best member of generation
            set pold $c
            for {set i 0} {$i<$np} {incr i} {
                for {set j 0} {$j<$d} {incr j} {
                    lappend pnewj 0.0
                }
                lappend pnew $pnewj
                unset pnewj
            }
    ### Iteration loop
            set gen 0 ;# generation counter reset
            set histScalar {} ;# list of dicts: {gen ... bestf ... mean ... std ... nfev ...}
            set histBestx {} ;# list of dicts: {gen ... x {...}}
            set histPop {} ;# list of dicts (optional): {gen ... pop {...} cost {...}}
            while true {
                incr gen
                set imin 0
    ####  Start of loop through ensemble
                for {set i 0} {$i<$np} {incr i} {
                    do {
                        set r1 [= {int([::tclopt::rnd_uni $idum]*$np)}]
                    } while {$r1==$i}
                    do {
                        set r2 [= {int([::tclopt::rnd_uni $idum]*$np)}]
                    } while {($r2==$i) || ($r2==$r1)}
                    do {
                        set r3 [= {int([::tclopt::rnd_uni $idum]*$np)}]
                    } while {($r3==$i) || ($r3==$r1) || ($r3==$r2)}
                    do {
                        set r4 [= {int([::tclopt::rnd_uni $idum]*$np)}]
                    } while {($r4==$i) || ($r4==$r1) || ($r4==$r2) || ($r4==$r3)}
                    do {
                        set r5 [= {int([::tclopt::rnd_uni $idum]*$np)}]
                    } while {($r5==$i) || ($r5==$r1) || ($r5==$r2) || ($r5==$r3) || ($r5==$r4)}
    ####  Choice of strategy
    #####   best/1/exp
                    if {$strategy eq {best/1/exp}} {
                        set tmp [@ $pold $i]
                        set n [= {int([::tclopt::rnd_uni $idum]*$d)}]
                        set l 0
                        do {
                            set tmpValue [= {[@ $bestit $n]+$f*([@ $pold $r2 $n]-[@ $pold $r3 $n])}]
                            set lowLim [@ $lowlims $n]
                            set upLim [@ $uplims $n]
                            lset tmp $n [my ReflectScalarMod $tmpValue $lowLim $upLim]
                            set n [= {($n+1)%$d}]
                            incr l
                        } while {([::tclopt::rnd_uni $idum]<$cr) && ($l<$d)}
    #####   rand/1/exp
                    } elseif {$strategy eq {rand/1/exp}} {
                        set tmp [@ $pold $i]
                        set n [= {int([::tclopt::rnd_uni $idum]*$d)}]
                        set l 0
                        do {
                            set tmpValue [= {[@ $pold $r1 $n]+$f*([@ $pold $r2 $n]-[@ $pold $r3 $n])}]
                            set lowLim [@ $lowlims $n]
                            set upLim [@ $uplims $n]
                            lset tmp $n [my ReflectScalarMod $tmpValue $lowLim $upLim]
                            set n [= {($n+1)%$d}]
                            incr l
                        } while {([::tclopt::rnd_uni $idum]<$cr) && ($l<$d)}
    #####   rand-to-best/1/exp
                    } elseif {$strategy eq {rand-to-best/1/exp}} {
                        set tmp [@ $pold $i]
                        set n [= {int([::tclopt::rnd_uni $idum]*$d)}]
                        set l 0
                        do {
                            set tmpValue [= {[@ $tmp $n]+$f*([@ $bestit $n]-[@ $tmp $n])+$f*([@ $pold $r1 $n]- [@ $pold $r2 $n])}]
                            set lowLim [@ $lowlims $n]
                            set upLim [@ $uplims $n]
                            lset tmp $n [my ReflectScalarMod $tmpValue $lowLim $upLim]
                            set n [= {($n+1)%$d}]
                            incr l
                        } while {([::tclopt::rnd_uni $idum]<$cr) && ($l<$d)}
    #####   best/2/exp
                    } elseif {$strategy eq {best/2/exp}} {
                        set tmp [@ $pold $i]
                        set n [= {int([::tclopt::rnd_uni $idum]*$d)}]
                        set l 0
                        do {
                            set tmpValue [= {[@ $bestit $n]+([@ $pold $r1 $n]+[@ $pold $r2 $n]-[@ $pold $r3 $n]- [@ $pold $r4 $n])*$f}]
                            set lowLim [@ $lowlims $n]
                            set upLim [@ $uplims $n]
                            lset tmp $n [my ReflectScalarMod $tmpValue $lowLim $upLim]
                            set n [= {($n+1)%$d}]
                            incr l
                        } while {([::tclopt::rnd_uni $idum]<$cr) && ($l<$d)}
    #####   rand/2/exp
                    } elseif {$strategy eq {rand/2/exp}} {
                        set tmp [@ $pold $i]
                        set n [= {int([::tclopt::rnd_uni $idum]*$d)}]
                        set l 0
                        do {
                            set tmpValue [= {[@ $pold $r5 $n]+([@ $pold $r1 $n]+[@ $pold $r2 $n]-[@ $pold $r3 $n]- [@ $pold $r4 $n])*$f}]
                            set lowLim [@ $lowlims $n]
                            set upLim [@ $uplims $n]
                            lset tmp $n [my ReflectScalarMod $tmpValue $lowLim $upLim]
                            set n [= {($n+1)%$d}]
                            incr l
                        } while {([::tclopt::rnd_uni $idum]<$cr) && ($l<$d)}
    #####   best/1/bin
                    } elseif {$strategy eq {best/1/bin}} {
                        set tmp [@ $pold $i]
                        set n [= {int([::tclopt::rnd_uni $idum]*$d)}]
                        set l 0
                        # perform D binomial trials
                        for {set l 0} {$l<$d} {incr l} {
                            if {([::tclopt::rnd_uni $idum]<$cr) || ($l==($d-1))} {
                                set tmpValue [= {[@ $bestit $n]+$f*([@ $pold $r2 $n]-[@ $pold $r3 $n])}]
                                set lowLim [@ $lowlims $n]
                                set upLim [@ $uplims $n]
                                lset tmp $n [my ReflectScalarMod $tmpValue $lowLim $upLim]
                            }
                            set n [= {($n+1)%$d}]
                        }
    #####   rand/1/bin
                    } elseif {$strategy eq {rand/1/bin}} {
                        set tmp [@ $pold $i]
                        set n [= {int([::tclopt::rnd_uni $idum]*$d)}]
                        set l 0
                        # perform D binomial trials
                        for {set l 0} {$l<$d} {incr l} {
                            if {([::tclopt::rnd_uni $idum]<$cr) || ($l==($d-1))} {
                                set tmpValue [= {[@ $pold $r1 $n]+$f*([@ $pold $r2 $n]-[@ $pold $r3 $n])}]
                                set lowLim [@ $lowlims $n]
                                set upLim [@ $uplims $n]
                                lset tmp $n [my ReflectScalarMod $tmpValue $lowLim $upLim]
                            }
                            set n [= {($n+1)%$d}]
                        }
    #####   rand-to-best/1/bin
                    } elseif {$strategy eq {rand-to-best/1/bin}} {
                        set tmp [@ $pold $i]
                        set n [= {int([::tclopt::rnd_uni $idum]*$d)}]
                        set l 0
                        # perform D binomial trials
                        for {set l 0} {$l<$d} {incr l} {
                            if {([::tclopt::rnd_uni $idum]<$cr) || ($l==($d-1))} {
                                set tmpValue [= {[@ $tmp $n]+$f*([@ $bestit $n]-[@ $tmp $n])+$f*([@ $pold $r1 $n]- [@ $pold $r2 $n])}]
                                set lowLim [@ $lowlims $n]
                                set upLim [@ $uplims $n]
                                lset tmp $n [my ReflectScalarMod $tmpValue $lowLim $upLim]
                            }
                            set n [= {($n+1)%$d}]
                        }
    #####   best/2/bin
                    } elseif {$strategy eq {best/2/bin}} {
                        set tmp [@ $pold $i]
                        set n [= {int([::tclopt::rnd_uni $idum]*$d)}]
                        set l 0
                        # perform D binomial trials
                        for {set l 0} {$l<$d} {incr l} {
                            if {([::tclopt::rnd_uni $idum]<$cr) || ($l==($d-1))} {
                                set tmpValue [= {[@ $bestit $n]+$f*([@ $pold $r1 $n]+[@ $pold $r2 $n]-[@ $pold $r3 $n]- [@ $pold $r4 $n])*$f}]
                                set lowLim [@ $lowlims $n]
                                set upLim [@ $uplims $n]
                                lset tmp $n [my ReflectScalarMod $tmpValue $lowLim $upLim]
                            }
                            set n [= {($n+1)%$d}]
                        }
    #####   rand/2/bin
                    } elseif {$strategy eq {rand/2/bin}} {
                        set tmp [@ $pold $i]
                        set n [= {int([::tclopt::rnd_uni $idum]*$d)}]
                        set l 0
                        # perform D binomial trials
                        for {set l 0} {$l<$d} {incr l} {
                            if {([::tclopt::rnd_uni $idum]<$cr) || ($l==($d-1))} {
                                set tmpValue [= {[@ $pold $r5 $n]+$f*([@ $pold $r1 $n]+[@ $pold $r2 $n]-[@ $pold $r3 $n]- [@ $pold $r4 $n])*$f}]
                                set lowLim [@ $lowlims $n]
                                set upLim [@ $uplims $n]
                                lset tmp $n [my ReflectScalarMod $tmpValue $lowLim $upLim]
                            }
                            set n [= {($n+1)%$d}]
                        }
                    }
    ####  Test how good this choice really was
                    set trial_cost [$funct $tmp $pdata]
                    incr nfeval
                    # improved objective function value ?
                    if {$trial_cost<=[@ $cost $i]} {
                        lset cost $i $trial_cost
                        lset pnew $i $tmp
                        if {$trial_cost<$cmin} {
                            set cmin $trial_cost
                            set imin $i
                            set best $tmp
                        }
                    } else {
                        lset pnew $i [@ $pold $i] ;# replace target with old value
                    }
                }
                set bestit $best ;# Save best population member of current iteration
                set pold $pnew
    ### Compute the energy variance (just for monitoring purposes)
                set cmean 0.0 ;# compute the mean value first
                for {set j 0} {$j<$np} {incr j} {
                    set cmean [= {$cmean+[@ $cost $j]}]
                }
                set cmean [= {$cmean/$np}]
                set cvar 0.0 ;# compute the variance
                for {set j 0} {$j<$np} {incr j} {
                    set cvar [= {$cvar+([@ $cost $j]-$cmean)*([@ $cost $j]-$cmean)}]
                }
                set stddev [= {sqrt($cvar/($np-1))}]
                if {($gen%$refresh==1) && $debug} {
                    puts [format "Best-so-far cost funct. value=%-15.10g" $cmin]
                    for {set j 0} {$j<$d} {incr j} {
                        set par [@ $pars $j]
                        puts [format "Parameter %s=%-15.10g" [$par configure -name] [@ $best $j]]
                    }
                    puts [format "Generation=%d  NFEs=%ld   Strategy: %s" $gen $nfeval $strategy]
                    puts [format "NP=%d F=%-4.2g CR=%-4.2g std=%-10.5g" $np $f $cr $stddev]
                }
                if {$history && ($gen%$histfreq)==0} {
                    # Scalars for convergence plots
                    lappend histScalar [dcreate gen $gen bestf $cmin mean $cmean std $stddev nfev $nfeval]
                    # Best-so-far trajectory (parameters)
                    lappend histBestx [dcreate gen $gen x $best]
                    # Optional: full population snapshot (can be large)
                    if {$savepop} {
                        # Note: 'pold' is the current population; 'cost' is the cost per member
                        lappend histPop [dcreate gen $gen pop $pold cost $cost]
                    }
                }
                if {[info exists threshold]} {
                    if {$cmin<=$threshold} {
                        set info "Optimization stopped due to reaching threshold of objective function '$threshold'"
                        break
                    }
                }
                if {($stddev<=[= {$abstol+$reltol*abs($cmean)}])} {
                    set info "Optimization stopped due to crossing threshold 'abstol+reltol*abs(mean)=[= {$abstol+$reltol*abs($cmean)}]' of objective function population member standard deviation"
                    break
                }
                if {$gen>=$genmax} {
                    set info "Optimization stopped due to reaching maximum number of generations '$genmax'"
                    break
                }
            }
            ::tclopt::DeletePointers $idum int
            if {$history} {
                set results [dcreate objfunc $cmin x $best generation $gen nfev $nfeval strategy $strategy std $stddev info $info history $histScalar besttraj $histBestx pophistory $histPop]
            } else {
                set results [dcreate objfunc $cmin x $best generation $gen nfev $nfeval strategy $strategy std $stddev info $info]
            }
            return $results
}

DuplChecker [::tclopt]

Method summary
configure - Configure properties.
duplListCheck - Checks if list contains duplicates.
Subclasses

Optimization

duplListCheck [::tclopt::DuplChecker]

Checks if list contains duplicates.

DUPLCHECKEROBJ duplListCheck list
Parameters
list - List to check.
Return value

The first duplicated item if the list contains duplicates, or an empty string otherwise.

method duplListCheck {list} {

    # Checks if list contains duplicates.
    #  list - list to check
    # Returns: the first duplicated item, or an empty string if there are no duplicates
    set itemDup {}
    set new {}
    foreach item $list {
        if {[lsearch $new $item] < 0} {
            lappend new $item
        } else {
            set itemDup $item
            break
        }
    }
    return $itemDup
}
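
A short usage sketch (assuming obj is an instance of a class that inherits DuplChecker):

puts [$obj duplListCheck {a b c b d}] ;# prints "b"
puts [$obj duplListCheck {a b c}]     ;# prints an empty string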

GSA [::tclopt]

Method summary
constructor - Constructor for the class.
addPars - See Optimization.addPars.
configure - Configure properties.
duplListCheck - See DuplChecker.duplListCheck.
getAllPars - See Optimization.getAllPars.
getAllParsNames - See Optimization.getAllParsNames.
run - Not documented.
Properties
-debug - Readable, writable.
-funct - Readable, writable.
-initype - Readable, writable.
-maxfev - Readable, writable.
-maxinniter - Readable, writable.
-maxiter - Readable, writable.
-mininniter - Readable, writable.
-nbase - Readable, writable.
-ntrial - Readable, writable.
-pdata - Readable, writable.
-qa - Readable, writable.
-qv - Readable, writable.
-results - Readable, writable.
-seed - Readable, writable.
-temp0 - Readable, writable.
-threshold - Readable, writable.
-tmin - Readable, writable.
Superclasses

Optimization

constructor [::tclopt::GSA]

Creates an optimization object that runs optimization using a modified Generalized Simulated Annealing algorithm.

OBJECT constructor -funct value -pdata value ?-maxiter value? ?-mininniter value? ?-maxfev value? ?-seed value? ?-ntrial value? ?-nbase value? ?-qv value? ?-qa value? ?-tmin value? ?-temp0 value? ?-debug? ?-threshold value? ?-random|specified -initpop value?
Parameters
-debug - Enables debug information printing.
-funct value - Name of the procedure that should be minimized.
-maxfev value - Maximum number of objective function evaluations. Controls termination of optimization if provided.
-maxinniter value - Maximum number of iterations per temperature. Default is 1000.
-maxiter value - Maximum number of temperature steps. Controls termination of optimization. Default is 5000.
-mininniter value - Minimum number of iterations per temperature. Default is 10.
-nbase value - Base number of iterations within a single temperature. Default is 30.
-ntrial value - Initial number of samples to determine initial temperature temp0 (if not provided). Default is 20.
-pdata value - List or dictionary that provides private data to funct that is needed to evaluate the objective (cost) function. Usually it contains x and y values lists, but you can provide any data necessary for function evaluation. Will be passed upon each function evaluation without modification.
-qa value - Acceptance distribution parameter (qa ≠ 1, can be negative). Default is -5.0.
-qv value - Visiting distribution parameter; must satisfy 1 < qv < 3. Default is 2.62.
-random - Random parameter vector initialization.
-seed value - Random seed. Default is 0.
-specified - Specified points parameter vector initialization.
-temp0 value - Initial temperature value; if not given, estimated from ntrial samples.
-threshold value - Stop when best objective ≤ threshold (optional).
-tmin value - Stop when temperature ≤ tmin. Default is 1e-5.
Description

Class implements the Generalized Simulated Annealing (GSA) algorithm to solve global optimization problems over continuous parameter spaces.

Generalized Simulated Annealing (GSA) is an enhanced version of the classical simulated annealing algorithm, rooted in Tsallis statistics. It replaces traditional temperature schedules and perturbation distributions with generalized forms: specifically, it uses a distorted Cauchy–Lorentz visiting distribution controlled by a parameter qv, allowing for more flexible exploration of the solution space. The algorithm introduces artificial “temperatures” that gradually cool, injecting stochasticity to help the search process escape local minima and eventually converge within the basin of a global minimum.

Main source of information is this article: https://journal.r-project.org/archive/2013/RJ-2013-002/RJ-2013-002.pdf

Simple constraints are placed on parameter values by adding objects of class ::tclopt::Parameter to GSA with the method ::tclopt::Optimization::addPars. For details of how to specify constraints, please see the description of the ::tclopt::Parameter class. Please note that the order in which we attach parameter objects is the order in which values will be supplied to the minimized function, and the order in which results will be written to the x property of the class.
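
For illustration, here is a minimal usage sketch. The ::tclopt::Parameter constructor arguments shown are assumptions for illustration; consult the ::tclopt::Parameter documentation for the exact signature.

package require tclopt

# Objective: 1-D double-well function; pdata is not used here
proc dwell {x pdata} {
    set v [lindex $x 0]
    return [expr {$v*$v*$v*$v - 3.0*$v*$v + $v}]
}

set gsa [::tclopt::GSA new -funct dwell -seed 1]
# parameter arguments below are assumed, not the verified API
$gsa addPars [::tclopt::Parameter new v 0.0 -lowlim -3.0 -uplim 3.0]
set result [$gsa run]
puts "best x: [dict get $result x]"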

General steps of algorithm

1. Inputs & setup
2. Choose initial parameter vector x_0
3. Estimate initial temperature temp0 (if not provided)
temp  = stddev({f(x)})
    0
4. Initialize loop state
- Current point/value:

 →       →   →      →  →
 x     = x , f    = f ⎛x ⎞
  curr    0   curr    ⎝ 0⎠

- Best-so-far within the current temperature: copy current to "best".
5. Outer loop over temperatures (cooling)
- For outer iteration k=0,1,2,…, the temperature follows the Tsallis cooling schedule:
             ⎛ (qv - 1)    ⎞
     temp0 ⋅ ⎝2         - 1⎠
T  = ───────────────────────
 k            (qv - 1)
       (1 + t)         - 1
6. Choose inner-iterations at this temperature
- Inner iteration budget at T_k:
         ⎛                ⎛                  ⎛         ⎛  -d  ⎞⎞⎞⎞
         ⎜                ⎜                  ⎜         ⎜──────⎟⎟⎟⎟
         ⎜                ⎜                  ⎜         ⎝3 - qv⎠⎟⎟⎟
n  = min ⎜maxinniter, max ⎜mininniter, floor ⎜nbase ⋅ T        ⎟⎟⎟
 t       ⎝                ⎝                  ⎝         k       ⎠⎠⎠

where d - number of parameters.

7. Inner loop: propose, clamp, evaluate, accept. For t=1,..., n_t:
- Visit/perturb each coordinate (distorted Cauchy–Lorentz with qv). Draw u~Unif(0,1). If u>=0.5, sign=1, else sign=-1. Then the step is:
                               ____________________
                              ╱        (qv - 1)
             ⎛   1  ⎞        ╱ ⎛   1  ⎞
             ⎜──────⎟       ╱  ⎜──────⎟         - 1
             ⎝3 - qv⎠      ╱   ⎝|2u−1|⎠
Δx = sign ⋅ T         ⋅   ╱    ────────────────────
             k          ╲╱            qv - 1

Apply per coordinate, then clamp with modulo reflection into [low, up].

- Evaluate the candidate and calculate the difference:

   →               →
f ⎛x    ⎞; Δf = f ⎛x    ⎞ - f
  ⎝ cand⎠         ⎝ cand⎠    curr

- Acceptance rule (generalized qa-Metropolis): if Δf<=0, accept; else accept with probability:
If qa=1:
        ⎛-Δf ⋅ k⎞
p = exp ⎜───────⎟
        ⎜  T    ⎟
        ⎝   k   ⎠
If qa < 1:
            (1 - qa) ⋅ Δf ⋅ k
    z = 1 - ─────────────────
                   T
                    k

    If z<=0 then p=0, else:

         ⎛   1  ⎞
         ⎜──────⎟
         ⎝1 - qa⎠
    p = z

If qa > 1:
                           ⎛  -1  ⎞
                           ⎜──────⎟
                           ⎝qa - 1⎠
    ⎛    (qa - 1) ⋅ Δf ⋅ k⎞
p = ⎜1 + ─────────────────⎟
    ⎜           T         ⎟
    ⎝            k        ⎠

Accept with probability p. (A Tcl sketch of this acceptance rule follows the step list below.)

8. Best-of-temperature recentering
- Track (x_best, f_best) during the inner loop; after finishing n_t iterations, set:
→       →
x     = x
 curr    best
→       →
f     = f
 curr    best
9. Stopping conditions (checked each outer step)
- If -threshold is set and the best value is lower than or equal to the threshold, stop.
- If k >= maxiter, stop.
- If T_k <= tmin, stop.
- If -maxfev is set and the total number of function evaluations is greater than or equal to -maxfev, stop.
10. Advance temperature or finish
- If none of the stops triggered, increment k and repeat.
- On exit, return: best objective, best x, total evals, temp0, last temp_q (final T_k), and a human-readable info message.
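
As referenced in step 7, here is a minimal Tcl sketch of the generalized qa-Metropolis acceptance probability (illustrative names, not the package internals):

# df - objective difference Δf, k - outer iteration index,
# tk - current temperature T_k, qa - acceptance parameter
proc gsaAcceptProb {df k tk qa} {
    if {$df <= 0} {return 1.0} ;# downhill moves are always accepted
    set dfk [expr {double($df)*$k}]
    if {$qa == 1.0} {
        return [expr {exp(-$dfk/$tk)}]
    } elseif {$qa < 1.0} {
        set z [expr {1.0 - (1.0 - $qa)*$dfk/$tk}]
        if {$z <= 0} {return 0.0}
        return [expr {pow($z, 1.0/(1.0 - $qa))}]
    } else {
        return [expr {pow(1.0 + ($qa - 1.0)*$dfk/$tk, -1.0/($qa - 1.0))}]
    }
}
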
Return value

Object of class ::tclopt::GSA.

method constructor {args} {

    # Creates optimization object that runs optimization using a modified Generalized Simulated Annealing
    # algorithm.
    #  -funct value - name of the procedure that should be minimized
    #  -pdata value - list or dictionary that provides private data to funct that is needed to evaluate the objective
    #    (cost) function. Usually it contains x and y values lists, but you can provide any data necessary for
    #    function evaluation. Will be passed upon each function evaluation without modification.
    #  -maxiter value - maximum number of temperature steps. Controls termination of optimization. Default is 5000
    #  -mininniter value - minimum number of iterations per temperature, default is 10
    #  -maxinniter value - maximum number of iterations per temperature, default is 1000
    #  -maxfev value - maximum number of objective function evaluations. Controls termination of optimization if
    #    provided.
    #  -seed value - random seed, default is 0
    #  -ntrial value - initial number of samples to determine initial temperature `temp0` (if not provided), default
    #    is 20
    #  -nbase value - base number of iterations within single temperature, default is 30
    #  -qv value - visiting distribution parameter, must satisfy 1 < qv < 3. Default 2.62.
    #  -qa value - acceptance distribution parameter (qa ≠ 1, can be negative). Default -5.0.
    #  -tmin value - stop when temperature ≤ tmin. Default 1e-5.
    #  -temp0 value - initial temperature value; if not given, estimated from ntrial samples.
    #  -debug - enables debug information printing
    #  -threshold value - stop when best objective ≤ threshold (optional).
    #  -random - random parameter vector initialization
    #  -specified - specified points parameter vector initialization
    # Returns: object of class
    #
    # Class implements the Generalized Simulated Annealing (GSA) algorithm to solve global optimization problems
    # over continuous parameter spaces.
    #
    # Generalized Simulated Annealing (GSA) is an enhanced version of the classical simulated annealing algorithm,
    # rooted in Tsallis statistics. It replaces traditional temperature schedules and perturbation distributions
    # with generalized forms: specifically, it uses a distorted Cauchy–Lorentz visiting distribution controlled by a
    # parameter `qv`, allowing for more flexible exploration of the solution space. The algorithm introduces
    # artificial “temperatures” that gradually cool, injecting stochasticity to help the search process escape local
    # minima and eventually converge within the basin of a global minimum.
    #
    # Main source of information is [this article](https://journal.r-project.org/archive/2013/RJ-2013-002/RJ-2013-002.pdf).
    #
    # Simple constraints are placed on parameter values by adding objects of class [::tclopt::Parameter] to GSA with
    # method [::tclopt::Optimization::addPars]. For details of how to specify constraints, please look at the
    # description of [::tclopt::Parameter] class. Please note that the order in which we attach parameter objects is
    # the order in which values will be supplied to the minimized function, and the order in which results will be
    # written to the `x` property of the class.
    #
    # #### General steps of algorithm
    # ##### 1. Inputs & setup
    # - Provide: objective proc name, parameter objects, and algorithm control parameters.
    # - Initialize RNG state
    # ##### 2. Choose initial parameter vector x_0
    # - If `-specified`, take each parameter’s `-initval`.
    # - If `-random`, sample uniformly within bounds: x_i = Unif[low_i, up_i], i - i'th parameter
    # ##### 3. Estimate initial temperature temp0 (if not provided)
    # - Draw `-ntrial` random vectors uniformly within the box; evaluate objective at each.
    # - Let d be the number of parameters. Compute sample mean and std. dev. of objective values; set:
    #```
    # temp  = stddev({f(x)})
    #     0
    #```
    #
    # ##### 4. Initialize loop state
    # - Current point/value:
    #```
    #  →       →   →      →  →
    #  x     = x , f    = f ⎛x ⎞
    #   curr    0   curr    ⎝ 0⎠
    #```
    # - Best-so-far within the current temperature: copy current to “best”.
    # ##### 5. Outer loop over temperatures (cooling)
    # - For outer iteration k=0,1,2,…, temperature is (Tsallis cooling):
    #```
    #              ⎛ (qv - 1)    ⎞
    #      temp0 ⋅ ⎝2         - 1⎠
    # T  = ───────────────────────
    #  k            (qv - 1)
    #        (1 + t)         - 1
    #```
    # ##### 6. Choose inner-iterations at this temperature
    # - Inner iteration budget at T_k:
    #```
    #          ⎛                ⎛                  ⎛         ⎛  -d  ⎞⎞⎞⎞
    #          ⎜                ⎜                  ⎜         ⎜──────⎟⎟⎟⎟
    #          ⎜                ⎜                  ⎜         ⎝3 - qv⎠⎟⎟⎟
    # n  = min ⎜maxinniter, max ⎜mininniter, floor ⎜nbase ⋅ T        ⎟⎟⎟
    #  t       ⎝                ⎝                  ⎝         k       ⎠⎠⎠
    #```
    # where d \- number of parameters.
    # ##### 7. Inner loop: propose, clamp, evaluate, accept. For t=1,..., n_t:
    # - Visit/perturb each coordinate (distorted Cauchy–Lorentz with qv). Draw u~Unif(0,1). If u>=0.5, sign=1,
    #   else sign=-1. Then step is:
    #```
    #                                ____________________
    #                               ╱        (qv - 1)
    #              ⎛   1  ⎞        ╱ ⎛   1  ⎞
    #              ⎜──────⎟       ╱  ⎜──────⎟         - 1
    #              ⎝3 - qv⎠      ╱   ⎝|2u−1|⎠
    # Δx = sign ⋅ T         ⋅   ╱    ────────────────────
    #              k          ╲╱            qv - 1
    #```
    #   Apply per coordinate, then clamp with modulo reflection into [low, up].
    # - Evaluate candidate and calculate the difference:
    #```
    #    →               →
    # f ⎛x    ⎞; Δf = f ⎛x    ⎞ - f
    #   ⎝ cand⎠         ⎝ cand⎠    curr
    #```
    # - Acceptance rule (generalized qa-Metropolis): if Δf<=0 - accept, else accept with probability:
    #
    #```
    # If qa=1:
    #         ⎛-Δf ⋅ k⎞
    # p = exp ⎜───────⎟
    #         ⎜  T    ⎟
    #         ⎝   k   ⎠
    # If qa < 1:
    #             (1 - qa) ⋅ Δf ⋅ k
    #     z = 1 - ─────────────────
    #                    T
    #                     k
    #
    #     If z<=0 then p=0, else:
    #
    #          ⎛   1  ⎞
    #          ⎜──────⎟
    #          ⎝1 - qa⎠
    #     p = z
    #
    # If qa > 1:
    #                            ⎛  -1  ⎞
    #                            ⎜──────⎟
    #                            ⎝qa - 1⎠
    #     ⎛    (qa - 1) ⋅ Δf ⋅ k⎞
    # p = ⎜1 + ─────────────────⎟
    #     ⎜           T         ⎟
    #     ⎝            k        ⎠
    #```
    # Accept with probability p.
    # ##### 8. Best-of-temperature recentering
    # - Track (x_best, f_best) during inner loop.
    # - After finishing the n_t inner iterations, set:
    #```
    # →       →
    # x     = x
    #  curr    best
    # →       →
    # f     = f
    #  curr    best
    #```
    # - Count attempted/accepted moves for diagnostics.
    # ##### 9. Stopping conditions (checked each outer step)
    # - If `-threshold` is set and the best value is lower than or equal to the threshold, stop.
    # - If k>=maxiter, stop.
    # - If T_k<=tmin, stop.
    # - If `-maxfev` is set and the total number of function evaluations is greater than or equal to `-maxfev`, stop.
    # ##### 10. Advance temperature or finish
    # - If none of the stops triggered, increment k and repeat.
    # - On exit, return: best objective, best `x`, total evals, `temp0`, last `temp_q` (final T_k), and a
    #   human-readable `info` message.
    #
    # Synopsis: -funct value -pdata value ?-maxiter value? ?-mininniter value? ?-maxinniter value? ?-maxfev value?
    #   ?-seed value? ?-ntrial value? ?-nbase value? ?-qv value? ?-qa value? ?-tmin value? ?-temp0 value? ?-debug?
    #   ?-threshold value? ?-random|-specified?
    set arguments [argparse -inline -help {Creates optimization object that does generalized simulated annealing (GSA) optimization. For more detailed description please see documentation} {
        {-funct= -required -help {Name of the procedure that should be minimized}}
        {-pdata= -default {} -help {List or dictionary that provides private data to funct that is needed to evaluate residuals. Usually it contains x and y values lists, but you can provide any data necessary for function residuals evaluation. Will be passed upon each function evaluation without modification}}
        {-maxiter= -default 5000 -help {Maximum number of temperature steps}}
        {-mininniter= -default 10 -help {Minimum number of iterations per temperature}}
        {-maxinniter= -default 1000 -help {Maximum number of iterations per temperature}}
        {-maxfev= -help {Maximum number of objective function evaluations}}
        {-seed= -default 1 -help {Random seed}}
        {-ntrial= -default 20 -help {Initial number of samples to determine initial temperature}}
        {-nbase= -default 30 -help {Base number of iterations within single temperature}}
        {-qv= -default 2.62 -help {Visiting distribution parameter}}
        {-qa= -default -5.0 -help {Parameter defining shape of the acceptance probability distribution}}
        {-tmin= -default 1e-5 -help {Lowest temperature value}}
        {-temp0= -help {Initial temperature value}}
        {-debug -boolean -help {Print debug information}}
        {-threshold= -help {Objective function threshold that stops optimization}}
        {-random -key initype -default random -help {Random parameter vector initialization}}
        {-specified -key initype -value specified -help {Specified points parameter vector initialization}}
    }]
    if {[dict exists $arguments temp0]} {
        set temp0 [dict get $arguments temp0]
    }
    if {[dict exists $arguments threshold]} {
        set threshold [dict get $arguments threshold]
    }
    if {[dict exists $arguments maxfev]} {
        set maxfev [dict get $arguments maxfev]
    }
    dict for {elName elValue} $arguments {
        if {$elName ni {temp0 threshold maxfev}} {
            my configure -$elName $elValue
        }
    }
}

run [::tclopt::GSA]GSA, Top, Main, Index

GSAOBJ run
method run {} {

            ::tclopt::NewPointers idum int
            ::tclopt::intp_assign $idum [= {-$seed}]
            set nfev 0 ;# reset number of function evaluations
            set pars [dvalues [my getAllPars]]
            set d [llength $pars]
    ### Set initial parameter vector
            if {$initype eq {specified}} {
                foreach par $pars {
                    lappend xinit [$par configure -initval]
                }
            } else {
                foreach par $pars {
                    set l [$par configure -lowlim]
                    set u [$par configure -uplim]
                    lappend xinit [= {$l+[::tclopt::rnd_uni $idum]*($u-$l)}]
                }
            }
    ### Estimate initial temperature
            if {![info exists temp0]} {
                for {set i 0} {$i < $ntrial} {incr i} {
                    set xvec {}
                    foreach par $pars {
                        set l [$par configure -lowlim]
                        set u [$par configure -uplim]
                        lappend xvec [= {$l+[::tclopt::rnd_uni $idum]*($u-$l)}]
                    }
                    lappend values [$funct $xvec $pdata]
                }
                # calculate mean
                set sum 0.0
                foreach value $values {
                    set sum [= {$sum+$value}]
                }
                set mean [= {$sum/double([llength $values])}]
                # Compute standard deviation
                set sumsq 0.0
                foreach value $values {
                    set sumsq [= {$sumsq+($value-$mean)*($value-$mean)}]
                }
                set stddev [= {sqrt($sumsq/double($ntrial))}]
                set temp0 $stddev
            }
    ### Start of the outer loop (cooling)
            set niter 1
            set xVecCurr $xinit
            set functCurrVal [$funct $xinit $pdata]
            set xGlobalBest $xinit
            set fGlobalBest $functCurrVal
            incr nfev
            while true {
                set tempq [= {$temp0*(pow(2.0, $qv-1.0)-1.0)/(pow(1.0+$niter, $qv-1.0)-1.0)}]
    ####  Calculate number of inner iterations within temperature
                set logNt [= {min(log(double($maxinniter)), max(log(double($mininniter)), log(double($nbase))-double($d)/(3.0-$qv)*log(max($tempq, $tmin))))}]
                set nt [= {int(exp($logNt))}]
    ####  Start inner loop within the temperature
                set accepted 0
                set attempted 0
                set xVecBest $xVecCurr
                set functBestVal $functCurrVal
                for {set nti 0} {$nti<$nt} {incr nti} {
    #####   Perturb current vector
                    set xVecCandidate {}
                    foreach par $pars i [lseq 0 to [= {[llength $pars]-1}]] {
                        set lowlim [$par configure -lowlim]
                        set uplim [$par configure -uplim]
                        set u [::tclopt::rnd_uni $idum]
                        set sign [= {$u<0.5 ? -1.0 : 1.0}]
                        set u [= {abs(2.0*$u-1.0)}]
                        if {$u<1e-16} {
                            set u 1e-16
                        }
                        set dx [= {$sign*pow($tempq, 1.0/(3.0-$qv))*sqrt((pow(1.0/$u, $qv-1.0)-1.0)/($qv-1.0))}]
                        set xnew [= {[@ $xVecCurr $i]+$dx}]
                        set xnew [my ReflectScalarMod $xnew $lowlim $uplim]
                        lappend xVecCandidate $xnew
                    }
                    incr attempted
                    set functCandidateVal [$funct $xVecCandidate $pdata]
                    if {$functCandidateVal<$fGlobalBest} {
                        set fGlobalBest $functCandidateVal
                        set xGlobalBest $xVecCandidate
                    }
                    incr nfev
                    # Acceptance temperature (optional speed-up)
                    set Ta [= {$tempq/max(1,$niter)}]
                    set deltaFunct [= {$functCandidateVal-$functCurrVal}]
    #####   Check accept or not the new solution
                    if {$deltaFunct <= 0.0} {
                        # Always accept downhill
                        set functCurrVal $functCandidateVal
                        set xVecCurr $xVecCandidate
                        incr accepted
                    } else {
                        # Compute acceptance probability for any qa
                        if {[= {abs($qa-1.0) < 1e-12}]} {
                            # qa -> 1 : classical Metropolis
                            set prob [= {exp(-$deltaFunct/$Ta)}]
                        } elseif {$qa < 1.0} {
                            # qa < 1 : cutoff if bracket <= 0
                            set z [= {1.0 - (1.0 - $qa)*$deltaFunct/$Ta}]
                            if {[= {$z <= 0.0}]} {
                                set prob 0.0
                            } else {
                                set prob [= {pow($z, 1.0/(1.0 - $qa))}]
                            }
                        } else {
                            # qa > 1 : heavy-tailed acceptance
                            set prob [= {pow(1.0 + ($qa - 1.0)*$deltaFunct/$Ta, -1.0/($qa - 1.0))}]
                        }
                        if {$prob > 1.0} { set prob 1.0 }
                        set u [::tclopt::rnd_uni $idum]
                        if {$u < $prob} {
                            set functCurrVal $functCandidateVal
                            set xVecCurr $xVecCandidate
                            incr accepted
                        }
                    }
                    if {$functCurrVal < $functBestVal} {
                        set functBestVal $functCurrVal
                        set xVecBest $xVecCurr
                    }
                }
                set xVecCurr $xVecBest
                set functCurrVal $functBestVal
                if {$attempted > 0} {
                    set ratio [= {double($accepted)/$attempted}]
                } else {
                    set ratio 0.0
                }
                if {[info exists threshold]} {
                    if {$fGlobalBest<=$threshold} {
                        set info "Optimization stopped due to reaching threshold of objective function '$threshold'"
                        break
                    }
                }
                if {$niter>=$maxiter} {
                    set info "Optimization stopped due to reaching maximum number of iterations '$maxiter'"
                    break
                }
                if {$tempq<=$tmin} {
                    set info "Optimization stopped due to reaching minimum temperature '$tmin'"
                    break
                }
                if {[info exists maxfev]} {
                    if {$nfev>=$maxfev} {
                        set info "Optimization stopped due to reaching maximum number of objective functions evaluation '$maxfev'"
                        break
                    }
                }
                incr niter
            }
    ### Save result
            set results [dcreate objfunc $fGlobalBest x $xGlobalBest nfev $nfev temp0 $temp0 tempend $tempq info $info niter $niter]
            return $results
}

Mpfit [::tclopt]Top, Main, Index

Method summary
constructorConstructor for the class.
addParsSee Optimization.addPars
configureConfigure properties.
duplListCheckSee DuplChecker.duplListCheck
getAllParsSee Optimization.getAllPars
getAllParsNamesSee Optimization.getAllParsNames
runRuns optimization.
Properties
-covtolReadable, writable.
-epsfcnReadable, writable.
-ftolReadable, writable.
-functReadable, writable.
-gtolReadable, writable.
-mReadable, writable.
-maxfevReadable, writable.
-maxiterReadable, writable.
-nofinitecheckReadable, writable.
-pdataReadable, writable.
-resultsReadable, writable.
-stepfactorReadable, writable.
-xtolReadable, writable.
Superclasses

Optimization

constructor [::tclopt::Mpfit]Mpfit, Top, Main, Index

Creates optimization object that does least squares fitting using modified Levenberg-Marquardt algorithm.

OBJECT constructor -funct value -m value -pdata value ?-ftol value? ?-xtol value? ?-gtol value? ?-stepfactor value? ?-covtol value? ?-maxiter value? ?-maxfev value? ?-epsfcn value? ?-nofinitecheck?
Parameters
-covtol valueRange tolerance for covariance calculation. Value must be of the type float more than zero, default is 1e-14.
-epsfcn valueFinite derivative step size. Value must be of the type float more than zero, default is 2.2204460e-16.
-ftol valueControl termination of mpfit. Termination occurs when both the actual and predicted relative reductions in the sum of squares are at most ftol. Therefore, ftol measures the relative error desired in the sum of squares. Value must be of the type float more than zero, default is 1e-10.
-funct valueName of the procedure that should be minimized.
-gtol valueControl termination of mpfit. Termination occurs when the cosine of the angle between fvec and any column of the jacobian is at most gtol in absolute value. Therefore, gtol measures the orthogonality desired between the function vector and the columns of the jacobian. Value must be of the type float more than zero, default is 1e-10.
-m valueNumber of data points.
-maxfev valueControl termination of mpfit. Termination occurs when the number of calls to funct is at least maxfev by the end of an iteration. Value must be a positive integer, default is 0. If it equals 0, the number of evaluations is not restricted.
-maxiter valueMaximum number of iterations. If maxiter equals 0, then basic error checking is done, and parameter errors/covariances are estimated based on input parameter values, but no fitting iterations are done. Value must be a positive integer, default is 200.
-nofinitecheckEnables check for infinite quantities, default is off.
-pdata valueList or dictionary that provides private data to funct that is needed to evaluate residuals. Usually it contains x and y values lists, but you can provide any data necessary for function residuals evaluation. Will be passed upon each function evaluation without modification.
-stepfactor valueUsed in determining the initial step bound. This bound is set to the product of factor and the euclidean norm of diag*x if nonzero, or else to factor itself. In most cases factor should lie in the interval (.1,100.). 100. is a generally recommended value. Value must be of the type float more than zero, default is 100.
-xtol valueControl termination of mpfit. Termination occurs when the relative error between two consecutive iterates is at most xtol. Therefore, xtol measures the relative error desired in the approximate solution. Value must be of the type float more than zero, default is 1e-10.
Description

Class uses the Levenberg-Marquardt technique to solve the least-squares problem. In its typical use, it will be used to fit a user-supplied function (the "model") to user-supplied data points (the "data") by adjusting a set of parameters. mpfit is based upon MINPACK-1 (LMDIF.F) by More' and collaborators. The user-supplied function should compute an array of weighted deviations between model and data. In a typical scientific problem the residuals should be weighted so that each deviate has a gaussian sigma of 1.0. If x represents values of the independent variable, y represents a measurement for each value of x, and err represents the error in the measurements, then the deviates could be calculated as follows:

for {set i 0} {$i<$m} {incr i} {
    lset deviates $i [expr {([lindex $y $i] - [f [lindex $x $i]])/[lindex $err $i]}]
}

where m is the number of data points, and where f is the function representing the model evaluated at x. If ERR are the 1-sigma uncertainties in Y, then the sum of deviates squared will be the total chi-squared value, which mpfit will seek to minimize. Simple constraints are placed on parameter values by adding objects of class ::tclopt::ParameterMpfit to mpfit with method ::tclopt::Optimization::addPars, where other parameter-specific options can be set. For details of how to specify constraints, please look at the description of ::tclopt::ParameterMpfit class. Please note that the order in which parameter objects are attached is the order in which values are supplied to the minimized function, and the order in which results are written to the X property of the class. Example of user defined function (using linear equation t=a+b*x):

proc f {xall pdata args} {
    set x [dget $pdata x]
    set y [dget $pdata y]
    set ey [dget $pdata ey]
    foreach xVal $x yVal $y eyVal $ey {
        set f [= {[@ $xall 0]+[@ $xall 1]*$xVal}]
        lappend fval [= {($yVal-$f)/$eyVal}]
    }
    return [dcreate fvec $fval]
}

where xall is the list of parameter values, pdata is a dictionary that contains x, y and ey lists of length m. It returns a dictionary with residual values. An alternative form of function f could also provide analytical derivatives:

proc quadfunc {xall pdata args} {
    set x [dget $pdata x]
    set y [dget $pdata y]
    set ey [dget $pdata ey]
    foreach xVal $x yVal $y eyVal $ey {
        lappend fvec [= {($yVal-[@ $xall 0]-[@ $xall 1]*$xVal-[@ $xall 2]*$xVal*$xVal)/$eyVal}]
    }
    if {[@ $args 0]!=""} {
        set derivs [@ $args 0]
        foreach deriv $derivs {
            if {$deriv==0} {
                foreach xVal $x yVal $y eyVal $ey {
                    lappend dvec [= {-1/$eyVal}]
                }
            }
            if {$deriv==1} {
                foreach xVal $x yVal $y eyVal $ey {
                    lappend dvec [= {(-$xVal)/$eyVal}]
                }
            }
            if {$deriv==2} {
                foreach xVal $x yVal $y eyVal $ey {
                    lappend dvec [= {(-$xVal*$xVal)/$eyVal}]
                }
            }
        }
        return [dcreate fvec $fvec dvec $dvec]
    } else {
        return [dcreate fvec $fvec]
    }
}

The first element of the args list is a list specifying the ordinal numbers of the parameters for which we need to calculate the analytical derivative. In this case, the returned dvec list contains the derivative at each x point for each specified parameter, following the same order as in the input list. For example, if the input list is {0, 2} and the number m of x points is 3, the dvec list will look like this:

⎛⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞  ⎞
⎜⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟  ⎟
⎜⎝dp0⎠   ⎝dp0⎠   ⎝dp0⎠   ⎝dp2⎠   ⎝dp2⎠   ⎝dp2⎠  ⎟
⎝     x0      x1      x2      x0      x1      x2⎠

Description of keys and data in returned dictionary:

bestnormFinal chi^2.
orignormStarting value of chi^2.
statusFitting status code.
niterNumber of iterations.
nfevNumber of function evaluations.
nparTotal number of parameters.
nfreeNumber of free parameters.
npeggedNumber of pegged parameters.
nfuncNumber of residuals (= num. of data points)
residList of final residuals.
xerrorFinal parameter uncertainties (1-sigma), in the order of elements in Pars property dictionary.
xFinal parameters values list in the order of elements in Pars property dictionary.
debugString with derivatives debugging output.
covarFinal parameters covariance matrix.

You can also access result dictionary with [my configure -results].

Return value

object of class ::tclopt::Mpfit

method constructor {args} {

    # Creates optimization object that does least squares fitting using modified Levenberg-Marquardt algorithm.
    #  -funct value - name of the procedure that should be minimized
    #  -m value - number of data points
    #  -pdata value - list or dictionary that provides private data to funct that is needed to evaluate
    #    residuals. Usually it contains x and y values lists, but you can provide any data necessary for function
    #    residuals evaluation.  Will be passed upon each function evaluation without modification.
    #  -ftol value - control termination of mpfit. Termination occurs when both the actual and predicted relative
    #    reductions in the sum of squares are at most ftol. Therefore, ftol measures the relative error desired
    #    in the sum of squares. Value must be of the type float more than zero, default is 1e-10.
    #  -xtol value - control termination of mpfit. Termination occurs when the relative error between two
    #    consecutive iterates is at most xtol. Therefore, xtol measures the relative error desired in the
    #    approximate solution.  Value must be of the type float more than zero, default is 1e-10.
    #  -gtol value - control termination of mpfit. Termination occurs when the cosine of the angle between fvec and
    #    any column of the jacobian is at most gtol in absolute value. Therefore, gtol measures the orthogonality
    #    desired between the function vector and the columns of the jacobian. Value must be of the type float more
    #    than zero, default is 1e-10.
    #  -maxfev value - control termination of mpfit. Termination occurs when the number of calls to funct is at
    #    least maxfev by the end of an iteration. Value must be a positive integer, default is 0. If it equals
    #    0, the number of evaluations is not restricted.
    #  -stepfactor value - used in determining the initial step bound. This bound is set to the product of factor
    #    and the euclidean norm of diag*x if nonzero, or else to factor itself. In most cases factor should lie in
    #    the interval (.1,100.). 100. is a generally recommended value. Value must be of the type float more than
    #    zero, default is 100.
    #  -covtol value - range tolerance for covariance calculation. Value must be of the type float more than zero,
    #    default is 1e-14.
    #  -maxiter value - maximum number of iterations. If maxiter equals 0, then basic error checking is done, and
    #    parameter errors/covariances are estimated based on input parameter values, but no fitting iterations are
    #    done. Value must be a positive integer, default is 200.
    #  -epsfcn value - finite derivative step size. Value must be of the type float more than zero, default is
    #    2.2204460e-16.
    #  -nofinitecheck - enables check for infinite quantities, default is off.
    # Returns: object of class ::tclopt::Mpfit
    #
    # Class uses the Levenberg-Marquardt technique to solve the least-squares problem. In its typical use, it will
    # be used to fit a user-supplied function (the "model") to user-supplied data points (the "data") by adjusting a
    # set of parameters. mpfit is based upon MINPACK-1 (LMDIF.F) by More' and collaborators.
    # The user-supplied function should compute an array of weighted deviations between model and data. In a typical
    # scientific problem the residuals should be weighted so that each deviate has a gaussian sigma of 1.0. If x
    # represents values of the independent variable, y represents a measurement for each value of x, and err
    # represents the error in the measurements, then the deviates could be calculated as follows:
    # ```
    # for {set i 0} {$i<$m} {incr i} {
    #     lset deviates $i [expr {([lindex $y $i] - [f [lindex $x $i]])/[lindex $err $i]}]
    # }
    # ```
    # where m is the number of data points, and where f is the function representing the model evaluated at x. If ERR
    # are the 1-sigma uncertainties in Y, then the sum of deviates squared will be the total chi-squared value, which
    # mpfit will seek to minimize.
    # Simple constraints are placed on parameter values by adding objects of class [::tclopt::ParameterMpfit] to
    # mpfit with method [::tclopt::Optimization::addPars], where other parameter-specific options can be set.
    # For details of how to specify constraints, please look at the
    # description of [::tclopt::ParameterMpfit] class. Please note that the order in which parameter objects are
    # attached is the order in which values are supplied to the minimized function, and the order in which
    # results are written to X property of the class.
    # Example of user defined function (using linear equation t=a+b*x):
    # ```
    # proc f {xall pdata args} {
    #     set x [dget $pdata x]
    #     set y [dget $pdata y]
    #     set ey [dget $pdata ey]
    #     foreach xVal $x yVal $y eyVal $ey {
    #         set f [= {[@ $xall 0]+[@ $xall 1]*$xVal}]
    #         lappend fval [= {($yVal-$f)/$eyVal}]
    #     }
    #     return [dcreate fvec $fval]
    # }
    # ```
    # where xall is the list of parameter values, pdata is a dictionary that contains x, y and ey lists of
    # length m. It returns a dictionary with residual values.
    # An alternative form of function f could also provide analytical derivatives:
    # ```
    # proc quadfunc {xall pdata args} {
    #     set x [dget $pdata x]
    #     set y [dget $pdata y]
    #     set ey [dget $pdata ey]
    #     foreach xVal $x yVal $y eyVal $ey {
    #         lappend fvec [= {($yVal-[@ $xall 0]-[@ $xall 1]*$xVal-[@ $xall 2]*$xVal*$xVal)/$eyVal}]
    #     }
    #     if {[@ $args 0]!=""} {
    #         set derivs [@ $args 0]
    #         foreach deriv $derivs {
    #             if {$deriv==0} {
    #                 foreach xVal $x yVal $y eyVal $ey {
    #                     lappend dvec [= {-1/$eyVal}]
    #                 }
    #             }
    #             if {$deriv==1} {
    #                 foreach xVal $x yVal $y eyVal $ey {
    #                     lappend dvec [= {(-$xVal)/$eyVal}]
    #                 }
    #             }
    #             if {$deriv==2} {
    #                 foreach xVal $x yVal $y eyVal $ey {
    #                     lappend dvec [= {(-$xVal*$xVal)/$eyVal}]
    #                 }
    #             }
    #         }
    #         return [dcreate fvec $fvec dvec $dvec]
    #     } else {
    #         return [dcreate fvec $fvec]
    #     }
    # }
    # ```
    # The first element of the `args` list is a list specifying the ordinal numbers of the parameters for which we
    # need to calculate the analytical derivative. In this case, the returned `dvec` list contains the derivative at
    # each x point for each specified parameter, following the same order as in the input list. For example, if the
    # input list is {0, 2} and the number m of x points is 3, the `dvec` list will look like this:
    # ```
    # ⎛⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞  ⎞
    # ⎜⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟  ⎟
    # ⎜⎝dp0⎠   ⎝dp0⎠   ⎝dp0⎠   ⎝dp2⎠   ⎝dp2⎠   ⎝dp2⎠  ⎟
    # ⎝     x0      x1      x2      x0      x1      x2⎠
    # ```
    #
    # Description of keys and data in returned dictionary:
    #   bestnorm - final chi^2
    #   orignorm - starting value of chi^2
    #   status - fitting status code
    #   niter - number of iterations
    #   nfev - number of function evaluations
    #   npar - total number of parameters
    #   nfree - number of free parameters
    #   npegged - number of pegged parameters
    #   nfunc - number of residuals (= num. of data points)
    #   resid - list of final residuals
    #   xerror - final parameter uncertainties (1-sigma), in the order of elements in `Pars` property dictionary.
    #   x - final parameters values list in the order of elements in `Pars` property dictionary.
    #   debug - string with derivatives debugging output
    #   covar - final parameters covariance matrix.
    # You can also access result dictionary with `[my configure -results]`.
    #
    # Synopsis: -funct value -m value -pdata value ?-ftol value? ?-xtol value? ?-gtol value? ?-stepfactor value?
    #   ?-covtol value? ?-maxiter value? ?-maxfev value? ?-epsfcn value? ?-nofinitecheck?
    set arguments [argparse -inline -help {Creates optimization object that does least squares fitting using modified Levenberg-Marquardt algorithm. For more detailed description please see documentation} {
        {-funct= -required -help {Name of the procedure that should be minimized}}
        {-m= -required -help {Number of data points}}
        {-pdata= -default {} -help {List or dictionary that provides private data to funct that is needed to evaluate residuals. Usually it contains x and y values lists, but you can provide any data necessary for function residuals evaluation. Will be passed upon each function evaluation without modification}}
        {-ftol= -default 1e-10 -help {Control termination of mpfit. Termination occurs when both the actual and predicted relative reductions in the sum of squares are at most ftol}}
        {-xtol= -default 1e-10 -help {Control termination of mpfit. Termination occurs when the relative error between two consecutive iterates is at most xtol}}
        {-gtol= -default 1e-10 -help {Control termination of mpfit. Termination occurs when the cosine of the angle between fvec and any column of the jacobian is at most gtol in absolute value}}
        {-stepfactor= -default 100 -help {Used in determining the initial step bound. This bound is set to the product of factor and the euclidean norm of diag*x if nonzero, or else to factor itself}}
        {-covtol= -default 1e-14 -help {Range tolerance for covariance calculation}}
        {-maxiter= -default 200 -help {Maximum number of iterations}}
        {-maxfev= -default 0 -help {Control termination of mpfit. Termination occurs when the number of calls to funct is at least maxfev by the end of an iteration. If it equals to 0, number of evaluations is not restricted}}
        {-epsfcn= -default 2.2204460e-16 -help {Finite derivative step size}}
        {-nofinitecheck -boolean -help {Enable check for infinite quantities}}
    }]
    dict for {elName elValue} $arguments {
        my configure -$elName $elValue
    }
}

run [::tclopt::Mpfit]Mpfit, Top, Main, Index

Runs optimization.

MPFITOBJ run
Return value

dictionary containing resulted data

method run {} {

    # Runs optimization.
    # Returns: dictionary containing resulted data
    set sideMap [dict create auto 0 right 1 left -1 both 2 an 3]
    if {![info exists Pars]} {
        return -code error {At least one parameter must be attached to optimizer before call to run}
    }
    set pars [dvalues [my getAllPars]]
    set npar [llength $pars]
    foreach par $pars {
        lappend xall [$par configure -initval]
    }
    set qanylim false
    set npegged 0
    set nfev 0
    set info 0
    set fnorm -1.0
    set fnorm1 -1.0
    set xnorm -1.0
    set delta 0.0
    foreach par $pars {
        if {[$par configure -fixed]} {
            lappend pfixed true
        } else {
            lappend pfixed false
        }
        if {[catch {$par configure -step}]} {
            lappend step 0
        } else {
            lappend step [$par configure -step]
        }
        if {[catch {$par configure -relstep}]} {
            lappend dstep 0
        } else {
            lappend dstep [$par configure -relstep]
        }
        lappend mpside [dict get $sideMap [$par configure -side]]
        if {![$par configure -debugder]} {
            lappend ddebug 0
            lappend ddrtol 0
            lappend ddatol 0
        } else {
            lappend ddebug 1
            lappend ddrtol [$par configure -derivreltol]
            lappend ddatol [$par configure -derivabstol]
        }
    }
    # Finish up the free parameters
    set nfree 0
    for {set i 0} {$i<$npar} {incr i} {
        lappend ifree 0
    }
    set j 0
    for {set i 0} {$i<$npar} {incr i} {
        if {![@ $pfixed $i]} {
            incr nfree
            lset ifree $j $i
            incr j
        }
    }
    if {$nfree==0} {
        return -code error {All parameters are fixed, optimization is not possible}
    }
    # allocate lists with zeros
    for {set i 0} {$i<$npar} {incr i} {
        lappend qllim 0
        lappend qulim 0
        lappend llim 0
        lappend ulim 0
    }
    for {set i 0} {$i<$nfree} {incr i} {
        set par [@ $pars [@ $ifree $i]]
        if {[catch {$par configure -lowlim}]} {
            lset qllim $i 0
            lset llim $i 0
        } else {
            lset qllim $i 1
            lset llim $i [$par configure -lowlim]
        }
        if {[catch {$par configure -uplim}]} {
            lset qulim $i 0
            lset ulim $i 0
        } else {
            lset qulim $i 1
            lset ulim $i [$par configure -uplim]
        }
        if {li($qllim,$i) || li($qulim,$i)} {
            set qanylim true
        }
    }
    if {$m<$nfree} {
        return -code error "Degree of freedom check failed because of 'm=$m>=n=$nfree'"
    }
    # allocate temporary storage
    for {set i 0} {$i<$npar} {incr i} {
        lappend diag 0
    }
    for {set i 0} {$i<$m} {incr i} {
        lappend wa4 0
    }
    set ldfjac $m
    # Evaluate user function with initial parameter values
    set fvec [dget [$funct $xall $pdata] fvec]
    if {[llength $fvec]!=$m} {
        return -code error "Length of list '[llength $fvec]' returned from the function is less than m '$m' value"
    }
    incr nfev
    set fnorm [my Enorm $fvec]
    #puts $fnorm
    set orignorm [= {$fnorm*$fnorm}]
    # Make a new copy
    set xnew $xall
    # Transfer free parameters to 'x'
    for {set i 0} {$i<$nfree} {incr i} {
        lappend x [@ $xall [@ $ifree $i]]
    }
    # Initialize Levenberg-Marquardt parameter and iteration counter
    set par 0.0
    set iter 1
    for {set i 0} {$i<$nfree} {incr i} {
        lappend qtf 0
    }
    # Beginning of the outer loop
    while {true} {
        # puts "beginning of the outer loop"
        for {set i 0} {$i<$nfree} {incr i} {
            lset xnew [@ $ifree $i] [@ $x $i]
        }
        # Calculate the jacobian matrix
        set fdjac2Data [my Fdjac2 $funct $ifree $nfree $xnew $fvec $ldfjac $epsfcn $pdata $nfev $step $dstep $mpside $qulim $ulim $ddebug $ddrtol $ddatol]
        set fjac [dget $fdjac2Data fjac]
        set nfev [dget $fdjac2Data nfev]
        if {[dexist $fdjac2Data debug]} {
            lappend debugOutput {*}[dget $fdjac2Data debug]
        }
        # Determine if any of the parameters are pegged at the limits
        if {$qanylim} {
            for {set j 0} {$j<$nfree} {incr j} {
                set lpegged [= {li($qllim,$j) && (li($x,$j)==li($llim,$j))}]
                set upegged [= {li($qulim,$j) && (li($x,$j)==li($ulim,$j))}]
                set sum 0
                # If the parameter is pegged at a limit, compute the gradient direction
                if {$lpegged || $upegged} {
                    set ij [= {$j*$ldfjac}]
                    for {set i 0} {$i<$m} {incr i} {
                        set sum [= {$sum+li($fvec,$i)*li($fjac,$ij)}]
                        incr ij
                    }
                }
                # If pegged at lower limit and gradient is toward negative then reset gradient to zero
                if {$lpegged && ($sum>0)} {
                    set ij [= {$j*$ldfjac}]
                    for {set i 0} {$i<$m} {incr i} {
                        lset fjac $ij 0
                        incr ij
                    }
                }
                # If pegged at upper limit and gradient is toward positive then reset gradient to zero
                if {$upegged && ($sum<0)} {
                    set ij [= {$j*$ldfjac}]
                    for {set i 0} {$i<$m} {incr i} {
                        lset fjac $ij 0
                        incr ij
                    }
                }
            }
        }
        # Compute the QR factorization of the jacobian
        set qfracData [my Qfrac $nfree $fjac $ldfjac 1 $nfree]
        set ipvt [dget $qfracData ipvt]
        set fjac [dget $qfracData a]
        set wa1 [dget $qfracData rdiag]
        set wa2 [dget $qfracData acnorm]
        set wa3 [dget $qfracData wa]
        # on the first iteration and if mode is 1, scale according to the norms of the columns of the initial
        # jacobian.
        if {$iter==1} {
            for {set j 0} {$j<$nfree} {incr j} {
                lset diag [@ $ifree $j] [@ $wa2 $j]
                if {li($wa2,$j)==0.0} {
                    lset diag [@ $ifree $j] 1.0
                }
            }
            # on the first iteration, calculate the norm of the scaled x and initialize the step bound delta.
            for {set j 0} {$j<$nfree} {incr j} {
                lset wa3 $j [= {li($diag,li($ifree,$j))*li($x,$j)}]
            }
            set xnorm [my Enorm $wa3]
            set delta [= {$stepfactor*$xnorm}]
            if {$delta==0.0} {
                set delta $stepfactor
            }
        }
        # form (q transpose)*fvec and store the first n components in qtf
        for {set i 0} {$i<$m} {incr i} {
            lset wa4 $i [@ $fvec $i]
        }
        set jj 0
        for {set j 0} {$j<$nfree} {incr j} {
            set temp3 [@ $fjac $jj]
            if {$temp3!=0.0} {
                set sum 0.0
                set ij $jj
                for {set i $j} {$i<$m} {incr i} {
                    set sum [= {$sum+li($fjac,$ij)*li($wa4,$i)}]
                    incr ij; # fjac[i+m*j]
                }
                set temp [= {-$sum/$temp3}]
                set ij $jj
                for {set i $j} {$i<$m} {incr i} {
                    lset wa4 $i [= {li($wa4,$i)+li($fjac,$ij)*$temp}]
                    incr ij; # fjac[i+m*j]
                }
            }
            lset fjac $jj [@ $wa1 $j]
            incr jj [= {$m+1}]; # fjac[j+m*j]"
            lset qtf $j [@ $wa4 $j]
        }
        # (From this point on, only the square matrix, consisting of the triangle of R, is needed.)
        if {$nofinitecheck} {
            # Check for overflow.  This should be a cheap test here since FJAC
            # has been reduced to a (small) square matrix, and the test is O(N^2).
            set off 0
            set nonfinite 0
            for {set j 0} {$j<$nfree} {incr j} {
                for {set i 0} {$i<$nfree} {incr i} {
                    if {li($fjac,$off+$i)>$::tclopt::MP_GIANT} {
                        set nonfinite 1
                    }
                }
                incr off $ldfjac
            }
            if {$nonfinite} {
                return -code error {Overflow occurred during finite check}
            }
        }
        # compute the norm of the scaled gradient.
        set gnorm 0.0
        if {$fnorm!=0.0} {
            set jj 0
            for {set j 0} {$j<$nfree} {incr j} {
                set l [@ $ipvt $j]
                if {li($wa2,$l)!=0.0} {
                    set sum 0.0
                    set ij $jj
                    for {set i 0} {$i<=$j} {incr i} {
                        set sum [= {$sum+li($fjac,$ij)*li($qtf,$i)/$fnorm}]
                        incr ij; # fjac[i+m*j]
                    }
                    set gnorm [my Dmax1 $gnorm [= {abs($sum/li($wa2,$l))}]]
                }
                incr jj $m
            }
        }
        # test for convergence of the gradient norm.
        if {$gnorm<=$gtol} {
            set info {Convergence in orthogonality}
        }
        if {$info!=0} {
            break
        }
        if {$maxiter==0} {
            set info {Maximum number of iterations reached}
            break
        }
        # rescale
        for {set j 0} {$j<$nfree} {incr j} {
            lset diag [@ $ifree $j] [my Dmax1 [@ $diag [@ $ifree $j]] [@ $wa2 $j]]
        }
        # beginning of the inner loop.
        while {true} {
            # determine the levenberg-marquardt parameter.
            set lmparData [my Lmpar $nfree $fjac $ldfjac $ipvt $ifree $diag $qtf $delta $par]
            set fjac [dget $lmparData r]
            set par [dget $lmparData par]
            set wa1 [dget $lmparData x]
            set wa2 [dget $lmparData sdiag]
            set wa3 [dget $lmparData wa1]
            # store the direction p and x + p. calculate the norm of p.
            for {set j 0} {$j<$nfree} {incr j} {
                lset wa1 $j [= {-li($wa1,$j)}]
            }
            set alpha 1.0
            if {!$qanylim} {
                # No parameter limits, so just move to new position WA2
                for {set j 0} {$j<$nfree} {incr j} {
                    lset wa2 $j [= {li($x,$j)+li($wa1,$j)}]
                }
            } else {
                # Respect the limits.  If a step were to go out of bounds, then
                # we should take a step in the same direction but shorter distance.
                # The step should take us right to the limit in that case.
                for {set j 0} {$j<$nfree} {incr j} {
                    set lpegged [= {li($qllim,$j) && (li($x,$j)<=li($llim,$j))}]
                    set upegged [= {li($qulim,$j) && (li($x,$j)>=li($ulim,$j))}]
                    set dwa1 [= {abs(li($wa1,$j))>$::tclopt::MP_MACHEP0}]
                    if {$lpegged && (li($wa1,$j)<0)} {
                        lset wa1 $j 0
                    }
                    if {$upegged && (li($wa1,$j)>0)} {
                        lset wa1 $j 0
                    }
                    if {$dwa1 && li($qllim,$j) && ((li($x,$j)+li($wa1,$j))<li($llim,$j))} {
                        set alpha [my Dmin1 $alpha [= {(li($llim,$j)-li($x,$j))/li($wa1,$j)}]]
                    }
                    if {$dwa1 && li($qulim,$j) && ((li($x,$j)+li($wa1,$j))>li($ulim,$j))} {
                        set alpha [my Dmin1 $alpha [= {(li($ulim,$j)-li($x,$j))/li($wa1,$j)}]]
                    }
                }
                # Scale the resulting vector, advance to the next position
                for {set j 0} {$j<$nfree} {incr j} {
                    lset wa1 $j [= {li($wa1,$j)*$alpha}]
                    lset wa2 $j [= {li($x,$j)+li($wa1,$j)}]
                    # Adjust the output values.  If the step put us exactly on a boundary, make sure it is exact.
                    set sgnu [= {(li($ulim,$j)>=0) ? 1 : -1}]
                    set sgnl [= {(li($llim,$j)>=0) ? 1 : -1}]
                    set ulim1 [= {li($ulim,$j)*(1-$sgnu*$::tclopt::MP_MACHEP0)- ((li($ulim,$j)==0) ? $::tclopt::MP_MACHEP0 : 0)}]
                    set llim1 [= {li($llim,$j)*(1+$sgnl*$::tclopt::MP_MACHEP0)+ ((li($llim,$j)==0) ? $::tclopt::MP_MACHEP0 : 0)}]
                    if {li($qulim,$j) && (li($wa2,$j)>=$ulim1)} {
                        lset wa2 $j [@ $ulim $j]
                    }
                    if {li($qllim,$j) && (li($wa2,$j)<=$llim1)} {
                        lset wa2 $j [@ $llim $j]
                    }
                }
            }
            for {set j 0} {$j<$nfree} {incr j} {
                lset wa3 $j [= {li($diag,li($ifree,$j))*li($wa1,$j)}]
            }
            set pnorm [my Enorm $wa3]
            # on the first iteration, adjust the initial step bound.
            if {$iter==1} {
                set delta [my Dmin1 $delta $pnorm]
            }
            # evaluate the function at x + p and calculate its norm.
            for {set i 0} {$i<$nfree} {incr i} {
                lset xnew [@ $ifree $i] [@ $wa2 $i]
            }
            set functData [$funct $xnew $pdata]
            set wa4 [dget $functData fvec]
            incr nfev
            set fnorm1 [my Enorm $wa4]
            # compute the scaled actual reduction.
            set actred -1.0
            if {[= {0.1*$fnorm1}]<$fnorm} {
                set temp [= {$fnorm1/$fnorm}]
                set actred [= {1.0-$temp*$temp}]
            }
            # compute the scaled predicted reduction and the scaled directional derivative.
            set jj 0
            for {set j 0} {$j<$nfree} {incr j} {
                lset wa3 $j 0.0
                set l [@ $ipvt $j]
                set temp [@ $wa1 $l]
                set ij $jj
                for {set i 0} {$i<=$j} {incr i} {
                    lset wa3 $i [= {li($wa3,$i)+li($fjac,$ij)*$temp}]
                    incr ij
                }
                incr jj $m
            }
            # Remember, alpha is the fraction of the full LM step actually taken
            set temp1 [= {[my Enorm $wa3]*$alpha/$fnorm}]
            set temp2 [= {sqrt($par*$alpha)*$pnorm/$fnorm}]
            set prered [= {$temp1*$temp1+($temp2*$temp2)/0.5}]
            set dirder [= {-($temp1*$temp1+$temp2*$temp2)}]
            # compute the ratio of the actual to the predicted reduction
            set ratio 0.0
            if {$prered!=0.0} {
                set ratio [= {$actred/$prered}]
            }
            # update the step bound.
            if {$ratio<=0.25} {
                if {$actred>=0.0} {
                    set temp 0.5
                } else {
                    set temp [= {0.5*$dirder/($dirder+0.5*$actred)}]
                }
                if {((0.1*$fnorm1)>=$fnorm) || ($temp<0.1)} {
                    set temp 0.1
                }
                set delta [= {$temp*[my Dmin1 $delta [= {$pnorm/0.1}]]}]
                set par [= {$par/$temp}]
            } else {
                if {($par==0.0) || ($ratio>=0.75)} {
                    set delta [= {$pnorm/0.5}]
                    set par [= {0.5*$par}]
                }
            }
            # test for successful iteration.
            if {$ratio>=1e-4} {
                # successful iteration. update x, fvec, and their norms.
                for {set j 0} {$j<$nfree} {incr j} {
                    lset x $j [@ $wa2 $j]
                    #puts $x
                    lset wa2 $j [= {li($diag,li($ifree,$j))*li($x,$j)}]
                }
                for {set i 0} {$i<$m} {incr i} {
                    lset fvec $i [@ $wa4 $i]
                }
                set xnorm [my Enorm $wa2]
                set fnorm $fnorm1
                incr iter
            }
            # tests for convergence.
            if {(abs($actred)<=$ftol) && ($prered<=$ftol) && (0.5*$ratio<=1.0)} {
                set info {Convergence in chi-square value}
            }
            if {$delta<=($xtol*$xnorm)} {
                set info {Convergence in parameter value}
            }
            if {(abs($actred)<=$ftol) && ($prered<=$ftol) && (0.5*$ratio<=1.0) && ($info==2)} {
                set info {Both convergence in parameter value and convergence in chi-square value hold}
            }
            if {$info!=0} {
                break
            }
            # tests for termination and stringent tolerances.
            if {($maxfev>0) && ($nfev>=$maxfev)} {
                # Too many function evaluations
                set info {Maximum number of function evaluations reached}
            }
            if {$iter>=$maxiter} {
                # Too many iterations
                set info {Maximum number of iterations reached}
            }
            if {(abs($actred)<=$::tclopt::MP_MACHEP0) && ($prered<=$::tclopt::MP_MACHEP0) && (0.5*$ratio<=1.0)} {
                set info {ftol is too small, no further improvement}
            }
            if {$delta<=$::tclopt::MP_MACHEP0*$xnorm} {
                set info {xtol is too small, no further improvement}
            }
            if {$gnorm<=$::tclopt::MP_MACHEP0} {
                set info {gtol is too small, no further improvement}
            }
            if {$info!=0} {
                break
            }
            # end of the inner loop. repeat if iteration unsuccessful.
            if {$ratio<1e-4} {
                continue
            } else {
                break
            }
        }
        if {$info!=0} {
            break
        }
    }
    # termination, either normal or user imposed.
    for {set i 0} {$i<$nfree} {incr i} {
        lset xall [@ $ifree $i] [@ $x $i]
    }
    if {$info>0} {
        set functData [$funct $xall $pdata]
        set fvec [dget $functData fvec]
        incr nfev
    }
    # Compute number of pegged parameters
    set npegged 0
    for {set i 0} {$i<$npar} {incr i} {
        set par [@ $pars $i]
        if {[catch {$par configure -lowlim}]} {
            set parsLim0 0
        } else {
            set parsLim0 [$par configure -lowlim]
        }
        if {[catch {$par configure -uplim}]} {
            set parsLim1 0
        } else {
            set parsLim1 [$par configure -uplim]
        }
        if {($parsLim0 && ($parsLim0==li($xall,$i))) || ($parsLim1 && ($parsLim1==li($xall,$i)))} {
            incr npegged
        }
    }
    # Compute and return the covariance matrix and/or parameter errors
    set covarData [my Covar $nfree $fjac $ldfjac $ipvt $covtol]
    set fjac [dget $covarData r]
    set wa2 [dget $covarData wa]
    for {set j 0} {$j<$npar*$npar} {incr j} {
        lappend covar 0.0
    }
    # Transfer the covariance array
    for {set j 0} {$j<$nfree} {incr j} {
        for {set i 0} {$i<$nfree} {incr i} {
            lset covar [= {int(li($ifree,$j)*$npar+li($ifree,$i))}] [@ $fjac [= {int($j*$ldfjac+$i)}]]
        }
    }
    for {set j 0} {$j<$npar} {incr j} {
        lappend xerror 0.0
    }
    for {set j 0} {$j<$nfree} {incr j} {
        set cc [@ $fjac [= {int($j*$ldfjac+$j)}]]
        if {$cc>0} {
            lset xerror [@ $ifree $j] [= {sqrt($cc)}]
        }
    }
    set bestnorm [= {[my Dmax1 $fnorm $fnorm1]**2}]
    for {set j 0} {$j<$m} {incr j} {
        lappend resid [@ $fvec $j]
    }
    if {[info exists debugOutput]} {
        set debugOutput [join $debugOutput \n]
    } else {
        set debugOutput {}
    }
    set resDict [dcreate bestnorm $bestnorm orignorm $orignorm status $info niter $iter nfev $nfev npar $npar nfree $nfree npegged $npegged nfunc $m resid $resid xerror $xerror x $xall debug $debugOutput covar $covar]
    set results $resDict
    return $resDict
}

Optimization [::tclopt]Top, Main, Index

Method summary
addParsAttaches parameters to optimizer object.
configureConfigure properties.
duplListCheckSee DuplChecker.duplListCheck
getAllParsGets references of all parameters objects.
getAllParsNamesGets names of all parameters.
Properties
-functReadable, writable.
-pdataReadable, writable.
-resultsReadable, writable.
Mixins

DuplChecker

Subclasses

Mpfit, DE, GSA

addPars [::tclopt::Optimization]Optimization, Top, Main, Index

OPTIMIZATIONOBJ addPars ?args?
Parameters
method addPars {args} {

    argparse -help {Attaches parameters to optimizer object} {
        {params -catchall -help {References to objects of class '::tclopt::ParameterMpfit'}}
    }
    foreach arg $params {
        set argClass [info object class $arg]
        if {$argClass ni {::tclopt::ParameterMpfit ::tclopt::Parameter}} {
            return -code error "Only ::tclopt::ParameterMpfit or ::tclopt::Parameter could be added to optimizer, '$argClass' was provided"
        }
        lappend parsNamesList [$arg configure -name]
    }
    if {[info exists Pars]} {
        lappend parsNamesList {*}[my getAllParsNames]
    }
    set dup [my duplListCheck $parsNamesList]
    if {$dup ne {}} {
        return -code error "Optimizer already contains parameter with name '$dup'"
    }
    foreach arg $params {
        set parName [$arg configure -name]
        dict append Pars $parName $arg
    }
    return
}

getAllPars [::tclopt::Optimization]Optimization, Top, Main, Index

Gets references of all parameter objects.

OPTIMIZATIONOBJ getAllPars ?args?
Parameters
Return value

dictionary with parameter names as keys and parameter object references as values

method getAllPars {args} {

    # Gets references of all parameter objects.
    # Returns: dictionary with parameter names as keys and object references as values
    argparse -help {Gets references of all parameter objects. Returns: dictionary of references} {}
    if {![info exists Pars]} {
        return -code error {There are no parameters attached to optimizer}
    } else {
        return $Pars
    }
}

getAllParsNames [::tclopt::Optimization]Optimization, Top, Main, Index

Gets names of all parameters.

OPTIMIZATIONOBJ getAllParsNames ?args?
Parameters
Return value

list of parameter names

method getAllParsNames {args} {

    # Gets names of all parameters.
    # Returns: list of parameter names
    argparse -help {Gets names of all parameters. Returns: list of parameter names} {}
    if {![info exists Pars]} {
        return -code error {There are no parameters attached to optimizer}
    } else {
        return [dkeys $Pars]
    }
}

Parameter [::tclopt]Top, Main, Index

Method summary
constructorConstructor for the class.
configureConfigure properties.
Properties
-initvalReadable, writable.
-lowlimReadable, writable.
-nameReadable, writable.
-uplimReadable, writable.
Subclasses

ParameterMpfit

constructor [::tclopt::Parameter]Parameter, Top, Main, Index

Creates parameter object.

OBJECT constructor value value ?-lowlim value? ?-uplim value?
Parameters
-lowlim valueSpecify lower limit for parameter, must be lower than upper limit if upper limit is provided, optional.
-uplim valueSpecify upper limit for parameter, must be higher than lower limit if lower limit is provided, optional.
initvalInitial value of parameter.
nameName of the parameter.
Description

Example of building 4 parameters with different constraints:

set par0 [::tclopt::Parameter new a 1.0 -lowlim 0.0]
set par1 [::tclopt::Parameter new b 2.0]
set par2 [::tclopt::Parameter new c 0.0]
set par3 [::tclopt::Parameter new d 0.1 -lowlim -0.3 -uplim 0.2]
method constructor {args} {

    # Creates parameter object.
    #  name - name of the parameter
    #  initval - initial value of parameter
    #  -lowlim value - specify lower limit for parameter, must be lower than upper limit if upper limit is provided,
    #    optional
    #  -uplim value - specify upper limit for parameter, must be higher than lower limit if lower limit is provided,
    #    optional
    # Synopsis: value value ?-lowlim value? ?-uplim value?
    #
    # Example of building 4 parameters with different constraints:
    # ```
    # set par0 [::tclopt::Parameter new a 1.0 -lowlim 0.0]
    # set par1 [::tclopt::Parameter new b 2.0]
    # set par2 [::tclopt::Parameter new c 0.0]
    # set par3 [::tclopt::Parameter new d 0.1 -lowlim -0.3 -uplim 0.2]
    # ```
    argparse -pfirst {
        {-lowlim= -help {Specify lower limit for parameter, must be lower than upper limit if upper limit is provided}}
        {-uplim= -help {Specify upper limit for parameter, must be higher than lower limit if lower limit is provided}}
        {name -help {Name of the parameter}}
        {initval -help {Initial value of parameter}}
    }
    my configure -initval $initval -name $name
    if {[info exists lowlim]} {
        if {$lowlim>$initval} {
            return -code error {Initial value must be higher than the lower limit}
        }
        my configure -lowlim $lowlim
    }
    if {[info exists uplim]} {
        if {[info exists lowlim]} {
            if {$lowlim>=$uplim} {
                return -code error {Lower limit must be lower than the upper limit}
            }
        }
        if {$uplim<$initval} {
            return -code error {Initial value must be lower than the upper limit}
        }
        my configure -uplim $uplim
    }
}

ParameterMpfit [::tclopt]Top, Main, Index

Method summary
constructorConstructor for the class.
configureConfigure properties.
Properties
-debugderReadable, writable.
-derivabstolReadable, writable.
-derivreltolReadable, writable.
-fixedReadable, writable.
-initvalReadable, writable.
-lowlimReadable, writable.
-nameReadable, writable.
-relstepReadable, writable.
-sideReadable, writable.
-stepReadable, writable.
-uplimReadable, writable.
Superclasses

Parameter

constructor [::tclopt::ParameterMpfit]ParameterMpfit, Top, Main, Index

Creates parameter object for ::tclopt::Mpfit class.

OBJECT constructor value value ?-fixed? ?-lowlim value? ?-uplim value? ?-step value? ?-relstep value? ?-side value? ?-debugder -debugreltol value -debugabstol value?
Parameters
-debugabstol valueAbsolute error that controls printing of derivatives comparison if absolute error exceeds this value. Requires -debugder and -debugreltol.
-debugderSwitch to enable console debug logging of user-computed derivatives, as described above. Note that when debugging is enabled, then -side should be set to auto, right, left or both, depending on which numerical derivative you wish to compare to. Requires -debugreltol and -debugabstol values.
-debugreltol valueRelative error that controls printing of derivatives comparison if relative error exceeds this value. Requires -debugder and -debugabstol.
-fixedSpecify that parameter is fixed during optimization, optional.
-lowlim valueSpecify lower limit for parameter, must be lower than upper limit if upper limit is provided, optional.
-relstep valueThe relative step size to be used in calculating the numerical derivatives. This number is the fractional size of the step, compared to the parameter value. This value supersedes the -step setting. If the parameter is zero, then a default step size is chosen.
-side valueThe sidedness of the finite difference when computing numerical derivatives. This field can take four values: auto : one-sided derivative computed automatically, right : one-sided derivative (f(x+h)-f(x))/h, left : one-sided derivative (f(x)-f(x-h))/h, both : two-sided derivative (f(x+h)-f(x-h))/(2*h), an : user-computed explicit derivatives, where h is the -step parameter described above. The "automatic" one-sided derivative method will chose a direction for the finite difference which does not violate any constraints. The other methods do not perform this check. The two-sided method is in principle more precise, but requires twice as many function evaluations. Default is auto.
-step valueThe step size to be used in calculating the numerical derivatives. If set to zero, then the step size is computed automatically, optional.
-uplim valueSpecify upper limit for parameter, must be higher than lower limit if lower limit is provided, optional.
initvalInitial value of parameter.
nameName of the parameter.
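When -debugder is enabled, the user-supplied analytic derivatives are compared against the chosen numerical derivative and mismatches beyond the given tolerances are printed. A minimal sketch (the parameter name k, its initial value and the tolerance values are illustrative only):

set par [::tclopt::ParameterMpfit new k 1.5 -side both -debugder -debugreltol 1e-4 -debugabstol 1e-6]
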
Description

Example of building 4 parameters with different constraints:

set par0 [ParameterMpfit new a 1.0 -fixed -side both]
set par1 [ParameterMpfit new b 2.0]
set par2 [ParameterMpfit new c 0.0 -fixed]
set par3 [ParameterMpfit new d 0.1 -lowlim -0.3 -uplim 0.2]
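
Such parameter objects are then attached to an optimizer. A hedged sketch, assuming an ::tclopt::Mpfit instance whose -funct, -m and -pdata arguments follow the conventions used elsewhere in this documentation (myFunct, the data-point count 10 and $pdata are placeholders):

set optimizer [::tclopt::Mpfit new -funct myFunct -m 10 -pdata $pdata]
$optimizer addPars $par0 $par1 $par2 $par3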
method constructor {args} {

    # Creates parameter object for [::tclopt::Mpfit] class.
    #  name - name of the parameter
    #  initval - initial value of parameter
    #  -fixed - specify that parameter is fixed during optimization, optional
    #  -lowlim value - specify lower limit for parameter, must be lower than upper limit if upper limit is provided,
    #    optional
    #  -uplim value - specify upper limit for parameter, must be higher than lower limit if lower limit is provided,
    #    optional
    #  -step value - the step size to be used in calculating the numerical derivatives.  If set to zero, then the
    #    step size is computed automatically, optional
    #  -relstep value - the *relative* step size to be used in calculating the numerical derivatives. This number is
    #    the fractional size of the step, compared to the parameter value. This value supersedes the `-step` setting.
    #    If the parameter is zero, then a default step size is chosen.
    #  -side value - the sidedness of the finite difference when computing numerical derivatives. This field can
    #    take five values: auto : one-sided derivative computed automatically, right : one-sided derivative
    #    (f(x+h)-f(x))/h, left : one-sided derivative (f(x)-f(x-h))/h, both : two-sided derivative
    #    (f(x+h)-f(x-h))/(2*h), an : user-computed explicit derivatives, where h is the `-step` parameter described
    #    above. The "automatic" one-sided derivative method will choose a direction for the finite difference which
    #    does not violate any constraints. The other methods do not perform this check. The two-sided method is in
    #    principle more precise, but requires twice as many function evaluations. Default is auto.
    #  -debugder - switch to enable console debug logging of user-computed derivatives, as described above. Note
    #    that when debugging is enabled, then -side should be set to auto, right, left or both, depending on which
    #    numerical derivative you wish to compare to. Requires -debugreltol and -debugabstol values.
    #  -debugreltol value - relative error that controls printing of derivatives comparison if relative error
    #    exceeds this value. Requires -debugder and -debugabstol.
    #  -debugabstol value - absolute error that controls printing of derivatives comparison if absolute error
    #    exceeds this value. Requires -debugder and -debugreltol.
    # Synopsis: name initval ?-fixed? ?-lowlim value? ?-uplim value? ?-step value? ?-relstep value? ?-side value?
    #   ?-debugder -debugreltol value -debugabstol value?
    #
    # Example of building 4 parameters with different constraints:
    # ```
    # set par0 [ParameterMpfit new a 1.0 -fixed -side both]
    # set par1 [ParameterMpfit new b 2.0]
    # set par2 [ParameterMpfit new c 0.0 -fixed]
    # set par3 [ParameterMpfit new d 0.1 -lowlim -0.3 -uplim 0.2]
    # ```
    set arguments [argparse -inline -pfirst -help {Creates parameter object for '::tclopt::Mpfit' class} {
        {-fixed -boolean -help {Specify that parameter is fixed during optimization}}
        {-lowlim= -help {Specify lower limit for parameter, must be lower than upper limit if upper limit is provided}}
        {-uplim= -help {Specify upper limit for parameter, must be higher than lower limit if lower limit is provided}}
        {-step= -help {The step size to be used in calculating the numerical derivatives. If set to zero, then the step size is computed automatically}}
        {-relstep= -help {The *relative* step size to be used in calculating the numerical derivatives. This number is the fractional size of the step, compared to the parameter value. This value supersedes the -step setting. If the parameter is zero, then a default step size is chosen}}
        {-side= -enum {auto right left both an} -default auto -help {The sidedness of the finite difference when computing numerical derivatives. This field can take five values: auto : one-sided derivative computed automatically, right : one-sided derivative (f(x+h)-f(x))/h, left : one-sided derivative (f(x)-f(x-h))/h, both : two-sided derivative (f(x+h)-f(x-h))/(2*h), an : user-computed explicit derivatives, where h is the -step parameter described above. The "automatic" one-sided derivative method will choose a direction for the finite difference which does not violate any constraints. The other methods do not perform this check. The two-sided method is in principle more precise, but requires twice as many function evaluations. Default is auto}}
        {-debugder -boolean -require {debugreltol debugabstol} -help {Switch to enable console debug logging of user-computed derivatives, as described above. Note that when debugging is enabled, then -side should be set to auto, right, left or both, depending on which numerical derivative you wish to compare to}}
        {-debugreltol= -help {Relative error that controls printing of derivatives comparison if relative error exceeds this value}}
        {-debugabstol= -help {Absolute error that controls printing of derivatives comparison if absolute error exceeds this value}}
        {name -help {Name of the parameter}}
        {initval -help {Initial value of parameter}}
    }]
    my configure -side [dget $arguments side]
    if {[dexist $arguments step]} {
        my configure -step [dget $arguments step]
    }
    if {[dexist $arguments relstep]} {
        my configure -relstep [dget $arguments relstep]
    }
    my configure -debugder [dget $arguments debugder]
    if {[dget $arguments debugder]} {
        my configure -derivreltol [dget $arguments debugreltol]
        my configure -derivabstol [dget $arguments debugabstol]
    }
    set params {}
    # -fixed is a switch; apply it directly here since the superclass constructor does not accept it
    if {[dget $arguments fixed]} {
        my configure -fixed 1
    } else {
        my configure -fixed 0
    }
    # forward the remaining options (the limits) to the superclass [::tclopt::Parameter] constructor
    dict for {paramName value} $arguments {
        if {$paramName ni {name initval side step relstep debugder debugreltol debugabstol fixed}} {
            lappend params -$paramName $value
        }
    }
    next [dget $arguments name] [dget $arguments initval] {*}$params
}
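
Since every property listed above is readable and writable, a parameter can also be adjusted after construction through configure. A minimal sketch, assuming the single-option form of configure returns the current value:

set par [::tclopt::ParameterMpfit new d 0.1 -lowlim -0.3 -uplim 0.2]
$par configure -uplim 0.5
puts [$par configure -uplim]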