::tclopt
Classes
DuplChecker [::tclopt]
Method summary
configure | Configure properties. |
duplListCheck | Checks if list contains duplicates. |
Subclasses
duplListCheck [::tclopt::DuplChecker]
Checks if list contains duplicates.
Details
Parameters
list | List to check. |
Return value
The first duplicated item if the list contains duplicates, or an empty string if it does not.
method duplListCheck {list} {
    # Checks if list contains duplicates.
    #  list - list to check
    # Returns: the first duplicated item, or an empty string if there are no duplicates
    set itemDup {}
    set new {}
    foreach item $list {
        if {[lsearch $new $item] < 0} {
            lappend new $item
        } else {
            set itemDup $item
            break
        }
    }
    return $itemDup
}
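A standalone sketch of the same logic (the proc name `firstDuplicate` is illustrative and not part of the package) makes the return convention easy to see: the duplicated item itself, or an empty string.

```tcl
proc firstDuplicate {list} {
    # Returns the first item that repeats, or an empty string if all items are unique.
    set seen {}
    foreach item $list {
        if {$item in $seen} {
            return $item
        }
        lappend seen $item
    }
    return {}
}

firstDuplicate {a b c}   ;# => {}
firstDuplicate {a b b c} ;# => b
```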
Mpfit [::tclopt]
Method summary
constructor | Constructor for the class. |
addPars | Attaches parameters to the optimizer object. |
configure | Configure properties. |
duplListCheck | See DuplChecker.duplListCheck |
getAllPars | Gets references of all parameters objects. |
getAllParsNames | Gets names of all parameters. |
run | Runs optimization. |
Properties
Readable: -covtol, -epsfcn, -ftol, -funct, -gtol, -m, -maxfev, -maxiter, -nofinitecheck, -pdata, -results, -stepfactor, -xtol
Writable: -covtol, -epsfcn, -ftol, -funct, -gtol, -m, -maxfev, -maxiter, -nofinitecheck, -pdata, -results, -stepfactor, -xtol
Mixins
::tclopt::DuplChecker
constructor [::tclopt::Mpfit]
Creates optimization object that does least squares fitting using modified Levenberg-Marquardt algorithm.
Details
Parameters
-covtol | Range tolerance for covariance calculation. Value must be a float greater than zero; default is 1e-14. |
-epsfcn | Finite derivative step size. Value must be a float greater than zero; default is 2.2204460e-16. |
-ftol | Controls termination of mpfit. Termination occurs when both the actual and predicted relative reductions in the sum of squares are at most ftol; therefore, ftol measures the relative error desired in the sum of squares. Value must be a float greater than zero; default is 1e-10. |
-funct | Name of the procedure that should be minimized. |
-gtol | Controls termination of mpfit. Termination occurs when the cosine of the angle between fvec and any column of the Jacobian is at most gtol in absolute value; therefore, gtol measures the orthogonality desired between the function vector and the columns of the Jacobian. Value must be a float greater than zero; default is 1e-10. |
-m | Number of data points. |
-maxfev | Controls termination of mpfit. Termination occurs when the number of calls to funct is at least maxfev by the end of an iteration. Value must be a positive integer; default is 0. If it equals 0, the number of evaluations is not restricted. |
-maxiter | Maximum number of iterations. If maxiter equals 0, basic error checking is done and parameter errors/covariances are estimated based on input parameter values, but no fitting iterations are performed. Value must be a positive integer; default is 200. |
-nofinitecheck | Enables a check for infinite quantities; default is off. |
-pdata | List or dictionary that provides private data to funct that is needed to evaluate residuals. Usually it contains the x and y value lists, but you can provide any data necessary for evaluating the function residuals. Passed to each function evaluation without modification. |
-stepfactor | Used in determining the initial step bound. This bound is set to the product of factor and the Euclidean norm of diag*x if nonzero, or else to factor itself. In most cases factor should lie in the interval (0.1, 100); 100 is a generally recommended value. Value must be a float greater than zero; default is 100. |
-xtol | Controls termination of mpfit. Termination occurs when the relative error between two consecutive iterates is at most xtol; therefore, xtol measures the relative error desired in the approximate solution. Value must be a float greater than zero; default is 1e-10. |
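A minimal construction sketch using the options above. The residuals procedure name `myResiduals` and the pdata keys are illustrative assumptions; only the option names and defaults come from the table.

```tcl
package require tclopt

# Hypothetical data lists and residuals procedure; the pdata keys are whatever
# myResiduals expects to find in its second argument.
set pdata [dict create x $xList y $yList ey $eyList]
set optimizer [::tclopt::Mpfit new -funct myResiduals -m [llength $xList] \
        -pdata $pdata -ftol 1e-12 -maxiter 500]
```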
Description
Class uses the Levenberg-Marquardt technique to solve the least-squares problem. In its typical use, it fits a user-supplied function (the "model") to user-supplied data points (the "data") by adjusting a set of parameters. mpfit is based upon MINPACK-1 (LMDIF.F) by Moré and collaborators. The user-supplied function should compute an array of weighted deviations between model and data. In a typical scientific problem the residuals should be weighted so that each deviate has a Gaussian sigma of 1.0. If x represents values of the independent variable, y represents a measurement for each value of x, and err represents the error in the measurements, then the deviates could be calculated as follows:
for {set i 0} {$i<$m} {incr i} {
    lset deviates $i [expr {([lindex $y $i] - [f [lindex $x $i]])/[lindex $err $i]}]
}
where m is the number of data points, and f is the function representing the model evaluated at x. If err are the 1-sigma uncertainties in y, then the sum of the squared deviates is the total chi-squared value, which mpfit seeks to minimize. Simple constraints are placed on parameter values by adding objects of class ::tclopt::ParameterMpfit to mpfit with the method [::tclopt::Mpfit::addPars], where other parameter-specific options can be set. For details of how to specify constraints, please see the description of the [::tclopt::parCreate] procedure. Please note that the order in which we attach parameter objects is the order in which values will be supplied to the minimized function, and the order in which results will be written to the X property of the class. Example of a user-defined function (using the linear equation t=a+b*x):
proc f {xall pdata args} {
    set x [dget $pdata x]
    set y [dget $pdata y]
    set ey [dget $pdata ey]
    foreach xVal $x yVal $y eyVal $ey {
        set f [= {[@ $xall 0]+[@ $xall 1]*$xVal}]
        lappend fval [= {($yVal-$f)/$eyVal}]
    }
    return [dcreate fvec $fval]
}
where xall is the list of initial parameter values, and pdata is a dictionary that contains the x, y and ey lists, each of length m. It returns a dictionary with the residual values. An alternative form of the function f could also provide analytical derivatives:
proc quadfunc {xall pdata args} {
    set x [dget $pdata x]
    set y [dget $pdata y]
    set ey [dget $pdata ey]
    foreach xVal $x yVal $y eyVal $ey {
        lappend fvec [= {($yVal-[@ $xall 0]-[@ $xall 1]*$xVal-[@ $xall 2]*$xVal*$xVal)/$eyVal}]
    }
    if {[@ $args 0]!=""} {
        set derivs [@ $args 0]
        foreach deriv $derivs {
            if {$deriv==0} {
                foreach xVal $x yVal $y eyVal $ey {
                    lappend dvec [= {-1/$eyVal}]
                }
            }
            if {$deriv==1} {
                foreach xVal $x yVal $y eyVal $ey {
                    lappend dvec [= {(-$xVal)/$eyVal}]
                }
            }
            if {$deriv==2} {
                foreach xVal $x yVal $y eyVal $ey {
                    lappend dvec [= {(-$xVal*$xVal)/$eyVal}]
                }
            }
        }
        return [dcreate fvec $fvec dvec $dvec]
    } else {
        return [dcreate fvec $fvec]
    }
}
The first element of the args list is a list specifying the ordinal numbers of the parameters for which we need to calculate the analytical derivative. In this case, the returned dvec list contains the derivative at each x point for each specified parameter, following the same order as in the input list. For example, if the input list is {0, 2} and the number m of x points is 3, the dvec list will look like this:
⎛⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞  ⎞
⎜⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟  ⎟
⎜⎝dp0⎠   ⎝dp0⎠   ⎝dp0⎠   ⎝dp2⎠   ⎝dp2⎠   ⎝dp2⎠  ⎟
⎝     x0      x1      x2      x0      x1      x2⎠
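A small helper can make this flattened layout explicit. This is an illustrative sketch, not part of the package: for m points, the derivative of the k-th requested parameter at point i sits at index k*m+i.

```tcl
proc dvecAt {dvec m k i} {
    # dvec - flattened derivatives list returned by the user function
    # m    - number of data points
    # k    - position of the parameter in the requested-derivatives list
    # i    - data point index
    # One m-length run per requested parameter, so the offset is k*m+i.
    return [lindex $dvec [expr {$k*$m + $i}]]
}

# For the {0, 2} example with m=3: first the run for dp0, then the run for dp2.
set dvec {1 2 3 10 20 30}
dvecAt $dvec 3 1 2 ;# => 30, i.e. df/dp2 at x2
```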
Description of keys and data in returned dictionary:
-bestnorm | Final chi^2. |
-orignorm | Starting value of chi^2. |
-status | Fitting status code. |
-niter | Number of iterations. |
-nfev | Number of function evaluations. |
-npar | Total number of parameters. |
-nfree | Number of free parameters. |
-npegged | Number of pegged parameters. |
-nfunc | Number of residuals (= num. of data points) |
-resid | List of final residuals. |
-xerror | Final parameter uncertainties (1-sigma), in the order of elements in Pars property dictionary. |
-x | Final parameters values list in the order of elements in Pars property dictionary. |
-debug | String with derivatives debugging output. |
-covar | Final parameters covariance matrix. |
You can also access all results through the [my configure propertyName] mechanism.
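As the run method's source shows, the dictionary keys are stored without the leading dash, so plain dict get works. A minimal sketch using a stand-in dictionary with illustrative values; the real dictionary comes from calling run on the optimizer object.

```tcl
# Stand-in for the dictionary returned by [$optimizer run]; the values here are
# illustrative only, the keys match those created in the run method.
set results [dict create bestnorm 12.3 status {Convergence in chi-square value} \
        niter 7 x {1.0 2.0}]
set chi2   [dict get $results bestnorm]   ;# final chi^2
set fitted [dict get $results x]          ;# final parameter values, in addPars order
```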
Return value
Object of class ::tclopt::Mpfit.
method constructor {args} {
    # Creates optimization object that does least squares fitting using modified Levenberg-Marquardt algorithm.
    #  -funct - name of the procedure that should be minimized
    #  -m - number of data points
    #  -pdata - list or dictionary that provides private data to funct that is needed to evaluate residuals. Usually
    #    it contains x and y values lists, but you can provide any data necessary for function residuals evaluation.
    #    Will be passed upon each function evaluation without modification.
    #  -ftol - control termination of mpfit. Termination occurs when both the actual and predicted relative
    #    reductions in the sum of squares are at most ftol. Therefore, ftol measures the relative error desired
    #    in the sum of squares. Value must be a float greater than zero, default is 1e-10.
    #  -xtol - control termination of mpfit. Termination occurs when the relative error between two consecutive
    #    iterates is at most xtol. Therefore, xtol measures the relative error desired in the approximate solution.
    #    Value must be a float greater than zero, default is 1e-10.
    #  -gtol - control termination of mpfit. Termination occurs when the cosine of the angle between fvec and any
    #    column of the jacobian is at most gtol in absolute value. Therefore, gtol measures the orthogonality
    #    desired between the function vector and the columns of the jacobian. Value must be a float greater than
    #    zero, default is 1e-10.
    #  -maxfev - control termination of mpfit. Termination occurs when the number of calls to funct is at least
    #    maxfev by the end of an iteration. Value must be a positive integer, default is 0. If it equals 0, the
    #    number of evaluations is not restricted.
    #  -stepfactor - used in determining the initial step bound. This bound is set to the product of factor and the
    #    euclidean norm of diag*x if nonzero, or else to factor itself. In most cases factor should lie in the
    #    interval (.1,100.). 100. is a generally recommended value. Value must be a float greater than zero,
    #    default is 100.
    #  -covtol - range tolerance for covariance calculation. Value must be a float greater than zero, default
    #    is 1e-14.
    #  -maxiter - maximum number of iterations. If maxiter equals 0, then basic error checking is done, and
    #    parameter errors/covariances are estimated based on input parameter values, but no fitting iterations are
    #    done. Value must be a positive integer, default is 200.
    #  -epsfcn - finite derivative step size. Value must be a float greater than zero, default is 2.2204460e-16.
    #  -nofinitecheck - enable check for infinite quantities, default is off.
    # Returns: object of class
    #
    # Class uses the Levenberg-Marquardt technique to solve the least-squares problem. In its typical use, it will
    # be used to fit a user-supplied function (the "model") to user-supplied data points (the "data") by adjusting
    # a set of parameters. mpfit is based upon MINPACK-1 (LMDIF.F) by Moré and collaborators.
    # The user-supplied function should compute an array of weighted deviations between model and data. In a
    # typical scientific problem the residuals should be weighted so that each deviate has a Gaussian sigma of 1.0.
    # If x represents values of the independent variable, y represents a measurement for each value of x, and err
    # represents the error in the measurements, then the deviates could be calculated as follows:
    # ```
    # for {set i 0} {$i<$m} {incr i} {
    #     lset deviates $i [expr {([lindex $y $i] - [f [lindex $x $i]])/[lindex $err $i]}]
    # }
    # ```
    # where m is the number of data points, and where f is the function representing the model evaluated at x. If
    # err are the 1-sigma uncertainties in y, then the sum of the squared deviates is the total chi-squared value,
    # which mpfit seeks to minimize.
    # Simple constraints are placed on parameter values by adding objects of class [::tclopt::ParameterMpfit] to
    # mpfit with method [::tclopt::Mpfit::addPars], where other parameter-specific options can be set.
    # For details of how to specify constraints, please look at the description of [::tclopt::parCreate] procedure.
    # Please note that the order in which we attach parameter objects is the order in which values will be supplied
    # to the minimized function, and the order in which results will be written to the X property of the class.
    # Example of a user-defined function (using the linear equation t=a+b*x):
    # ```
    # proc f {xall pdata args} {
    #     set x [dget $pdata x]
    #     set y [dget $pdata y]
    #     set ey [dget $pdata ey]
    #     foreach xVal $x yVal $y eyVal $ey {
    #         set f [= {[@ $xall 0]+[@ $xall 1]*$xVal}]
    #         lappend fval [= {($yVal-$f)/$eyVal}]
    #     }
    #     return [dcreate fvec $fval]
    # }
    # ```
    # where xall is the list of initial parameter values, and pdata is a dictionary that contains the x, y and ey
    # lists, each of length m. It returns a dictionary with the residual values.
    # An alternative form of the function f could also provide analytical derivatives:
    # ```
    # proc quadfunc {xall pdata args} {
    #     set x [dget $pdata x]
    #     set y [dget $pdata y]
    #     set ey [dget $pdata ey]
    #     foreach xVal $x yVal $y eyVal $ey {
    #         lappend fvec [= {($yVal-[@ $xall 0]-[@ $xall 1]*$xVal-[@ $xall 2]*$xVal*$xVal)/$eyVal}]
    #     }
    #     if {[@ $args 0]!=""} {
    #         set derivs [@ $args 0]
    #         foreach deriv $derivs {
    #             if {$deriv==0} {
    #                 foreach xVal $x yVal $y eyVal $ey {
    #                     lappend dvec [= {-1/$eyVal}]
    #                 }
    #             }
    #             if {$deriv==1} {
    #                 foreach xVal $x yVal $y eyVal $ey {
    #                     lappend dvec [= {(-$xVal)/$eyVal}]
    #                 }
    #             }
    #             if {$deriv==2} {
    #                 foreach xVal $x yVal $y eyVal $ey {
    #                     lappend dvec [= {(-$xVal*$xVal)/$eyVal}]
    #                 }
    #             }
    #         }
    #         return [dcreate fvec $fvec dvec $dvec]
    #     } else {
    #         return [dcreate fvec $fvec]
    #     }
    # }
    # ```
    # The first element of the `args` list is a list specifying the ordinal numbers of the parameters for which we
    # need to calculate the analytical derivative. In this case, the returned `dvec` list contains the derivative
    # at each x point for each specified parameter, following the same order as in the input list. For example, if
    # the input list is {0, 2} and the number m of x points is 3, the `dvec` list will look like this:
    # ```
    # ⎛⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞   ⎛df ⎞  ⎞
    # ⎜⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟   ⎜───⎟  ⎟
    # ⎜⎝dp0⎠   ⎝dp0⎠   ⎝dp0⎠   ⎝dp2⎠   ⎝dp2⎠   ⎝dp2⎠  ⎟
    # ⎝     x0      x1      x2      x0      x1      x2⎠
    # ```
    #
    # Description of keys and data in returned dictionary:
    #  -bestnorm - final chi^2
    #  -orignorm - starting value of chi^2
    #  -status - fitting status code
    #  -niter - number of iterations
    #  -nfev - number of function evaluations
    #  -npar - total number of parameters
    #  -nfree - number of free parameters
    #  -npegged - number of pegged parameters
    #  -nfunc - number of residuals (= num. of data points)
    #  -resid - list of final residuals
    #  -xerror - final parameter uncertainties (1-sigma), in the order of elements in `Pars` property dictionary
    #  -x - final parameters values list in the order of elements in `Pars` property dictionary
    #  -debug - string with derivatives debugging output
    #  -covar - final parameters covariance matrix
    # You can also access all results by [my configure propertyName] mechanism.
    #
    # Synopsis: -funct value -m value -pdata value ?-ftol value? ?-xtol value? ?-gtol value? ?-stepfactor value?
    #   ?-covtol value? ?-maxiter value? ?-maxfev value? ?-epsfcn value? ?-nofinitecheck?
    set arguments [argparse -inline -help {Creates optimization object that does least squares fitting using\
            modified Levenberg-Marquardt algorithm. For more detailed description please see documentation} {
        {-funct= -required -help {Name of the procedure that should be minimized}}
        {-m= -required -help {Number of data points}}
        {-pdata= -default {} -help {List or dictionary that provides private data to funct that is needed to\
                evaluate residuals. Usually it contains x and y values lists, but you can provide any data\
                necessary for function residuals evaluation. Will be passed upon each function evaluation without\
                modification}}
        {-ftol= -default 1e-10 -help {Control termination of mpfit. Termination occurs when both the actual and\
                predicted relative reductions in the sum of squares are at most ftol}}
        {-xtol= -default 1e-10 -help {Control termination of mpfit. Termination occurs when the relative error\
                between two consecutive iterates is at most xtol}}
        {-gtol= -default 1e-10 -help {Control termination of mpfit. Termination occurs when the cosine of the\
                angle between fvec and any column of the jacobian is at most gtol in absolute value}}
        {-stepfactor= -default 100 -help {Used in determining the initial step bound. This bound is set to the\
                product of factor and the euclidean norm of diag*x if nonzero, or else to factor itself}}
        {-covtol= -default 1e-14 -help {Range tolerance for covariance calculation}}
        {-maxiter= -default 200 -help {Maximum number of iterations}}
        {-maxfev= -default 0 -help {Control termination of mpfit. Termination occurs when the number of calls to\
                funct is at least maxfev by the end of an iteration. If it equals to 0, number of evaluations is\
                not restricted}}
        {-epsfcn= -default 2.2204460e-16 -help {Finite derivative step size}}
        {-nofinitecheck -boolean -help {Enable check for infinite quantities}}
    }]
    dict for {elName elValue} $arguments {
        my configure -$elName $elValue
    }
}
addPars [::tclopt::Mpfit]
Attaches parameters to the optimizer object.
Details
Parameters
params | References to objects of class ::tclopt::ParameterMpfit. |
method addPars {args} {
    argparse -help {Attaches parameters to optimizer object} {
        {params -catchall -help {References to objects of class '::tclopt::ParameterMpfit'}}
    }
    foreach arg $params {
        set argClass [info object class $arg]
        if {$argClass ne {::tclopt::ParameterMpfit}} {
            return -code error "Only ::tclopt::ParameterMpfit could be added to optimizer, '$argClass' was provided"
        }
        lappend parsNamesList [$arg configure -name]
    }
    if {[info exists Pars]} {
        lappend parsNamesList {*}[my getAllParsNames]
    }
    set dup [my duplListCheck $parsNamesList]
    if {$dup ne {}} {
        return -code error "Optimizer already contains parameter with name '$dup'"
    }
    foreach arg $params {
        set parName [$arg configure -name]
        dict append Pars $parName $arg
    }
    return
}
getAllPars [::tclopt::Mpfit]
Gets references of all parameters objects.
Details
Parameters
Return value
Dictionary mapping parameter names to parameter object references.
method getAllPars {args} {
    # Gets references of all parameters objects.
    # Returns: dictionary mapping parameter names to object references
    argparse -help {Gets references of all parameters objects. Returns: dictionary of references} {}
    if {![info exists Pars]} {
        return -code error {There are no parameters attached to optimizer}
    } else {
        return $Pars
    }
}
getAllParsNames [::tclopt::Mpfit]
Gets names of all parameters.
Details
Parameters
Return value
List of parameter names.
method getAllParsNames {args} {
    # Gets names of all parameters.
    # Returns: list of parameter names
    argparse -help {Gets names of all parameters. Returns: list of elements names} {}
    if {![info exists Pars]} {
        return -code error {There are no parameters attached to optimizer}
    } else {
        return [dkeys $Pars]
    }
}
run [::tclopt::Mpfit]
Runs optimization.
Details
Return value
Dictionary containing the resulting data.
method run {} { # Runs optimization. # Returns: dictionary containing resulted data set sideMap [dict create auto 0 right 1 left -1 both 2 an 3] if {![info exists Pars]} { return -code error {At least one parameter must be attached to optimizer before call to run} } set pars [dvalues [my getAllPars]] set npar [llength $pars] foreach par $pars { lappend xall [$par configure -initval] } set qanylim false set npegged 0 set nfev 0 set info 0 set fnorm -1.0 set fnorm1 -1.0 set xnorm -1.0 set delta 0.0 foreach par $pars { if {[$par configure -fixed]} { lappend pfixed true } else { lappend pfixed false } if {[catch {$par configure -step}]} { lappend step 0 } else { lappend step [$par configure -step] } if {[catch {$par configure -relstep}]} { lappend dstep 0 } else { lappend dstep [$par configure -relstep] } lappend mpside [dict get $sideMap [$par configure -side]] if {![$par configure -debugder]} { lappend ddebug 0 lappend ddrtol 0 lappend ddatol 0 } else { lappend ddebug 1 lappend ddrtol [$par configure -derivreltol] lappend ddatol [$par configure -derivabstol] } } # Finish up the free parameters set nfree 0 for {set i 0} {$i<$npar} {incr i} { lappend ifree 0 } set j 0 for {set i 0} {$i<$npar} {incr i} { if {![@ $pfixed $i]} { incr nfree lset ifree $j $i incr j } } if {$nfree==0} { return -code error {All parameters are fixed, optimization is not possible} } # allocate lists with zeros for {set i 0} {$i<$npar} {incr i} { lappend qllim 0 lappend qulim 0 lappend llim 0 lappend ulim 0 } for {set i 0} {$i<$nfree} {incr i} { set par [@ $pars [@ $ifree $i]] if {[catch {$par configure -lowlim}]} { lset qllim $i 0 lset llim $i 0 } else { lset qllim $i 1 lset llim $i [$par configure -lowlim] } if {[catch {$par configure -uplim}]} { lset qulim $i 0 lset ulim $i 0 } else { lset qulim $i 1 lset ulim $i [$par configure -uplim] } if {li($qllim,$i) || li($qulim,$i)} { set qanylim true } } if {$m<$nfree} { return -code error "Degree of freedom check failed because of 'm=$m>=n=$nfree'" 
} # allocate temporary storage for {set i 0} {$i<$npar} {incr i} { lappend diag 0 } for {set i 0} {$i<$m} {incr i} { lappend wa4 0 } set ldfjac $m # Evaluate user function with initial parameter values set fvec [dget [$funct $xall $pdata] fvec] if {[llength $fvec]!=$m} { return -code error "Length of list '[llength $fvec]' returned from the function is less than m '$m' value" } incr nfev set fnorm [my Enorm $fvec] #puts $fnorm set orignorm [= {$fnorm*$fnorm}] # Make a new copy set xnew $xall # Transfer free parameters to 'x' for {set i 0} {$i<$nfree} {incr i} { lappend x [@ $xall [@ $ifree $i]] } # Initialize Levelberg-Marquardt parameter and iteration counter set par 0.0 set iter 1 for {set i 0} {$i<$nfree} {incr i} { lappend qtf 0 } # Beginning of the outer loop while {true} { # puts "beginning of the outer loop" for {set i 0} {$i<$nfree} {incr i} { lset xnew [@ $ifree $i] [@ $x $i] } # Calculate the jacobian matrix set fdjac2Data [my Fdjac2 $funct $ifree $nfree $xnew $fvec $ldfjac $epsfcn $pdata $nfev $step $dstep $mpside $qulim $ulim $ddebug $ddrtol $ddatol] set fjac [dget $fdjac2Data fjac] set nfev [dget $fdjac2Data nfev] if {[dexist $fdjac2Data debug]} { lappend debugOutput {*}[dget $fdjac2Data debug] } # Determine if any of the parameters are pegged at the limits if {$qanylim} { for {set j 0} {$j<$nfree} {incr j} { set lpegged [= {li($qllim,$j) && (li($x,$j)==li($llim,$j))}] set upegged [= {li($qulim,$j) && (li($x,$j)==li($ulim,$j))}] set sum 0 # If the parameter is pegged at a limit, compute the gradient direction if {$lpegged || $upegged} { set ij [= {$j*$ldfjac}] for {set i 0} {$i<$m} {incr i} { set sum [= {$sum+li($fvec,$i)*li($fjac,$ij)}] incr ij } } # If pegged at lower limit and gradient is toward negative then reset gradient to zero if {$lpegged && ($sum>0)} { set ij [= {$j*$ldfjac}] for {set i 0} {$i<$m} {incr i} { lset fjac $ij 0 incr ij } } # If pegged at upper limit and gradient is toward positive then reset gradient to zero if {$upegged && 
($sum<0)} { set ij [= {$j*$ldfjac}] for {set i 0} {$i<$m} {incr i} { lset fjac $ij 0 incr ij } } } } # Compute the QR factorization of the jacobian set qfracData [my Qfrac $nfree $fjac $ldfjac 1 $nfree] set ipvt [dget $qfracData ipvt] set fjac [dget $qfracData a] set wa1 [dget $qfracData rdiag] set wa2 [dget $qfracData acnorm] set wa3 [dget $qfracData wa] # on the first iteration and if mode is 1, scale according to the norms of the columns of the initial jacobian. if {$iter==1} { for {set j 0} {$j<$nfree} {incr j} { lset diag [@ $ifree $j] [@ $wa2 $j] if {li($wa2,$j)==0.0} { lset diag [@ $ifree $j] 1.0 } } # on the first iteration, calculate the norm of the scaled x and initialize the step bound delta. for {set j 0} {$j<$nfree} {incr j} { lset wa3 $j [= {li($diag,li($ifree,$j))*li($x,$j)}] } set xnorm [my Enorm $wa3] set delta [= {$stepfactor*$xnorm}] if {$delta==0.0} { set delta $stepfactor } } # form (q transpose)*fvec and store the first n components in qtf for {set i 0} {$i<$m} {incr i} { lset wa4 $i [@ $fvec $i] } set jj 0 for {set j 0} {$j<$nfree} {incr j} { set temp3 [@ $fjac $jj] if {$temp3!=0.0} { set sum 0.0 set ij $jj for {set i $j} {$i<$m} {incr i} { set sum [= {$sum+li($fjac,$ij)*li($wa4,$i)}] incr ij; # fjac[i+m*j] } set temp [= {-$sum/$temp3}] set ij $jj for {set i $j} {$i<$m} {incr i} { lset wa4 $i [= {li($wa4,$i)+li($fjac,$ij)*$temp}] incr ij; # fjac[i+m*j] } } lset fjac $jj [@ $wa1 $j] incr jj [= {$m+1}]; # fjac[j+m*j]" lset qtf $j [@ $wa4 $j] } # (From this point on, only the square matrix, consisting of the triangle of R, is needed.) if {$nofinitecheck} { # Check for overflow. This should be a cheap test here since FJAC # has been reduced to a (small) square matrix, and the test is O(N^2). 
set off 0 set nonfinite 0 for {set j 0} {$j<$nfree} {incr j} { for {set i 0} {$i<$nfree} {incr i} { if {li($fjac,$off+$i)>$::tclopt::MP_GIANT} { set nonfinite 1 } } incr off $ldfjac } if {$nonfinite} { return -code error {Overflow occured during finite check} } } # compute the norm of the scaled gradient. set gnorm 0.0 if {$fnorm!=0.0} { set jj 0 for {set j 0} {$j<$nfree} {incr j} { set l [@ $ipvt $j] if {li($wa2,$l)!=0.0} { set sum 0.0 set ij $jj for {set i 0} {$i<=$j} {incr i} { set sum [= {$sum+li($fjac,$ij)*li($qtf,$i)/$fnorm}] incr ij; # fjac[i+m*j] } set gnorm [my Dmax1 $gnorm [= {abs($sum/li($wa2,$l))}]] } incr jj $m } } # test for convergence of the gradient norm. if {$gnorm<=$gtol} { set info {Convergence in orthogonality} } if {$info!=0} { break } if {$maxiter==0} { set info {Maximum number of iterations reached} break } # rescale for {set j 0} {$j<$nfree} {incr j} { lset diag [@ $ifree $j] [my Dmax1 [@ $diag [@ $ifree $j]] [@ $wa2 $j]] } # beginning of the inner loop. while {true} { # determine the levenberg-marquardt parameter. set lmparData [my Lmpar $nfree $fjac $ldfjac $ipvt $ifree $diag $qtf $delta $par] set fjac [dget $lmparData r] set par [dget $lmparData par] set wa1 [dget $lmparData x] set wa2 [dget $lmparData sdiag] set wa3 [dget $lmparData wa1] # store the direction p and x + p. calculate the norm of p. for {set j 0} {$j<$nfree} {incr j} { lset wa1 $j [= {-li($wa1,$j)}] } set alpha 1.0 if {!$qanylim} { # No parameter limits, so just move to new position WA2 for {set j 0} {$j<$nfree} {incr j} { lset wa2 $j [= {li($x,$j)+li($wa1,$j)}] } } else { # Respect the limits. If a step were to go out of bounds, then # we should take a step in the same direction but shorter distance. # The step should take us right to the limit in that case. 
for {set j 0} {$j<$nfree} {incr j} { set lpegged [= {li($qllim,$j) && (li($x,$j)<=li($llim,$j))}] set upegged [= {li($qulim,$j) && (li($x,$j)>=li($ulim,$j))}] set dwa1 [= {abs(li($wa1,$j))>$::tclopt::MP_MACHEP0}] if {$lpegged && (li($wa1,$j)<0)} { lset wa1 $j 0 } if {$upegged && (li($wa1,$j)>0)} { lset wa1 $j 0 } if {$dwa1 && li($qllim,$j) && ((li($x,$j)+li($wa1,$j))<li($llim,$j))} { set alpha [my Dmin1 $alpha [= {(li($llim,$j)-li($x,$j))/li($wa1,$j)}]] } if {$dwa1 && li($qulim,$j) && ((li($x,$j)+li($wa1,$j))>li($ulim,$j))} { set alpha [my Dmin1 $alpha [= {(li($ulim,$j)-li($x,$j))/li($wa1,$j)}]] } } # Scale the resulting vector, advance to the next position for {set j 0} {$j<$nfree} {incr j} { lset wa1 $j [= {li($wa1,$j)*$alpha}] lset wa2 $j [= {li($x,$j)+li($wa1,$j)}] # Adjust the output values. If the step put us exactly on a boundary, make sure it is exact. set sgnu [= {(li($ulim,$j)>=0) ? 1 : -1}] set sgnl [= {(li($llim,$j)>=0) ? 1 : -1}] set ulim1 [= {li($ulim,$j)*(1-$sgnu*$::tclopt::MP_MACHEP0)- ((li($ulim,$j)==0) ? $::tclopt::MP_MACHEP0 : 0)}] set llim1 [= {li($llim,$j)*(1+$sgnl*$::tclopt::MP_MACHEP0)+ ((li($llim,$j)==0) ? $::tclopt::MP_MACHEP0 : 0)}] if {li($qulim,$j) && (li($wa2,$j)>=$ulim1)} { lset wa2 $j [@ $ulim $j] } if {li($qllim,$j) && (li($wa2,$j)<=$llim1)} { lset wa2 $j [@ $llim $j] } } } for {set j 0} {$j<$nfree} {incr j} { lset wa3 $j [= {li($diag,li($ifree,$j))*li($wa1,$j)}] } set pnorm [my Enorm $wa3] # on the first iteration, adjust the initial step bound. if {$iter==1} { set delta [my Dmin1 $delta $pnorm] } # evaluate the function at x + p and calculate its norm. for {set i 0} {$i<$nfree} {incr i} { lset xnew [@ $ifree $i] [@ $wa2 $i] } set functData [$funct $xnew $pdata] set wa4 [dget $functData fvec] incr nfev set fnorm1 [my Enorm $wa4] # compute the scaled actual reduction. 
            set actred -1.0
            if {[= {0.1*$fnorm1}]<$fnorm} {
                set temp [= {$fnorm1/$fnorm}]
                set actred [= {1.0-$temp*$temp}]
            }
            # compute the scaled predicted reduction and the scaled directional derivative.
            set jj 0
            for {set j 0} {$j<$nfree} {incr j} {
                lset wa3 $j 0.0
                set l [@ $ipvt $j]
                set temp [@ $wa1 $l]
                set ij $jj
                for {set i 0} {$i<=$j} {incr i} {
                    lset wa3 $i [= {li($wa3,$i)+li($fjac,$ij)*$temp}]
                    incr ij
                }
                incr jj $m
            }
            # Remember, alpha is the fraction of the full LM step actually taken
            set temp1 [= {[my Enorm $wa3]*$alpha/$fnorm}]
            set temp2 [= {sqrt($par*$alpha)*$pnorm/$fnorm}]
            set prered [= {$temp1*$temp1+($temp2*$temp2)/0.5}]
            set dirder [= {-($temp1*$temp1+$temp2*$temp2)}]
            # compute the ratio of the actual to the predicted reduction
            set ratio 0.0
            if {$prered!=0.0} {
                set ratio [= {$actred/$prered}]
            }
            # update the step bound.
            if {$ratio<=0.25} {
                if {$actred>=0.0} {
                    set temp 0.5
                } else {
                    set temp [= {0.5*$dirder/($dirder+0.5*$actred)}]
                }
                if {((0.1*$fnorm1)>=$fnorm) || ($temp<0.1)} {
                    set temp 0.1
                }
                set delta [= {$temp*[my Dmin1 $delta [= {$pnorm/0.1}]]}]
                set par [= {$par/$temp}]
            } else {
                if {($par==0.0) || ($ratio>=0.75)} {
                    set delta [= {$pnorm/0.5}]
                    set par [= {0.5*$par}]
                }
            }
            # test for successful iteration.
            if {$ratio>=1e-4} {
                # successful iteration. update x, fvec, and their norms.
                for {set j 0} {$j<$nfree} {incr j} {
                    lset x $j [@ $wa2 $j]
                    lset wa2 $j [= {li($diag,li($ifree,$j))*li($x,$j)}]
                }
                for {set i 0} {$i<$m} {incr i} {
                    lset fvec $i [@ $wa4 $i]
                }
                set xnorm [my Enorm $wa2]
                set fnorm $fnorm1
                incr iter
            }
            # tests for convergence.
            if {(abs($actred)<=$ftol) && ($prered<=$ftol) && (0.5*$ratio<=1.0)} {
                set info {Convergence in chi-square value}
            }
            if {$delta<=($xtol*$xnorm)} {
                set info {Convergence in parameter value}
            }
            if {(abs($actred)<=$ftol) && ($prered<=$ftol) && (0.5*$ratio<=1.0) && ($info==2)} {
                set info {Both convergence in parameter value and convergence in chi-square value hold}
            }
            if {$info!=0} {
                break
            }
            # tests for termination and stringent tolerances.
            if {($maxfev>0) && ($nfev>=$maxfev)} {
                # Too many function evaluations
                set info {Maximum number of function evaluations reached}
            }
            if {$iter>=$maxiter} {
                # Too many iterations
                set info {Maximum number of iterations reached}
            }
            if {(abs($actred)<=$::tclopt::MP_MACHEP0) && ($prered<=$::tclopt::MP_MACHEP0) && (0.5*$ratio<=1.0)} {
                set info {ftol is too small, no further improvement}
            }
            if {$delta<=$::tclopt::MP_MACHEP0*$xnorm} {
                set info {xtol is too small, no further improvement}
            }
            if {$gnorm<=$::tclopt::MP_MACHEP0} {
                set info {gtol is too small, no further improvement}
            }
            if {$info!=0} {
                break
            }
            # end of the inner loop. repeat if iteration unsuccessful.
            if {$ratio<1e-4} {
                continue
            } else {
                break
            }
        }
        if {$info!=0} {
            break
        }
    }
    # termination, either normal or user imposed.
    for {set i 0} {$i<$nfree} {incr i} {
        lset xall [@ $ifree $i] [@ $x $i]
    }
    if {$info>0} {
        set functData [$funct $xall $pdata]
        set fvec [dget $functData fvec]
        incr nfev
    }
    # Compute number of pegged parameters
    set npegged 0
    for {set i 0} {$i<$npar} {incr i} {
        set par [@ $pars $i]
        if {[catch {$par configure -lowlim}]} {
            set parsLim0 0
        } else {
            set parsLim0 [$par configure -lowlim]
        }
        if {[catch {$par configure -uplim}]} {
            set parsLim1 0
        } else {
            set parsLim1 [$par configure -uplim]
        }
        if {($parsLim0 && ($parsLim0==li($xall,$i))) || ($parsLim1 && ($parsLim1==li($xall,$i)))} {
            incr npegged
        }
    }
    # Compute and return the covariance matrix and/or parameter errors
    set covarData [my Covar $nfree $fjac $ldfjac $ipvt $covtol]
    set fjac [dget $covarData r]
    set wa2 [dget $covarData wa]
    for {set j 0} {$j<$npar*$npar} {incr j} {
        lappend covar 0.0
    }
    # Transfer the covariance array
    for {set j 0} {$j<$nfree} {incr j} {
        for {set i 0} {$i<$nfree} {incr i} {
            lset covar [= {int(li($ifree,$j)*$npar+li($ifree,$i))}] [@ $fjac [= {int($j*$ldfjac+$i)}]]
        }
    }
    for {set j 0} {$j<$npar} {incr j} {
        lappend xerror 0.0
    }
    for {set j 0} {$j<$nfree} {incr j} {
        set cc [@ $fjac [= {int($j*$ldfjac+$j)}]]
        if {$cc>0} {
            lset xerror [@ $ifree $j] [= {sqrt($cc)}]
        }
    }
    set bestnorm [= {[my Dmax1 $fnorm $fnorm1]**2}]
    for {set j 0} {$j<$m} {incr j} {
        lappend resid [@ $fvec $j]
    }
    if {[info exists debugOutput]} {
        set debugOutput [join $debugOutput \n]
    } else {
        set debugOutput {}
    }
    set resDict [dcreate bestnorm $bestnorm orignorm $orignorm status $info niter $iter nfev $nfev npar $npar nfree $nfree npegged $npegged nfunc $m resid $resid xerror $xerror x $xall debug $debugOutput covar $covar]
    set results $resDict
    return $resDict
}
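As a usage sketch of the method above: `run` returns (and stores in -results) a dictionary with the keys assembled in the final `dcreate` call (bestnorm, orignorm, status, niter, nfev, npar, nfree, npegged, nfunc, resid, xerror, x, debug, covar), and the body shows that the user function must return a dict with an fvec key holding the residual list. The residual procedure `lineResiduals` and the data below are hypothetical, and the exact `addPars` argument handling is not documented in this section; treat this as an illustration, not a verified call sequence.

```tcl
package require tclopt

# Hypothetical residual procedure for a straight-line model y = a + b*x.
# run evaluates it as [$funct $xall $pdata] and reads the fvec key,
# so it must return a dict containing the residual list under fvec.
proc lineResiduals {xall pdata} {
    lassign $xall a b
    set fvec {}
    foreach xi [dict get $pdata x] yi [dict get $pdata y] {
        lappend fvec [expr {$yi-($a+$b*$xi)}]
    }
    return [dict create fvec $fvec]
}

set pdata [dict create x {0 1 2 3} y {0.1 1.9 4.1 5.9}]
set parA [::tclopt::ParameterMpfit new a 0.0]
set parB [::tclopt::ParameterMpfit new b 1.0]
set opt [::tclopt::Mpfit new -funct lineResiduals -m 4 -pdata $pdata]
$opt addPars $parA $parB
set res [$opt run]
puts "status: [dict get $res status]"    ;# termination message, e.g. convergence or tolerance hit
puts "x:      [dict get $res x]"         ;# fitted parameter values
puts "xerror: [dict get $res xerror]"    ;# 1-sigma errors from the covariance diagonal
puts "chi^2:  [dict get $res bestnorm]"  ;# final sum of squared residuals
```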
Parameter [::tclopt]Top, Main, Index
Method summary
constructor | Constructor for the class. |
configure | Configure properties. |
Properties
Readable: -fixed, -initval, -lowlim, -name, -uplim
Writable: -fixed, -initval, -lowlim, -name, -uplim
Subclasses
constructor [::tclopt::Parameter]Parameter, Top, Main, Index
Parameter new name initval ?-fixed? ?-lowlim value? ?-uplim value?
Details
Parameters
-fixed | Specify that parameter is fixed during optimization, optional. |
-lowlim | Specify lower limit for parameter, must be lower than upper limit if upper limit is provided, optional. |
-uplim | Specify upper limit for parameter, must be higher than lower limit if lower limit is provided, optional. |
initval | Initial value of parameter. |
name | Name of the parameter. |
method constructor {args} {
    argparse -pfirst {
        {-fixed -boolean}
        -lowlim=
        -uplim=
        name
        initval
    }
    my configure -fixed $fixed -initval $initval -name $name
    if {[info exists lowlim]} {
        if {$lowlim>$initval} {
            return -code error {Initial value must be higher than the lower limit}
        }
        my configure -lowlim $lowlim
    }
    if {[info exists uplim]} {
        if {[info exists lowlim]} {
            if {$lowlim>=$uplim} {
                return -code error {Lower limit must be lower than the upper limit}
            }
        }
        if {$uplim<$initval} {
            return -code error {Initial value must be lower than the upper limit}
        }
        my configure -uplim $uplim
    }
}
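As a usage sketch of the constructor above (positional name and initval first, then the optional switches), with hypothetical parameter names:

```tcl
package require tclopt

# Parameter with both limits; the initial value 1.0 lies inside [0.0, 5.0],
# so the range checks in the constructor body pass
set p [::tclopt::Parameter new x 1.0 -lowlim 0.0 -uplim 5.0]

# Parameter held constant during optimization
set q [::tclopt::Parameter new y 2.0 -fixed]

# A limit that conflicts with the initial value raises one of the errors
# shown in the constructor body, e.g.
#   ::tclopt::Parameter new z -1.0 -lowlim 0.0
# fails with: Initial value must be higher than the lower limit
```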
ParameterMpfit [::tclopt]Top, Main, Index
Method summary
constructor | Constructor for the class. |
configure | Configure properties. |
Properties
Readable: -debugder, -derivabstol, -derivreltol, -fixed, -initval, -lowlim, -name, -relstep, -side, -step, -uplim
Writable: -debugder, -derivabstol, -derivreltol, -fixed, -initval, -lowlim, -name, -relstep, -side, -step, -uplim
Superclasses
constructor [::tclopt::ParameterMpfit]ParameterMpfit, Top, Main, Index
Creates parameter object for ::tclopt::Mpfit class.
Details
Parameters
-debugabstol | Absolute error that controls printing of derivatives comparison if absolute error exceeds this value. Requires -debugder and -debugreltol. |
-debugder | Switch to enable console debug logging of user-computed derivatives, as described above. Note that when debugging is enabled, then -side should be set to auto, right, left or both, depending on which numerical derivative you wish to compare to. Requires -debugreltol and -debugabstol values. |
-debugreltol | Relative error that controls printing of derivatives comparison if relative error exceeds this value. Requires -debugder and -debugabstol. |
-fixed | Specify that parameter is fixed during optimization, optional. |
-lowlim | Specify lower limit for parameter, must be lower than upper limit if upper limit is provided, optional. |
-relstep | The relative step size to be used in calculating the numerical derivatives. This number is the fractional size of the step, compared to the parameter value. This value supersedes the -step setting. If the parameter is zero, then a default step size is chosen. |
-side | The sidedness of the finite difference when computing numerical derivatives. This field can take five values: auto : one-sided derivative computed automatically, right : one-sided derivative (f(x+h)-f(x))/h, left : one-sided derivative (f(x)-f(x-h))/h, both : two-sided derivative (f(x+h)-f(x-h))/(2*h), an : user-computed explicit derivatives, where h is the -step parameter described above. The "automatic" one-sided derivative method will choose a direction for the finite difference which does not violate any constraints. The other methods do not perform this check. The two-sided method is in principle more precise, but requires twice as many function evaluations. Default is auto. |
-step | The step size to be used in calculating the numerical derivatives. If set to zero, then the step size is computed automatically, optional. |
-uplim | Specify upper limit for parameter, must be higher than lower limit if lower limit is provided, optional. |
initval | Initial value of parameter. |
name | Name of the parameter. |
Description
Example of building 4 parameters with different constraints:
set par0 [::tclopt::ParameterMpfit new a 1.0 -fixed -side both]
set par1 [::tclopt::ParameterMpfit new b 2.0]
set par2 [::tclopt::ParameterMpfit new c 0.0 -fixed]
set par3 [::tclopt::ParameterMpfit new d 0.1 -lowlim -0.3 -uplim 0.2]
method constructor {args} {
    # Creates parameter object for [::tclopt::Mpfit] class.
    # name - name of the parameter
    # initval - initial value of parameter
    # -fixed - specify that parameter is fixed during optimization, optional
    # -lowlim - specify lower limit for parameter, must be lower than upper limit if upper limit is provided,
    #   optional
    # -uplim - specify upper limit for parameter, must be higher than lower limit if lower limit is provided,
    #   optional
    # -step - the step size to be used in calculating the numerical derivatives. If set to zero, then the step
    #   size is computed automatically, optional
    # -relstep - the *relative* step size to be used in calculating the numerical derivatives. This number is the
    #   fractional size of the step, compared to the parameter value. This value supersedes the -step setting.
    #   If the parameter is zero, then a default step size is chosen.
    # -side - the sidedness of the finite difference when computing numerical derivatives. This field can take five
    #   values: auto : one-sided derivative computed automatically, right : one-sided derivative (f(x+h)-f(x))/h,
    #   left : one-sided derivative (f(x)-f(x-h))/h, both : two-sided derivative (f(x+h)-f(x-h))/(2*h), an :
    #   user-computed explicit derivatives, where h is the -step parameter described above. The "automatic"
    #   one-sided derivative method will choose a direction for the finite difference which does not violate any
    #   constraints. The other methods do not perform this check. The two-sided method is in principle more
    #   precise, but requires twice as many function evaluations. Default is auto.
    # -debugder - switch to enable console debug logging of user-computed derivatives, as described above. Note
    #   that when debugging is enabled, then -side should be set to auto, right, left or both, depending on which
    #   numerical derivative you wish to compare to. Requires -debugreltol and -debugabstol values.
    # -debugreltol - relative error that controls printing of derivatives comparison if relative error exceeds
    #   this value. Requires -debugder and -debugabstol.
    # -debugabstol - absolute error that controls printing of derivatives comparison if absolute error exceeds
    #   this value. Requires -debugder and -debugreltol.
    # Synopsis: value value ?-fixed? ?-lowlim value? ?-uplim value? ?-step value? ?-relstep value? ?-side value?
    #   ?-debugder -debugreltol value -debugabstol value?
    #
    # Example of building 4 parameters with different constraints:
    # ```
    # set par0 [::tclopt::ParameterMpfit new a 1.0 -fixed -side both]
    # set par1 [::tclopt::ParameterMpfit new b 2.0]
    # set par2 [::tclopt::ParameterMpfit new c 0.0 -fixed]
    # set par3 [::tclopt::ParameterMpfit new d 0.1 -lowlim -0.3 -uplim 0.2]
    # ```
    set arguments [argparse -inline -pfirst -help {Creates parameter object for '::tclopt::Mpfit' class} {
        {-fixed -boolean -help {Specify that parameter is fixed during optimization}}
        {-lowlim= -help {Specify lower limit for parameter, must be lower than upper limit if upper limit is provided}}
        {-uplim= -help {Specify upper limit for parameter, must be higher than lower limit if lower limit is provided}}
        {-step= -help {The step size to be used in calculating the numerical derivatives. If set to zero, then the step size is computed automatically}}
        {-relstep= -help {The *relative* step size to be used in calculating the numerical derivatives. This number is the fractional size of the step, compared to the parameter value. This value supersedes the -step setting. If the parameter is zero, then a default step size is chosen}}
        {-side= -enum {auto right left both an} -default auto -help {The sidedness of the finite difference when computing numerical derivatives. This field can take five values: auto : one-sided derivative computed automatically, right : one-sided derivative (f(x+h)-f(x))/h, left : one-sided derivative (f(x)-f(x-h))/h, both : two-sided derivative (f(x+h)-f(x-h))/(2*h), an : user-computed explicit derivatives, where h is the -step parameter described above. The "automatic" one-sided derivative method will choose a direction for the finite difference which does not violate any constraints. The other methods do not perform this check. The two-sided method is in principle more precise, but requires twice as many function evaluations. Default is auto}}
        {-debugder -boolean -require {debugreltol debugabstol} -help {Switch to enable console debug logging of user-computed derivatives, as described above. Note that when debugging is enabled, then -side should be set to auto, right, left or both, depending on which numerical derivative you wish to compare to}}
        {-debugreltol= -help {Relative error that controls printing of derivatives comparison if relative error exceeds this value}}
        {-debugabstol= -help {Absolute error that controls printing of derivatives comparison if absolute error exceeds this value}}
        {name -help {Name of the parameter}}
        {initval -help {Initial value of parameter}}
    }]
    my configure -side [dget $arguments side]
    if {[dexist $arguments step]} {
        my configure -step [dget $arguments step]
    }
    if {[dexist $arguments relstep]} {
        my configure -relstep [dget $arguments relstep]
    }
    my configure -debugder [dget $arguments debugder]
    if {[dget $arguments debugder]} {
        my configure -derivreltol [dget $arguments debugreltol]
        my configure -derivabstol [dget $arguments debugabstol]
    }
    set params {}
    if {[dget $arguments fixed]} {
        lappend params -fixed
    }
    dict for {paramName value} $arguments {
        if {$paramName ni {name initval side step relstep debugder debugreltol debugabstol fixed}} {
            lappend params -$paramName $value
        }
    }
    next [dget $arguments name] [dget $arguments initval] {*}$params
}