template<class TElastix>
class elastix::ConjugateGradient< TElastix >
An optimizer based on the itk::GenericConjugateGradientOptimizer.
A ConjugateGradient optimizer, using the itk::MoreThuenteLineSearchOptimizer. Different conjugate gradient methods can be selected with this optimizer.
This optimizer supports the NewSamplesEveryIteration option. It requests new samples for the computation of each search direction (not during the line search). Note that this makes little sense for a conjugate gradient optimizer, which relies on gradient information from previous iterations; think twice before using the NewSamplesEveryIteration option.
The parameters used in this class are:
- Parameters:
Optimizer: Select this optimizer as follows:
(Optimizer "ConjugateGradient")
GenerateLineSearchIterations: Whether line search iterations should be counted as elastix iterations.
example: (GenerateLineSearchIterations "true")
Can only be specified for all resolutions at once.
Default value: "false".
MaximumNumberOfIterations: The maximum number of iterations in each resolution.
example: (MaximumNumberOfIterations 100 100 50)
Default value: 100.
MaximumNumberOfLineSearchIterations: The maximum number of line search iterations in each resolution.
example: (MaximumNumberOfLineSearchIterations 10 10 5)
Default value: 10.
StepLength: Set the length of the initial step tried by the itk::MoreThuenteLineSearchOptimizer.
example: (StepLength 2.0 1.0 0.5)
Default value: 1.0.
LineSearchValueTolerance: Determines the Wolfe conditions that the itk::MoreThuenteLineSearchOptimizer tries to satisfy.
example: (LineSearchValueTolerance 0.0001 0.0001 0.0001)
Default value: 0.0001.
LineSearchGradientTolerance: Determines the Wolfe conditions that the itk::MoreThuenteLineSearchOptimizer tries to satisfy.
example: (LineSearchGradientTolerance 0.9 0.9 0.9)
Default value: 0.9.
ValueTolerance: Stopping criterion. See the documentation of the itk::GenericConjugateGradientOptimizer for more information.
example: (ValueTolerance 0.001 0.0001 0.000001)
Default value: 0.00001.
GradientMagnitudeTolerance: Stopping criterion. See the documentation of the itk::GenericConjugateGradientOptimizer for more information.
example: (GradientMagnitudeTolerance 0.001 0.0001 0.000001)
Default value: 0.000001.
ConjugateGradientType: A string that defines how 'beta' is computed in each resolution. The following methods are implemented: "SteepestDescent", "FletcherReeves", "PolakRibiere", "DaiYuan", "HestenesStiefel", and "DaiYuanHestenesStiefel". "SteepestDescent" simply sets beta=0. See the source code of the GenericConjugateGradientOptimizer for more information.
example: (ConjugateGradientType "FletcherReeves" "PolakRibiere")
Default value: "DaiYuanHestenesStiefel".
StopIfWolfeNotSatisfied: Whether to stop the optimization if, in an iteration, the Wolfe conditions cannot be satisfied by the itk::MoreThuenteLineSearchOptimizer.
In general it is wise to do so.
example: (StopIfWolfeNotSatisfied "true" "false")
Default value: "true".
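Taken together, a complete optimizer section in an elastix parameter file could look like the fragment below. The per-resolution values are illustrative (taken from the examples above), not tuned recommendations:

```
(Optimizer "ConjugateGradient")
(ConjugateGradientType "DaiYuanHestenesStiefel")
(MaximumNumberOfIterations 100 100 50)
(MaximumNumberOfLineSearchIterations 10 10 5)
(StepLength 2.0 1.0 0.5)
(LineSearchValueTolerance 0.0001 0.0001 0.0001)
(LineSearchGradientTolerance 0.9 0.9 0.9)
(ValueTolerance 0.001 0.0001 0.000001)
(GradientMagnitudeTolerance 0.001 0.0001 0.000001)
(StopIfWolfeNotSatisfied "true")
(GenerateLineSearchIterations "false")
```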
Definition at line 91 of file elxConjugateGradient.h.
 | ConjugateGradient () |
virtual std::string | DeterminePhase () const |
virtual std::string | GetLineSearchStopCondition () const |
void | LineSearch (const ParametersType searchDir, double &step, ParametersType &x, MeasureType &f, DerivativeType &g) override |
bool | TestConvergence (bool firstLineSearchDone) override |
 | ~ConjugateGradient () override=default |
void | AddBetaDefinition (const BetaDefinitionType &name, ComputeBetaFunctionType function) |
virtual double | ComputeBeta (const DerivativeType &previousGradient, const DerivativeType &gradient, const ParametersType &previousSearchDir) |
double | ComputeBetaDY (const DerivativeType &previousGradient, const DerivativeType &gradient, const ParametersType &previousSearchDir) |
double | ComputeBetaDYHS (const DerivativeType &previousGradient, const DerivativeType &gradient, const ParametersType &previousSearchDir) |
double | ComputeBetaFR (const DerivativeType &previousGradient, const DerivativeType &gradient, const ParametersType &previousSearchDir) |
double | ComputeBetaHS (const DerivativeType &previousGradient, const DerivativeType &gradient, const ParametersType &previousSearchDir) |
double | ComputeBetaPR (const DerivativeType &previousGradient, const DerivativeType &gradient, const ParametersType &previousSearchDir) |
double | ComputeBetaSD (const DerivativeType &previousGradient, const DerivativeType &gradient, const ParametersType &previousSearchDir) |
virtual void | ComputeSearchDirection (const DerivativeType &previousGradient, const DerivativeType &gradient, ParametersType &searchDir) |
 | GenericConjugateGradientOptimizer () |
virtual void | LineSearch (const ParametersType searchDir, double &step, ParametersType &x, MeasureType &f, DerivativeType &g) |
void | PrintSelf (std::ostream &os, Indent indent) const override |
virtual void | SetInLineSearch (bool _arg) |
virtual bool | TestConvergence (bool firstLineSearchDone) |
 | ~GenericConjugateGradientOptimizer () override=default |
virtual void | GetScaledDerivative (const ParametersType &parameters, DerivativeType &derivative) const |
virtual MeasureType | GetScaledValue (const ParametersType &parameters) const |
virtual void | GetScaledValueAndDerivative (const ParametersType &parameters, MeasureType &value, DerivativeType &derivative) const |
void | PrintSelf (std::ostream &os, Indent indent) const override |
 | ScaledSingleValuedNonLinearOptimizer () |
void | SetCurrentPosition (const ParametersType &param) override |
virtual void | SetScaledCurrentPosition (const ParametersType &parameters) |
 | ~ScaledSingleValuedNonLinearOptimizer () override=default |
virtual bool | GetNewSamplesEveryIteration () const |
 | OptimizerBase ()=default |
virtual void | SelectNewSamples () |
 | ~OptimizerBase () override=default |
 | BaseComponentSE ()=default |
 | ~BaseComponentSE () override=default |
 | BaseComponent ()=default |
virtual | ~BaseComponent ()=default |