itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer Class Reference

#include <itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h>

Detailed Description

This class implements a preconditioned stochastic gradient descent optimizer with adaptive gain.

If $C(x)$ is a cost function that has to be minimized, the following iterative algorithm is used to find the optimal parameters $x$:

\[ x(k+1) = x(k) - a(t_k) P \frac{dC}{dx} \]

The gain $a(t_k)$ at each iteration $k$ is defined by:

\[ a(t_k) = a / (A + t_k + 1)^\alpha \]

And the time $t_k$ is updated according to:

\[ t_{k+1} = \max\left[ 0, t_k + \operatorname{sigmoid}\left( -g_k^T P g_{k-1} \right) \right] \]

where $g_k$ equals $dC/dx$ at iteration $k$. For $t_0$ the InitialTime is used, which is defined in the superclass (StochasticPreconditionedGradientDescentOptimizer). Whereas this parameter is superfluous in the superclass, in this class it is meaningful.
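The interplay between the decaying gain and the adaptive time can be sketched in a few lines of self-contained C++. The sigmoid shape below follows the one proposed in reference [3]; the actual elastix implementation may differ in detail, so treat this as an illustration of the update rules above, not as the library's code.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Gain schedule: a(t_k) = a / (A + t_k + 1)^alpha
double Gain(double t, double a, double A, double alpha)
{
  return a / std::pow(A + t + 1.0, alpha);
}

// Sigmoid with range (sigmoidMin, sigmoidMax), as proposed in reference [3].
// With the defaults sMax = 1.0, sMin = -0.8 it passes through 0 at x = 0.
double Sigmoid(double x, double sMax, double sMin, double scale)
{
  return sMin + (sMax - sMin) / (1.0 - (sMax / sMin) * std::exp(-x / scale));
}

// Time update: t_{k+1} = max( 0, t_k + sigmoid( -g_k^T P g_{k-1} ) )
double UpdateTime(double t, double innerProduct,
                  double sMax, double sMin, double scale)
{
  return std::max(0.0, t + Sigmoid(-innerProduct, sMax, sMin, scale));
}
```

With the defaults (SigmoidMax = 1.0, SigmoidMin = -0.8), consecutive search directions that agree (positive inner product) push the time down, so the gain $a(t_k)$ stays large; disagreeing directions push the time up and shrink the step size.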

This method is described in the following references:

[1] Y. Qiao, B.P.F. Lelieveldt, M. Staring, "An efficient preconditioner for stochastic gradient descent optimization of image registration," IEEE Transactions on Medical Imaging, 2019. https://doi.org/10.1109/TMI.2019.2897943

[2] P. Cruz, "Almost sure convergence and asymptotical normality of a generalization of Kesten's stochastic approximation algorithm for multidimensional case," Technical Report, 2005. http://hdl.handle.net/2052/74

[3] S. Klein, J.P.W. Pluim, M. Staring, M.A. Viergever, "Adaptive stochastic gradient descent optimisation for image registration," International Journal of Computer Vision, vol. 81, no. 3, pp. 227-239, 2009. http://dx.doi.org/10.1007/s11263-008-0168-y

This optimizer is very suitable for use in combination with a stochastic estimate of the gradient $dC/dx$. For example, in image registration problems it is often advantageous to compute the metric derivative $dC/dx$ on a new set of randomly selected image samples in each iteration. You may set the parameter NewSamplesEveryIteration to "true" to achieve this effect. For more information on this strategy, see reference [3] above.
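In an elastix parameter file this could look roughly as follows. The component name `PreconditionedGradientDescent` matches the elastix wrapper mentioned under GetClassName() below, but the parameter names are assumptions based on common elastix conventions and should be checked against the elastix manual:

```
// Hypothetical elastix parameter file fragment (names are assumptions):
(Optimizer "PreconditionedGradientDescent")
(NewSamplesEveryIteration "true")
(MaximumNumberOfIterations 500)
(SigmoidMax 1.0)
(SigmoidMin -0.8)
(SigmoidScale 0.00000001)
```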

See also
StochasticPreconditionedGradientDescent, AdaptiveStochasticGradientDescentOptimizer

Definition at line 78 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.

Inheritance diagram for itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer:

Public Types

using ConstPointer = SmartPointer< const Self >
 
using Pointer = SmartPointer< Self >
 
using Self = AdaptiveStochasticPreconditionedGradientDescentOptimizer
 
using Superclass = StochasticPreconditionedGradientDescentOptimizer
 
- Public Types inherited from itk::StochasticPreconditionedGradientDescentOptimizer
using ConstPointer = SmartPointer< const Self >
 
using Pointer = SmartPointer< Self >
 
using PreconditionType = vnl_sparse_matrix< PreconditionValueType >
 
using PreconditionValueType = DerivativeType::ValueType
 
using ScaledCostFunctionPointer = ScaledCostFunctionType::Pointer
 
using ScaledCostFunctionType = ScaledSingleValuedCostFunction
 
using ScalesType = NonLinearOptimizer::ScalesType
 
using Self = StochasticPreconditionedGradientDescentOptimizer
 
enum  StopConditionType
 
using Superclass = PreconditionedGradientDescentOptimizer
 
- Public Types inherited from itk::PreconditionedGradientDescentOptimizer
using ConstPointer = SmartPointer< const Self >
 
using Pointer = SmartPointer< Self >
 
using PreconditionType = vnl_sparse_matrix< PreconditionValueType >
 
using PreconditionValueType = DerivativeType::ValueType
 
using ScaledCostFunctionPointer = ScaledCostFunctionType::Pointer
 
using ScaledCostFunctionType = ScaledSingleValuedCostFunction
 
using ScalesType = NonLinearOptimizer::ScalesType
 
using Self = PreconditionedGradientDescentOptimizer
 
enum  StopConditionType { MaximumNumberOfIterations , MetricError , MinimumStepSize }
 
using Superclass = ScaledSingleValuedNonLinearOptimizer
 
- Public Types inherited from itk::ScaledSingleValuedNonLinearOptimizer
using ConstPointer = SmartPointer< const Self >
 
using Pointer = SmartPointer< Self >
 
using ScaledCostFunctionPointer = ScaledCostFunctionType::Pointer
 
using ScaledCostFunctionType = ScaledSingleValuedCostFunction
 
using ScalesType = NonLinearOptimizer::ScalesType
 
using Self = ScaledSingleValuedNonLinearOptimizer
 
using Superclass = SingleValuedNonLinearOptimizer
 

Public Member Functions

virtual const char * GetClassName () const
 
virtual double GetSigmoidMax () const
 
virtual double GetSigmoidMin () const
 
virtual double GetSigmoidScale () const
 
virtual bool GetUseAdaptiveStepSizes () const
 
 ITK_DISALLOW_COPY_AND_MOVE (AdaptiveStochasticPreconditionedGradientDescentOptimizer)
 
virtual void SetSigmoidMax (double _arg)
 
virtual void SetSigmoidMin (double _arg)
 
virtual void SetSigmoidScale (double _arg)
 
virtual void SetUseAdaptiveStepSizes (bool _arg)
 
- Public Member Functions inherited from itk::StochasticPreconditionedGradientDescentOptimizer
virtual void AdvanceOneStep ()
 
virtual const char * GetClassName () const
 
virtual double GetCurrentTime () const
 
virtual double GetInitialTime () const
 
virtual double GetParam_a () const
 
virtual double GetParam_A () const
 
virtual double GetParam_alpha () const
 
 ITK_DISALLOW_COPY_AND_MOVE (StochasticPreconditionedGradientDescentOptimizer)
 
virtual void SetInitialTime (double _arg)
 
virtual void SetParam_a (double _arg)
 
virtual void SetParam_A (double _arg)
 
virtual void SetParam_alpha (double _arg)
 
virtual void StartOptimization ()
 
- Public Member Functions inherited from itk::PreconditionedGradientDescentOptimizer
virtual void AdvanceOneStep ()
 
const cholmod_common * GetCholmodCommon () const
 
const cholmod_factor * GetCholmodFactor () const
 
virtual const char * GetClassName () const
 
virtual double GetConditionNumber () const
 
virtual unsigned int GetCurrentIteration () const
 
virtual double GetDiagonalWeight () const
 
virtual const DerivativeType & GetGradient ()
 
virtual double GetLargestEigenValue () const
 
virtual const double & GetLearningRate ()
 
virtual double GetMinimumGradientElementMagnitude () const
 
virtual const unsigned long & GetNumberOfIterations ()
 
virtual const DerivativeType & GetSearchDirection ()
 
virtual double GetSparsity () const
 
virtual const StopConditionType & GetStopCondition ()
 
virtual const double & GetValue ()
 
 ITK_DISALLOW_COPY_AND_MOVE (PreconditionedGradientDescentOptimizer)
 
virtual void MetricErrorResponse (ExceptionObject &err)
 
virtual void ResumeOptimization ()
 
virtual void SetDiagonalWeight (double _arg)
 
virtual void SetLearningRate (double _arg)
 
virtual void SetMinimumGradientElementMagnitude (double _arg)
 
virtual void SetNumberOfIterations (unsigned long _arg)
 
virtual void SetPreconditionMatrix (PreconditionType &precondition)
 
virtual void StartOptimization ()
 
virtual void StopOptimization ()
 
- Public Member Functions inherited from itk::ScaledSingleValuedNonLinearOptimizer
virtual const char * GetClassName () const
 
const ParametersType & GetCurrentPosition () const override
 
virtual bool GetMaximize () const
 
virtual const ScaledCostFunctionType * GetScaledCostFunction ()
 
virtual const ParametersType & GetScaledCurrentPosition ()
 
bool GetUseScales () const
 
virtual void InitializeScales ()
 
 ITK_DISALLOW_COPY_AND_MOVE (ScaledSingleValuedNonLinearOptimizer)
 
virtual void MaximizeOff ()
 
virtual void MaximizeOn ()
 
void SetCostFunction (CostFunctionType *costFunction) override
 
virtual void SetMaximize (bool _arg)
 
virtual void SetUseScales (bool arg)
 

Static Public Member Functions

static Pointer New ()
 
- Static Public Member Functions inherited from itk::StochasticPreconditionedGradientDescentOptimizer
static Pointer New ()
 
- Static Public Member Functions inherited from itk::PreconditionedGradientDescentOptimizer
static Pointer New ()
 
- Static Public Member Functions inherited from itk::ScaledSingleValuedNonLinearOptimizer
static Pointer New ()
 

Protected Member Functions

 AdaptiveStochasticPreconditionedGradientDescentOptimizer ()
 
virtual void UpdateCurrentTime ()
 
virtual ~AdaptiveStochasticPreconditionedGradientDescentOptimizer ()
 
- Protected Member Functions inherited from itk::StochasticPreconditionedGradientDescentOptimizer
virtual double Compute_a (double k) const
 
 StochasticPreconditionedGradientDescentOptimizer ()
 
virtual void UpdateCurrentTime ()
 
virtual ~StochasticPreconditionedGradientDescentOptimizer ()
 
- Protected Member Functions inherited from itk::PreconditionedGradientDescentOptimizer
virtual void CholmodSolve (const DerivativeType &gradient, DerivativeType &searchDirection, int solveType=CHOLMOD_A)
 
 PreconditionedGradientDescentOptimizer ()
 
void PrintSelf (std::ostream &os, Indent indent) const
 
virtual ~PreconditionedGradientDescentOptimizer ()
 
- Protected Member Functions inherited from itk::ScaledSingleValuedNonLinearOptimizer
virtual void GetScaledDerivative (const ParametersType &parameters, DerivativeType &derivative) const
 
virtual MeasureType GetScaledValue (const ParametersType &parameters) const
 
virtual void GetScaledValueAndDerivative (const ParametersType &parameters, MeasureType &value, DerivativeType &derivative) const
 
void PrintSelf (std::ostream &os, Indent indent) const override
 
 ScaledSingleValuedNonLinearOptimizer ()
 
void SetCurrentPosition (const ParametersType &param) override
 
virtual void SetScaledCurrentPosition (const ParametersType &parameters)
 
 ~ScaledSingleValuedNonLinearOptimizer () override=default
 

Protected Attributes

DerivativeType m_PreviousSearchDirection
 
- Protected Attributes inherited from itk::StochasticPreconditionedGradientDescentOptimizer
double m_CurrentTime { 0.0 }
 
- Protected Attributes inherited from itk::PreconditionedGradientDescentOptimizer
cholmod_common * m_CholmodCommon
 
cholmod_factor * m_CholmodFactor { nullptr }
 
cholmod_sparse * m_CholmodGradient { nullptr }
 
double m_ConditionNumber { 1.0 }
 
DerivativeType m_Gradient
 
double m_LargestEigenValue { 1.0 }
 
double m_LearningRate { 1.0 }
 
DerivativeType m_SearchDirection
 
double m_Sparsity { 1.0 }
 
StopConditionType m_StopCondition { MaximumNumberOfIterations }
 
- Protected Attributes inherited from itk::ScaledSingleValuedNonLinearOptimizer
ScaledCostFunctionPointer m_ScaledCostFunction
 
ParametersType m_ScaledCurrentPosition
 

Private Attributes

double m_SigmoidMax { 1.0 }
 
double m_SigmoidMin { -0.8 }
 
double m_SigmoidScale { 1e-8 }
 
bool m_UseAdaptiveStepSizes { true }
 

Additional Inherited Members

- Protected Types inherited from itk::PreconditionedGradientDescentOptimizer
using CInt = int
 

Member Typedef Documentation

◆ ConstPointer

◆ Pointer

◆ Self

Standard ITK.

Definition at line 84 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.

◆ Superclass

Constructor & Destructor Documentation

◆ AdaptiveStochasticPreconditionedGradientDescentOptimizer()

itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::AdaptiveStochasticPreconditionedGradientDescentOptimizer ( )
protected

◆ ~AdaptiveStochasticPreconditionedGradientDescentOptimizer()

virtual itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::~AdaptiveStochasticPreconditionedGradientDescentOptimizer ( )
inlineprotectedvirtual

Member Function Documentation

◆ GetClassName()

virtual const char * itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::GetClassName ( ) const
virtual

Run-time type information (and related methods).

Reimplemented from itk::StochasticPreconditionedGradientDescentOptimizer.

Reimplemented in elastix::PreconditionedGradientDescent< TElastix >.

◆ GetSigmoidMax()

virtual double itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::GetSigmoidMax ( ) const
virtual

◆ GetSigmoidMin()

virtual double itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::GetSigmoidMin ( ) const
virtual

◆ GetSigmoidScale()

virtual double itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::GetSigmoidScale ( ) const
virtual

◆ GetUseAdaptiveStepSizes()

virtual bool itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::GetUseAdaptiveStepSizes ( ) const
virtual

◆ ITK_DISALLOW_COPY_AND_MOVE()

itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::ITK_DISALLOW_COPY_AND_MOVE ( AdaptiveStochasticPreconditionedGradientDescentOptimizer  )

◆ New()

static Pointer itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::New ( )
static

Method for creation through the object factory.

◆ SetSigmoidMax()

virtual void itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::SetSigmoidMax ( double  _arg)
virtual

Set/Get the maximum of the sigmoid. Should be >0. Default: 1.0

◆ SetSigmoidMin()

virtual void itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::SetSigmoidMin ( double  _arg)
virtual

Set/Get the minimum of the sigmoid. Should be <0. Default: -0.8

◆ SetSigmoidScale()

virtual void itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::SetSigmoidScale ( double  _arg)
virtual

Set/Get the scaling of the sigmoid width. Larger values yield a wider sigmoid. Should be >0. Default: 1e-8.

◆ SetUseAdaptiveStepSizes()

virtual void itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::SetUseAdaptiveStepSizes ( bool  _arg)
virtual

Set/Get whether the adaptive step size mechanism is desired. Default: true

◆ UpdateCurrentTime()

virtual void itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::UpdateCurrentTime ( )
protectedvirtual

Function to update the current time. If UseAdaptiveStepSizes is false, this function simply increments the CurrentTime by $E_0 = (sigmoid_{max} + sigmoid_{min})/2$. Otherwise, the CurrentTime is updated according to:
time = max[ 0, time + sigmoid( -gradient * previoussearchdirection ) ]
In that case, m_PreviousSearchDirection is also updated.

Reimplemented from itk::StochasticPreconditionedGradientDescentOptimizer.
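The logic described above can be sketched as a self-contained function. This is an illustration only: the real member function operates on the optimizer's internal state rather than explicit arguments, and the sigmoid shape below is the one from reference [3], which may differ in detail from the elastix source.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Illustrative stand-alone sketch of the UpdateCurrentTime() behavior.
// `previousSearchDirection` plays the role of P * g_{k-1}.
double UpdatedTime(double currentTime,
                   const std::vector<double> & gradient,
                   const std::vector<double> & previousSearchDirection,
                   bool useAdaptiveStepSizes,
                   double sMax = 1.0, double sMin = -0.8, double scale = 1e-8)
{
  if (!useAdaptiveStepSizes)
  {
    // Increment by the expected sigmoid value E_0 = (sigmoidMax + sigmoidMin) / 2.
    return currentTime + (sMax + sMin) / 2.0;
  }
  // inprod = g_k^T (P g_{k-1})
  const double inprod = std::inner_product(
    gradient.begin(), gradient.end(), previousSearchDirection.begin(), 0.0);
  // sigmoid( -inprod ), using the sigmoid shape of reference [3]
  const double sigmoid =
    sMin + (sMax - sMin) / (1.0 - (sMax / sMin) * std::exp(inprod / scale));
  // time = max[ 0, time + sigmoid( -inprod ) ]
  return std::max(0.0, currentTime + sigmoid);
}
```

Aligned consecutive directions (positive inner product) drive the time toward zero and hence keep the gain large; opposing directions increase the time, which damps the step size.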

Field Documentation

◆ m_PreviousSearchDirection

DerivativeType itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::m_PreviousSearchDirection
protected

The previous search direction $P g$, needed for the Cruz acceleration [2].

Definition at line 146 of file itkAdaptiveStochasticPreconditionedGradientDescentOptimizer.h.

◆ m_SigmoidMax

double itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::m_SigmoidMax { 1.0 }
private

◆ m_SigmoidMin

double itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::m_SigmoidMin { -0.8 }
private

◆ m_SigmoidScale

double itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::m_SigmoidScale { 1e-8 }
private

◆ m_UseAdaptiveStepSizes

bool itk::AdaptiveStochasticPreconditionedGradientDescentOptimizer::m_UseAdaptiveStepSizes { true }
private


Generated on 2023-01-13 for elastix by doxygen 1.9.6