Open CASCADE Technology Reference Manual 8.0.0
Namespaces | Data Structures | Enumerations | Functions | Variables
MathUtils Namespace Reference

Modern math solver types and result structures. More...

Namespaces

namespace  detail
 

Data Structures

class  Ackley
 Ackley function functor (for testing global optimization). f(x) = -a*exp(-b*sqrt(sum(x[i]^2)/n)) - exp(sum(cos(c*x[i]))/n) + a + e Default: a = 20, b = 0.2, c = 2*pi Global minimum at origin with f = 0. More...
 
class  Beale
 Beale function functor (for testing optimization). f(x,y) = (1.5 - x + xy)^2 + (2.25 - x + xy^2)^2 + (2.625 - x + xy^3)^2 Global minimum at (3, 0.5) with f = 0. More...
 
class  Booth
 Booth function functor (for testing optimization). f(x,y) = (x + 2y - 7)^2 + (2x + y - 5)^2 Global minimum at (1, 3) with f = 0. More...
 
struct  BoundedConfig
 Configuration for bounded 1D optimization and root finding. Extends Config with interval bounds. More...
 
struct  BracketResult
 Result of root bracketing operation. More...
 
class  Composite
 Composite functor: f(g(x)). Evaluates the outer function at the result of the inner function. More...
 
struct  Config
 Configuration for iterative solvers. Provides common settings for convergence criteria and iteration limits. More...
 
class  Constant
 Constant function functor: f(x) = c. More...
 
class  Cosine
 Cosine function functor: f(x) = a * cos(b*x + c) + d. More...
 
struct  DecompResult
 Result for matrix decomposition (LU, SVD, QR). Structure depends on decomposition type. More...
 
class  Difference
 Difference of functions functor: f(x) - g(x). More...
 
struct  Domain1D
 1D parameter domain for curves. More...
 
struct  Domain2D
 2D parameter domain for surfaces. More...
 
struct  EigenResult
 Result for eigenvalue/eigenvector computation. Contains eigenvalues and optionally eigenvectors. More...
 
class  Exponential
 Exponential function functor: f(x) = a * exp(b*x) + c. More...
 
class  Gaussian
 Gaussian function functor: f(x) = a * exp(-((x-mu)^2)/(2*sigma^2)). More...
 
class  Himmelblau
 Himmelblau function functor (for testing optimization). f(x,y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2 Has four local minima, all with f = 0: (3.0, 2.0), (-2.805118, 3.131312), (-3.779310, -3.283186), (3.584428, -1.848126) More...
 
struct  IntegConfig
 Configuration for numerical integration. Provides settings for quadrature order and adaptive refinement. More...
 
struct  IntegResult
 Result for numerical integration. Contains integral value and error estimates. More...
 
struct  InverseResult
 Result for matrix inverse computation. Contains the inverse matrix if computation succeeded. More...
 
struct  LinConfig
 Configuration for linear algebra solvers. Provides settings for singularity detection and pivoting. More...
 
class  Linear
 Linear function functor: f(x) = a*x + b. More...
 
struct  LinearMultipleResult
 Result for multiple linear systems solving (AX = B with matrix RHS). Contains the full solution matrix and determinant if computed. More...
 
class  LinearResidual
 Linear system residual functor: f(x) = ||Ax - b||^2. Useful for solving overdetermined linear systems via optimization. More...
 
struct  LinearResult
 Result for linear system solving (Ax = b). Contains the solution vector and matrix determinant if computed. More...
 
struct  LineSearchResult
 Result of line search operation. More...
 
struct  MinBracketOptions
 Options for minimum bracketing. More...
 
struct  MinBracketResult
 Result of minimum bracketing operation. More...
 
struct  NDimConfig
 Configuration for N-dimensional optimization with optional bounds. Bounds are passed separately as math_Vector for flexibility. More...
 
class  Negated
 Negated function functor: -f(x). More...
 
class  Polynomial
 Polynomial functor: f(x) = sum(a[i] * x^i). Coefficients are stored in order: a[0] + a[1]*x + a[2]*x^2 + ... More...
 
struct  PolyResult
 Result for polynomial root finding. Supports up to 4 real roots (for quartic equations). More...
 
class  Power
 Power function functor: f(x) = a * x^n + b. More...
 
class  Product
 Product of functions functor: f(x) * g(x). More...
 
class  QuadraticForm
 Quadratic form functor: f(x) = x^T A x + b^T x + c. Commonly used for testing optimization algorithms. More...
 
class  Quotient
 Quotient of functions functor: f(x) / g(x). More...
 
class  RandomGenerator
 High-quality pseudo-random number generator based on xoshiro256**. More...
 
class  Rastrigin
 Rastrigin function functor (for testing global optimization). f(x) = A*n + sum(x[i]^2 - A*cos(2*pi*x[i])) for all i Default: A = 10 Global minimum at origin with f = 0. Highly multimodal - challenging for local optimizers. More...
 
class  Rational
 Rational function functor: f(x) = P(x) / Q(x). Both numerator P and denominator Q are polynomials. More...
 
class  Rosenbrock
 Rosenbrock function functor (for testing optimization). f(x,y) = (a - x)^2 + b*(y - x^2)^2 Default: a = 1, b = 100 Global minimum at (a, a^2) = (1, 1) with f = 0. More...
 
class  ScalarLambda
 Lambda wrapper for scalar functions with value only. Wraps a lambda/callable into a functor with Value() method. More...
 
class  ScalarLambdaWithDerivative
 Lambda wrapper for scalar functions with value and derivative. Wraps a lambda/callable into a functor with Values() method. More...
 
struct  ScalarResult
 Result for scalar (1D) root finding and minimization. Contains the found root/minimum location and diagnostic information. More...
 
class  Scaled
 Scaled function functor: c * f(x). More...
 
class  Shifted
 Shifted function functor: f(x) + c. More...
 
class  Sine
 Sine function functor: f(x) = a * sin(b*x + c) + d. More...
 
class  Sphere
 Sphere function functor (for testing optimization). f(x) = sum(x[i]^2) for all i. Global minimum at origin with f = 0. More...
 
class  Sum
 Sum of functions functor: f(x) + g(x). More...
 
class  SystemLambda
 Nonlinear system functor: F(x) = [f1(x), f2(x), ..., fn(x)]. Lambda wrapper for systems of nonlinear equations. More...
 
class  VectorLambda
 Lambda wrapper for N-D objective functions (value only). Wraps a lambda/callable into a functor with Value() method. More...
 
class  VectorLambdaWithGradient
 Lambda wrapper for N-D objective functions with gradient. Wraps a lambda/callable into a functor with Value() and Gradient() methods. More...
 
struct  VectorResult
 Result for N-dimensional optimization and system solving. Contains the solution vector and optional gradient/Jacobian information. More...
 

Enumerations

enum class  Status {
  OK , NotConverged , MaxIterations , NumericalError ,
  InvalidInput , InfiniteSolutions , NoSolution , NotPositiveDefinite ,
  Singular , NonDescentDirection
}
 Computation status for all math solvers. Provides detailed information about solver outcome. More...
 

Functions

constexpr double Clamp (double theValue, double theLower, double theUpper)
 Clamp value to range [theLower, theUpper].
 
bool IsZero (double theValue, double theTolerance=THE_ZERO_TOL)
 Check if value is effectively zero.
 
bool IsEqual (double theA, double theB, double theTolerance=THE_ZERO_TOL)
 Check if two values are approximately equal.
 
double SafeDiv (double theNumerator, double theDenominator, double theDefault=0.0)
 Safe division avoiding division by zero.
 
int Sign (double theValue)
 Sign function.
 
double SignTransfer (double theA, double theB)
 Sign transfer function: returns |theA| with sign of theB. Equivalent to copysign but avoids edge cases with zero.
 
constexpr double Sqr (double theValue)
 Square of a value.
 
constexpr double Cube (double theValue)
 Cube of a value.
 
double CubeRoot (double theValue)
 Cube root with proper sign handling. Unlike std::cbrt, this handles negative values correctly on all platforms.
 
bool IsFinite (double theValue)
 Check if value is finite (not NaN or Inf).
 
double ComputeScaleFactor (const double *theCoeffs, int theCount)
 Compute scaling factor for coefficient normalization. Used to improve numerical stability by scaling coefficients.
 
double DotProduct (const math_Vector &theA, const math_Vector &theB)
 Compute dot product of two vectors.
 
double VectorNorm (const math_Vector &theVec)
 Compute Euclidean norm of a vector.
 
double VectorInfNorm (const math_Vector &theVec)
 Compute infinity norm (maximum absolute value) of a vector.
 
bool IsXConverged (double theXOld, double theXNew, double theTolerance)
 Check convergence based on relative change in X. Uses relative tolerance scaled by current value magnitude.
 
bool IsFConverged (double theFValue, double theTolerance)
 Check convergence based on absolute function value.
 
bool IsConverged (double theXOld, double theXNew, double theFValue, const Config &theConfig)
 Combined convergence test for scalar root finders. Checks both X convergence and function value convergence.
 
bool IsMinConverged (double theXOld, double theXNew, double theFOld, double theFNew, const Config &theConfig)
 Convergence test for minimization (checks both X and F change).
 
bool IsVectorConverged (const math_Vector &theOld, const math_Vector &theNew, double theTolerance)
 Convergence test for vector solvers using infinity norm.
 
bool IsGradientConverged (const math_Vector &theGradient, double theTolerance)
 Convergence test using gradient norm for minimization.
 
double InfinityNorm (const math_Vector &theVector)
 Compute infinity norm of a vector.
 
double EuclideanNorm (const math_Vector &theVector)
 Compute Euclidean (L2) norm of a vector.
 
double EvalPoly (const double *theCoeffs, int theDegree, double theX)
 Evaluate polynomial using Horner's method. Coefficients are ordered as [a0, a1, a2, ...] for P(x) = a0 + a1*x + a2*x^2 + ...
 
void EvalPolyDeriv (const double *theCoeffs, int theDegree, double theX, double &theValue, double &theDeriv)
 Evaluate polynomial and its derivative using Horner's method. Computes both P(x) and P'(x) efficiently in a single pass.
 
double EvalPolyDesc (const double *theCoeffs, int theDegree, double theX)
 Evaluate polynomial with coefficients in descending order. Coefficients are ordered as [an, a(n-1), ..., a1, a0] for P(x) = an*x^n + ... + a0.
 
double RefinePolyRoot (const double *theCoeffs, int theDegree, double theRoot, int theMaxIter=5)
 Newton-Raphson refinement for polynomial root. Polishes an approximate root to higher precision.
 
double RefinePolyRootDesc (const double *theCoeffs, int theDegree, double theRoot, int theMaxIter=5)
 Newton-Raphson refinement with coefficients in descending order.
 
void SortRoots (double *theRoots, size_t theCount)
 Sort roots in ascending order (simple insertion sort for small arrays).
 
size_t RemoveDuplicateRoots (double *theRoots, size_t theCount, double theTolerance=1.0e-10)
 Remove duplicate roots within tolerance.
 
void DepressCubic (double theB, double theC, double theD, double &theP, double &theQ, double &theShift)
 Compute depressed cubic coefficients. Transforms x^3 + bx^2 + cx + d to t^3 + pt + q via x = t - b/3.
 
void DepressQuartic (double theB, double theC, double theD, double theE, double &theP, double &theQ, double &theR, double &theShift)
 Compute depressed quartic coefficients. Transforms x^4 + bx^3 + cx^2 + dx + e to t^4 + pt^2 + qt + r via x = t - b/4.
 
template<typename Function >
BracketResult BracketRoot (Function &theFunc, double theA, double theB, int theMaxIter=50)
 Bracket a root by expanding interval until sign change is found. Starting from [theA, theB], expands outward using golden ratio.
 
template<typename Function >
MinBracketResult BracketMinimum (Function &theFunc, double theA, double theB, const MinBracketOptions &theOptions=MinBracketOptions())
 Bracket a minimum by finding three points a < b < c with f(b) < f(a) and f(b) < f(c). Uses golden section expansion with parabolic interpolation.
 
template<typename Function >
MinBracketResult BracketMinimum (Function &theFunc, double theA, double theB, int theMaxIter)
 Backward-compatible convenience overload with only max-iterations argument.
 
bool GetGaussPointsAndWeights (int theOrder, math_Vector &thePoints, math_Vector &theWeights)
 Get ordered Gauss-Legendre points and weights for given order. Points are returned in ascending order on [-1, 1].
 
template<typename Function >
bool CentralDifference (Function &theFunc, double theX, double &theDeriv, double theStep=1.0e-8)
 Central difference derivative approximation for scalar functions. f'(x) ~= (f(x+h) - f(x-h)) / (2h) Accuracy: O(h^2)
 
template<typename Function >
bool ForwardDifference (Function &theFunc, double theX, double theFx, double &theDeriv, double theStep=1.0e-8)
 Forward difference derivative (one-sided). f'(x) ~= (f(x+h) - f(x)) / h Accuracy: O(h) Useful when central difference is not possible (e.g., at boundaries).
 
template<typename Function >
bool NumericalGradient (Function &theFunc, math_Vector &theX, math_Vector &theGrad, double theStep=1.0e-8)
 Numerical gradient using central differences for N-D functions. df/dx[i] ~= (f(x + h[i]*e[i]) - f(x - h[i]*e[i])) / (2*h[i])
 
template<typename Function >
bool NumericalGradientAdaptive (Function &theFunc, math_Vector &theX, math_Vector &theGrad, double theRelStep=1.0e-8)
 Numerical gradient with adaptive step size. Uses step proportional to |x[i]| for better conditioning.
 
template<typename Function >
bool NumericalJacobian (Function &theFunc, math_Vector &theX, math_Matrix &theJac, double theStep=1.0e-8)
 Numerical Jacobian matrix for vector-valued functions. J[i,j] = dF[i]/dx[j] ~= (F[i](x + h[j]*e[j]) - F[i](x - h[j]*e[j])) / (2*h[j])
 
template<typename Function >
bool NumericalHessian (Function &theFunc, math_Vector &theX, math_Matrix &theHess, double theStep=1.0e-5)
 Numerical Hessian matrix using finite differences. H[i,j] = d^2f/dx[i]dx[j] Uses central differences on gradient.
 
template<typename Function >
bool SecondDerivative (Function &theFunc, double theX, double theFx, double &theD2f, double theStep=1.0e-5)
 Second derivative using central difference. f''(x) ~= (f(x+h) - 2f(x) + f(x-h)) / h^2.
 
template<typename Function >
LineSearchResult ArmijoBacktrack (Function &theFunc, const math_Vector &theX, const math_Vector &theDir, const math_Vector &theGrad, double theFx, double theAlphaInit=1.0, double theC1=1.0e-4, double theRho=0.5, int theMaxIter=50)
 Backtracking line search with Armijo condition. Finds alpha such that f(x + alpha*d) <= f(x) + c1*alpha*grad(f).d (sufficient decrease condition)
 
template<typename Function >
LineSearchResult WolfeSearch (Function &theFunc, const math_Vector &theX, const math_Vector &theDir, const math_Vector &theGrad, double theFx, double theAlphaInit=1.0, double theC1=1.0e-4, double theC2=0.9, int theMaxIter=20)
 Strong Wolfe line search. Finds alpha satisfying both the sufficient decrease (Armijo) condition f(x + alpha*d) <= f(x) + c1*alpha*grad(f).d and the strong curvature condition |grad f(x + alpha*d).d| <= c2*|grad f(x).d|.
 
template<typename Function >
LineSearchResult ExactLineSearch (Function &theFunc, const math_Vector &theX, const math_Vector &theDir, double theAlphaMax=10.0, double theTolerance=1.0e-6, int theMaxIter=100)
 Exact line search using Brent's method. Minimizes f(x + alpha*d) over alpha in [-alpha_max, alpha_max]. First brackets the minimum by exploring both directions. More expensive than inexact line search but can be more robust.
 
double QuadraticInterpolation (double thePhi0, double thePhi0Prime, double theAlpha1, double thePhi1)
 Quadratic interpolation step for line search. Given phi(0), phi'(0), and phi(alpha1), finds minimum of quadratic fit.
 
template<typename Function >
bool BrentAlongCoordinate (Function &theFunc, math_Vector &thePoint, int theDimIdx, double theLoBound, double theUpBound, double &theFx, double theTolerance, int theMaxIter, int &theEvalCount)
 Brent's method for 1D minimization along a single coordinate axis. Operates in-place on one coordinate of thePoint, avoiding vector allocations. Unlike ExactLineSearch which works with arbitrary direction vectors and allocates internal temporaries, this function modifies only thePoint(theDimIdx) during the search and restores it if no improvement is found.
 
bool GetKronrodPointsAndWeights (int theNbKronrod, math_Vector &thePoints, math_Vector &theWeights)
 Get Gauss-Kronrod points and weights.
 
bool GetOrderedGaussPointsAndWeights (int theNbGauss, math_Vector &thePoints, math_Vector &theWeights)
 Get ordered Gauss points and weights.
 
template<typename Lambda >
ScalarLambda< Lambda > MakeScalar (Lambda theLambda)
 Helper function to create ScalarLambda with type deduction.
 
template<typename Lambda >
ScalarLambdaWithDerivative< Lambda > MakeScalarWithDerivative (Lambda theLambda)
 Helper function to create ScalarLambdaWithDerivative with type deduction.
 
template<typename Outer , typename Inner >
Composite< Outer, Inner > MakeComposite (Outer theOuter, Inner theInner)
 Helper function to create Composite with type deduction.
 
template<typename F , typename G >
Sum< F, G > MakeSum (F theF, G theG)
 Helper function to create Sum with type deduction.
 
template<typename F , typename G >
Difference< F, G > MakeDifference (F theF, G theG)
 Helper function to create Difference with type deduction.
 
template<typename F , typename G >
Product< F, G > MakeProduct (F theF, G theG)
 Helper function to create Product with type deduction.
 
template<typename F , typename G >
Quotient< F, G > MakeQuotient (F theF, G theG)
 Helper function to create Quotient with type deduction.
 
template<typename F >
Scaled< F > MakeScaled (F theF, double theScale)
 Helper function to create Scaled with type deduction.
 
template<typename F >
Shifted< F > MakeShifted (F theF, double theShift)
 Helper function to create Shifted with type deduction.
 
template<typename F >
Negated< F > MakeNegated (F theF)
 Helper function to create Negated with type deduction.
 
template<typename Lambda >
VectorLambda< Lambda > MakeVector (Lambda theLambda)
 Helper function to create VectorLambda with type deduction.
 
template<typename ValueLambda , typename GradLambda >
VectorLambdaWithGradient< ValueLambda, GradLambda > MakeVectorWithGradient (ValueLambda theValueLambda, GradLambda theGradLambda)
 Helper function to create VectorLambdaWithGradient with type deduction.
 
template<typename Lambda >
SystemLambda< Lambda > MakeSystem (Lambda theLambda, int theNbEquations)
 Helper function to create SystemLambda with type deduction.
 

Variables

constexpr double THE_NEWTON_FTOL_SQ = 1.0e-32
 Squared function tolerance for Newton iterations (matching math_FunctionSetRoot Eps).
 
constexpr double THE_NEWTON_STEP_TOL_FACTOR = 1.0e-16
 Relative parametric step tolerance factor for Newton iterations.
 
constexpr size_t THE_NEWTON_MAX_ITER = 100
 Default maximum iterations for Newton solvers.
 
constexpr double THE_EPSILON = std::numeric_limits<double>::epsilon()
 Machine epsilon for double precision.
 
constexpr double THE_ZERO_TOL = 1.0e-15
 Small value for zero comparisons (more practical than epsilon).
 
constexpr double THE_PI = 3.14159265358979323846
 Pi constant.
 
constexpr double THE_2PI = 6.28318530717958647692
 Two Pi constant.
 
constexpr double THE_GOLDEN_RATIO = 1.618033988749895
 Golden ratio for optimization algorithms.
 
constexpr double THE_GOLDEN_SECTION = 0.381966011250105
 Inverse golden ratio (1 - 1/phi).
 
Newton 2D Solver Constants

Constants for 2D Newton-Raphson solver (MathSys_Newton2D).

constexpr double THE_NEWTON2D_SINGULAR_DET = 1.0e-25
 Determinant threshold below which 2x2 Jacobian matrix is considered singular. When |det(J)| < threshold, falls back to SVD or gradient descent.
 
constexpr double THE_NEWTON2D_CRITICAL_GRAD_SQ = 1.0e-60
 Squared gradient threshold below which point is considered a critical point. When |grad|^2 < threshold, Newton iteration is at a stationary point.
 
constexpr double THE_NEWTON2D_MAX_STEP_RATIO = 0.5
 Maximum step size as fraction of domain size for bounded Newton iterations.
 
constexpr double THE_NEWTON2D_DOMAIN_EXT = 1.0e-4
 Domain extension factor for soft boundary handling. Solutions slightly outside domain (by this fraction) are allowed if converged. Reduced to 1e-4 to closely match old algorithm behavior.
 
constexpr double THE_NEWTON2D_STAGNATION_RATIO = 0.999
 Stagnation progress ratio - improvement required each iteration to avoid stagnation. Value of 0.999 means at least 0.1% improvement required.
 
constexpr int THE_NEWTON2D_STAGNATION_COUNT = 3
 Number of iterations without progress before declaring stagnation.
 
constexpr int THE_NEWTON2D_LINE_SEARCH_MAX = 8
 Maximum line search backtracking iterations.
 
constexpr double THE_NEWTON2D_SKIP_LINESEARCH_SQ = 0.01
 Relative step threshold below which line search is skipped. If step^2 / domain^2 < threshold, Newton is converging well.
 
constexpr double THE_NEWTON2D_STAGNATION_RELAX = 10.0
 Tolerance relaxation factor for accepting stagnated solutions. Stagnated solution accepted if |F| < tolerance * factor.
 
constexpr double THE_NEWTON2D_BACKTRACK_TRIGGER = 1.5
 Function increase threshold triggering line search backtracking. If f(new) > f(old) * threshold, backtracking is triggered.
 
constexpr double THE_NEWTON2D_BACKTRACK_ACCEPT = 1.2
 Backtracking acceptance ratio for line search. Step accepted if f(new) < f(old) * ratio.
 
constexpr double THE_NEWTON2D_MAXITER_RELAX = 10.0
 Maximum relaxation factor for solutions at max iterations.
 
Hessian Classification Constants

Constants for extremum classification using second derivatives.

constexpr double THE_HESSIAN_DEGENERACY_REL = 1.0e-8
 Relative tolerance for Hessian degeneracy detection. If |det(H)| < threshold * |H_ii| * |H_jj|, Hessian is considered degenerate.
 
constexpr double THE_HESSIAN_DEGENERACY_ABS = 1.0e-20
 Absolute tolerance for Hessian degeneracy detection. Minimum threshold for determinant comparison.
 
Line Search Constants

Constants for line search acceptance conditions.

constexpr double THE_ARMIJO_C1 = 1.0e-4
 Armijo condition constant (sufficient decrease). Step accepted if: f(x + alpha*d) <= f(x) + c1 * alpha * grad_f . d.
 

Detailed Description

Modern math solver types and result structures.

Core utilities for modern math solvers.

Modern math solver utilities.

Modern math solver configuration structures.

Enumeration Type Documentation

◆ Status

Computation status for all math solvers. Provides detailed information about solver outcome.

Enumerator
OK 

Computation successful, solution found.

NotConverged 

Did not converge within tolerance.

MaxIterations 

Maximum iterations reached without convergence.

NumericalError 

Numerical issue (overflow, NaN, division by zero, etc.)

InvalidInput 

Invalid input parameters.

InfiniteSolutions 

Infinite number of solutions (degenerate case)

NoSolution 

No solution exists.

NotPositiveDefinite 

Matrix not positive definite (for Cholesky, Newton, etc.)

Singular 

Matrix is singular or nearly singular.

NonDescentDirection 

Search direction is not a descent direction for merit function.
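A caller typically maps these enumerators to diagnostics before deciding how to proceed. A minimal sketch (the enum is redeclared locally here so the example is self-contained; the real type lives in MathUtils):

```cpp
#include <string>

// Local redeclaration of MathUtils::Status for a self-contained illustration.
enum class Status {
  OK, NotConverged, MaxIterations, NumericalError,
  InvalidInput, InfiniteSolutions, NoSolution, NotPositiveDefinite,
  Singular, NonDescentDirection
};

// Map a solver status to a short diagnostic message.
std::string Describe(Status theStatus) {
  switch (theStatus) {
    case Status::OK:                  return "solution found";
    case Status::NotConverged:        return "did not converge within tolerance";
    case Status::MaxIterations:       return "iteration limit reached";
    case Status::NumericalError:      return "numerical issue (overflow, NaN, ...)";
    case Status::InvalidInput:        return "invalid input parameters";
    case Status::InfiniteSolutions:   return "degenerate: infinitely many solutions";
    case Status::NoSolution:          return "no solution exists";
    case Status::NotPositiveDefinite: return "matrix not positive definite";
    case Status::Singular:            return "matrix singular or nearly singular";
    case Status::NonDescentDirection: return "not a descent direction";
  }
  return "unknown status";
}
```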

Function Documentation

◆ ArmijoBacktrack()

template<typename Function >
LineSearchResult MathUtils::ArmijoBacktrack ( Function & theFunc,
const math_Vector & theX,
const math_Vector & theDir,
const math_Vector & theGrad,
double theFx,
double theAlphaInit = 1.0,
double theC1 = 1.0e-4,
double theRho = 0.5,
int theMaxIter = 50 )

Backtracking line search with Armijo condition. Finds alpha such that f(x + alpha*d) <= f(x) + c1*alpha*grad(f).d (sufficient decrease condition)

Algorithm:

  1. Start with alpha = alpha_init
  2. If Armijo condition satisfied, return alpha
  3. Otherwise alpha = rho * alpha and repeat
Template Parameters
Function: type with Value(const math_Vector&, double&) method
Parameters
theFunc: objective function
theX: current point
theDir: search direction (should be a descent direction)
theGrad: gradient at current point
theFx: function value at current point
theAlphaInit: initial step size (default 1.0)
theC1: Armijo parameter (default 1e-4)
theRho: backtracking factor (default 0.5)
theMaxIter: maximum iterations (default 50)
Returns
line search result with step size
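The three-step algorithm above can be sketched in one dimension. This is a standalone illustration of the Armijo backtracking loop, not the OCCT template (which works on math_Vector and returns a LineSearchResult):

```cpp
// Minimal scalar sketch of Armijo backtracking: shrink alpha by rho until
// f(x + alpha*d) <= f(x) + c1*alpha*f'(x)*d (sufficient decrease).
template <typename Func, typename Deriv>
double ArmijoBacktrack1D(Func f, Deriv df, double x, double dir,
                         double alphaInit = 1.0, double c1 = 1.0e-4,
                         double rho = 0.5, int maxIter = 50) {
  const double fx = f(x);
  const double slope = df(x) * dir; // directional derivative; should be < 0
  double alpha = alphaInit;
  for (int i = 0; i < maxIter; ++i) {
    if (f(x + alpha * dir) <= fx + c1 * alpha * slope)
      return alpha; // sufficient decrease achieved
    alpha *= rho;   // backtrack
  }
  return alpha; // last tried step (condition not met within maxIter)
}
```

For f(x) = x^2 at x = 1 with dir = -1 the full step alpha = 1 already satisfies the condition; with the overshooting direction dir = -4 the loop backtracks to alpha = 0.25.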

◆ BracketMinimum() [1/2]

template<typename Function >
MinBracketResult MathUtils::BracketMinimum ( Function & theFunc,
double theA,
double theB,
const MinBracketOptions & theOptions = MinBracketOptions() )

Bracket a minimum by finding three points a < b < c with f(b) < f(a) and f(b) < f(c). Uses golden section expansion with parabolic interpolation.

Template Parameters
Function: type with Value(double theX, double& theF) method
Parameters
theFunc: function to bracket
theA: initial point A
theB: initial point B (should be to the right of A in the descent direction)
theOptions: bracketing options
Returns
bracketing result

◆ BracketMinimum() [2/2]

template<typename Function >
MinBracketResult MathUtils::BracketMinimum ( Function & theFunc,
double theA,
double theB,
int theMaxIter )

Backward-compatible convenience overload with only max-iterations argument.

◆ BracketRoot()

template<typename Function >
BracketResult MathUtils::BracketRoot ( Function & theFunc,
double theA,
double theB,
int theMaxIter = 50 )

Bracket a root by expanding interval until sign change is found. Starting from [theA, theB], expands outward using golden ratio.

Template Parameters
Function: type with Value(double theX, double& theF) method
Parameters
theFunc: function to bracket
theA: initial lower bound
theB: initial upper bound
theMaxIter: maximum expansion iterations
Returns
bracketing result
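The golden-ratio expansion idea can be sketched as follows. This is an illustrative standalone version (the OCCT template returns a BracketResult; here the bounds are updated in place and a bool reports success):

```cpp
#include <cmath>

// Expand [a, b] outward by the golden ratio, growing the end with the
// smaller |f| until f(a) and f(b) have opposite signs (root bracketed).
template <typename Func>
bool BracketRootSketch(Func f, double &a, double &b, int maxIter = 50) {
  const double phi = 1.618033988749895;
  double fa = f(a), fb = f(b);
  for (int i = 0; i < maxIter; ++i) {
    if (fa * fb < 0.0) return true;      // sign change: root bracketed
    if (std::fabs(fa) < std::fabs(fb)) { // a looks closer to a root: expand a
      a += phi * (a - b);
      fa = f(a);
    } else {                             // otherwise expand b
      b += phi * (b - a);
      fb = f(b);
    }
  }
  return false; // no sign change found within maxIter expansions
}
```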

◆ BrentAlongCoordinate()

template<typename Function >
bool MathUtils::BrentAlongCoordinate ( Function & theFunc,
math_Vector & thePoint,
int theDimIdx,
double theLoBound,
double theUpBound,
double & theFx,
double theTolerance,
int theMaxIter,
int & theEvalCount )

Brent's method for 1D minimization along a single coordinate axis. Operates in-place on one coordinate of thePoint, avoiding vector allocations. Unlike ExactLineSearch which works with arbitrary direction vectors and allocates internal temporaries, this function modifies only thePoint(theDimIdx) during the search and restores it if no improvement is found.

Template Parameters
Function: type with Value(const math_Vector&, double&) method
Parameters
theFunc: function to minimize
thePoint: current point (the active coordinate is modified during the search, restored if no improvement)
theDimIdx: index of the coordinate to optimize
theLoBound: lower bound for this coordinate
theUpBound: upper bound for this coordinate
theFx: current function value at thePoint (updated if improved)
theTolerance: convergence tolerance
theMaxIter: maximum iterations
theEvalCount: incremented by the number of function evaluations used
Returns
true if an improvement was found; thePoint and theFx are updated accordingly

◆ CentralDifference()

template<typename Function >
bool MathUtils::CentralDifference ( Function & theFunc,
double theX,
double & theDeriv,
double theStep = 1.0e-8 )

Central difference derivative approximation for scalar functions. f'(x) ~= (f(x+h) - f(x-h)) / (2h) Accuracy: O(h^2)

Template Parameters
Function: type with Value(double theX, double& theF) method
Parameters
theFunc: function to differentiate
theX: point at which to evaluate the derivative
theDeriv: computed derivative value
theStep: step size (default 1e-8)
Returns
true if successful
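The O(h^2) formula itself is a one-liner; a standalone sketch (without the OCCT error handling that makes the real function return bool):

```cpp
#include <cmath>

// Central difference: f'(x) ~= (f(x+h) - f(x-h)) / (2h), error O(h^2).
template <typename Func>
double CentralDiff(Func f, double x, double h = 1.0e-6) {
  return (f(x + h) - f(x - h)) / (2.0 * h);
}
```

For example, the derivative of sin at 0 evaluates to 1 up to roughly h^2/6 truncation error plus rounding noise.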

◆ Clamp()

constexpr double MathUtils::Clamp ( double theValue,
double theLower,
double theUpper )
inline constexpr

Clamp value to range [theLower, theUpper].

Parameters
theValue: value to clamp
theLower: lower bound
theUpper: upper bound
Returns
clamped value

◆ ComputeScaleFactor()

double MathUtils::ComputeScaleFactor ( const double * theCoeffs,
int theCount )
inline

Compute scaling factor for coefficient normalization. Used to improve numerical stability by scaling coefficients.

Parameters
theCoeffs: array of coefficients
theCount: number of coefficients
Returns
scaling factor (power of 2 for exact arithmetic)

◆ Cube()

constexpr double MathUtils::Cube ( double theValue)
inline constexpr

Cube of a value.

Parameters
theValue: input value
Returns
theValue^3

◆ CubeRoot()

double MathUtils::CubeRoot ( double theValue)
inline

Cube root with proper sign handling. Unlike std::cbrt, this handles negative values correctly on all platforms.

Parameters
theValue: input value
Returns
cube root of theValue
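The sign issue the description alludes to can be illustrated with a pow-based fallback: std::pow(-8.0, 1.0/3.0) yields NaN because the exponent is non-integral, so a sign-safe version takes the root of the absolute value and restores the sign. A standalone sketch (not the OCCT implementation):

```cpp
#include <cmath>

// Sign-safe cube root via pow: root of |x|, then restore the sign.
// (std::cbrt handles negatives in standard C++; the guard matters when
// falling back to std::pow, which rejects negative bases here.)
double CubeRootSketch(double x) {
  return x < 0.0 ? -std::pow(-x, 1.0 / 3.0) : std::pow(x, 1.0 / 3.0);
}
```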

◆ DepressCubic()

void MathUtils::DepressCubic ( double theB,
double theC,
double theD,
double & theP,
double & theQ,
double & theShift )
inline

Compute depressed cubic coefficients. Transforms x^3 + bx^2 + cx + d to t^3 + pt + q via x = t - b/3.

Parameters
theB: coefficient of x^2 (after dividing by the leading coefficient)
theC: coefficient of x
theD: constant term
[out] theP: coefficient of t in the depressed form
[out] theQ: constant term in the depressed form
[out] theShift: substitution shift (b/3)
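Substituting x = t - b/3 into x^3 + bx^2 + cx + d and collecting terms gives the standard closed forms p = c - b^2/3 and q = 2b^3/27 - bc/3 + d. A standalone sketch of the transform:

```cpp
// Depressed-cubic transform: x = t - b/3 removes the quadratic term,
// turning x^3 + b x^2 + c x + d into t^3 + p t + q with
//   p = c - b^2/3,   q = 2 b^3/27 - b c/3 + d.
void DepressCubicSketch(double b, double c, double d,
                        double &p, double &q, double &shift) {
  p = c - b * b / 3.0;
  q = 2.0 * b * b * b / 27.0 - b * c / 3.0 + d;
  shift = b / 3.0; // root of original cubic = root of depressed cubic - shift
}
```

Sanity check: x^3 + 3x^2 + 3x + 1 = (x + 1)^3 depresses to t^3 (p = q = 0) with shift 1, recovering the triple root x = 0 - 1 = -1.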

◆ DepressQuartic()

void MathUtils::DepressQuartic ( double theB,
double theC,
double theD,
double theE,
double & theP,
double & theQ,
double & theR,
double & theShift )
inline

Compute depressed quartic coefficients. Transforms x^4 + bx^3 + cx^2 + dx + e to t^4 + pt^2 + qt + r via x = t - b/4.

Parameters
theB: coefficient of x^3 (after dividing by the leading coefficient)
theC: coefficient of x^2
theD: coefficient of x
theE: constant term
[out] theP: coefficient of t^2 in the depressed form
[out] theQ: coefficient of t in the depressed form
[out] theR: constant term in the depressed form
[out] theShift: substitution shift (b/4)

◆ DotProduct()

double MathUtils::DotProduct ( const math_Vector & theA,
const math_Vector & theB )
inline

Compute dot product of two vectors.

Parameters
theA: first vector
theB: second vector
Returns
dot product sum(A[i] * B[i])

◆ EuclideanNorm()

double MathUtils::EuclideanNorm ( const math_Vector & theVector)
inline

Compute Euclidean (L2) norm of a vector.

Parameters
theVector: input vector
Returns
sqrt(sum(v_i^2))

◆ EvalPoly()

double MathUtils::EvalPoly ( const double * theCoeffs,
int theDegree,
double theX )
inline

Evaluate polynomial using Horner's method. Coefficients are ordered as [a0, a1, a2, ...] for P(x) = a0 + a1*x + a2*x^2 + ...

Parameters
theCoeffs: coefficient array (constant term first)
theDegree: polynomial degree
theX: evaluation point
Returns
P(theX)
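With ascending coefficients, Horner's method evaluates P(x) = a0 + x*(a1 + x*(a2 + ...)) from the leading term inward, one multiply-add per coefficient. A standalone sketch:

```cpp
// Horner evaluation with ascending coefficients [a0, a1, ..., an]:
// start from the leading coefficient and fold in one term per step.
double EvalPolySketch(const double *coeffs, int degree, double x) {
  double result = coeffs[degree];
  for (int i = degree - 1; i >= 0; --i)
    result = result * x + coeffs[i]; // one fused multiply-add per coefficient
  return result;
}
```

For example, coefficients {1, 2, 3} encode 1 + 2x + 3x^2, which is 17 at x = 2.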

◆ EvalPolyDeriv()

void MathUtils::EvalPolyDeriv ( const double * theCoeffs,
int theDegree,
double theX,
double & theValue,
double & theDeriv )
inline

Evaluate polynomial and its derivative using Horner's method. Computes both P(x) and P'(x) efficiently in a single pass.

Parameters
theCoeffs: coefficient array (constant term first)
theDegree: polynomial degree
theX: evaluation point
[out] theValue: P(theX)
[out] theDeriv: P'(theX)
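The single-pass trick is to run a second Horner accumulator for the derivative, updated from the partial values of the first before each value step. A standalone sketch:

```cpp
// Single-pass Horner scheme for P(x) and P'(x): the derivative accumulator
// must be updated before the value accumulator in each iteration.
void EvalPolyDerivSketch(const double *coeffs, int degree, double x,
                         double &value, double &deriv) {
  value = coeffs[degree];
  deriv = 0.0;
  for (int i = degree - 1; i >= 0; --i) {
    deriv = deriv * x + value;     // accumulates P'(x) from partial values
    value = value * x + coeffs[i]; // standard Horner step for P(x)
  }
}
```

For 1 + 2x + 3x^2 at x = 2 this yields P = 17 and P' = 2 + 6x = 14 in one loop.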

◆ EvalPolyDesc()

double MathUtils::EvalPolyDesc ( const double * theCoeffs,
int theDegree,
double theX )
inline

Evaluate polynomial with coefficients in descending order. Coefficients are ordered as [an, a(n-1), ..., a1, a0] for P(x) = an*x^n + ... + a0.

Parameters
theCoeffs: coefficient array (leading term first)
theDegree: polynomial degree
theX: evaluation point
Returns
P(theX)

◆ ExactLineSearch()

template<typename Function >
LineSearchResult MathUtils::ExactLineSearch ( Function & theFunc,
const math_Vector & theX,
const math_Vector & theDir,
double theAlphaMax = 10.0,
double theTolerance = 1.0e-6,
int theMaxIter = 100 )

Exact line search using Brent's method. Minimizes f(x + alpha*d) over alpha in [-alpha_max, alpha_max]. First brackets the minimum by exploring both directions. More expensive than inexact line search but can be more robust.

Template Parameters
Function: type with Value(const math_Vector&, double&) method
Parameters
theFunc: objective function
theX: current point
theDir: search direction
theAlphaMax: maximum step size (searches in both directions)
theTolerance: convergence tolerance
theMaxIter: maximum iterations
Returns
line search result

◆ ForwardDifference()

template<typename Function >
bool MathUtils::ForwardDifference ( Function & theFunc,
double theX,
double theFx,
double & theDeriv,
double theStep = 1.0e-8 )

Forward difference derivative (one-sided). f'(x) ~= (f(x+h) - f(x)) / h Accuracy: O(h) Useful when central difference is not possible (e.g., at boundaries).

Template Parameters
Function: type with Value(double theX, double& theF) method
Parameters
theFunc: function to differentiate
theX: point at which to evaluate the derivative
theFx: function value at theX (if already computed)
theDeriv: computed derivative value
theStep: step size (default 1e-8)
Returns
true if successful
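The one-sided scheme can be sketched with a plain function pointer standing in for the Function template parameter (the name and the finite-value guard are assumptions):

```cpp
#include <cmath>

// Forward-difference sketch: f'(x) ~= (f(x+h) - f(x)) / h.
bool ForwardDifferenceSketch(double (*theFunc)(double), double theX,
                             double theFx, double& theDeriv,
                             double theStep = 1.0e-8)
{
  const double aFxh = theFunc(theX + theStep);
  if (!std::isfinite(aFxh))
    return false;                        // reject NaN/Inf evaluations
  theDeriv = (aFxh - theFx) / theStep;   // O(h) accurate
  return true;
}
```

Note the reuse of the already-computed f(x): only one extra function evaluation is needed per derivative.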

◆ GetGaussPointsAndWeights()

bool MathUtils::GetGaussPointsAndWeights ( int theOrder,
math_Vector & thePoints,
math_Vector & theWeights )

Get ordered Gauss-Legendre points and weights for given order. Points are returned in ascending order on [-1, 1].

Parameters
theOrder: number of quadrature points (>= 1)
[out] thePoints: points array
[out] theWeights: weights array
Returns
true if points/weights are available

◆ GetKronrodPointsAndWeights()

bool MathUtils::GetKronrodPointsAndWeights ( int theNbKronrod,
math_Vector & thePoints,
math_Vector & theWeights )

Get Gauss-Kronrod points and weights.

Parameters
theNbKronrod: number of Kronrod points (should be 2n+1)
thePoints: output vector for points
theWeights: output vector for weights
Returns
true if successful

◆ GetOrderedGaussPointsAndWeights()

bool MathUtils::GetOrderedGaussPointsAndWeights ( int theNbGauss,
math_Vector & thePoints,
math_Vector & theWeights )

Get ordered Gauss points and weights.

Parameters
theNbGauss: number of Gauss points
thePoints: output vector for points
theWeights: output vector for weights
Returns
true if successful

◆ InfinityNorm()

double MathUtils::InfinityNorm ( const math_Vector & theVector)
inline

Compute infinity norm of a vector.

Parameters
theVector: input vector
Returns
max|v_i|

◆ IsConverged()

bool MathUtils::IsConverged ( double theXOld,
double theXNew,
double theFValue,
const Config & theConfig )
inline

Combined convergence test for scalar root finders. Checks both X convergence and function value convergence.

Parameters
theXOld: previous X value
theXNew: current X value
theFValue: function value at theXNew
theConfig: solver configuration
Returns
true if either criterion is satisfied

◆ IsEqual()

bool MathUtils::IsEqual ( double theA,
double theB,
double theTolerance = THE_ZERO_TOL )
inline

Check if two values are approximately equal.

Parameters
theA: first value
theB: second value
theTolerance: relative tolerance
Returns
true if values are approximately equal

◆ IsFConverged()

bool MathUtils::IsFConverged ( double theFValue,
double theTolerance )
inline

Check convergence based on absolute function value.

Parameters
theFValue: function value f(x)
theTolerance: absolute tolerance
Returns
true if |f(x)| < tolerance

◆ IsFinite()

bool MathUtils::IsFinite ( double theValue)
inline

Check if value is finite (not NaN or Inf).

Parameters
theValue: value to check
Returns
true if finite

◆ IsGradientConverged()

bool MathUtils::IsGradientConverged ( const math_Vector & theGradient,
double theTolerance )
inline

Convergence test using gradient norm for minimization.

Parameters
theGradient: gradient vector
theTolerance: tolerance for gradient norm
Returns
true if ||gradient|| < tolerance

◆ IsMinConverged()

bool MathUtils::IsMinConverged ( double theXOld,
double theXNew,
double theFOld,
double theFNew,
const Config & theConfig )
inline

Convergence test for minimization (checks both X and F change).

Parameters
theXOld: previous X value
theXNew: current X value
theFOld: previous function value
theFNew: current function value
theConfig: solver configuration
Returns
true if converged

◆ IsVectorConverged()

bool MathUtils::IsVectorConverged ( const math_Vector & theOld,
const math_Vector & theNew,
double theTolerance )
inline

Convergence test for vector solvers using infinity norm.

Parameters
theOld: previous solution vector
theNew: current solution vector
theTolerance: relative tolerance
Returns
true if max|new_i - old_i| / max(1, |new_i|) < tolerance

◆ IsXConverged()

bool MathUtils::IsXConverged ( double theXOld,
double theXNew,
double theTolerance )
inline

Check convergence based on relative change in X. Uses relative tolerance scaled by current value magnitude.

Parameters
theXOld: previous X value
theXNew: current X value
theTolerance: relative tolerance
Returns
true if converged
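A common way to implement such a magnitude-scaled test, sketched here as a guess at the documented behavior rather than the actual source:

```cpp
#include <algorithm>
#include <cmath>

// Relative X-convergence test: behaves absolutely near zero and
// relatively for large |x| (sketch; the exact scaling may differ).
bool IsXConvergedSketch(double theXOld, double theXNew, double theTolerance)
{
  const double aScale = std::max(1.0, std::fabs(theXNew));
  return std::fabs(theXNew - theXOld) < theTolerance * aScale;
}
```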

◆ IsZero()

bool MathUtils::IsZero ( double theValue,
double theTolerance = THE_ZERO_TOL )
inline

Check if value is effectively zero.

Parameters
theValue: value to check
theTolerance: tolerance for zero comparison
Returns
true if |theValue| < theTolerance

◆ MakeComposite()

template<typename Outer , typename Inner >
Composite< Outer, Inner > MathUtils::MakeComposite ( Outer theOuter,
Inner theInner )

Helper function to create Composite with type deduction.

Template Parameters
Outer: outer function type
Inner: inner function type
Parameters
theOuter: outer function f
theInner: inner function g
Returns
Composite functor representing f(g(x))

◆ MakeDifference()

template<typename F , typename G >
Difference< F, G > MathUtils::MakeDifference ( F theF,
G theG )

Helper function to create Difference with type deduction.

◆ MakeNegated()

template<typename F >
Negated< F > MathUtils::MakeNegated ( F theF)

Helper function to create Negated with type deduction.

◆ MakeProduct()

template<typename F , typename G >
Product< F, G > MathUtils::MakeProduct ( F theF,
G theG )

Helper function to create Product with type deduction.

◆ MakeQuotient()

template<typename F , typename G >
Quotient< F, G > MathUtils::MakeQuotient ( F theF,
G theG )

Helper function to create Quotient with type deduction.

◆ MakeScalar()

template<typename Lambda >
ScalarLambda< Lambda > MathUtils::MakeScalar ( Lambda theLambda)

Helper function to create ScalarLambda with type deduction.

Template Parameters
Lambda: callable type (auto-deduced)
Parameters
theLambda: callable with signature bool(double, double&)
Returns
ScalarLambda wrapper

◆ MakeScalarWithDerivative()

template<typename Lambda >
ScalarLambdaWithDerivative< Lambda > MathUtils::MakeScalarWithDerivative ( Lambda theLambda)

Helper function to create ScalarLambdaWithDerivative with type deduction.

Template Parameters
Lambda: callable type (auto-deduced)
Parameters
theLambda: callable with signature bool(double, double&, double&)
Returns
ScalarLambdaWithDerivative wrapper

◆ MakeScaled()

template<typename F >
Scaled< F > MathUtils::MakeScaled ( F theF,
double theScale )

Helper function to create Scaled with type deduction.

◆ MakeShifted()

template<typename F >
Shifted< F > MathUtils::MakeShifted ( F theF,
double theShift )

Helper function to create Shifted with type deduction.

◆ MakeSum()

template<typename F , typename G >
Sum< F, G > MathUtils::MakeSum ( F theF,
G theG )

Helper function to create Sum with type deduction.

◆ MakeSystem()

template<typename Lambda >
SystemLambda< Lambda > MathUtils::MakeSystem ( Lambda theLambda,
int theNbEquations )

Helper function to create SystemLambda with type deduction.

Template Parameters
Lambda: callable type
Parameters
theLambda: callable for system evaluation
theNbEquations: number of equations
Returns
SystemLambda wrapper

◆ MakeVector()

template<typename Lambda >
VectorLambda< Lambda > MathUtils::MakeVector ( Lambda theLambda)

Helper function to create VectorLambda with type deduction.

Template Parameters
Lambda: callable type (auto-deduced)
Parameters
theLambda: callable with signature bool(const math_Vector&, double&)
Returns
VectorLambda wrapper

◆ MakeVectorWithGradient()

template<typename ValueLambda , typename GradLambda >
VectorLambdaWithGradient< ValueLambda, GradLambda > MathUtils::MakeVectorWithGradient ( ValueLambda theValueLambda,
GradLambda theGradLambda )

Helper function to create VectorLambdaWithGradient with type deduction.

Template Parameters
ValueLambda: callable for value
GradLambda: callable for gradient
Parameters
theValueLambda: callable for value evaluation
theGradLambda: callable for gradient evaluation
Returns
VectorLambdaWithGradient wrapper

◆ NumericalGradient()

template<typename Function >
bool MathUtils::NumericalGradient ( Function & theFunc,
math_Vector & theX,
math_Vector & theGrad,
double theStep = 1.0e-8 )

Numerical gradient using central differences for N-D functions. df/dx[i] ~= (f(x + h[i]*e[i]) - f(x - h[i]*e[i])) / (2*h[i])

Template Parameters
Function: type with Value(const math_Vector&, double&) method
Parameters
theFunc: function to differentiate
theX: point at which to evaluate gradient (temporarily modified)
theGrad: output gradient vector (same dimension as theX)
theStep: step size (default 1e-8)
Returns
true if successful
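The central-difference loop can be sketched with std::vector standing in for math_Vector (the names and the std::function signature are assumptions):

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Central-difference gradient sketch: perturb each coordinate both
// ways, restoring it afterwards (mirrors the "temporarily modified"
// note in the documentation).
bool NumericalGradientSketch(
    const std::function<double(const std::vector<double>&)>& theFunc,
    std::vector<double>& theX, std::vector<double>& theGrad,
    double theStep = 1.0e-8)
{
  theGrad.assign(theX.size(), 0.0);
  for (std::size_t i = 0; i < theX.size(); ++i)
  {
    const double aXi = theX[i];
    theX[i] = aXi + theStep;
    const double aFp = theFunc(theX);
    theX[i] = aXi - theStep;
    const double aFm = theFunc(theX);
    theX[i] = aXi; // restore the coordinate
    if (!std::isfinite(aFp) || !std::isfinite(aFm))
      return false;
    theGrad[i] = (aFp - aFm) / (2.0 * theStep);
  }
  return true;
}
```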

◆ NumericalGradientAdaptive()

template<typename Function >
bool MathUtils::NumericalGradientAdaptive ( Function & theFunc,
math_Vector & theX,
math_Vector & theGrad,
double theRelStep = 1.0e-8 )

Numerical gradient with adaptive step size. Uses step proportional to |x[i]| for better conditioning.

Template Parameters
Function: type with Value(const math_Vector&, double&) method
Parameters
theFunc: function to differentiate
theX: point at which to evaluate gradient
theGrad: output gradient vector
theRelStep: relative step size (default 1e-8)
Returns
true if successful

◆ NumericalHessian()

template<typename Function >
bool MathUtils::NumericalHessian ( Function & theFunc,
math_Vector & theX,
math_Matrix & theHess,
double theStep = 1.0e-5 )

Numerical Hessian matrix using finite differences. H[i,j] = d^2f/dx[i]dx[j] Uses central differences on gradient.

Template Parameters
Function: type with Value(const math_Vector&, double&) method
Parameters
theFunc: scalar function f: R^n -> R
theX: point at which to evaluate Hessian (n-dimensional)
theHess: output Hessian matrix (n x n, symmetric)
theStep: step size (default 1e-5, larger than gradient step)
Returns
true if successful

◆ NumericalJacobian()

template<typename Function >
bool MathUtils::NumericalJacobian ( Function & theFunc,
math_Vector & theX,
math_Matrix & theJac,
double theStep = 1.0e-8 )

Numerical Jacobian matrix for vector-valued functions. J[i,j] = dF[i]/dx[j] ~= (F[i](x + h[j]*e[j]) - F[i](x - h[j]*e[j])) / (2*h[j])

Template Parameters
Function: type with Value(const math_Vector& x, math_Vector& F) method
Parameters
theFunc: vector-valued function F: R^n -> R^m
theX: point at which to evaluate Jacobian (n-dimensional)
theJac: output Jacobian matrix (m x n)
theStep: step size (default 1e-8)
Returns
true if successful

◆ QuadraticInterpolation()

double MathUtils::QuadraticInterpolation ( double thePhi0,
double thePhi0Prime,
double theAlpha1,
double thePhi1 )
inline

Quadratic interpolation step for line search. Given phi(0), phi'(0), and phi(alpha1), finds minimum of quadratic fit.

Parameters
thePhi0: function value at 0
thePhi0Prime: directional derivative at 0
theAlpha1: current step size
thePhi1: function value at alpha1
Returns
interpolated step size

◆ RefinePolyRoot()

double MathUtils::RefinePolyRoot ( const double * theCoeffs,
int theDegree,
double theRoot,
int theMaxIter = 5 )
inline

Newton-Raphson refinement for polynomial root. Polishes an approximate root to higher precision.

Parameters
theCoeffs: coefficient array (constant term first)
theDegree: polynomial degree
theRoot: initial root estimate
theMaxIter: maximum refinement iterations
Returns
refined root value

◆ RefinePolyRootDesc()

double MathUtils::RefinePolyRootDesc ( const double * theCoeffs,
int theDegree,
double theRoot,
int theMaxIter = 5 )
inline

Newton-Raphson refinement with coefficients in descending order.

Parameters
theCoeffs: coefficient array (leading term first)
theDegree: polynomial degree
theRoot: initial root estimate
theMaxIter: maximum refinement iterations
Returns
refined root value

◆ RemoveDuplicateRoots()

size_t MathUtils::RemoveDuplicateRoots ( double * theRoots,
size_t theCount,
double theTolerance = 1.0e-10 )
inline

Remove duplicate roots within tolerance.

Parameters
theRoots: array of sorted roots
theCount: current number of roots
theTolerance: tolerance for duplicate detection
Returns
new count after removing duplicates
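On a sorted array the dedup reduces to a single compaction pass; a sketch of the documented behavior:

```cpp
#include <cmath>
#include <cstddef>

// In-place duplicate removal on a sorted root array; returns the
// number of roots kept.
std::size_t RemoveDuplicateRootsSketch(double* theRoots, std::size_t theCount,
                                       double theTolerance = 1.0e-10)
{
  if (theCount == 0)
    return 0;
  std::size_t aKept = 1; // the first root is always kept
  for (std::size_t i = 1; i < theCount; ++i)
  {
    // Keep a root only if it differs from the last kept one.
    if (std::fabs(theRoots[i] - theRoots[aKept - 1]) > theTolerance)
      theRoots[aKept++] = theRoots[i];
  }
  return aKept;
}
```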

◆ SafeDiv()

double MathUtils::SafeDiv ( double theNumerator,
double theDenominator,
double theDefault = 0.0 )
inline

Safe division avoiding division by zero.

Parameters
theNumerator: numerator
theDenominator: denominator
theDefault: default value if denominator is zero
Returns
theNumerator / theDenominator or theDefault
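A guarded division along these lines; the 1e-15 cutoff mirrors THE_ZERO_TOL, but treating it as the actual implementation detail is an assumption:

```cpp
#include <cmath>

// Division that falls back to a caller-supplied default for a
// (near-)zero denominator.
double SafeDivSketch(double theNumerator, double theDenominator,
                     double theDefault = 0.0)
{
  if (std::fabs(theDenominator) < 1.0e-15) // treat tiny denominators as zero
    return theDefault;
  return theNumerator / theDenominator;
}
```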

◆ SecondDerivative()

template<typename Function >
bool MathUtils::SecondDerivative ( Function & theFunc,
double theX,
double theFx,
double & theD2f,
double theStep = 1.0e-5 )

Second derivative using central difference. f''(x) ~= (f(x+h) - 2f(x) + f(x-h)) / h^2.

Template Parameters
Function: type with Value(double theX, double& theF) method
Parameters
theFunc: function to differentiate
theX: point at which to evaluate second derivative
theFx: function value at theX (if already computed)
theD2f: computed second derivative value
theStep: step size (default 1e-5)
Returns
true if successful
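The three-point stencil can be sketched as a hypothetical wrapper, with a function pointer in place of the Function template parameter:

```cpp
#include <cmath>

// Central second difference: f''(x) ~= (f(x+h) - 2f(x) + f(x-h)) / h^2.
bool SecondDerivativeSketch(double (*theFunc)(double), double theX,
                            double theFx, double& theD2f,
                            double theStep = 1.0e-5)
{
  const double aFp = theFunc(theX + theStep);
  const double aFm = theFunc(theX - theStep);
  if (!std::isfinite(aFp) || !std::isfinite(aFm))
    return false;                   // reject NaN/Inf evaluations
  theD2f = (aFp - 2.0 * theFx + aFm) / (theStep * theStep);
  return true;
}
```

The larger default step (1e-5 versus 1e-8 for first derivatives) reflects the h^2 in the denominator, which amplifies rounding error for small h.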

◆ Sign()

int MathUtils::Sign ( double theValue)
inline

Sign function.

Parameters
theValue: input value
Returns
-1 if negative, 0 if zero, +1 if positive

◆ SignTransfer()

double MathUtils::SignTransfer ( double theA,
double theB )
inline

Sign transfer function: returns |theA| with sign of theB. Equivalent to copysign but avoids edge cases with zero.

Parameters
theA: value whose magnitude is used
theB: value whose sign is used
Returns
|theA| * sign(theB)
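A minimal version of the idiom; mapping theB == 0 to +|theA| is a common convention, and the library's actual choice for zero is an assumption here:

```cpp
#include <cmath>

// Sign transfer: |a| carrying the sign of b, with b == 0 treated
// as positive.
double SignTransferSketch(double theA, double theB)
{
  return theB >= 0.0 ? std::fabs(theA) : -std::fabs(theA);
}
```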

◆ SortRoots()

void MathUtils::SortRoots ( double * theRoots,
size_t theCount )
inline

Sort roots in ascending order (simple insertion sort for small arrays).

Parameters
theRoots: array of roots
theCount: number of roots

◆ Sqr()

constexpr double MathUtils::Sqr ( double theValue)
inline constexpr

Square of a value.

Parameters
theValue: input value
Returns
theValue * theValue

◆ VectorInfNorm()

double MathUtils::VectorInfNorm ( const math_Vector & theVec)
inline

Compute infinity norm (maximum absolute value) of a vector.

Parameters
theVec: input vector
Returns
max(|V[i]|)

◆ VectorNorm()

double MathUtils::VectorNorm ( const math_Vector & theVec)
inline

Compute Euclidean norm of a vector.

Parameters
theVec: input vector
Returns
sqrt(sum(V[i]^2))
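Both vector norms are one-pass reductions over the components; a sketch with std::vector<double> standing in for math_Vector:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Euclidean norm: sqrt of the sum of squares.
double VectorNormSketch(const std::vector<double>& theVec)
{
  double aSumSq = 0.0;
  for (double aV : theVec)
    aSumSq += aV * aV;
  return std::sqrt(aSumSq);
}

// Infinity norm: maximum absolute component.
double VectorInfNormSketch(const std::vector<double>& theVec)
{
  double aMax = 0.0;
  for (double aV : theVec)
    aMax = std::max(aMax, std::fabs(aV));
  return aMax;
}
```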

◆ WolfeSearch()

template<typename Function >
LineSearchResult MathUtils::WolfeSearch ( Function & theFunc,
const math_Vector & theX,
const math_Vector & theDir,
const math_Vector & theGrad,
double theFx,
double theAlphaInit = 1.0,
double theC1 = 1.0e-4,
double theC2 = 0.9,
int theMaxIter = 20 )

Strong Wolfe line search. Finds alpha satisfying both:

  1. Armijo: f(x + alpha*d) <= f(x) + c1*alpha*grad(f).d
  2. Curvature: |grad(f)(x + alpha*d).d| <= c2*|grad(f).d|

Uses bracketing and zoom procedure for guaranteed convergence.

Template Parameters
Function: type with:
  • Value(const math_Vector&, double&) for function value
  • Gradient(const math_Vector&, math_Vector&) for gradient
Parameters
theFunc: objective function with gradient
theX: current point
theDir: search direction
theGrad: gradient at current point
theFx: function value at current point
theAlphaInit: initial step size
theC1: Armijo parameter (default 1e-4)
theC2: curvature parameter (default 0.9)
theMaxIter: maximum iterations
Returns
line search result
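The full strong-Wolfe search (bracketing plus zoom) is too long to reproduce here, but the Armijo half of the test can be sketched in 1-D along the search direction, with phi(alpha) = f(x + alpha*d); everything below is illustrative, not the library implementation:

```cpp
#include <functional>

// Armijo backtracking: halve the step until sufficient decrease
// holds. thePhi0Prime is the directional derivative phi'(0), which
// must be negative for a descent direction.
double ArmijoBacktrackSketch(const std::function<double(double)>& thePhi,
                             double thePhi0, double thePhi0Prime,
                             double theAlphaInit = 1.0,
                             double theC1 = 1.0e-4, int theMaxIter = 20)
{
  double anAlpha = theAlphaInit;
  for (int anIter = 0; anIter < theMaxIter; ++anIter)
  {
    // Armijo condition: phi(alpha) <= phi(0) + c1 * alpha * phi'(0).
    if (thePhi(anAlpha) <= thePhi0 + theC1 * anAlpha * thePhi0Prime)
      return anAlpha;
    anAlpha *= 0.5; // halve the step and retry
  }
  return anAlpha;
}
```

WolfeSearch additionally enforces the curvature condition, which rules out the overly short steps that plain backtracking can accept.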

Variable Documentation

◆ THE_2PI

constexpr double MathUtils::THE_2PI = 6.28318530717958647692
inline constexpr

Two Pi constant.

◆ THE_ARMIJO_C1

constexpr double MathUtils::THE_ARMIJO_C1 = 1.0e-4
constexpr

Armijo condition constant (sufficient decrease). Step accepted if: f(x + alpha*d) <= f(x) + c1 * alpha * grad_f . d.

◆ THE_EPSILON

constexpr double MathUtils::THE_EPSILON = std::numeric_limits<double>::epsilon()
inline constexpr

Machine epsilon for double precision.

◆ THE_GOLDEN_RATIO

constexpr double MathUtils::THE_GOLDEN_RATIO = 1.618033988749895
inline constexpr

Golden ratio for optimization algorithms.

◆ THE_GOLDEN_SECTION

constexpr double MathUtils::THE_GOLDEN_SECTION = 0.381966011250105
inline constexpr

Golden section fraction: 1 - 1/phi = 2 - phi ≈ 0.381966.

◆ THE_HESSIAN_DEGENERACY_ABS

constexpr double MathUtils::THE_HESSIAN_DEGENERACY_ABS = 1.0e-20
constexpr

Absolute tolerance for Hessian degeneracy detection. Minimum threshold for determinant comparison.

◆ THE_HESSIAN_DEGENERACY_REL

constexpr double MathUtils::THE_HESSIAN_DEGENERACY_REL = 1.0e-8
constexpr

Relative tolerance for Hessian degeneracy detection. If |det(H)| < threshold * |H_ii| * |H_jj|, Hessian is considered degenerate.

◆ THE_NEWTON2D_BACKTRACK_ACCEPT

constexpr double MathUtils::THE_NEWTON2D_BACKTRACK_ACCEPT = 1.2
constexpr

Backtracking acceptance ratio for line search. Step accepted if f(new) < f(old) * ratio.

◆ THE_NEWTON2D_BACKTRACK_TRIGGER

constexpr double MathUtils::THE_NEWTON2D_BACKTRACK_TRIGGER = 1.5
constexpr

Function increase threshold triggering line search backtracking. If f(new) > f(old) * threshold, backtracking is triggered.

◆ THE_NEWTON2D_CRITICAL_GRAD_SQ

constexpr double MathUtils::THE_NEWTON2D_CRITICAL_GRAD_SQ = 1.0e-60
constexpr

Squared gradient threshold below which point is considered a critical point. When |grad|^2 < threshold, Newton iteration is at a stationary point.

◆ THE_NEWTON2D_DOMAIN_EXT

constexpr double MathUtils::THE_NEWTON2D_DOMAIN_EXT = 1.0e-4
constexpr

Domain extension factor for soft boundary handling. Solutions slightly outside domain (by this fraction) are allowed if converged. Reduced to 1e-4 to closely match old algorithm behavior.

◆ THE_NEWTON2D_LINE_SEARCH_MAX

constexpr int MathUtils::THE_NEWTON2D_LINE_SEARCH_MAX = 8
constexpr

Maximum line search backtracking iterations.

◆ THE_NEWTON2D_MAX_STEP_RATIO

constexpr double MathUtils::THE_NEWTON2D_MAX_STEP_RATIO = 0.5
constexpr

Maximum step size as fraction of domain size for bounded Newton iterations.

◆ THE_NEWTON2D_MAXITER_RELAX

constexpr double MathUtils::THE_NEWTON2D_MAXITER_RELAX = 10.0
constexpr

Maximum relaxation factor for solutions at max iterations.

◆ THE_NEWTON2D_SINGULAR_DET

constexpr double MathUtils::THE_NEWTON2D_SINGULAR_DET = 1.0e-25
constexpr

Determinant threshold below which 2x2 Jacobian matrix is considered singular. When |det(J)| < threshold, falls back to SVD or gradient descent.

◆ THE_NEWTON2D_SKIP_LINESEARCH_SQ

constexpr double MathUtils::THE_NEWTON2D_SKIP_LINESEARCH_SQ = 0.01
constexpr

Relative step threshold below which line search is skipped. If step^2 / domain^2 < threshold, Newton is converging well.

◆ THE_NEWTON2D_STAGNATION_COUNT

constexpr int MathUtils::THE_NEWTON2D_STAGNATION_COUNT = 3
constexpr

Number of iterations without progress before declaring stagnation.

◆ THE_NEWTON2D_STAGNATION_RATIO

constexpr double MathUtils::THE_NEWTON2D_STAGNATION_RATIO = 0.999
constexpr

Stagnation progress ratio - improvement required each iteration to avoid stagnation. Value of 0.999 means at least 0.1% improvement required.

◆ THE_NEWTON2D_STAGNATION_RELAX

constexpr double MathUtils::THE_NEWTON2D_STAGNATION_RELAX = 10.0
constexpr

Tolerance relaxation factor for accepting stagnated solutions. Stagnated solution accepted if |F| < tolerance * factor.

◆ THE_NEWTON_FTOL_SQ

constexpr double MathUtils::THE_NEWTON_FTOL_SQ = 1.0e-32
constexpr

Squared function tolerance for Newton iterations (matching math_FunctionSetRoot Eps).

◆ THE_NEWTON_MAX_ITER

constexpr size_t MathUtils::THE_NEWTON_MAX_ITER = 100
constexpr

Default maximum iterations for Newton solvers.

◆ THE_NEWTON_STEP_TOL_FACTOR

constexpr double MathUtils::THE_NEWTON_STEP_TOL_FACTOR = 1.0e-16
constexpr

Relative parametric step tolerance factor for Newton iterations.

◆ THE_PI

constexpr double MathUtils::THE_PI = 3.14159265358979323846
inline constexpr

Pi constant.

◆ THE_ZERO_TOL

constexpr double MathUtils::THE_ZERO_TOL = 1.0e-15
inline constexpr

Small value for zero comparisons (more practical than epsilon).