Open CASCADE Technology Reference Manual 8.0.0
Modern math solver types and result structures. More...
Namespaces | |
| namespace | detail |
Data Structures | |
| class | Ackley |
| Ackley function functor (for testing global optimization). f(x) = -a*exp(-b*sqrt(sum(x[i]^2)/n)) - exp(sum(cos(c*x[i]))/n) + a + e Default: a = 20, b = 0.2, c = 2*pi Global minimum at origin with f = 0. More... | |
| class | Beale |
| Beale function functor (for testing optimization). f(x,y) = (1.5 - x + xy)^2 + (2.25 - x + xy^2)^2 + (2.625 - x + xy^3)^2 Global minimum at (3, 0.5) with f = 0. More... | |
| class | Booth |
| Booth function functor (for testing optimization). f(x,y) = (x + 2y - 7)^2 + (2x + y - 5)^2 Global minimum at (1, 3) with f = 0. More... | |
| struct | BoundedConfig |
| Configuration for bounded 1D optimization and root finding. Extends Config with interval bounds. More... | |
| struct | BracketResult |
| Result of root bracketing operation. More... | |
| class | Composite |
| Composite functor: f(g(x)). Evaluates the outer function at the result of the inner function. More... | |
| struct | Config |
| Configuration for iterative solvers. Provides common settings for convergence criteria and iteration limits. More... | |
| class | Constant |
| Constant function functor: f(x) = c. More... | |
| class | Cosine |
| Cosine function functor: f(x) = a * cos(b*x + c) + d. More... | |
| struct | DecompResult |
| Result for matrix decomposition (LU, SVD, QR). Structure depends on decomposition type. More... | |
| class | Difference |
| Difference of functions functor: f(x) - g(x). More... | |
| struct | Domain1D |
| 1D parameter domain for curves. More... | |
| struct | Domain2D |
| 2D parameter domain for surfaces. More... | |
| struct | EigenResult |
| Result for eigenvalue/eigenvector computation. Contains eigenvalues and optionally eigenvectors. More... | |
| class | Exponential |
| Exponential function functor: f(x) = a * exp(b*x) + c. More... | |
| class | Gaussian |
| Gaussian function functor: f(x) = a * exp(-((x-mu)^2)/(2*sigma^2)). More... | |
| class | Himmelblau |
| Himmelblau function functor (for testing optimization). f(x,y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2 Has four local minima, all with f = 0: (3.0, 2.0), (-2.805118, 3.131312), (-3.779310, -3.283186), (3.584428, -1.848126) More... | |
| struct | IntegConfig |
| Configuration for numerical integration. Provides settings for quadrature order and adaptive refinement. More... | |
| struct | IntegResult |
| Result for numerical integration. Contains integral value and error estimates. More... | |
| struct | InverseResult |
| Result for matrix inverse computation. Contains the inverse matrix if computation succeeded. More... | |
| struct | LinConfig |
| Configuration for linear algebra solvers. Provides settings for singularity detection and pivoting. More... | |
| class | Linear |
| Linear function functor: f(x) = a*x + b. More... | |
| struct | LinearMultipleResult |
| Result for multiple linear systems solving (AX = B with matrix RHS). Contains the full solution matrix and determinant if computed. More... | |
| class | LinearResidual |
| Linear system residual functor: f(x) = ||Ax - b||^2. Useful for solving overdetermined linear systems via optimization. More... | |
| struct | LinearResult |
| Result for linear system solving (Ax = b). Contains the solution vector and matrix determinant if computed. More... | |
| struct | LineSearchResult |
| Result of line search operation. More... | |
| struct | MinBracketOptions |
| Options for minimum bracketing. More... | |
| struct | MinBracketResult |
| Result of minimum bracketing operation. More... | |
| struct | NDimConfig |
| Configuration for N-dimensional optimization with optional bounds. Bounds are passed separately as math_Vector for flexibility. More... | |
| class | Negated |
| Negated function functor: -f(x). More... | |
| class | Polynomial |
| Polynomial functor: f(x) = sum(a[i] * x^i). Coefficients are stored in order: a[0] + a[1]*x + a[2]*x^2 + ... More... | |
| struct | PolyResult |
| Result for polynomial root finding. Supports up to 4 real roots (for quartic equations). More... | |
| class | Power |
| Power function functor: f(x) = a * x^n + b. More... | |
| class | Product |
| Product of functions functor: f(x) * g(x). More... | |
| class | QuadraticForm |
| Quadratic form functor: f(x) = x^T A x + b^T x + c. Commonly used for testing optimization algorithms. More... | |
| class | Quotient |
| Quotient of functions functor: f(x) / g(x). More... | |
| class | RandomGenerator |
| High-quality pseudo-random number generator based on xoshiro256**. More... | |
| class | Rastrigin |
| Rastrigin function functor (for testing global optimization). f(x) = A*n + sum(x[i]^2 - A*cos(2*pi*x[i])) for all i Default: A = 10 Global minimum at origin with f = 0. Highly multimodal - challenging for local optimizers. More... | |
| class | Rational |
| Rational function functor: f(x) = P(x) / Q(x). Both numerator P and denominator Q are polynomials. More... | |
| class | Rosenbrock |
| Rosenbrock function functor (for testing optimization). f(x,y) = (a - x)^2 + b*(y - x^2)^2 Default: a = 1, b = 100 Global minimum at (a, a^2) = (1, 1) with f = 0. More... | |
| class | ScalarLambda |
| Lambda wrapper for scalar functions with value only. Wraps a lambda/callable into a functor with Value() method. More... | |
| class | ScalarLambdaWithDerivative |
| Lambda wrapper for scalar functions with value and derivative. Wraps a lambda/callable into a functor with Values() method. More... | |
| struct | ScalarResult |
| Result for scalar (1D) root finding and minimization. Contains the found root/minimum location and diagnostic information. More... | |
| class | Scaled |
| Scaled function functor: c * f(x). More... | |
| class | Shifted |
| Shifted function functor: f(x) + c. More... | |
| class | Sine |
| Sine function functor: f(x) = a * sin(b*x + c) + d. More... | |
| class | Sphere |
| Sphere function functor (for testing optimization). f(x) = sum(x[i]^2) for all i. Global minimum at origin with f = 0. More... | |
| class | Sum |
| Sum of functions functor: f(x) + g(x). More... | |
| class | SystemLambda |
| Nonlinear system functor: F(x) = [f1(x), f2(x), ..., fn(x)]. Lambda wrapper for systems of nonlinear equations. More... | |
| class | VectorLambda |
| Lambda wrapper for N-D objective functions (value only). Wraps a lambda/callable into a functor with Value() method. More... | |
| class | VectorLambdaWithGradient |
| Lambda wrapper for N-D objective functions with gradient. Wraps a lambda/callable into a functor with Value() and Gradient() methods. More... | |
| struct | VectorResult |
| Result for N-dimensional optimization and system solving. Contains the solution vector and optional gradient/Jacobian information. More... | |
Enumerations | |
| enum class | Status { OK , NotConverged , MaxIterations , NumericalError , InvalidInput , InfiniteSolutions , NoSolution , NotPositiveDefinite , Singular , NonDescentDirection } |
| Computation status for all math solvers. Provides detailed information about solver outcome. More... | |
Functions | |
| constexpr double | Clamp (double theValue, double theLower, double theUpper) |
| Clamp value to range [theLower, theUpper]. | |
| bool | IsZero (double theValue, double theTolerance=THE_ZERO_TOL) |
| Check if value is effectively zero. | |
| bool | IsEqual (double theA, double theB, double theTolerance=THE_ZERO_TOL) |
| Check if two values are approximately equal. | |
| double | SafeDiv (double theNumerator, double theDenominator, double theDefault=0.0) |
| Safe division avoiding division by zero. | |
| int | Sign (double theValue) |
| Sign function. | |
| double | SignTransfer (double theA, double theB) |
| Sign transfer function: returns |theA| with sign of theB. Equivalent to copysign but avoids edge cases with zero. | |
| constexpr double | Sqr (double theValue) |
| Square of a value. | |
| constexpr double | Cube (double theValue) |
| Cube of a value. | |
| double | CubeRoot (double theValue) |
| Cube root with proper sign handling. Unlike std::cbrt, this handles negative values correctly on all platforms. | |
| bool | IsFinite (double theValue) |
| Check if value is finite (not NaN or Inf). | |
| double | ComputeScaleFactor (const double *theCoeffs, int theCount) |
| Compute scaling factor for coefficient normalization. Used to improve numerical stability by scaling coefficients. | |
| double | DotProduct (const math_Vector &theA, const math_Vector &theB) |
| Compute dot product of two vectors. | |
| double | VectorNorm (const math_Vector &theVec) |
| Compute Euclidean norm of a vector. | |
| double | VectorInfNorm (const math_Vector &theVec) |
| Compute infinity norm (maximum absolute value) of a vector. | |
| bool | IsXConverged (double theXOld, double theXNew, double theTolerance) |
| Check convergence based on relative change in X. Uses relative tolerance scaled by current value magnitude. | |
| bool | IsFConverged (double theFValue, double theTolerance) |
| Check convergence based on absolute function value. | |
| bool | IsConverged (double theXOld, double theXNew, double theFValue, const Config &theConfig) |
| Combined convergence test for scalar root finders. Checks both X convergence and function value convergence. | |
| bool | IsMinConverged (double theXOld, double theXNew, double theFOld, double theFNew, const Config &theConfig) |
| Convergence test for minimization (checks both X and F change). | |
| bool | IsVectorConverged (const math_Vector &theOld, const math_Vector &theNew, double theTolerance) |
| Convergence test for vector solvers using infinity norm. | |
| bool | IsGradientConverged (const math_Vector &theGradient, double theTolerance) |
| Convergence test using gradient norm for minimization. | |
| double | InfinityNorm (const math_Vector &theVector) |
| Compute infinity norm of a vector. | |
| double | EuclideanNorm (const math_Vector &theVector) |
| Compute Euclidean (L2) norm of a vector. | |
| double | EvalPoly (const double *theCoeffs, int theDegree, double theX) |
| Evaluate polynomial using Horner's method. Coefficients are ordered as [a0, a1, a2, ...] for P(x) = a0 + a1*x + a2*x^2 + ... | |
| void | EvalPolyDeriv (const double *theCoeffs, int theDegree, double theX, double &theValue, double &theDeriv) |
| Evaluate polynomial and its derivative using Horner's method. Computes both P(x) and P'(x) efficiently in a single pass. | |
| double | EvalPolyDesc (const double *theCoeffs, int theDegree, double theX) |
| Evaluate polynomial with coefficients in descending order. Coefficients are ordered as [an, a(n-1), ..., a1, a0] for P(x) = an*x^n + ... + a0. | |
| double | RefinePolyRoot (const double *theCoeffs, int theDegree, double theRoot, int theMaxIter=5) |
| Newton-Raphson refinement for polynomial root. Polishes an approximate root to higher precision. | |
| double | RefinePolyRootDesc (const double *theCoeffs, int theDegree, double theRoot, int theMaxIter=5) |
| Newton-Raphson refinement with coefficients in descending order. | |
| void | SortRoots (double *theRoots, size_t theCount) |
| Sort roots in ascending order (simple insertion sort for small arrays). | |
| size_t | RemoveDuplicateRoots (double *theRoots, size_t theCount, double theTolerance=1.0e-10) |
| Remove duplicate roots within tolerance. | |
| void | DepressCubic (double theB, double theC, double theD, double &theP, double &theQ, double &theShift) |
| Compute depressed cubic coefficients. Transforms x^3 + bx^2 + cx + d to t^3 + pt + q via x = t - b/3. | |
| void | DepressQuartic (double theB, double theC, double theD, double theE, double &theP, double &theQ, double &theR, double &theShift) |
| Compute depressed quartic coefficients. Transforms x^4 + bx^3 + cx^2 + dx + e to t^4 + pt^2 + qt + r via x = t - b/4. | |
| template<typename Function > | |
| BracketResult | BracketRoot (Function &theFunc, double theA, double theB, int theMaxIter=50) |
| Bracket a root by expanding interval until sign change is found. Starting from [theA, theB], expands outward using golden ratio. | |
| template<typename Function > | |
| MinBracketResult | BracketMinimum (Function &theFunc, double theA, double theB, const MinBracketOptions &theOptions=MinBracketOptions()) |
| Bracket a minimum by finding three points a < b < c with f(b) < f(a) and f(b) < f(c). Uses golden section expansion with parabolic interpolation. | |
| template<typename Function > | |
| MinBracketResult | BracketMinimum (Function &theFunc, double theA, double theB, int theMaxIter) |
| Backward-compatible convenience overload with only max-iterations argument. | |
| bool | GetGaussPointsAndWeights (int theOrder, math_Vector &thePoints, math_Vector &theWeights) |
| Get ordered Gauss-Legendre points and weights for given order. Points are returned in ascending order on [-1, 1]. | |
| template<typename Function > | |
| bool | CentralDifference (Function &theFunc, double theX, double &theDeriv, double theStep=1.0e-8) |
| Central difference derivative approximation for scalar functions. f'(x) ~= (f(x+h) - f(x-h)) / (2h) Accuracy: O(h^2) | |
| template<typename Function > | |
| bool | ForwardDifference (Function &theFunc, double theX, double theFx, double &theDeriv, double theStep=1.0e-8) |
| Forward difference derivative (one-sided). f'(x) ~= (f(x+h) - f(x)) / h Accuracy: O(h) Useful when central difference is not possible (e.g., at boundaries). | |
| template<typename Function > | |
| bool | NumericalGradient (Function &theFunc, math_Vector &theX, math_Vector &theGrad, double theStep=1.0e-8) |
| Numerical gradient using central differences for N-D functions. df/dx[i] ~= (f(x + h[i]*e[i]) - f(x - h[i]*e[i])) / (2*h[i]) | |
| template<typename Function > | |
| bool | NumericalGradientAdaptive (Function &theFunc, math_Vector &theX, math_Vector &theGrad, double theRelStep=1.0e-8) |
| Numerical gradient with adaptive step size. Uses step proportional to |x[i]| for better conditioning. | |
| template<typename Function > | |
| bool | NumericalJacobian (Function &theFunc, math_Vector &theX, math_Matrix &theJac, double theStep=1.0e-8) |
| Numerical Jacobian matrix for vector-valued functions. J[i,j] = dF[i]/dx[j] ~= (F[i](x + h[j]*e[j]) - F[i](x - h[j]*e[j])) / (2*h[j]) | |
| template<typename Function > | |
| bool | NumericalHessian (Function &theFunc, math_Vector &theX, math_Matrix &theHess, double theStep=1.0e-5) |
| Numerical Hessian matrix using finite differences. H[i,j] = d^2f/dx[i]dx[j] Uses central differences on gradient. | |
| template<typename Function > | |
| bool | SecondDerivative (Function &theFunc, double theX, double theFx, double &theD2f, double theStep=1.0e-5) |
| Second derivative using central difference. f''(x) ~= (f(x+h) - 2f(x) + f(x-h)) / h^2. | |
| template<typename Function > | |
| LineSearchResult | ArmijoBacktrack (Function &theFunc, const math_Vector &theX, const math_Vector &theDir, const math_Vector &theGrad, double theFx, double theAlphaInit=1.0, double theC1=1.0e-4, double theRho=0.5, int theMaxIter=50) |
| Backtracking line search with Armijo condition. Finds alpha such that f(x + alpha*d) <= f(x) + c1*alpha*grad(f).d (sufficient decrease condition) | |
| template<typename Function > | |
| LineSearchResult | WolfeSearch (Function &theFunc, const math_Vector &theX, const math_Vector &theDir, const math_Vector &theGrad, double theFx, double theAlphaInit=1.0, double theC1=1.0e-4, double theC2=0.9, int theMaxIter=20) |
| Strong Wolfe line search. Finds alpha satisfying both the sufficient decrease (Armijo) condition and the strong curvature condition abs(grad f(x + alpha*d) . d) <= c2 * abs(grad f(x) . d). |
| template<typename Function > | |
| LineSearchResult | ExactLineSearch (Function &theFunc, const math_Vector &theX, const math_Vector &theDir, double theAlphaMax=10.0, double theTolerance=1.0e-6, int theMaxIter=100) |
| Exact line search using Brent's method. Minimizes f(x + alpha*d) over alpha in [-alpha_max, alpha_max]. First brackets the minimum by exploring both directions. More expensive than inexact line search but can be more robust. | |
| double | QuadraticInterpolation (double thePhi0, double thePhi0Prime, double theAlpha1, double thePhi1) |
| Quadratic interpolation step for line search. Given phi(0), phi'(0), and phi(alpha1), finds minimum of quadratic fit. | |
| template<typename Function > | |
| bool | BrentAlongCoordinate (Function &theFunc, math_Vector &thePoint, int theDimIdx, double theLoBound, double theUpBound, double &theFx, double theTolerance, int theMaxIter, int &theEvalCount) |
| Brent's method for 1D minimization along a single coordinate axis. Operates in-place on one coordinate of thePoint, avoiding vector allocations. Unlike ExactLineSearch which works with arbitrary direction vectors and allocates internal temporaries, this function modifies only thePoint(theDimIdx) during the search and restores it if no improvement is found. | |
| bool | GetKronrodPointsAndWeights (int theNbKronrod, math_Vector &thePoints, math_Vector &theWeights) |
| Get Gauss-Kronrod points and weights. | |
| bool | GetOrderedGaussPointsAndWeights (int theNbGauss, math_Vector &thePoints, math_Vector &theWeights) |
| Get ordered Gauss points and weights. | |
| template<typename Lambda > | |
| ScalarLambda< Lambda > | MakeScalar (Lambda theLambda) |
| Helper function to create ScalarLambda with type deduction. | |
| template<typename Lambda > | |
| ScalarLambdaWithDerivative< Lambda > | MakeScalarWithDerivative (Lambda theLambda) |
| Helper function to create ScalarLambdaWithDerivative with type deduction. | |
| template<typename Outer , typename Inner > | |
| Composite< Outer, Inner > | MakeComposite (Outer theOuter, Inner theInner) |
| Helper function to create Composite with type deduction. | |
| template<typename F , typename G > | |
| Sum< F, G > | MakeSum (F theF, G theG) |
| Helper function to create Sum with type deduction. | |
| template<typename F , typename G > | |
| Difference< F, G > | MakeDifference (F theF, G theG) |
| Helper function to create Difference with type deduction. | |
| template<typename F , typename G > | |
| Product< F, G > | MakeProduct (F theF, G theG) |
| Helper function to create Product with type deduction. | |
| template<typename F , typename G > | |
| Quotient< F, G > | MakeQuotient (F theF, G theG) |
| Helper function to create Quotient with type deduction. | |
| template<typename F > | |
| Scaled< F > | MakeScaled (F theF, double theScale) |
| Helper function to create Scaled with type deduction. | |
| template<typename F > | |
| Shifted< F > | MakeShifted (F theF, double theShift) |
| Helper function to create Shifted with type deduction. | |
| template<typename F > | |
| Negated< F > | MakeNegated (F theF) |
| Helper function to create Negated with type deduction. | |
| template<typename Lambda > | |
| VectorLambda< Lambda > | MakeVector (Lambda theLambda) |
| Helper function to create VectorLambda with type deduction. | |
| template<typename ValueLambda , typename GradLambda > | |
| VectorLambdaWithGradient< ValueLambda, GradLambda > | MakeVectorWithGradient (ValueLambda theValueLambda, GradLambda theGradLambda) |
| Helper function to create VectorLambdaWithGradient with type deduction. | |
| template<typename Lambda > | |
| SystemLambda< Lambda > | MakeSystem (Lambda theLambda, int theNbEquations) |
| Helper function to create SystemLambda with type deduction. | |
Variables | |
| constexpr double | THE_NEWTON_FTOL_SQ = 1.0e-32 |
| Squared function tolerance for Newton iterations (matching math_FunctionSetRoot Eps). | |
| constexpr double | THE_NEWTON_STEP_TOL_FACTOR = 1.0e-16 |
| Relative parametric step tolerance factor for Newton iterations. | |
| constexpr size_t | THE_NEWTON_MAX_ITER = 100 |
| Default maximum iterations for Newton solvers. | |
| constexpr double | THE_EPSILON = std::numeric_limits<double>::epsilon() |
| Machine epsilon for double precision. | |
| constexpr double | THE_ZERO_TOL = 1.0e-15 |
| Small value for zero comparisons (more practical than epsilon). | |
| constexpr double | THE_PI = 3.14159265358979323846 |
| Pi constant. | |
| constexpr double | THE_2PI = 6.28318530717958647692 |
| Two Pi constant. | |
| constexpr double | THE_GOLDEN_RATIO = 1.618033988749895 |
| Golden ratio for optimization algorithms. | |
| constexpr double | THE_GOLDEN_SECTION = 0.381966011250105 |
| Golden section fraction (1 - 1/phi ~= 0.382), used by golden-section search. More... | |
Newton 2D Solver Constants | |
Constants for 2D Newton-Raphson solver (MathSys_Newton2D). | |
| constexpr double | THE_NEWTON2D_SINGULAR_DET = 1.0e-25 |
| Determinant threshold below which 2x2 Jacobian matrix is considered singular. When |det(J)| < threshold, falls back to SVD or gradient descent. | |
| constexpr double | THE_NEWTON2D_CRITICAL_GRAD_SQ = 1.0e-60 |
| Squared gradient threshold below which point is considered a critical point. When |grad|^2 < threshold, Newton iteration is at a stationary point. | |
| constexpr double | THE_NEWTON2D_MAX_STEP_RATIO = 0.5 |
| Maximum step size as fraction of domain size for bounded Newton iterations. | |
| constexpr double | THE_NEWTON2D_DOMAIN_EXT = 1.0e-4 |
| Domain extension factor for soft boundary handling. Solutions slightly outside domain (by this fraction) are allowed if converged. Reduced to 1e-4 to closely match old algorithm behavior. | |
| constexpr double | THE_NEWTON2D_STAGNATION_RATIO = 0.999 |
| Stagnation progress ratio - improvement required each iteration to avoid stagnation. Value of 0.999 means at least 0.1% improvement required. | |
| constexpr int | THE_NEWTON2D_STAGNATION_COUNT = 3 |
| Number of iterations without progress before declaring stagnation. | |
| constexpr int | THE_NEWTON2D_LINE_SEARCH_MAX = 8 |
| Maximum line search backtracking iterations. | |
| constexpr double | THE_NEWTON2D_SKIP_LINESEARCH_SQ = 0.01 |
| Relative step threshold below which line search is skipped. If step^2 / domain^2 < threshold, Newton is converging well. | |
| constexpr double | THE_NEWTON2D_STAGNATION_RELAX = 10.0 |
| Tolerance relaxation factor for accepting stagnated solutions. Stagnated solution accepted if |F| < tolerance * factor. | |
| constexpr double | THE_NEWTON2D_BACKTRACK_TRIGGER = 1.5 |
| Function increase threshold triggering line search backtracking. If f(new) > f(old) * threshold, backtracking is triggered. | |
| constexpr double | THE_NEWTON2D_BACKTRACK_ACCEPT = 1.2 |
| Backtracking acceptance ratio for line search. Step accepted if f(new) < f(old) * ratio. | |
| constexpr double | THE_NEWTON2D_MAXITER_RELAX = 10.0 |
| Maximum relaxation factor for solutions at max iterations. | |
Hessian Classification Constants | |
Constants for extremum classification using second derivatives. | |
| constexpr double | THE_HESSIAN_DEGENERACY_REL = 1.0e-8 |
| Relative tolerance for Hessian degeneracy detection. If |det(H)| < threshold * |H_ii| * |H_jj|, Hessian is considered degenerate. | |
| constexpr double | THE_HESSIAN_DEGENERACY_ABS = 1.0e-20 |
| Absolute tolerance for Hessian degeneracy detection. Minimum threshold for determinant comparison. | |
Line Search Constants | |
Constants for line search acceptance conditions. | |
| constexpr double | THE_ARMIJO_C1 = 1.0e-4 |
| Armijo condition constant (sufficient decrease). Step accepted if: f(x + alpha*d) <= f(x) + c1 * alpha * grad_f . d. | |
Modern math solver types and result structures.
Core utilities for modern math solvers.
Modern math solver utilities.
Modern math solver configuration structures.
Computation status for all math solvers. Provides detailed information about solver outcome.
| LineSearchResult MathUtils::ArmijoBacktrack | ( | Function & | theFunc, |
| const math_Vector & | theX, | ||
| const math_Vector & | theDir, | ||
| const math_Vector & | theGrad, | ||
| double | theFx, | ||
| double | theAlphaInit = 1.0, | ||
| double | theC1 = 1.0e-4, | ||
| double | theRho = 0.5, | ||
| int | theMaxIter = 50 ) |
Backtracking line search with Armijo condition. Finds alpha such that f(x + alpha*d) <= f(x) + c1*alpha*grad(f).d (sufficient decrease condition)
Algorithm:
| Function | type with Value(const math_Vector&, double&) method |
| theFunc | objective function |
| theX | current point |
| theDir | search direction (should be descent direction) |
| theGrad | gradient at current point |
| theFx | function value at current point |
| theAlphaInit | initial step size (default 1.0) |
| theC1 | Armijo parameter (default 1e-4) |
| theRho | backtracking factor (default 0.5) |
| theMaxIter | maximum iterations (default 50) |
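The backtracking loop can be sketched in standalone C++ as follows. This is an illustration of the Armijo rule only, not the MathUtils API: the function name is hypothetical and std::vector stands in for math_Vector.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Sketch of Armijo backtracking (hypothetical name, not the library API).
// Shrinks alpha by rho until the sufficient decrease condition
// f(x + alpha*d) <= f(x) + c1*alpha*(grad . d) holds.
double armijoBacktrack(const std::function<double(const std::vector<double>&)>& f,
                       const std::vector<double>& x,
                       const std::vector<double>& dir,
                       const std::vector<double>& grad,
                       double fx,
                       double alpha = 1.0, double c1 = 1.0e-4,
                       double rho = 0.5, int maxIter = 50)
{
  // Directional derivative grad . d; negative for a descent direction.
  double slope = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) slope += grad[i] * dir[i];

  for (int it = 0; it < maxIter; ++it)
  {
    std::vector<double> trial(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) trial[i] = x[i] + alpha * dir[i];
    if (f(trial) <= fx + c1 * alpha * slope)
      return alpha;   // sufficient decrease achieved
    alpha *= rho;     // otherwise backtrack
  }
  return alpha;       // best effort after maxIter shrinks
}
```

For a well-scaled quadratic the full Newton step alpha = 1 typically satisfies the condition immediately, which is why theAlphaInit defaults to 1.0.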
| MinBracketResult MathUtils::BracketMinimum | ( | Function & | theFunc, |
| double | theA, | ||
| double | theB, | ||
| const MinBracketOptions & | theOptions = MinBracketOptions() ) |
Bracket a minimum by finding three points a < b < c with f(b) < f(a) and f(b) < f(c). Uses golden section expansion with parabolic interpolation.
| Function | type with Value(double theX, double& theF) method |
| theFunc | function to bracket |
| theA | initial point A |
| theB | initial point B (should be to the right of A in descent direction) |
| theOptions | bracketing options |
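The expansion idea can be sketched without the parabolic-interpolation refinement. This is a minimal standalone illustration (hypothetical name, plain function types instead of the library's functor concept), showing only the golden-ratio growth of the triplet.

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <utility>

// Sketch of golden-section minimum bracketing (hypothetical name).
// Expands from (a, b) until f(b) < f(a) and f(b) < f(c), producing a
// triplet guaranteed to contain a local minimum.
bool bracketMinimum(const std::function<double(double)>& f,
                    double& a, double& b, double& c, int maxIter = 50)
{
  const double gold = 1.618033988749895; // golden-ratio expansion factor
  double fa = f(a), fb = f(b);
  if (fb > fa) { std::swap(a, b); std::swap(fa, fb); } // walk downhill a -> b
  c = b + gold * (b - a);
  double fc = f(c);
  for (int it = 0; it < maxIter && fb >= fc; ++it)
  {
    // Still going downhill: shift the triplet and expand again.
    a = b; fa = fb;
    b = c; fb = fc;
    c = b + gold * (b - a);
    fc = f(c);
  }
  return fb < fa && fb < fc;
}
```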
| MinBracketResult MathUtils::BracketMinimum | ( | Function & | theFunc, |
| double | theA, | ||
| double | theB, | ||
| int | theMaxIter ) |
Backward-compatible convenience overload with only max-iterations argument.
| BracketResult MathUtils::BracketRoot | ( | Function & | theFunc, |
| double | theA, | ||
| double | theB, | ||
| int | theMaxIter = 50 ) |
Bracket a root by expanding interval until sign change is found. Starting from [theA, theB], expands outward using golden ratio.
| Function | type with Value(double theX, double& theF) method |
| theFunc | function to bracket |
| theA | initial lower bound |
| theB | initial upper bound |
| theMaxIter | maximum expansion iterations |
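The outward-expansion scheme can be sketched as below. This is a standalone illustration under assumed behavior (hypothetical name; the library's BracketResult is replaced by a bool plus the adjusted bounds): the endpoint with the larger |f| is pushed outward until f(a) and f(b) differ in sign.

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Sketch of outward root bracketing (hypothetical name).
// Expands [a, b] by the golden ratio until a sign change is found.
bool bracketRoot(const std::function<double(double)>& f,
                 double& a, double& b, int maxIter = 50)
{
  const double gold = 1.618033988749895;
  double fa = f(a), fb = f(b);
  for (int it = 0; it < maxIter; ++it)
  {
    if (fa * fb < 0.0) return true;        // sign change: root inside [a, b]
    if (std::fabs(fa) < std::fabs(fb))
    { a += gold * (a - b); fa = f(a); }    // move the worse endpoint outward
    else
    { b += gold * (b - a); fb = f(b); }
  }
  return false;                            // no sign change within maxIter
}
```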
| bool MathUtils::BrentAlongCoordinate | ( | Function & | theFunc, |
| math_Vector & | thePoint, | ||
| int | theDimIdx, | ||
| double | theLoBound, | ||
| double | theUpBound, | ||
| double & | theFx, | ||
| double | theTolerance, | ||
| int | theMaxIter, | ||
| int & | theEvalCount ) |
Brent's method for 1D minimization along a single coordinate axis. Operates in-place on one coordinate of thePoint, avoiding vector allocations. Unlike ExactLineSearch which works with arbitrary direction vectors and allocates internal temporaries, this function modifies only thePoint(theDimIdx) during the search and restores it if no improvement is found.
| Function | type with Value(const math_Vector&, double&) method |
| theFunc | function to minimize |
| thePoint | current point (the active coordinate is modified during search, restored if no improvement) |
| theDimIdx | index of the coordinate to optimize |
| theLoBound | lower bound for this coordinate |
| theUpBound | upper bound for this coordinate |
| theFx | current function value at thePoint (updated if improved) |
| theTolerance | convergence tolerance |
| theMaxIter | maximum iterations |
| theEvalCount | incremented by the number of function evaluations used |
| bool MathUtils::CentralDifference | ( | Function & | theFunc, |
| double | theX, | ||
| double & | theDeriv, | ||
| double | theStep = 1.0e-8 ) |
Central difference derivative approximation for scalar functions. f'(x) ~= (f(x+h) - f(x-h)) / (2h) Accuracy: O(h^2)
| Function | type with Value(double theX, double& theF) method |
| theFunc | function to differentiate |
| theX | point at which to evaluate derivative |
| theDeriv | computed derivative value |
| theStep | step size (default 1e-8) |
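The formula itself is one line; a standalone sketch (hypothetical name, plain std::function instead of the library's functor concept):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Sketch of the O(h^2) central-difference rule (hypothetical name).
double centralDifference(const std::function<double(double)>& f,
                         double x, double h = 1.0e-6)
{
  // f'(x) ~= (f(x+h) - f(x-h)) / (2h); the O(h) truncation terms cancel.
  return (f(x + h) - f(x - h)) / (2.0 * h);
}
```

The step size trades truncation error (shrinks with h) against round-off error (grows as h shrinks); h near sqrt(machine epsilon) times the scale of x is a common compromise.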
inline constexpr
Clamp value to range [theLower, theUpper].
| theValue | value to clamp |
| theLower | lower bound |
| theUpper | upper bound |
Compute scaling factor for coefficient normalization. Used to improve numerical stability by scaling coefficients.
| theCoeffs | array of coefficients |
| theCount | number of coefficients |
Cube of a value.
| theValue | input value |
Cube root with proper sign handling. Unlike std::cbrt, this handles negative values correctly on all platforms.
| theValue | input value |
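One way to implement sign-correct behavior, sketched standalone (hypothetical name; the actual library implementation may differ): since pow(x, 1/3) is undefined for negative x, the sign is peeled off and restored explicitly.

```cpp
#include <cassert>
#include <cmath>

// Sketch of a sign-correct cube root (hypothetical name).
double cubeRoot(double x)
{
  if (x == 0.0) return 0.0;
  const double r = std::pow(std::fabs(x), 1.0 / 3.0); // magnitude only
  return x < 0.0 ? -r : r;                            // restore the sign
}
```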
inline
Compute depressed cubic coefficients. Transforms x^3 + bx^2 + cx + d to t^3 + pt + q via x = t - b/3.
| theB | coefficient of x^2 (after dividing by leading coeff) | |
| theC | coefficient of x | |
| theD | constant term | |
| [out] | theP | coefficient of t in depressed form |
| [out] | theQ | constant term in depressed form |
| [out] | theShift | substitution shift (b/3) |
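Expanding (t - b/3)^3 + b(t - b/3)^2 + c(t - b/3) + d and collecting terms gives p = c - b^2/3 and q = 2b^3/27 - bc/3 + d. A standalone sketch (hypothetical name, same in/out convention as the documented signature):

```cpp
#include <cassert>
#include <cmath>

// Sketch of the cubic depression step (hypothetical name).
// Substituting x = t - b/3 into x^3 + b x^2 + c x + d removes the
// quadratic term, leaving t^3 + p t + q.
void depressCubic(double b, double c, double d,
                  double& p, double& q, double& shift)
{
  shift = b / 3.0;
  p = c - b * b / 3.0;
  q = 2.0 * b * b * b / 27.0 - b * c / 3.0 + d;
}
```

For example, (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6 depresses to t^3 - t (roots t = -1, 0, 1), and x = t - shift = t + 2 recovers 1, 2, 3.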
inline
Compute depressed quartic coefficients. Transforms x^4 + bx^3 + cx^2 + dx + e to t^4 + pt^2 + qt + r via x = t - b/4.
| theB | coefficient of x^3 (after dividing by leading coeff) | |
| theC | coefficient of x^2 | |
| theD | coefficient of x | |
| theE | constant term | |
| [out] | theP | coefficient of t^2 in depressed form |
| [out] | theQ | coefficient of t in depressed form |
| [out] | theR | constant term in depressed form |
| [out] | theShift | substitution shift (b/4) |
inline
Compute dot product of two vectors.
| theA | first vector |
| theB | second vector |
inline
Compute Euclidean (L2) norm of a vector.
| theVector | input vector |
Evaluate polynomial using Horner's method. Coefficients are ordered as [a0, a1, a2, ...] for P(x) = a0 + a1*x + a2*x^2 + ...
| theCoeffs | coefficient array (constant term first) |
| theDegree | polynomial degree |
| theX | evaluation point |
inline
Evaluate polynomial and its derivative using Horner's method. Computes both P(x) and P'(x) efficiently in a single pass.
| theCoeffs | coefficient array (constant term first) | |
| theDegree | polynomial degree | |
| theX | evaluation point | |
| [out] | theValue | P(theX) |
| [out] | theDeriv | P'(theX) |
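The fused pass interleaves the Horner recurrence for P'(x) with the one for P(x); a standalone sketch (hypothetical name, same coefficient convention):

```cpp
#include <cassert>

// Sketch of the fused Horner pass (hypothetical name).
// Coefficients ascending: coeffs[i] multiplies x^i.
void evalPolyDeriv(const double* coeffs, int degree, double x,
                   double& value, double& deriv)
{
  value = coeffs[degree];
  deriv = 0.0;
  for (int i = degree - 1; i >= 0; --i)
  {
    deriv = deriv * x + value;      // Horner recurrence for P'(x)
    value = value * x + coeffs[i];  // Horner recurrence for P(x)
  }
}
```

Note the derivative update must use the old value before it is overwritten, which is why the two statements appear in this order.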
Evaluate polynomial with coefficients in descending order. Coefficients are ordered as [an, a(n-1), ..., a1, a0] for P(x) = an*x^n + ... + a0.
| theCoeffs | coefficient array (leading term first) |
| theDegree | polynomial degree |
| theX | evaluation point |
| LineSearchResult MathUtils::ExactLineSearch | ( | Function & | theFunc, |
| const math_Vector & | theX, | ||
| const math_Vector & | theDir, | ||
| double | theAlphaMax = 10.0, | ||
| double | theTolerance = 1.0e-6, | ||
| int | theMaxIter = 100 ) |
Exact line search using Brent's method. Minimizes f(x + alpha*d) over alpha in [-alpha_max, alpha_max]. First brackets the minimum by exploring both directions. More expensive than inexact line search but can be more robust.
| Function | type with Value(const math_Vector&, double&) method |
| theFunc | objective function |
| theX | current point |
| theDir | search direction |
| theAlphaMax | maximum step size (searches in both directions) |
| theTolerance | convergence tolerance |
| theMaxIter | maximum iterations |
| bool MathUtils::ForwardDifference | ( | Function & | theFunc, |
| double | theX, | ||
| double | theFx, | ||
| double & | theDeriv, | ||
| double | theStep = 1.0e-8 ) |
Forward difference derivative (one-sided). f'(x) ~= (f(x+h) - f(x)) / h Accuracy: O(h) Useful when central difference is not possible (e.g., at boundaries).
| Function | type with Value(double theX, double& theF) method |
| theFunc | function to differentiate |
| theX | point at which to evaluate derivative |
| theFx | function value at theX (if already computed) |
| theDeriv | computed derivative value |
| theStep | step size (default 1e-8) |
| bool MathUtils::GetGaussPointsAndWeights | ( | int | theOrder, |
| math_Vector & | thePoints, | ||
| math_Vector & | theWeights ) |
Get ordered Gauss-Legendre points and weights for given order. Points are returned in ascending order on [-1, 1].
| theOrder | number of quadrature points (>= 1) | |
| [out] | thePoints | points array |
| [out] | theWeights | weights array |
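To illustrate what the points and weights buy you, here is a hardcoded order-2 rule in standalone C++ (the library supports general orders; only the standard two-point Gauss-Legendre nodes are used here). An n-point rule integrates polynomials up to degree 2n-1 exactly on [-1, 1].

```cpp
#include <cassert>
#include <cmath>

// Two-point Gauss-Legendre quadrature on [-1, 1] (illustrative sketch).
// Nodes are +-1/sqrt(3), both with weight 1; exact for degree <= 3.
double gauss2(double (*f)(double))
{
  const double x = 1.0 / std::sqrt(3.0);
  return f(-x) + f(x);
}
```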
| bool MathUtils::GetKronrodPointsAndWeights | ( | int | theNbKronrod, |
| math_Vector & | thePoints, | ||
| math_Vector & | theWeights ) |
Get Gauss-Kronrod points and weights.
| theNbKronrod | number of Kronrod points (should be 2n+1) |
| thePoints | output vector for points |
| theWeights | output vector for weights |
| bool MathUtils::GetOrderedGaussPointsAndWeights | ( | int | theNbGauss, |
| math_Vector & | thePoints, | ||
| math_Vector & | theWeights ) |
Get ordered Gauss points and weights.
| theNbGauss | number of Gauss points |
| thePoints | output vector for points |
| theWeights | output vector for weights |
|
inline |
Compute infinity norm of a vector.
| theVector | input vector |
|
inline |
Combined convergence test for scalar root finders. Checks both X convergence and function value convergence.
| theXOld | previous X value |
| theXNew | current X value |
| theFValue | function value at theXNew |
| theConfig | solver configuration |
|
inline |
Check if two values are approximately equal.
| theA | first value |
| theB | second value |
| theTolerance | relative tolerance |
Check convergence based on absolute function value.
| theFValue | function value f(x) |
| theTolerance | absolute tolerance |
Check if value is finite (not NaN or Inf).
| theValue | value to check |
|
inline |
Convergence test using gradient norm for minimization.
| theGradient | gradient vector |
| theTolerance | tolerance for gradient norm |
|
inline |
Convergence test for minimization (checks both X and F change).
| theXOld | previous X value |
| theXNew | current X value |
| theFOld | previous function value |
| theFNew | current function value |
| theConfig | solver configuration |
|
inline |
Convergence test for vector solvers using infinity norm.
| theOld | previous solution vector |
| theNew | current solution vector |
| theTolerance | relative tolerance |
Check convergence based on relative change in X. Uses relative tolerance scaled by current value magnitude.
| theXOld | previous X value |
| theXNew | current X value |
| theTolerance | relative tolerance |
|
inline |
Check if value is effectively zero.
| theValue | value to check |
| theTolerance | tolerance for zero comparison |
Helper function to create Difference with type deduction.
Helper function to create Negated with type deduction.
Helper function to create Product with type deduction.
Helper function to create Quotient with type deduction.
| ScalarLambda< Lambda > MathUtils::MakeScalar | ( | Lambda | theLambda | ) |
Helper function to create ScalarLambda with type deduction.
| Lambda | callable type (auto-deduced) |
| theLambda | callable with signature bool(double, double&) |
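The pattern behind these `Make*` helpers is the classic pre-C++17 type-deduction idiom: a class template stores the lambda (whose type cannot be named) and exposes the expected `Value()` interface, while a free function template lets the compiler deduce the lambda's type. A minimal sketch with illustrative names (`ScalarWrapper`, `MakeScalarSketch` are not the OCCT identifiers):

```cpp
// Sketch of the type-deduction-helper pattern behind MakeScalar.
template <typename Lambda>
class ScalarWrapper
{
public:
  explicit ScalarWrapper(Lambda theLambda) : myLambda(theLambda) {}

  // Adapts the lambda to the functor interface solvers expect.
  bool Value(double theX, double& theF) { return myLambda(theX, theF); }

private:
  Lambda myLambda;
};

// Free function so the unnameable lambda type is deduced at the call site.
template <typename Lambda>
ScalarWrapper<Lambda> MakeScalarSketch(Lambda theLambda)
{
  return ScalarWrapper<Lambda>(theLambda);
}
```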
| ScalarLambdaWithDerivative< Lambda > MathUtils::MakeScalarWithDerivative | ( | Lambda | theLambda | ) |
Helper function to create ScalarLambdaWithDerivative with type deduction.
| Lambda | callable type (auto-deduced) |
| theLambda | callable with signature bool(double, double&, double&) |
Helper function to create Scaled with type deduction.
Helper function to create Shifted with type deduction.
Helper function to create Sum with type deduction.
| SystemLambda< Lambda > MathUtils::MakeSystem | ( | Lambda | theLambda, |
| int | theNbEquations ) |
Helper function to create SystemLambda with type deduction.
| Lambda | callable type |
| theLambda | callable for system evaluation |
| theNbEquations | number of equations |
| VectorLambda< Lambda > MathUtils::MakeVector | ( | Lambda | theLambda | ) |
Helper function to create VectorLambda with type deduction.
| Lambda | callable type (auto-deduced) |
| theLambda | callable with signature bool(const math_Vector&, double&) |
| VectorLambdaWithGradient< ValueLambda, GradLambda > MathUtils::MakeVectorWithGradient | ( | ValueLambda | theValueLambda, |
| GradLambda | theGradLambda ) |
Helper function to create VectorLambdaWithGradient with type deduction.
| ValueLambda | callable for value |
| GradLambda | callable for gradient |
| theValueLambda | callable for value evaluation |
| theGradLambda | callable for gradient evaluation |
| bool MathUtils::NumericalGradient | ( | Function & | theFunc, |
| math_Vector & | theX, | ||
| math_Vector & | theGrad, | ||
| double | theStep = 1.0e-8 ) |
Numerical gradient using central differences for N-D functions. df/dx[i] ~= (f(x + h[i]*e[i]) - f(x - h[i]*e[i])) / (2*h[i])
| Function | type with Value(const math_Vector&, double&) method |
| theFunc | function to differentiate |
| theX | point at which to evaluate gradient (temporarily modified) |
| theGrad | output gradient vector (same dimension as theX) |
| theStep | step size (default 1e-8) |
| bool MathUtils::NumericalGradientAdaptive | ( | Function & | theFunc, |
| math_Vector & | theX, | ||
| math_Vector & | theGrad, | ||
| double | theRelStep = 1.0e-8 ) |
Numerical gradient with adaptive step size. Uses step proportional to |x[i]| for better conditioning.
| Function | type with Value(const math_Vector&, double&) method |
| theFunc | function to differentiate |
| theX | point at which to evaluate gradient |
| theGrad | output gradient vector |
| theRelStep | relative step size (default 1e-8) |
| bool MathUtils::NumericalHessian | ( | Function & | theFunc, |
| math_Vector & | theX, | ||
| math_Matrix & | theHess, | ||
| double | theStep = 1.0e-5 ) |
Numerical Hessian matrix using finite differences: H[i,j] = d^2f/(dx[i] dx[j]). Uses central differences on the gradient.
| Function | type with Value(const math_Vector&, double&) method |
| theFunc | scalar function f: R^n -> R |
| theX | point at which to evaluate Hessian (n-dimensional) |
| theHess | output Hessian matrix (n x n, symmetric) |
| theStep | step size (default 1e-5, larger than gradient step) |
| bool MathUtils::NumericalJacobian | ( | Function & | theFunc, |
| math_Vector & | theX, | ||
| math_Matrix & | theJac, | ||
| double | theStep = 1.0e-8 ) |
Numerical Jacobian matrix for vector-valued functions. J[i,j] = dF[i]/dx[j] ~= (F[i](x + h[j]*e[j]) - F[i](x - h[j]*e[j])) / (2*h[j])
| Function | type with Value(const math_Vector& x, math_Vector& F) method |
| theFunc | vector-valued function F: R^n -> R^m |
| theX | point at which to evaluate Jacobian (n-dimensional) |
| theJac | output Jacobian matrix (m x n) |
| theStep | step size (default 1e-8) |
|
inline |
Quadratic interpolation step for line search. Given phi(0), phi'(0), and phi(alpha1), finds minimum of quadratic fit.
| thePhi0 | function value at 0 |
| thePhi0Prime | directional derivative at 0 |
| theAlpha1 | current step size |
| thePhi1 | function value at alpha1 |
|
inline |
Newton-Raphson refinement for polynomial root. Polishes an approximate root to higher precision.
| theCoeffs | coefficient array (constant term first) |
| theDegree | polynomial degree |
| theRoot | initial root estimate |
| theMaxIter | maximum refinement iterations |
|
inline |
Newton-Raphson refinement with coefficients in descending order.
| theCoeffs | coefficient array (leading term first) |
| theDegree | polynomial degree |
| theRoot | initial root estimate |
| theMaxIter | maximum refinement iterations |
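The refinement itself is plain Newton iteration with p and p' evaluated together by Horner's scheme. A self-contained sketch for the ascending-order (constant term first) convention, with an illustrative name:

```cpp
#include <cmath>
#include <vector>

// Sketch of Newton polishing for a polynomial root; coefficients ascending
// (c[0] is the constant term). Horner's scheme yields p and p' in one pass.
double refineRoot(const std::vector<double>& c, double root, int maxIter = 10)
{
  const int deg = static_cast<int>(c.size()) - 1;
  for (int it = 0; it < maxIter; ++it)
  {
    double p = c[deg], dp = 0.0;
    for (int k = deg - 1; k >= 0; --k)
    {
      dp = dp * root + p;       // derivative accumulates the previous p
      p  = p  * root + c[k];
    }
    if (std::fabs(dp) < 1.0e-300) break;  // guard against a flat spot
    const double step = p / dp;
    root -= step;
    if (std::fabs(step) < 1.0e-14 * (1.0 + std::fabs(root))) break;
  }
  return root;
}
```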
|
inline |
Remove duplicate roots within tolerance.
| theRoots | array of sorted roots |
| theCount | current number of roots |
| theTolerance | tolerance for duplicate detection |
|
inline |
Safe division avoiding division by zero.
| theNumerator | numerator |
| theDenominator | denominator |
| theDefault | default value if denominator is zero |
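The guard amounts to a magnitude test on the denominator before dividing. A minimal sketch (illustrative name and threshold; the library's actual zero tolerance is not shown here):

```cpp
#include <cmath>

// Sketch of a guarded division: return the fallback when the denominator
// is too small in magnitude to divide safely.
double safeDivide(double num, double den, double fallback = 0.0,
                  double tiny = 1.0e-300)
{
  return (std::fabs(den) < tiny) ? fallback : num / den;
}
```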
| bool MathUtils::SecondDerivative | ( | Function & | theFunc, |
| double | theX, | ||
| double | theFx, | ||
| double & | theD2f, | ||
| double | theStep = 1.0e-5 ) |
Second derivative using central difference. f''(x) ~= (f(x+h) - 2f(x) + f(x-h)) / h^2.
| Function | type with Value(double theX, double& theF) method |
| theFunc | function to differentiate |
| theX | point at which to evaluate second derivative |
| theFx | function value at theX (if already computed) |
| theD2f | computed second derivative value |
| theStep | step size (default 1e-5) |
Sign function.
| theValue | input value |
Sign transfer function: returns |theA| with sign of theB. Equivalent to copysign but avoids edge cases with zero.
| theA | value whose magnitude is used |
| theB | value whose sign is used |
Sort roots in ascending order (simple insertion sort for small arrays).
| theRoots | array of roots |
| theCount | number of roots |
Square of a value.
| theValue | input value |
|
inline |
Compute infinity norm (maximum absolute value) of a vector.
| theVec | input vector |
|
inline |
Compute Euclidean norm of a vector.
| theVec | input vector |
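Both norms reduce to a single pass over the components. A sketch on `std::vector` (the documented functions take `math_Vector`; names here are illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch: infinity norm is the maximum absolute component.
double infNorm(const std::vector<double>& v)
{
  double m = 0.0;
  for (double x : v) m = std::max(m, std::fabs(x));
  return m;
}

// Sketch: Euclidean norm is the square root of the sum of squares.
double norm2(const std::vector<double>& v)
{
  double s = 0.0;
  for (double x : v) s += x * x;
  return std::sqrt(s);
}
```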
| LineSearchResult MathUtils::WolfeSearch | ( | Function & | theFunc, |
| const math_Vector & | theX, | ||
| const math_Vector & | theDir, | ||
| const math_Vector & | theGrad, | ||
| double | theFx, | ||
| double | theAlphaInit = 1.0, | ||
| double | theC1 = 1.0e-4, | ||
| double | theC2 = 0.9, | ||
| int | theMaxIter = 20 ) |
Strong Wolfe line search. Finds alpha satisfying both the sufficient decrease (Armijo) condition f(x + alpha*d) <= f(x) + c1*alpha*(grad_f . d) and the strong curvature condition |grad_f(x + alpha*d) . d| <= c2*|grad_f(x) . d|.
Uses bracketing and zoom procedure for guaranteed convergence.
| Function | type with:
|
| theFunc | objective function with gradient |
| theX | current point |
| theDir | search direction |
| theGrad | gradient at current point |
| theFx | function value at current point |
| theAlphaInit | initial step size |
| theC1 | Armijo parameter (default 1e-4) |
| theC2 | curvature parameter (default 0.9) |
| theMaxIter | maximum iterations |
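To make the sufficient-decrease test concrete, here is a sketch of only the Armijo half as a backtracking loop on a 1D merit function; the full strong-Wolfe search documented above additionally enforces the curvature condition and uses a bracketing/zoom phase. Names and the halving factor are illustrative:

```cpp
#include <functional>

// Sketch of Armijo backtracking: shrink alpha until
// phi(alpha) <= phi(0) + c1 * alpha * phi'(0), where phi(a) = f(x + a*d)
// and phi'(0) = grad_f . d (negative for a descent direction).
double armijoBacktrack(const std::function<double(double)>& phi,
                       double phi0, double slope0,
                       double alpha = 1.0, double c1 = 1.0e-4,
                       int maxIter = 20)
{
  for (int it = 0; it < maxIter; ++it)
  {
    if (phi(alpha) <= phi0 + c1 * alpha * slope0)
      return alpha;  // sufficient decrease achieved
    alpha *= 0.5;    // backtrack
  }
  return alpha;      // best effort after maxIter halvings
}
```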
Armijo condition constant (sufficient decrease). Step accepted if: f(x + alpha*d) <= f(x) + c1 * alpha * grad_f . d.
Machine epsilon for double precision.
Golden ratio for optimization algorithms.
Complement of the inverse golden ratio (1 - 1/phi ~= 0.382), used in golden-section steps.
Absolute tolerance for Hessian degeneracy detection. Minimum threshold for determinant comparison.
Relative tolerance for Hessian degeneracy detection. If |det(H)| < threshold * |H_ii| * |H_jj|, Hessian is considered degenerate.
Squared gradient threshold below which a point is considered a critical point. When |grad|^2 < threshold, Newton iteration is at a stationary point.
Domain extension factor for soft boundary handling. Solutions slightly outside domain (by this fraction) are allowed if converged. Reduced to 1e-4 to closely match old algorithm behavior.
Maximum line search backtracking iterations.
Maximum step size as fraction of domain size for bounded Newton iterations.
Maximum relaxation factor for solutions at max iterations.
Determinant threshold below which 2x2 Jacobian matrix is considered singular. When |det(J)| < threshold, falls back to SVD or gradient descent.
Relative step threshold below which line search is skipped. If step^2 / domain^2 < threshold, Newton is converging well.
Number of iterations without progress before declaring stagnation.
Tolerance relaxation factor for accepting stagnated solutions. Stagnated solution accepted if |F| < tolerance * factor.
Squared function tolerance for Newton iterations (matching math_FunctionSetRoot Eps).
Default maximum iterations for Newton solvers.
Relative parametric step tolerance factor for Newton iterations.