Open CASCADE Technology Reference Manual 8.0.0
Optimization algorithms for scalar and vector functions. More...
Data Structures

- struct FRPRConfig: configuration for the FRPR conjugate gradient method
- struct GlobalConfig: configuration for global optimization
- struct NewtonConfig: configuration for Newton minimization with Hessian
- struct PSOConfig: configuration for Particle Swarm Optimization
- struct PSOSeedParticle: seed particle for PSO initialization
- struct PSOStats: statistics collected during PSO execution
- struct UzawaConfig: configuration for the Uzawa algorithm
- struct UzawaResult: result of Uzawa constrained optimization
Enumerations

- enum class ConjugateGradientFormula { FletcherReeves, PolakRibiere, HestenesStiefel, DaiYuan }: conjugate gradient formula selection
- enum class PSOInitMode { RandomOnly, SeededOnly, SeededPlusRandom }: initialization mode for PSO particles
- enum class PSOBoundaryMode { Clamp, Reflect, Wrap }: boundary handling mode for particles leaving the search space
- enum class PSOInertiaSchedule { Constant, LinearDecay }: inertia weight schedule
- enum class GlobalStrategy { PSO, MultiStart, PSOHybrid, DifferentialEvolution }: global optimization strategy selection
Functions

- template<typename Function> ScalarResult Brent(Function& theFunc, double theLower, double theUpper, const Config& theConfig = Config())
  Brent's method for 1D minimization. Combines golden section search with parabolic interpolation. Guaranteed to converge for unimodal functions within the given interval.

- template<typename Function> ScalarResult Golden(Function& theFunc, double theLower, double theUpper, const Config& theConfig = Config())
  Golden section search for 1D minimization. Simpler than Brent but with guaranteed linear convergence. Does not attempt parabolic interpolation.

- template<typename Function> ScalarResult BrentWithBracket(Function& theFunc, double theGuess, double theStep = 1.0, const Config& theConfig = Config())
  Brent's method with automatic bracket search. First attempts to bracket a minimum, then applies Brent's method.

- template<typename Function> VectorResult Powell(Function& theFunc, const math_Vector& theStartingPoint, const Config& theConfig = Config())
  Powell's conjugate direction method for N-dimensional minimization. A gradient-free optimization algorithm that uses conjugate directions.

- template<typename Function> VectorResult PowellWithDirections(Function& theFunc, const math_Vector& theStartingPoint, const math_Matrix& theInitialDirections, const Config& theConfig = Config())
  Powell's method with custom initial directions. Allows specifying the initial direction set instead of coordinate axes.

- template<typename Function> VectorResult BFGS(Function& theFunc, const math_Vector& theStartingPoint, const Config& theConfig = Config())
  BFGS (Broyden-Fletcher-Goldfarb-Shanno) quasi-Newton method. One of the most effective algorithms for smooth unconstrained optimization.

- template<typename Function> VectorResult BFGSNumerical(Function& theFunc, const math_Vector& theStartingPoint, double theGradStep = 1.0e-8, const Config& theConfig = Config())
  BFGS with numerical gradient. Uses central differences to approximate the gradient when an analytical gradient is not available.

- template<typename Function> VectorResult LBFGS(Function& theFunc, const math_Vector& theStartingPoint, int theMemorySize = 10, const Config& theConfig = Config())
  L-BFGS (limited-memory BFGS) for large-scale optimization. Uses only the m most recent {s, y} pairs instead of the full Hessian; memory is O(m*n) instead of O(n^2).

- template<typename Function> VectorResult BFGSBounded(Function& theFunc, const math_Vector& theStartingPoint, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const Config& theConfig = Config())
  BFGS with bound constraints (box constraints).

- template<typename Function> VectorResult FRPR(Function& theFunc, const math_Vector& theStartingPoint, const FRPRConfig& theConfig = FRPRConfig())
  Fletcher-Reeves-Polak-Ribiere conjugate gradient method.

- template<typename Function> VectorResult FRPRNumerical(Function& theFunc, const math_Vector& theStartingPoint, double theGradStep = 1.0e-8, const FRPRConfig& theConfig = FRPRConfig())
  FRPR with numerical gradient. Uses central differences when an analytical gradient is not available.

- template<typename Function> VectorResult Newton(Function& theFunc, const math_Vector& theStartingPoint, const NewtonConfig& theConfig = NewtonConfig())
  Newton's method for N-dimensional minimization using the Hessian.

- template<typename Function> VectorResult NewtonModified(Function& theFunc, const math_Vector& theStartingPoint, const NewtonConfig& theConfig = NewtonConfig())
  Modified Newton's method with automatic Hessian regularization. Adds diagonal elements to ensure positive definiteness using an adaptive regularization strategy.

- template<typename Function> VectorResult NewtonNumericalHessian(Function& theFunc, const math_Vector& theStartingPoint, double theHessStep = 1.0e-6, const NewtonConfig& theConfig = NewtonConfig())
  Newton's method with numerical Hessian. Computes the Hessian using finite differences when an analytical Hessian is not available.

- template<typename Function> VectorResult NewtonNumerical(Function& theFunc, const math_Vector& theStartingPoint, double theGradStep = 1.0e-8, double theHessStep = 1.0e-6, const NewtonConfig& theConfig = NewtonConfig())
  Newton's method with fully numerical derivatives. Computes both the gradient and the Hessian using finite differences.

- template<typename Function> VectorResult NewtonBounded(Function& theFunc, const math_Vector& theStartingPoint, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const NewtonConfig& theConfig = NewtonConfig())
  Newton's method with bound constraints.

- template<typename Function> void PolishCoordinateWise(Function& theFunc, math_Vector& thePosition, double& theValue, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, double theTolerance, int theMaxPolishEvals, int& theEvalCount)
  Coordinate-wise polishing using Brent's 1D minimization. For each dimension, performs Brent's method directly on the single coordinate, modifying only one element of the position vector per evaluation (zero-allocation).

- template<typename Function> VectorResult PSO(Function& theFunc, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const PSOConfig& theConfig, const NCollection_DynamicArray<PSOSeedParticle>* theSeeds, PSOStats* theStats = nullptr)
  Particle Swarm Optimization for global minimization.

- template<typename Function> VectorResult PSO(Function& theFunc, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const PSOConfig& theConfig = PSOConfig())
  Particle Swarm Optimization for global minimization (basic overload).

- template<typename Function> VectorResult DifferentialEvolution(Function& theFunc, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const GlobalConfig& theConfig = GlobalConfig())
  Differential Evolution algorithm for global optimization.

- template<typename Function> VectorResult MultiStart(Function& theFunc, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const GlobalConfig& theConfig = GlobalConfig())
  Multi-start local optimization.

- template<typename Function> VectorResult GlobalMinimum(Function& theFunc, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const GlobalConfig& theConfig = GlobalConfig())
  Unified global optimization interface.

- template<typename Function> VectorResult GlobalMinimum(Function& theFunc, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const GlobalConfig& theConfig, const PSOConfig* thePSOConfig, const NCollection_DynamicArray<PSOSeedParticle>* theSeeds = nullptr, PSOStats* theStats = nullptr)
  Unified global optimization interface with PSO-specific options.

- UzawaResult Uzawa(const math_Matrix& theCont, const math_Vector& theSecont, const math_Vector& theStartingPoint, int theNce, int theNci, const UzawaConfig& theConfig = UzawaConfig())
  Solve constrained least squares using the Uzawa algorithm.

- UzawaResult UzawaEquality(const math_Matrix& theCont, const math_Vector& theSecont, const math_Vector& theStartingPoint, const UzawaConfig& theConfig = UzawaConfig())
  Solve constrained least squares with equality constraints only.
Detailed Description

Optimization algorithms for scalar and vector functions.

Enumeration Type Documentation

enum class MathOpt::ConjugateGradientFormula

Conjugate gradient formula selection.
Function Documentation

template<typename Function>
VectorResult MathOpt::BFGS(Function& theFunc, const math_Vector& theStartingPoint, const Config& theConfig = Config())

BFGS (Broyden-Fletcher-Goldfarb-Shanno) quasi-Newton method. One of the most effective algorithms for smooth unconstrained optimization.

Each iteration builds an approximation H of the inverse Hessian from successive gradient differences and takes a line-search step along the resulting quasi-Newton direction. The BFGS update maintains positive definiteness of H if it is started with a positive definite matrix and a proper line search is used.

Template parameter:
- Function: type with Value and Gradient methods

Parameters:
- theFunc: function object with value and gradient
- theStartingPoint: initial guess
- theConfig: solver configuration
template<typename Function>
VectorResult MathOpt::BFGSBounded(Function& theFunc, const math_Vector& theStartingPoint, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const Config& theConfig = Config())

BFGS with bound constraints (box constraints).

Minimizes f(x) subject to theLowerBounds <= x <= theUpperBounds, using a projected gradient approach: iterates are kept inside the box by projection.

Template parameter:
- Function: type with Value and Gradient methods

Parameters:
- theFunc: function object
- theStartingPoint: initial guess
- theLowerBounds: lower bounds for each variable
- theUpperBounds: upper bounds for each variable
- theConfig: solver configuration
template<typename Function>
VectorResult MathOpt::BFGSNumerical(Function& theFunc, const math_Vector& theStartingPoint, double theGradStep = 1.0e-8, const Config& theConfig = Config())

BFGS with numerical gradient. Uses central differences to approximate the gradient when an analytical gradient is not available.

Template parameter:
- Function: type with a Value(const math_Vector&, double&) method only

Parameters:
- theFunc: function object
- theStartingPoint: initial guess
- theGradStep: step size for the numerical gradient (default 1e-8)
- theConfig: solver configuration
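The central-difference scheme such a wrapper relies on can be sketched in a few lines. This is an illustrative, self-contained version using std::vector instead of math_Vector; the name numericalGradient is hypothetical and not part of MathOpt:

```cpp
#include <cmath>
#include <vector>

// Central-difference gradient sketch: perturb each coordinate by +/-h and
// divide by 2h. Error is O(h^2) in truncation, so h near 1e-8..1e-6 works
// well for double precision on well-scaled problems.
template <typename F>
std::vector<double> numericalGradient(F f, std::vector<double> x, double h = 1.0e-8)
{
  std::vector<double> g(x.size());
  for (std::size_t i = 0; i < x.size(); ++i)
  {
    const double xi = x[i];
    x[i] = xi + h; const double fp = f(x); // forward sample
    x[i] = xi - h; const double fm = f(x); // backward sample
    x[i] = xi;                             // restore coordinate
    g[i] = (fp - fm) / (2.0 * h);
  }
  return g;
}
```

For f(x, y) = x^2 + y^2 at (1, 2) this yields a gradient close to (2, 4).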
template<typename Function>
ScalarResult MathOpt::Brent(Function& theFunc, double theLower, double theUpper, const Config& theConfig = Config())

Brent's method for 1D minimization. Combines golden section search with parabolic interpolation. Guaranteed to converge for unimodal functions within the given interval.

Template parameter:
- Function: type with a Value(double theX, double& theF) method

Parameters:
- theFunc: function to minimize
- theLower: lower bound of the search interval
- theUpper: upper bound of the search interval
- theConfig: solver configuration
template<typename Function>
ScalarResult MathOpt::BrentWithBracket(Function& theFunc, double theGuess, double theStep = 1.0, const Config& theConfig = Config())

Brent's method with automatic bracket search. First attempts to bracket a minimum, then applies Brent's method.

Template parameter:
- Function: type with a Value(double theX, double& theF) method

Parameters:
- theFunc: function to minimize
- theGuess: initial guess
- theStep: initial step size for the bracket search
- theConfig: solver configuration
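The bracketing phase can be sketched as a downhill expansion: step away from the guess with geometrically growing steps until the function value rises again, which pins a minimum between the endpoints. This is an illustrative stand-in (bracketMinimum is a hypothetical name, not the library routine), and it assumes the function does not decrease forever in the search direction:

```cpp
#include <array>
#include <utility>

// Bracket-search sketch: returns (a, b, c) with f(b) <= f(a) and f(b) <= f(c),
// so a minimum lies between a and c. Step direction flips if the initial
// step goes uphill; step size doubles while still descending.
template <typename F>
std::array<double, 3> bracketMinimum(F f, double guess, double step = 1.0)
{
  double a = guess, b = guess + step;
  double fa = f(a), fb = f(b);
  if (fb > fa) { std::swap(a, b); std::swap(fa, fb); step = -step; } // go downhill
  double c = b + 2.0 * step, fc = f(c);
  while (fc < fb) // keep expanding while the function keeps falling
  {
    a = b; fa = fb;
    b = c; fb = fc;
    step *= 2.0;
    c = b + step; fc = f(c);
  }
  return {a, b, c};
}
```

Brent's method would then be applied to the returned triple.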
template<typename Function>
VectorResult MathOpt::DifferentialEvolution(Function& theFunc, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const GlobalConfig& theConfig = GlobalConfig())

Differential Evolution algorithm for global optimization.

DE is a stochastic, population-based optimization algorithm. It uses mutation, crossover, and selection operations to evolve a population of candidate solutions.

Template parameter:
- Function: type with a Value(const math_Vector&, double&) method

Parameters:
- theFunc: function to minimize
- theLowerBounds: lower bounds
- theUpperBounds: upper bounds
- theConfig: optimization configuration
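The mutation and crossover steps can be sketched for the classic DE/rand/1/bin variant. This is illustrative only: the function name, the choice of variant, and the coefficients F and CR are assumptions, not the library's internals:

```cpp
#include <algorithm>
#include <random>
#include <vector>

// DE/rand/1/bin sketch: mutant = a + F*(b - c); binomial crossover with rate
// CR mixes mutant and target; the result is clamped to the box. In full DE,
// a, b, c are three distinct population members other than the target, and
// the trial replaces the target only if its function value is not worse.
std::vector<double> deTrialVector(const std::vector<double>& target,
                                  const std::vector<double>& a,
                                  const std::vector<double>& b,
                                  const std::vector<double>& c,
                                  const std::vector<double>& lo,
                                  const std::vector<double>& hi,
                                  std::mt19937& rng,
                                  double F = 0.8, double CR = 0.9)
{
  std::uniform_real_distribution<double> u(0.0, 1.0);
  std::uniform_int_distribution<std::size_t> pick(0, target.size() - 1);
  const std::size_t jRand = pick(rng); // guarantees at least one mutated coordinate
  std::vector<double> trial(target);
  for (std::size_t j = 0; j < target.size(); ++j)
    if (j == jRand || u(rng) < CR)
      trial[j] = std::clamp(a[j] + F * (b[j] - c[j]), lo[j], hi[j]);
  return trial;
}
```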
template<typename Function>
VectorResult MathOpt::FRPR(Function& theFunc, const math_Vector& theStartingPoint, const FRPRConfig& theConfig = FRPRConfig())

Fletcher-Reeves-Polak-Ribiere conjugate gradient method.

Memory-efficient alternative to BFGS for large-scale optimization. Uses only O(n) storage compared to O(n^2) for BFGS.

Template parameter:
- Function: type with Value and Gradient methods

Parameters:
- theFunc: function object with value and gradient
- theStartingPoint: initial guess
- theConfig: solver configuration
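The two classic beta formulas behind the FRPR name (selectable via ConjugateGradientFormula) combine the previous and current gradients into a scalar that mixes the old search direction into the new one. A self-contained sketch, using std::vector and hypothetical function names:

```cpp
#include <cmath>
#include <numeric>
#include <vector>

static double dot(const std::vector<double>& a, const std::vector<double>& b)
{
  return std::inner_product(a.begin(), a.end(), b.begin(), 0.0);
}

// Fletcher-Reeves: beta = (g_new . g_new) / (g_old . g_old)
double betaFletcherReeves(const std::vector<double>& gOld, const std::vector<double>& gNew)
{
  return dot(gNew, gNew) / dot(gOld, gOld);
}

// Polak-Ribiere: beta = g_new . (g_new - g_old) / (g_old . g_old);
// implementations commonly clamp this at 0 to get automatic restarts.
double betaPolakRibiere(const std::vector<double>& gOld, const std::vector<double>& gNew)
{
  return (dot(gNew, gNew) - dot(gNew, gOld)) / dot(gOld, gOld);
}
```

The new direction is then d_new = -g_new + beta * d_old.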
template<typename Function>
VectorResult MathOpt::FRPRNumerical(Function& theFunc, const math_Vector& theStartingPoint, double theGradStep = 1.0e-8, const FRPRConfig& theConfig = FRPRConfig())

FRPR with numerical gradient. Uses central differences when an analytical gradient is not available.

Template parameter:
- Function: type with a Value(const math_Vector&, double&) method only

Parameters:
- theFunc: function object
- theStartingPoint: initial guess
- theGradStep: step size for the numerical gradient
- theConfig: solver configuration
template<typename Function>
VectorResult MathOpt::GlobalMinimum(Function& theFunc, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const GlobalConfig& theConfig, const PSOConfig* thePSOConfig, const NCollection_DynamicArray<PSOSeedParticle>* theSeeds = nullptr, PSOStats* theStats = nullptr)

Unified global optimization interface with PSO-specific options.

For the PSO and PSOHybrid strategies, uses the provided PSO configuration, seed particles, and stats output. For other strategies, these are ignored.

Template parameter:
- Function: type with a Value(const math_Vector&, double&) method

Parameters:
- theFunc: function to minimize
- theLowerBounds: lower bounds for each variable
- theUpperBounds: upper bounds for each variable
- theConfig: solver configuration
- thePSOConfig: optional PSO-specific configuration (overrides the auto-generated one)
- theSeeds: optional seed particles for PSO initialization
- theStats: optional PSO statistics output
template<typename Function>
VectorResult MathOpt::GlobalMinimum(Function& theFunc, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const GlobalConfig& theConfig = GlobalConfig())

Unified global optimization interface.

Selects an appropriate algorithm based on the configuration and provides a consistent interface for all global optimization methods.

Template parameter:
- Function: type with a Value(const math_Vector&, double&) method

Parameters:
- theFunc: function to minimize
- theLowerBounds: lower bounds for each variable
- theUpperBounds: upper bounds for each variable
- theConfig: solver configuration
template<typename Function>
ScalarResult MathOpt::Golden(Function& theFunc, double theLower, double theUpper, const Config& theConfig = Config())

Golden section search for 1D minimization. Simpler than Brent but with guaranteed linear convergence. Does not attempt parabolic interpolation.

Template parameter:
- Function: type with a Value(double theX, double& theF) method

Parameters:
- theFunc: function to minimize
- theLower: lower bound of the search interval
- theUpper: upper bound of the search interval
- theConfig: solver configuration
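The core of golden section search fits in a few lines: keep two interior points at the golden ratio of the bracket and discard the sub-interval that cannot contain the minimum, reusing one function value per step. An illustrative, self-contained sketch (goldenMinimize is a hypothetical name, and unimodality of f on [a, b] is assumed):

```cpp
#include <cmath>

// Golden section search sketch: each iteration shrinks [a, b] by the factor
// 1/phi ~ 0.618 and costs exactly one new function evaluation.
template <typename F>
double goldenMinimize(F f, double a, double b, double tol = 1.0e-8)
{
  const double invPhi = (std::sqrt(5.0) - 1.0) / 2.0; // ~0.618
  double c = b - invPhi * (b - a);
  double d = a + invPhi * (b - a);
  double fc = f(c), fd = f(d);
  while (b - a > tol)
  {
    if (fc < fd) { b = d; d = c; fd = fc; c = b - invPhi * (b - a); fc = f(c); }
    else         { a = c; c = d; fc = fd; d = a + invPhi * (b - a); fd = f(d); }
  }
  return 0.5 * (a + b);
}
```

The guaranteed linear convergence mentioned above is this fixed 0.618 shrink factor per evaluation; Brent accelerates this with parabolic steps when they are safe.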
template<typename Function>
VectorResult MathOpt::LBFGS(Function& theFunc, const math_Vector& theStartingPoint, int theMemorySize = 10, const Config& theConfig = Config())

L-BFGS (limited-memory BFGS) for large-scale optimization. Uses only the m most recent {s, y} pairs instead of the full Hessian; memory is O(m*n) instead of O(n^2).

Template parameter:
- Function: type with Value and Gradient methods

Parameters:
- theFunc: function object
- theStartingPoint: initial guess
- theMemorySize: number of gradient pairs to store (default 10)
- theConfig: solver configuration
template<typename Function>
VectorResult MathOpt::MultiStart(Function& theFunc, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const GlobalConfig& theConfig = GlobalConfig())

Multi-start local optimization.

Runs multiple local optimizations from random starting points and returns the best result found. Uses Powell's method for local optimization (gradient-free).

Template parameter:
- Function: type with a Value(const math_Vector&, double&) method

Parameters:
- theFunc: function to minimize
- theLowerBounds: lower bounds
- theUpperBounds: upper bounds
- theConfig: optimization configuration
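The multi-start pattern itself is simple: sample random starts in the box, run a local solver from each, and keep the best. A 1-D illustrative sketch, where a crude shrinking-step search stands in for the Powell call (all names here are hypothetical):

```cpp
#include <cmath>
#include <random>

// Multi-start sketch: the outer loop is the essence of MultiStart; the inner
// loop is a stand-in local minimizer (pattern search with halving steps).
template <typename F>
double multiStart1d(F f, double lo, double hi, int starts, std::mt19937& rng)
{
  std::uniform_real_distribution<double> u(lo, hi);
  double bestX = lo, bestF = f(lo);
  for (int s = 0; s < starts; ++s)
  {
    double x = u(rng), step = (hi - lo) / 10.0;
    while (step > 1.0e-9) // descend until the step underflows the tolerance
    {
      if      (f(x + step) < f(x)) x += step;
      else if (f(x - step) < f(x)) x -= step;
      else                         step *= 0.5;
    }
    if (f(x) < bestF) { bestF = f(x); bestX = x; } // keep the best local minimum
  }
  return bestX;
}
```

Each start converges to some local minimum; more starts raise the chance that one of them is the global one.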
template<typename Function>
VectorResult MathOpt::Newton(Function& theFunc, const math_Vector& theStartingPoint, const NewtonConfig& theConfig = NewtonConfig())

Newton's method for N-dimensional minimization using the Hessian.

Fastest convergence near the minimum (quadratic) but requires Hessian computation. Uses line search for global convergence and Hessian regularization when the Hessian is not positive definite.

Template parameter:
- Function: type with Value, Gradient, and Hessian methods

Parameters:
- theFunc: function object with value, gradient, and Hessian
- theStartingPoint: initial guess
- theConfig: solver configuration
template<typename Function>
VectorResult MathOpt::NewtonBounded(Function& theFunc, const math_Vector& theStartingPoint, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const NewtonConfig& theConfig = NewtonConfig())

Newton's method with bound constraints.

Minimizes f(x) subject to theLowerBounds <= x <= theUpperBounds. Uses a projected gradient approach similar to BFGSBounded.

Template parameter:
- Function: type with Value, Gradient, and Hessian methods

Parameters:
- theFunc: function object
- theStartingPoint: initial guess
- theLowerBounds: lower bounds for each variable
- theUpperBounds: upper bounds for each variable
- theConfig: solver configuration
template<typename Function>
VectorResult MathOpt::NewtonModified(Function& theFunc, const math_Vector& theStartingPoint, const NewtonConfig& theConfig = NewtonConfig())

Modified Newton's method with automatic Hessian regularization. Adds diagonal elements to ensure positive definiteness using an adaptive regularization strategy.

Template parameter:
- Function: type with Value, Gradient, and Hessian methods

Parameters:
- theFunc: function object
- theStartingPoint: initial guess
- theConfig: solver configuration
template<typename Function>
VectorResult MathOpt::NewtonNumerical(Function& theFunc, const math_Vector& theStartingPoint, double theGradStep = 1.0e-8, double theHessStep = 1.0e-6, const NewtonConfig& theConfig = NewtonConfig())

Newton's method with fully numerical derivatives. Computes both the gradient and the Hessian using finite differences.

Template parameter:
- Function: type with a Value(const math_Vector&, double&) method only

Parameters:
- theFunc: function object
- theStartingPoint: initial guess
- theGradStep: step size for the numerical gradient
- theHessStep: step size for the numerical Hessian
- theConfig: solver configuration
template<typename Function>
VectorResult MathOpt::NewtonNumericalHessian(Function& theFunc, const math_Vector& theStartingPoint, double theHessStep = 1.0e-6, const NewtonConfig& theConfig = NewtonConfig())

Newton's method with numerical Hessian. Computes the Hessian using finite differences when an analytical Hessian is not available.

Template parameter:
- Function: type with Value and Gradient methods (the Hessian is computed numerically)

Parameters:
- theFunc: function object
- theStartingPoint: initial guess
- theHessStep: step size for the numerical Hessian
- theConfig: solver configuration
template<typename Function>
void MathOpt::PolishCoordinateWise(Function& theFunc, math_Vector& thePosition, double& theValue, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, double theTolerance, int theMaxPolishEvals, int& theEvalCount)

Coordinate-wise polishing using Brent's 1D minimization. For each dimension, performs Brent's method directly on the single coordinate, modifying only one element of the position vector per evaluation (zero-allocation).

Template parameter:
- Function: type with a Value(const math_Vector&, double&) method

Parameters:
- theFunc: function to minimize
- thePosition: current best position (updated in place)
- theValue: current best value (updated in place)
- theLowerBounds: lower bounds for each variable
- theUpperBounds: upper bounds for each variable
- theTolerance: convergence tolerance for Brent's method
- theMaxPolishEvals: maximum total function evaluations for polishing
- theEvalCount: output, number of function evaluations used
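The pattern can be sketched as follows, substituting golden section search for Brent's method as the 1-D engine and omitting the evaluation budget; everything here is illustrative (std::vector instead of math_Vector, hypothetical names):

```cpp
#include <cmath>
#include <vector>

// Coordinate-wise polish sketch: for each coordinate in turn, minimize along
// that axis only, within its bounds, while the other coordinates stay fixed.
// Only x[i] changes between evaluations, mirroring the zero-allocation idea.
template <typename F>
void polishCoordinateWise(F f, std::vector<double>& x, double& fx,
                          const std::vector<double>& lo, const std::vector<double>& hi,
                          double tol = 1.0e-8)
{
  const double invPhi = (std::sqrt(5.0) - 1.0) / 2.0;
  for (std::size_t i = 0; i < x.size(); ++i)
  {
    auto f1 = [&](double t) { x[i] = t; return f(x); }; // vary one coordinate
    double a = lo[i], b = hi[i];
    double c = b - invPhi * (b - a), d = a + invPhi * (b - a);
    double fc = f1(c), fd = f1(d);
    while (b - a > tol) // golden section search on coordinate i
    {
      if (fc < fd) { b = d; d = c; fd = fc; c = b - invPhi * (b - a); fc = f1(c); }
      else         { a = c; c = d; fc = fd; d = a + invPhi * (b - a); fd = f1(d); }
    }
    x[i] = 0.5 * (a + b);
    fx = f(x); // refresh the best value after fixing this coordinate
  }
}
```

On separable functions a single sweep already reaches the minimum; in general the sweep is a cheap refinement after a global phase such as PSO.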
template<typename Function>
VectorResult MathOpt::Powell(Function& theFunc, const math_Vector& theStartingPoint, const Config& theConfig = Config())

Powell's conjugate direction method for N-dimensional minimization. A gradient-free optimization algorithm that uses conjugate directions: each iteration performs successive line minimizations along a set of directions that is updated to remain mutually conjugate. Requires no derivatives, but typically converges more slowly than gradient-based methods on smooth problems.

Template parameter:
- Function: type with a Value(const math_Vector&, double&) method

Parameters:
- theFunc: function to minimize
- theStartingPoint: initial guess (N-dimensional)
- theConfig: solver configuration
template<typename Function>
VectorResult MathOpt::PowellWithDirections(Function& theFunc, const math_Vector& theStartingPoint, const math_Matrix& theInitialDirections, const Config& theConfig = Config())

Powell's method with custom initial directions. Allows specifying the initial direction set instead of coordinate axes.

Template parameter:
- Function: type with a Value(const math_Vector&, double&) method

Parameters:
- theFunc: function to minimize
- theStartingPoint: initial guess
- theInitialDirections: initial direction set (N x N matrix, directions as rows)
- theConfig: solver configuration
template<typename Function>
VectorResult MathOpt::PSO(Function& theFunc, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const PSOConfig& theConfig, const NCollection_DynamicArray<PSOSeedParticle>* theSeeds, PSOStats* theStats = nullptr)

Particle Swarm Optimization for global minimization.

PSO is a stochastic, population-based optimization algorithm inspired by the social behavior of bird flocks and fish schools. Each particle updates its velocity toward its own best-known position and the swarm's global best, then moves and is kept inside the bounds according to the configured boundary mode.

Template parameter:
- Function: type with a Value(const math_Vector&, double&) method

Parameters:
- theFunc: function to minimize
- theLowerBounds: lower bounds for each variable
- theUpperBounds: upper bounds for each variable
- theConfig: PSO configuration
- theSeeds: optional seed particles for initialization
- theStats: optional output statistics
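A single particle update can be sketched as follows, using the Clamp boundary mode from PSOBoundaryMode. The coefficient values and function name are illustrative, not the library's defaults:

```cpp
#include <algorithm>
#include <random>
#include <vector>

// One PSO velocity/position update: inertia w keeps the current heading,
// c1 pulls toward the particle's personal best, c2 toward the global best;
// the new position is clamped into [lo, hi] (PSOBoundaryMode::Clamp idea).
void psoUpdate(std::vector<double>& x, std::vector<double>& v,
               const std::vector<double>& pBest, const std::vector<double>& gBest,
               const std::vector<double>& lo, const std::vector<double>& hi,
               std::mt19937& rng,
               double w = 0.7, double c1 = 1.5, double c2 = 1.5)
{
  std::uniform_real_distribution<double> u(0.0, 1.0);
  for (std::size_t i = 0; i < x.size(); ++i)
  {
    v[i] = w * v[i]
         + c1 * u(rng) * (pBest[i] - x[i])
         + c2 * u(rng) * (gBest[i] - x[i]);
    x[i] = std::clamp(x[i] + v[i], lo[i], hi[i]);
  }
}
```

The Reflect and Wrap modes would replace the clamp with a bounce off the boundary or a wrap-around, respectively; LinearDecay would shrink w over the iterations.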
template<typename Function>
VectorResult MathOpt::PSO(Function& theFunc, const math_Vector& theLowerBounds, const math_Vector& theUpperBounds, const PSOConfig& theConfig = PSOConfig())

Particle Swarm Optimization for global minimization (basic overload).

Template parameter:
- Function: type with a Value(const math_Vector&, double&) method

Parameters:
- theFunc: function to minimize
- theLowerBounds: lower bounds for each variable
- theUpperBounds: upper bounds for each variable
- theConfig: PSO configuration
UzawaResult MathOpt::Uzawa(const math_Matrix& theCont, const math_Vector& theSecont, const math_Vector& theStartingPoint, int theNce, int theNci, const UzawaConfig& theConfig = UzawaConfig()) (inline)

Solve constrained least squares using the Uzawa algorithm.

Solves: min ||X - X0||^2 subject to C*X = S.

For equality constraints only, a direct Crout decomposition is used. For mixed equality/inequality constraints, the iterative Uzawa method is used. The Uzawa algorithm is a dual decomposition method that alternates between minimizing the Lagrangian in the primal variables and updating the dual multipliers.

Parameters:
- theCont: constraint matrix C (Nce+Nci rows x N cols)
- theSecont: right-hand side S
- theStartingPoint: initial point X0
- theNce: number of equality constraints (first rows)
- theNci: number of inequality constraints (last rows, C*X <= S)
- theConfig: algorithm configuration
UzawaResult MathOpt::UzawaEquality(const math_Matrix& theCont, const math_Vector& theSecont, const math_Vector& theStartingPoint, const UzawaConfig& theConfig = UzawaConfig()) (inline)

Solve constrained least squares with equality constraints only.

Convenience function for C*X = S with min ||X - X0||.

Parameters:
- theCont: constraint matrix C
- theSecont: right-hand side S
- theStartingPoint: initial point X0
- theConfig: algorithm configuration