Open CASCADE Technology Reference Manual 8.0.0
MathOpt Namespace Reference

Optimization algorithms for scalar and vector functions. More...

Data Structures

struct  FRPRConfig
 Configuration for FRPR conjugate gradient method. More...
 
struct  GlobalConfig
 Configuration for global optimization. More...
 
struct  NewtonConfig
 Configuration for Newton minimization with Hessian. More...
 
struct  PSOConfig
 Configuration for Particle Swarm Optimization. More...
 
struct  PSOSeedParticle
 Seed particle for PSO initialization. More...
 
struct  PSOStats
 Statistics collected during PSO execution. More...
 
struct  UzawaConfig
 Configuration for Uzawa algorithm. More...
 
struct  UzawaResult
 Result for Uzawa constrained optimization. More...
 

Enumerations

enum class  ConjugateGradientFormula { FletcherReeves , PolakRibiere , HestenesStiefel , DaiYuan }
 Conjugate gradient formula selection. More...
 
enum class  PSOInitMode { RandomOnly , SeededOnly , SeededPlusRandom }
 Initialization mode for PSO particles. More...
 
enum class  PSOBoundaryMode { Clamp , Reflect , Wrap }
 Boundary handling mode for particles leaving the search space. More...
 
enum class  PSOInertiaSchedule { Constant , LinearDecay }
 Inertia weight schedule. More...
 
enum class  GlobalStrategy { PSO , MultiStart , PSOHybrid , DifferentialEvolution }
 Global optimization strategy selection. More...
 

Functions

template<typename Function >
ScalarResult Brent (Function &theFunc, double theLower, double theUpper, const Config &theConfig=Config())
 Brent's method for 1D minimization. Combines golden section search with parabolic interpolation. Guaranteed to converge for unimodal functions within the given interval.
 
template<typename Function >
ScalarResult Golden (Function &theFunc, double theLower, double theUpper, const Config &theConfig=Config())
 Golden section search for 1D minimization. Simpler than Brent's method, with guaranteed but only linear convergence; it does not attempt parabolic interpolation.
 
template<typename Function >
ScalarResult BrentWithBracket (Function &theFunc, double theGuess, double theStep=1.0, const Config &theConfig=Config())
 Brent's method with automatic bracket search. First attempts to bracket a minimum, then applies Brent's method.
 
template<typename Function >
VectorResult Powell (Function &theFunc, const math_Vector &theStartingPoint, const Config &theConfig=Config())
 Powell's conjugate direction method for N-dimensional minimization. A gradient-free optimization algorithm that uses conjugate directions.
 
template<typename Function >
VectorResult PowellWithDirections (Function &theFunc, const math_Vector &theStartingPoint, const math_Matrix &theInitialDirections, const Config &theConfig=Config())
 Powell's method with custom initial directions. Allows specifying the initial direction set instead of coordinate axes.
 
template<typename Function >
VectorResult BFGS (Function &theFunc, const math_Vector &theStartingPoint, const Config &theConfig=Config())
 BFGS (Broyden-Fletcher-Goldfarb-Shanno) quasi-Newton method. One of the most effective algorithms for smooth unconstrained optimization.
 
template<typename Function >
VectorResult BFGSNumerical (Function &theFunc, const math_Vector &theStartingPoint, double theGradStep=1.0e-8, const Config &theConfig=Config())
 BFGS with numerical gradient. Uses central differences to approximate gradient when analytical gradient is not available.
 
template<typename Function >
VectorResult LBFGS (Function &theFunc, const math_Vector &theStartingPoint, int theMemorySize=10, const Config &theConfig=Config())
 L-BFGS (Limited-memory BFGS) for large-scale optimization. Uses only the m most recent {s, y} pairs instead of full Hessian. Memory: O(m*n) instead of O(n^2).
 
template<typename Function >
VectorResult BFGSBounded (Function &theFunc, const math_Vector &theStartingPoint, const math_Vector &theLowerBounds, const math_Vector &theUpperBounds, const Config &theConfig=Config())
 BFGS with bound constraints (box constraints).
 
template<typename Function >
VectorResult FRPR (Function &theFunc, const math_Vector &theStartingPoint, const FRPRConfig &theConfig=FRPRConfig())
 Fletcher-Reeves-Polak-Ribiere conjugate gradient method.
 
template<typename Function >
VectorResult FRPRNumerical (Function &theFunc, const math_Vector &theStartingPoint, double theGradStep=1.0e-8, const FRPRConfig &theConfig=FRPRConfig())
 FRPR with numerical gradient. Uses central differences when analytical gradient is not available.
 
template<typename Function >
VectorResult Newton (Function &theFunc, const math_Vector &theStartingPoint, const NewtonConfig &theConfig=NewtonConfig())
 Newton's method for N-dimensional minimization using Hessian.
 
template<typename Function >
VectorResult NewtonModified (Function &theFunc, const math_Vector &theStartingPoint, const NewtonConfig &theConfig=NewtonConfig())
 Modified Newton's method with automatic Hessian regularization. Adds diagonal elements to ensure positive definiteness using an adaptive regularization strategy.
 
template<typename Function >
VectorResult NewtonNumericalHessian (Function &theFunc, const math_Vector &theStartingPoint, double theHessStep=1.0e-6, const NewtonConfig &theConfig=NewtonConfig())
 Newton's method with numerical Hessian. Computes Hessian using finite differences when analytical Hessian is not available.
 
template<typename Function >
VectorResult NewtonNumerical (Function &theFunc, const math_Vector &theStartingPoint, double theGradStep=1.0e-8, double theHessStep=1.0e-6, const NewtonConfig &theConfig=NewtonConfig())
 Newton's method with fully numerical derivatives. Computes both gradient and Hessian using finite differences.
 
template<typename Function >
VectorResult NewtonBounded (Function &theFunc, const math_Vector &theStartingPoint, const math_Vector &theLowerBounds, const math_Vector &theUpperBounds, const NewtonConfig &theConfig=NewtonConfig())
 Newton's method with bound constraints.
 
template<typename Function >
void PolishCoordinateWise (Function &theFunc, math_Vector &thePosition, double &theValue, const math_Vector &theLowerBounds, const math_Vector &theUpperBounds, double theTolerance, int theMaxPolishEvals, int &theEvalCount)
 Coordinate-wise polishing using Brent's 1D minimization. For each dimension, performs Brent's method directly on the single coordinate, modifying only one element of the position vector per evaluation (zero-allocation).
 
template<typename Function >
VectorResult PSO (Function &theFunc, const math_Vector &theLowerBounds, const math_Vector &theUpperBounds, const PSOConfig &theConfig, const NCollection_DynamicArray< PSOSeedParticle > *theSeeds, PSOStats *theStats=nullptr)
 Particle Swarm Optimization for global minimization.
 
template<typename Function >
VectorResult PSO (Function &theFunc, const math_Vector &theLowerBounds, const math_Vector &theUpperBounds, const PSOConfig &theConfig=PSOConfig())
 Particle Swarm Optimization for global minimization (basic overload).
 
template<typename Function >
VectorResult DifferentialEvolution (Function &theFunc, const math_Vector &theLowerBounds, const math_Vector &theUpperBounds, const GlobalConfig &theConfig=GlobalConfig())
 Differential Evolution algorithm for global optimization.
 
template<typename Function >
VectorResult MultiStart (Function &theFunc, const math_Vector &theLowerBounds, const math_Vector &theUpperBounds, const GlobalConfig &theConfig=GlobalConfig())
 Multi-start local optimization.
 
template<typename Function >
VectorResult GlobalMinimum (Function &theFunc, const math_Vector &theLowerBounds, const math_Vector &theUpperBounds, const GlobalConfig &theConfig=GlobalConfig())
 Unified global optimization interface.
 
template<typename Function >
VectorResult GlobalMinimum (Function &theFunc, const math_Vector &theLowerBounds, const math_Vector &theUpperBounds, const GlobalConfig &theConfig, const PSOConfig *thePSOConfig, const NCollection_DynamicArray< PSOSeedParticle > *theSeeds=nullptr, PSOStats *theStats=nullptr)
 Unified global optimization interface with PSO-specific options.
 
UzawaResult Uzawa (const math_Matrix &theCont, const math_Vector &theSecont, const math_Vector &theStartingPoint, int theNce, int theNci, const UzawaConfig &theConfig=UzawaConfig())
 Solve constrained least squares using Uzawa algorithm.
 
UzawaResult UzawaEquality (const math_Matrix &theCont, const math_Vector &theSecont, const math_Vector &theStartingPoint, const UzawaConfig &theConfig=UzawaConfig())
 Solve constrained least squares with equality constraints only.
 

Detailed Description

Optimization algorithms for scalar and vector functions.

Enumeration Type Documentation

◆ ConjugateGradientFormula

Conjugate gradient formula selection.

Enumerator
FletcherReeves 

beta = g_new^T g_new / g^T g (original, guaranteed descent)

PolakRibiere 

beta = g_new^T (g_new - g) / g^T g (often faster, may need restarts)

HestenesStiefel 

beta = g_new^T (g_new - g) / d^T (g_new - g)

DaiYuan 

beta = g_new^T g_new / d^T (g_new - g)
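
The four formulas above can be computed side by side. The sketch below is purely illustrative: it uses plain std::vector in place of math_Vector, and the enum and function names are hypothetical, not the MathOpt implementation.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

enum class CGFormula { FletcherReeves, PolakRibiere, HestenesStiefel, DaiYuan };

static double dot(const std::vector<double>& a, const std::vector<double>& b) {
  double s = 0.0;
  for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
  return s;
}

// g: previous gradient, gNew: new gradient, d: previous search direction.
double CGBeta(CGFormula f, const std::vector<double>& g,
              const std::vector<double>& gNew, const std::vector<double>& d) {
  std::vector<double> y(g.size());                 // y = g_new - g
  for (size_t i = 0; i < g.size(); ++i) y[i] = gNew[i] - g[i];
  switch (f) {
    case CGFormula::FletcherReeves:  return dot(gNew, gNew) / dot(g, g);
    case CGFormula::PolakRibiere:    return dot(gNew, y)    / dot(g, g);
    case CGFormula::HestenesStiefel: return dot(gNew, y)    / dot(d, y);
    case CGFormula::DaiYuan:         return dot(gNew, gNew) / dot(d, y);
  }
  return 0.0;
}
```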

◆ GlobalStrategy

Global optimization strategy selection.

Enumerator
PSO 

Particle Swarm Optimization only.

MultiStart 

Multiple local optimizations from random starts.

PSOHybrid 

PSO followed by local refinement.

DifferentialEvolution 

Differential Evolution algorithm.

◆ PSOBoundaryMode

Boundary handling mode for particles leaving the search space.

Enumerator
Clamp 

Clamp to the violated bound and multiply the velocity by -0.5, i.e. reverse and damp it (default)

Reflect 

Reflect the coordinate about the violated bound and multiply the velocity by -0.5.

Wrap 

Periodic wrap into bounds.
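
A hedged sketch of how the three modes might act on one out-of-bounds coordinate; details such as the damping factor may differ in the actual MathOpt handling, and this is not library code.

```cpp
#include <cassert>
#include <cmath>

enum class BoundaryMode { Clamp, Reflect, Wrap };

// Apply a boundary mode to one coordinate x with velocity v in [lo, hi].
// Reflect assumes a single bound violation (particle not far outside).
void ApplyBoundary(BoundaryMode m, double lo, double hi, double& x, double& v) {
  if (x >= lo && x <= hi) return;                  // inside: nothing to do
  const double range = hi - lo;
  switch (m) {
    case BoundaryMode::Clamp:                      // snap to the violated bound
      x = (x < lo) ? lo : hi;
      v *= -0.5;                                   // reverse and damp velocity
      break;
    case BoundaryMode::Reflect:                    // mirror about the bound
      x = (x < lo) ? 2.0 * lo - x : 2.0 * hi - x;
      v *= -0.5;
      break;
    case BoundaryMode::Wrap: {                     // periodic wrap into [lo, hi]
      double t = std::fmod(x - lo, range);
      if (t < 0.0) t += range;
      x = lo + t;
      break;
    }
  }
}
```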

◆ PSOInertiaSchedule

Inertia weight schedule.

Enumerator
Constant 

Fixed omega (default)

LinearDecay 

Linear decay from Omega to OmegaMin over iterations.

◆ PSOInitMode

Initialization mode for PSO particles.

Enumerator
RandomOnly 

All particles randomly initialized (default)

SeededOnly 

Initialize from seed particles only.

SeededPlusRandom 

Seed particles + remaining random.

Function Documentation

◆ BFGS()

template<typename Function >
VectorResult MathOpt::BFGS ( Function & theFunc,
const math_Vector & theStartingPoint,
const Config & theConfig = Config() )

BFGS (Broyden-Fletcher-Goldfarb-Shanno) quasi-Newton method. One of the most effective algorithms for smooth unconstrained optimization.

Algorithm:

  1. Start with initial Hessian approximation H (usually identity)
  2. Compute search direction p = -H * gradient
  3. Perform line search to find step size alpha satisfying Wolfe conditions
  4. Update x = x + alpha * p
  5. Update Hessian approximation using BFGS formula
  6. Repeat until convergence

The BFGS update maintains positive definiteness of H if started with a positive definite matrix and using proper line search.

Advantages:

  • Superlinear convergence near minimum
  • Self-correcting Hessian approximation
  • No need to compute actual Hessian
Template Parameters
Function - type with:
  • Value(const math_Vector&, double&) for function value
  • Gradient(const math_Vector&, math_Vector&) for gradient
Parameters
theFunc - function object with value and gradient
theStartingPoint - initial guess
theConfig - solver configuration
Returns
result containing minimum location and value
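
Step 5, the BFGS update of the inverse-Hessian approximation H, can be sketched as follows. Plain containers stand in for math_Matrix; this is an illustration of the standard formula, not the library code, and it assumes the line search enforced the curvature condition y·s > 0.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// H_new = (I - rho*s*y^T) * H * (I - rho*y*s^T) + rho*s*s^T,  rho = 1/(y.s),
// where s = x_new - x and y = g_new - g.
Mat BFGSUpdate(const Mat& H, const Vec& s, const Vec& y) {
  const size_t n = s.size();
  double ys = 0.0;
  for (size_t i = 0; i < n; ++i) ys += y[i] * s[i];
  const double rho = 1.0 / ys;                     // requires y.s > 0 (Wolfe)
  Mat A(n, Vec(n, 0.0));                           // A = I - rho * s * y^T
  for (size_t i = 0; i < n; ++i)
    for (size_t j = 0; j < n; ++j)
      A[i][j] = (i == j ? 1.0 : 0.0) - rho * s[i] * y[j];
  Mat AH(n, Vec(n, 0.0)), Hn(n, Vec(n, 0.0));      // H_new = A*H*A^T + rho*s*s^T
  for (size_t i = 0; i < n; ++i)
    for (size_t j = 0; j < n; ++j)
      for (size_t k = 0; k < n; ++k) AH[i][j] += A[i][k] * H[k][j];
  for (size_t i = 0; i < n; ++i)
    for (size_t j = 0; j < n; ++j) {
      for (size_t k = 0; k < n; ++k) Hn[i][j] += AH[i][k] * A[j][k];
      Hn[i][j] += rho * s[i] * s[j];
    }
  return Hn;
}
```

By construction the updated matrix satisfies the secant condition H_new * y = s.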

◆ BFGSBounded()

template<typename Function >
VectorResult MathOpt::BFGSBounded ( Function & theFunc,
const math_Vector & theStartingPoint,
const math_Vector & theLowerBounds,
const math_Vector & theUpperBounds,
const Config & theConfig = Config() )

BFGS with bound constraints (box constraints).

Minimizes f(x) subject to theLowerBounds <= x <= theUpperBounds. Uses projected gradient approach:

  • After each step, clamp x to bounds
  • Zero gradient components at bounds where they point outward
Template Parameters
Function - type with Value and Gradient methods
Parameters
theFunc - function object
theStartingPoint - initial guess
theLowerBounds - lower bounds for each variable
theUpperBounds - upper bounds for each variable
theConfig - solver configuration
Returns
result containing minimum location and value

◆ BFGSNumerical()

template<typename Function >
VectorResult MathOpt::BFGSNumerical ( Function & theFunc,
const math_Vector & theStartingPoint,
double theGradStep = 1.0e-8,
const Config & theConfig = Config() )

BFGS with numerical gradient. Uses central differences to approximate gradient when analytical gradient is not available.

Template Parameters
Function - type with Value(const math_Vector&, double&) method only
Parameters
theFunc - function object
theStartingPoint - initial guess
theGradStep - step size for numerical gradient (default 1e-8)
theConfig - solver configuration
Returns
result containing minimum location and value
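
The central-difference approximation used here can be sketched as below; std::function stands in for the templated Function type, and this is an illustration rather than the MathOpt implementation.

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

// df/dx_i ~ (f(x + h e_i) - f(x - h e_i)) / (2h): O(h^2) accurate per
// coordinate, at the cost of two evaluations per dimension.
std::vector<double> NumericalGradient(
    const std::function<double(const std::vector<double>&)>& f,
    std::vector<double> x, double h = 1.0e-8) {
  std::vector<double> g(x.size());
  for (size_t i = 0; i < x.size(); ++i) {
    const double xi = x[i];
    x[i] = xi + h; const double fp = f(x);
    x[i] = xi - h; const double fm = f(x);
    x[i] = xi;                                     // restore the coordinate
    g[i] = (fp - fm) / (2.0 * h);
  }
  return g;
}
```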

◆ Brent()

template<typename Function >
ScalarResult MathOpt::Brent ( Function & theFunc,
double theLower,
double theUpper,
const Config & theConfig = Config() )

Brent's method for 1D minimization. Combines golden section search with parabolic interpolation. Guaranteed to converge for unimodal functions within the given interval.

Algorithm:

  1. Maintain bracket [a, b] with interior point x where f(x) < f(a), f(x) < f(b)
  2. Try parabolic interpolation using three points
  3. If parabolic step is rejected, use golden section step
  4. Update bracket and repeat until convergence
Template Parameters
Function - type with Value(double theX, double& theF) method
Parameters
theFunc - function to minimize
theLower - lower bound of search interval
theUpper - upper bound of search interval
theConfig - solver configuration
Returns
result containing minimum location and value
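
The parabolic trial step (step 2 above) can be sketched in isolation: fit a parabola through three points and return its vertex abscissa. Brent accepts this trial only if it lands inside the bracket and shrinks it sufficiently; otherwise it falls back to a golden-section step. Illustrative, not the library code.

```cpp
#include <cassert>
#include <cmath>

// Vertex abscissa of the parabola through (a,fa), (b,fb), (c,fc).
// The caller must guard against a near-zero denominator (collinear points).
double ParabolicStep(double a, double fa, double b, double fb,
                     double c, double fc) {
  const double num = (b - a) * (b - a) * (fb - fc)
                   - (b - c) * (b - c) * (fb - fa);
  const double den = (b - a) * (fb - fc) - (b - c) * (fb - fa);
  return b - 0.5 * num / den;
}
```

For an exactly parabolic function the step lands on the minimum in one shot, which is why Brent converges superlinearly once the bracket is small and the function locally quadratic.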

◆ BrentWithBracket()

template<typename Function >
ScalarResult MathOpt::BrentWithBracket ( Function & theFunc,
double theGuess,
double theStep = 1.0,
const Config & theConfig = Config() )

Brent's method with automatic bracket search. First attempts to bracket a minimum, then applies Brent's method.

Template Parameters
Function - type with Value(double theX, double& theF) method
Parameters
theFunc - function to minimize
theGuess - initial guess
theStep - initial step size for bracket search
theConfig - solver configuration
Returns
result containing minimum location and value

◆ DifferentialEvolution()

template<typename Function >
VectorResult MathOpt::DifferentialEvolution ( Function & theFunc,
const math_Vector & theLowerBounds,
const math_Vector & theUpperBounds,
const GlobalConfig & theConfig = GlobalConfig() )

Differential Evolution algorithm for global optimization.

DE is a stochastic, population-based optimization algorithm. It uses mutation, crossover, and selection operations to evolve a population of candidate solutions.

Template Parameters
Function - type with Value(const math_Vector&, double&) method
Parameters
theFunc - function to minimize
theLowerBounds - lower bounds
theUpperBounds - upper bounds
theConfig - optimization configuration
Returns
result containing best solution found
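
The mutation/crossover/selection loop can be sketched as a minimal DE/rand/1/bin variant. All names and the fixed F, CR, population and generation counts below are illustrative assumptions, not GlobalConfig fields or the MathOpt implementation.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <functional>
#include <random>
#include <vector>

using Vec = std::vector<double>;

Vec DifferentialEvolutionSketch(const std::function<double(const Vec&)>& f,
                                const Vec& lo, const Vec& hi,
                                int pop = 20, int gens = 100,
                                double F = 0.8, double CR = 0.9,
                                unsigned seed = 42) {
  const size_t n = lo.size();
  std::mt19937 rng(seed);
  std::uniform_real_distribution<double> u01(0.0, 1.0);
  std::uniform_int_distribution<int> pick(0, pop - 1);
  std::vector<Vec> X(pop, Vec(n));
  std::vector<double> val(pop);
  for (int i = 0; i < pop; ++i) {                  // random initial population
    for (size_t j = 0; j < n; ++j) X[i][j] = lo[j] + u01(rng) * (hi[j] - lo[j]);
    val[i] = f(X[i]);
  }
  for (int g = 0; g < gens; ++g)
    for (int i = 0; i < pop; ++i) {
      const int a = pick(rng), b = pick(rng), c = pick(rng); // random donors
      std::uniform_int_distribution<int> jr(0, (int)n - 1);
      const int forced = jr(rng);                  // at least one mutated gene
      Vec trial = X[i];
      for (size_t j = 0; j < n; ++j)
        if (u01(rng) < CR || (int)j == forced) {
          const double v = X[a][j] + F * (X[b][j] - X[c][j]); // mutation
          trial[j] = std::min(std::max(v, lo[j]), hi[j]);     // clamp to box
        }
      const double tv = f(trial);
      if (tv <= val[i]) { X[i] = trial; val[i] = tv; }        // greedy selection
    }
  int best = 0;
  for (int i = 1; i < pop; ++i) if (val[i] < val[best]) best = i;
  return X[best];
}
```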

◆ FRPR()

template<typename Function >
VectorResult MathOpt::FRPR ( Function & theFunc,
const math_Vector & theStartingPoint,
const FRPRConfig & theConfig = FRPRConfig() )

Fletcher-Reeves-Polak-Ribiere conjugate gradient method.

Memory-efficient alternative to BFGS for large-scale optimization. Uses only O(n) storage compared to O(n^2) for BFGS.

Algorithm:

  1. Compute gradient g at current point
  2. First iteration: search direction p = -g
  3. Perform line search along p
  4. Compute new gradient g_new
  5. Update: beta = (g_new . g_new) / (g . g) [Fletcher-Reeves] or beta = (g_new . (g_new - g)) / (g . g) [Polak-Ribiere]
  6. New direction: p = -g_new + beta * p
  7. Restart with steepest descent if beta < 0 or periodically
  8. Repeat until convergence
Template Parameters
Function - type with:
  • Value(const math_Vector&, double&) for function value
  • Gradient(const math_Vector&, math_Vector&) for gradient
Parameters
theFunc - function object with value and gradient
theStartingPoint - initial guess
theConfig - solver configuration
Returns
result containing minimum location and value
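
Steps 5-7 above can be sketched as a single direction update, here with the Polak-Ribiere beta and the beta < 0 restart (plain std::vector in place of math_Vector; an illustration, not the library code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;

// g: previous gradient, gNew: new gradient, p: previous search direction.
Vec NextDirection(const Vec& g, const Vec& gNew, const Vec& p) {
  double num = 0.0, den = 0.0;
  for (size_t i = 0; i < g.size(); ++i) {
    num += gNew[i] * (gNew[i] - g[i]);             // Polak-Ribiere numerator
    den += g[i] * g[i];
  }
  double beta = num / den;
  if (beta < 0.0) beta = 0.0;                      // restart: steepest descent
  Vec pNew(p.size());
  for (size_t i = 0; i < p.size(); ++i) pNew[i] = -gNew[i] + beta * p[i];
  return pNew;
}
```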

◆ FRPRNumerical()

template<typename Function >
VectorResult MathOpt::FRPRNumerical ( Function & theFunc,
const math_Vector & theStartingPoint,
double theGradStep = 1.0e-8,
const FRPRConfig & theConfig = FRPRConfig() )

FRPR with numerical gradient. Uses central differences when analytical gradient is not available.

Template Parameters
Function - type with Value(const math_Vector&, double&) method only
Parameters
theFunc - function object
theStartingPoint - initial guess
theGradStep - step size for numerical gradient
theConfig - solver configuration
Returns
result containing minimum location and value

◆ GlobalMinimum() [1/2]

template<typename Function >
VectorResult MathOpt::GlobalMinimum ( Function & theFunc,
const math_Vector & theLowerBounds,
const math_Vector & theUpperBounds,
const GlobalConfig & theConfig,
const PSOConfig * thePSOConfig,
const NCollection_DynamicArray< PSOSeedParticle > * theSeeds = nullptr,
PSOStats * theStats = nullptr )

Unified global optimization interface with PSO-specific options.

For PSO and PSOHybrid strategies, uses the provided PSO configuration, seed particles, and stats output. For other strategies, these are ignored.

Template Parameters
Function - type with Value(const math_Vector&, double&) method
Parameters
theFunc - function to minimize
theLowerBounds - lower bounds for each variable
theUpperBounds - upper bounds for each variable
theConfig - solver configuration
thePSOConfig - optional PSO-specific configuration (overrides auto-generated)
theSeeds - optional seed particles for PSO initialization
theStats - optional PSO statistics output
Returns
result containing best solution found

◆ GlobalMinimum() [2/2]

template<typename Function >
VectorResult MathOpt::GlobalMinimum ( Function & theFunc,
const math_Vector & theLowerBounds,
const math_Vector & theUpperBounds,
const GlobalConfig & theConfig = GlobalConfig() )

Unified global optimization interface.

Selects appropriate algorithm based on configuration and provides a consistent interface for all global optimization methods.

Template Parameters
Function - type with Value(const math_Vector&, double&) method
Parameters
theFunc - function to minimize
theLowerBounds - lower bounds for each variable
theUpperBounds - upper bounds for each variable
theConfig - solver configuration
Returns
result containing best solution found

◆ Golden()

template<typename Function >
ScalarResult MathOpt::Golden ( Function & theFunc,
double theLower,
double theUpper,
const Config & theConfig = Config() )

Golden section search for 1D minimization. Simpler than Brent's method, with guaranteed but only linear convergence; it does not attempt parabolic interpolation.

Template Parameters
Function - type with Value(double theX, double& theF) method
Parameters
theFunc - function to minimize
theLower - lower bound of search interval
theUpper - upper bound of search interval
theConfig - solver configuration
Returns
result containing minimum location and value
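
A self-contained sketch of the search (an illustration, not the MathOpt implementation): each iteration shrinks the bracket by the constant factor 1/phi ~ 0.618, which is the guaranteed linear convergence noted above.

```cpp
#include <cassert>
#include <cmath>
#include <functional>

double GoldenSearch(const std::function<double(double)>& f,
                    double lo, double hi, double tol = 1.0e-10) {
  const double invPhi = (std::sqrt(5.0) - 1.0) / 2.0;  // ~0.618
  double x1 = hi - invPhi * (hi - lo);
  double x2 = lo + invPhi * (hi - lo);
  double f1 = f(x1), f2 = f(x2);
  while (hi - lo > tol) {
    if (f1 < f2) {                                 // minimum lies in [lo, x2]
      hi = x2; x2 = x1; f2 = f1;
      x1 = hi - invPhi * (hi - lo); f1 = f(x1);
    } else {                                       // minimum lies in [x1, hi]
      lo = x1; x1 = x2; f1 = f2;
      x2 = lo + invPhi * (hi - lo); f2 = f(x2);
    }
  }
  return 0.5 * (lo + hi);
}
```

Note that only one new function evaluation is needed per iteration, because one of the two interior points is always reused.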

◆ LBFGS()

template<typename Function >
VectorResult MathOpt::LBFGS ( Function & theFunc,
const math_Vector & theStartingPoint,
int theMemorySize = 10,
const Config & theConfig = Config() )

L-BFGS (Limited-memory BFGS) for large-scale optimization. Uses only the m most recent {s, y} pairs instead of full Hessian. Memory: O(m*n) instead of O(n^2).

Template Parameters
Function - type with Value and Gradient methods
Parameters
theFunc - function object
theStartingPoint - initial guess
theMemorySize - number of gradient pairs to store (default 10)
theConfig - solver configuration
Returns
result containing minimum location and value
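
The core of L-BFGS is the two-loop recursion: it applies the inverse-Hessian approximation built from the stored {s, y} pairs to the gradient without ever forming an n x n matrix. A sketch (plain std::vector, oldest pair first; illustrative, not the library code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;

static double dot(const Vec& a, const Vec& b) {
  double s = 0.0;
  for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
  return s;
}

// Returns H * g where H is the L-BFGS inverse-Hessian approximation.
Vec TwoLoop(const std::vector<Vec>& S, const std::vector<Vec>& Y, Vec g) {
  const int m = (int)S.size();
  std::vector<double> alpha(m), rho(m);
  for (int i = m - 1; i >= 0; --i) {               // first loop: newest to oldest
    rho[i] = 1.0 / dot(Y[i], S[i]);
    alpha[i] = rho[i] * dot(S[i], g);
    for (size_t j = 0; j < g.size(); ++j) g[j] -= alpha[i] * Y[i][j];
  }
  if (m > 0) {                                     // initial scaling H0 = gamma*I
    const double gamma = dot(S[m - 1], Y[m - 1]) / dot(Y[m - 1], Y[m - 1]);
    for (double& gj : g) gj *= gamma;
  }
  for (int i = 0; i < m; ++i) {                    // second loop: oldest to newest
    const double beta = rho[i] * dot(Y[i], g);
    for (size_t j = 0; j < g.size(); ++j) g[j] += (alpha[i] - beta) * S[i][j];
  }
  return g;
}
```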

◆ MultiStart()

template<typename Function >
VectorResult MathOpt::MultiStart ( Function & theFunc,
const math_Vector & theLowerBounds,
const math_Vector & theUpperBounds,
const GlobalConfig & theConfig = GlobalConfig() )

Multi-start local optimization.

Runs multiple local optimizations from random starting points and returns the best result found. Uses Powell's method for local optimization (gradient-free).

Template Parameters
Function - type with Value(const math_Vector&, double&) method
Parameters
theFunc - function to minimize
theLowerBounds - lower bounds
theUpperBounds - upper bounds
theConfig - optimization configuration
Returns
result containing best solution found
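
The scheme reduces to a short loop. In the sketch below, localOpt stands in for the Powell-based local step; all names and the start count are illustrative assumptions, not the MathOpt implementation.

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <random>
#include <vector>

using Vec = std::vector<double>;

Vec MultiStartSketch(const std::function<double(const Vec&)>& f,
                     const std::function<Vec(const Vec&)>& localOpt,
                     const Vec& lo, const Vec& hi,
                     int starts = 16, unsigned seed = 1) {
  std::mt19937 rng(seed);
  std::uniform_real_distribution<double> u01(0.0, 1.0);
  Vec best;
  double bestVal = HUGE_VAL;
  for (int s = 0; s < starts; ++s) {
    Vec x(lo.size());
    for (size_t j = 0; j < x.size(); ++j)          // random start in the box
      x[j] = lo[j] + u01(rng) * (hi[j] - lo[j]);
    x = localOpt(x);                               // local refinement
    const double v = f(x);
    if (v < bestVal) { bestVal = v; best = x; }    // keep the best minimum
  }
  return best;
}
```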

◆ Newton()

template<typename Function >
VectorResult MathOpt::Newton ( Function & theFunc,
const math_Vector & theStartingPoint,
const NewtonConfig & theConfig = NewtonConfig() )

Newton's method for N-dimensional minimization using Hessian.

Fastest convergence near minimum (quadratic) but requires Hessian computation. Uses line search for global convergence and Hessian regularization when the Hessian is not positive definite.

Algorithm:

  1. Compute gradient g and Hessian H at current point
  2. If H is not positive definite, regularize: H = H + lambda*I
  3. Solve H * p = -g for search direction p
  4. Perform line search along p
  5. Update x = x + alpha * p
  6. Repeat until convergence
Template Parameters
Function - type with:
  • Value(const math_Vector&, double&) for function value
  • Gradient(const math_Vector&, math_Vector&) for gradient
  • Hessian(const math_Vector&, math_Matrix&) for Hessian
Parameters
theFunc - function object with value, gradient, and Hessian
theStartingPoint - initial guess
theConfig - solver configuration
Returns
result containing minimum location and value
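
Steps 2-3 above can be sketched as computing the Newton direction with on-demand regularization: attempt a Cholesky factorization, add lambda*I with growing lambda until it succeeds, then solve H p = -g by substitution. Plain containers stand in for math_Matrix/math_Vector, and the regularization schedule is an illustrative assumption, not the MathOpt strategy.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Cholesky H = L*L^T; returns false on a non-positive pivot (H not PD).
static bool Cholesky(Mat H, Mat& L) {
  const size_t n = H.size();
  L.assign(n, Vec(n, 0.0));
  for (size_t i = 0; i < n; ++i)
    for (size_t j = 0; j <= i; ++j) {
      double s = H[i][j];
      for (size_t k = 0; k < j; ++k) s -= L[i][k] * L[j][k];
      if (i == j) {
        if (s <= 0.0) return false;
        L[i][i] = std::sqrt(s);
      } else {
        L[i][j] = s / L[j][j];
      }
    }
  return true;
}

Vec NewtonDirection(Mat H, const Vec& g) {
  const size_t n = g.size();
  Mat L;
  double lambda = 1.0e-3;
  while (!Cholesky(H, L)) {                        // step 2: H <- H + lambda*I
    for (size_t i = 0; i < n; ++i) H[i][i] += lambda;
    lambda *= 2.0;                                 // grow until PD
  }
  Vec y(n), p(n);
  for (size_t i = 0; i < n; ++i) {                 // forward solve L y = -g
    double s = -g[i];
    for (size_t k = 0; k < i; ++k) s -= L[i][k] * y[k];
    y[i] = s / L[i][i];
  }
  for (size_t i = n; i-- > 0;) {                   // back solve L^T p = y
    double s = y[i];
    for (size_t k = i + 1; k < n; ++k) s -= L[k][i] * p[k];
    p[i] = s / L[i][i];
  }
  return p;                                        // descent direction
}
```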

◆ NewtonBounded()

template<typename Function >
VectorResult MathOpt::NewtonBounded ( Function & theFunc,
const math_Vector & theStartingPoint,
const math_Vector & theLowerBounds,
const math_Vector & theUpperBounds,
const NewtonConfig & theConfig = NewtonConfig() )

Newton's method with bound constraints.

Minimizes f(x) subject to theLowerBounds <= x <= theUpperBounds. Uses projected gradient approach similar to BFGSBounded.

Template Parameters
Function - type with Value, Gradient, and Hessian methods
Parameters
theFunc - function object
theStartingPoint - initial guess
theLowerBounds - lower bounds for each variable
theUpperBounds - upper bounds for each variable
theConfig - solver configuration
Returns
result containing minimum location and value

◆ NewtonModified()

template<typename Function >
VectorResult MathOpt::NewtonModified ( Function & theFunc,
const math_Vector & theStartingPoint,
const NewtonConfig & theConfig = NewtonConfig() )

Modified Newton's method with automatic Hessian regularization. Adds diagonal elements to ensure positive definiteness using an adaptive regularization strategy.

Template Parameters
Function - type with Value, Gradient, and Hessian methods
Parameters
theFunc - function object
theStartingPoint - initial guess
theConfig - solver configuration
Returns
result containing minimum location and value

◆ NewtonNumerical()

template<typename Function >
VectorResult MathOpt::NewtonNumerical ( Function & theFunc,
const math_Vector & theStartingPoint,
double theGradStep = 1.0e-8,
double theHessStep = 1.0e-6,
const NewtonConfig & theConfig = NewtonConfig() )

Newton's method with fully numerical derivatives. Computes both gradient and Hessian using finite differences.

Template Parameters
Function - type with Value(const math_Vector&, double&) method only
Parameters
theFunc - function object
theStartingPoint - initial guess
theGradStep - step size for numerical gradient
theHessStep - step size for numerical Hessian
theConfig - solver configuration
Returns
result containing minimum location and value

◆ NewtonNumericalHessian()

template<typename Function >
VectorResult MathOpt::NewtonNumericalHessian ( Function & theFunc,
const math_Vector & theStartingPoint,
double theHessStep = 1.0e-6,
const NewtonConfig & theConfig = NewtonConfig() )

Newton's method with numerical Hessian. Computes Hessian using finite differences when analytical Hessian is not available.

Template Parameters
Function - type with:
  • Value(const math_Vector&, double&) for function value
  • Gradient(const math_Vector&, math_Vector&) for gradient
Parameters
theFunc - function object
theStartingPoint - initial guess
theHessStep - step size for numerical Hessian
theConfig - solver configuration
Returns
result containing minimum location and value

◆ PolishCoordinateWise()

template<typename Function >
void MathOpt::PolishCoordinateWise ( Function & theFunc,
math_Vector & thePosition,
double & theValue,
const math_Vector & theLowerBounds,
const math_Vector & theUpperBounds,
double theTolerance,
int theMaxPolishEvals,
int & theEvalCount )

Coordinate-wise polishing using Brent's 1D minimization. For each dimension, performs Brent's method directly on the single coordinate, modifying only one element of the position vector per evaluation (zero-allocation).

Template Parameters
Function - type with Value(const math_Vector&, double&) method
Parameters
theFunc - function to minimize
thePosition - current best position (updated in place)
theValue - current best value (updated in place)
theLowerBounds - lower bounds for each variable
theUpperBounds - upper bounds for each variable
theTolerance - convergence tolerance for Brent's method
theMaxPolishEvals - maximum total function evaluations for polishing
theEvalCount - output: number of function evaluations used
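
The idea can be sketched with a golden-section search per coordinate that mutates only one element of the shared position vector (the zero-allocation point noted above). This illustration substitutes golden section for Brent and omits the evaluation budget; it is not the MathOpt implementation.

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

using Vec = std::vector<double>;

void PolishSketch(const std::function<double(const Vec&)>& f, Vec& x,
                  double& value, const Vec& lo, const Vec& hi,
                  double tol = 1.0e-8) {
  const double r = (std::sqrt(5.0) - 1.0) / 2.0;
  for (size_t i = 0; i < x.size(); ++i) {
    double a = lo[i], b = hi[i];
    auto eval = [&](double t) { x[i] = t; return f(x); };  // touch one coord
    double x1 = b - r * (b - a), x2 = a + r * (b - a);
    double f1 = eval(x1), f2 = eval(x2);
    while (b - a > tol) {                          // standard golden shrink
      if (f1 < f2) { b = x2; x2 = x1; f2 = f1; x1 = b - r * (b - a); f1 = eval(x1); }
      else         { a = x1; x1 = x2; f1 = f2; x2 = a + r * (b - a); f2 = eval(x2); }
    }
    value = eval(0.5 * (a + b));                   // leave coord at its 1D minimum
  }
}
```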

◆ Powell()

template<typename Function >
VectorResult MathOpt::Powell ( Function & theFunc,
const math_Vector & theStartingPoint,
const Config & theConfig = Config() )

Powell's conjugate direction method for N-dimensional minimization. A gradient-free optimization algorithm that uses conjugate directions.

Algorithm:

  1. Start with N linearly independent directions (coordinate axes)
  2. Perform line minimization along each direction
  3. Replace one direction with the overall displacement direction
  4. Repeat until convergence

Advantages:

  • No gradient required
  • Generates conjugate directions for quadratic functions
  • Robust for non-smooth functions

Disadvantages:

  • Slower than gradient methods for smooth functions
  • May lose direction independence over iterations
Template Parameters
Function - type with Value(const math_Vector&, double&) method
Parameters
theFunc - function to minimize
theStartingPoint - initial guess (N-dimensional)
theConfig - solver configuration
Returns
result containing minimum location and value
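
Steps 1-4 above can be sketched as follows, using a basic variant in which the oldest direction is the one discarded, and a fixed-interval golden-section line search; plain containers stand in for math_Vector/math_Matrix, and nothing here is the MathOpt implementation.

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

using Vec = std::vector<double>;
using Fn = std::function<double(const Vec&)>;

// Golden-section minimization of f(x + alpha*d) over a fixed alpha interval;
// adequate for this illustration, not a production bracketing strategy.
static double LineMin(const Fn& f, const Vec& x, const Vec& d,
                      double lo = -10.0, double hi = 10.0) {
  auto phi = [&](double a) {
    Vec p(x);
    for (size_t i = 0; i < x.size(); ++i) p[i] += a * d[i];
    return f(p);
  };
  const double r = (std::sqrt(5.0) - 1.0) / 2.0;
  double x1 = hi - r * (hi - lo), x2 = lo + r * (hi - lo);
  double f1 = phi(x1), f2 = phi(x2);
  while (hi - lo > 1.0e-10) {
    if (f1 < f2) { hi = x2; x2 = x1; f2 = f1; x1 = hi - r * (hi - lo); f1 = phi(x1); }
    else         { lo = x1; x1 = x2; f1 = f2; x2 = lo + r * (hi - lo); f2 = phi(x2); }
  }
  return 0.5 * (lo + hi);
}

Vec PowellSketch(const Fn& f, Vec x, int maxIter = 20) {
  const size_t n = x.size();
  std::vector<Vec> dirs(n, Vec(n, 0.0));
  for (size_t i = 0; i < n; ++i) dirs[i][i] = 1.0; // start with coordinate axes
  for (int it = 0; it < maxIter; ++it) {
    const Vec x0 = x;
    for (size_t i = 0; i < n; ++i) {               // line-minimize each direction
      const double a = LineMin(f, x, dirs[i]);
      for (size_t j = 0; j < n; ++j) x[j] += a * dirs[i][j];
    }
    Vec disp(n);                                   // overall displacement
    double norm = 0.0;
    for (size_t j = 0; j < n; ++j) { disp[j] = x[j] - x0[j]; norm += disp[j] * disp[j]; }
    if (norm < 1.0e-16) break;                     // converged
    dirs.erase(dirs.begin());                      // replace a direction with it
    dirs.push_back(disp);
    const double a = LineMin(f, x, disp);
    for (size_t j = 0; j < n; ++j) x[j] += a * disp[j];
  }
  return x;
}
```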

◆ PowellWithDirections()

template<typename Function >
VectorResult MathOpt::PowellWithDirections ( Function & theFunc,
const math_Vector & theStartingPoint,
const math_Matrix & theInitialDirections,
const Config & theConfig = Config() )

Powell's method with custom initial directions. Allows specifying the initial direction set instead of coordinate axes.

Template Parameters
Function - type with Value(const math_Vector&, double&) method
Parameters
theFunc - function to minimize
theStartingPoint - initial guess
theInitialDirections - initial direction set (N x N matrix, directions as rows)
theConfig - solver configuration
Returns
result containing minimum location and value

◆ PSO() [1/2]

template<typename Function >
VectorResult MathOpt::PSO ( Function & theFunc,
const math_Vector & theLowerBounds,
const math_Vector & theUpperBounds,
const PSOConfig & theConfig,
const NCollection_DynamicArray< PSOSeedParticle > * theSeeds,
PSOStats * theStats = nullptr )

Particle Swarm Optimization for global minimization.

PSO is a stochastic, population-based optimization algorithm inspired by social behavior of bird flocking or fish schooling.

Algorithm:

  1. Initialize particles with random positions and velocities
  2. Evaluate fitness (function value) for each particle
  3. Update each particle's personal best if current position is better
  4. Update global best across all particles
  5. Update velocities: v = omega*v + c1*r1*(pbest-x) + c2*r2*(gbest-x)
  6. Update positions: x = x + v
  7. Repeat until convergence or max iterations

Properties:

  • Gradient-free: does not require derivatives
  • Global: can escape local minima
  • Stochastic: results depend on random initialization
  • Bound-constrained: handles box constraints naturally
Template Parameters
Function - type with Value(const math_Vector&, double&) method
Parameters
theFunc - function to minimize
theLowerBounds - lower bounds for each variable
theUpperBounds - upper bounds for each variable
theConfig - PSO configuration
theSeeds - optional seed particles for initialization
theStats - optional output statistics
Returns
result containing best solution found
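
Steps 5-6 above, for a single particle, can be sketched as the canonical velocity and position update with a simple clamp at the bounds. The default omega, c1, c2 values here are illustrative, not PSOConfig defaults, and this is not the library code.

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

using Vec = std::vector<double>;

void UpdateParticle(Vec& x, Vec& v, const Vec& pbest, const Vec& gbest,
                    const Vec& lo, const Vec& hi, std::mt19937& rng,
                    double omega = 0.7, double c1 = 1.5, double c2 = 1.5) {
  std::uniform_real_distribution<double> u01(0.0, 1.0);
  for (size_t j = 0; j < x.size(); ++j) {
    const double r1 = u01(rng), r2 = u01(rng);     // fresh randomness per dim
    v[j] = omega * v[j] + c1 * r1 * (pbest[j] - x[j])   // cognitive pull
                        + c2 * r2 * (gbest[j] - x[j]);  // social pull
    x[j] += v[j];
    if (x[j] < lo[j]) x[j] = lo[j];                // Clamp boundary mode
    if (x[j] > hi[j]) x[j] = hi[j];
  }
}
```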

◆ PSO() [2/2]

template<typename Function >
VectorResult MathOpt::PSO ( Function & theFunc,
const math_Vector & theLowerBounds,
const math_Vector & theUpperBounds,
const PSOConfig & theConfig = PSOConfig() )

Particle Swarm Optimization for global minimization (basic overload).

Template Parameters
Function - type with Value(const math_Vector&, double&) method
Parameters
theFunc - function to minimize
theLowerBounds - lower bounds for each variable
theUpperBounds - upper bounds for each variable
theConfig - PSO configuration
Returns
result containing best solution found

◆ Uzawa()

UzawaResult MathOpt::Uzawa ( const math_Matrix & theCont,
const math_Vector & theSecont,
const math_Vector & theStartingPoint,
int theNce,
int theNci,
const UzawaConfig & theConfig = UzawaConfig() )
inline

Solve constrained least squares using Uzawa algorithm.

Solves: min ||X - X0||^2 subject to C*X = S for the first Nce (equality) rows and C*X <= S for the remaining Nci (inequality) rows

For equality constraints only, uses direct Crout decomposition. For mixed equality/inequality constraints, uses iterative Uzawa method.

The Uzawa algorithm is a dual decomposition method that:

  1. Updates primal variables X to minimize Lagrangian
  2. Updates dual variables (Lagrange multipliers) for constraint violations
Parameters
theCont - constraint matrix C (Nce+Nci rows x N cols)
theSecont - right-hand side S
theStartingPoint - initial point X0
theNce - number of equality constraints (first rows)
theNci - number of inequality constraints (last rows, C*X <= S)
theConfig - algorithm configuration
Returns
UzawaResult with solution and auxiliary data
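
The two alternating steps above can be sketched for the equality-constrained case min ||X - X0||^2 s.t. C*X = S: the primal minimizer of the Lagrangian is closed-form (X = X0 - 0.5 C^T lambda), followed by dual ascent on the constraint violation. The step size rho and iteration count below are illustrative assumptions, not UzawaConfig fields.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

Vec UzawaSketch(const Mat& C, const Vec& S, const Vec& X0,
                double rho = 0.5, int iters = 200) {
  const size_t m = C.size(), n = X0.size();
  Vec lambda(m, 0.0), X(n);
  for (int it = 0; it < iters; ++it) {
    for (size_t j = 0; j < n; ++j) {               // step 1: primal minimization
      double ct = 0.0;
      for (size_t i = 0; i < m; ++i) ct += C[i][j] * lambda[i];
      X[j] = X0[j] - 0.5 * ct;                     // X = X0 - 0.5 * C^T * lambda
    }
    for (size_t i = 0; i < m; ++i) {               // step 2: dual ascent
      double r = -S[i];                            // residual C*X - S
      for (size_t j = 0; j < n; ++j) r += C[i][j] * X[j];
      lambda[i] += rho * r;
    }
  }
  return X;
}
```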

◆ UzawaEquality()

UzawaResult MathOpt::UzawaEquality ( const math_Matrix & theCont,
const math_Vector & theSecont,
const math_Vector & theStartingPoint,
const UzawaConfig & theConfig = UzawaConfig() )
inline

Solve constrained least squares with equality constraints only.

Convenience wrapper that minimizes ||X - X0||^2 subject to the equality constraints C*X = S only.

Parameters
theCont - constraint matrix C
theSecont - right-hand side S
theStartingPoint - initial point X0
theConfig - algorithm configuration
Returns
UzawaResult with solution