Open CASCADE Technology Reference Manual 8.0.0
Namespaces | Data Structures | Enumerations | Functions
MathLin Namespace Reference

Namespaces

namespace  Internal
 

Data Structures

struct  CroutResult
 Result for Crout LDL^T decomposition. Specialized for symmetric matrices. More...
 
struct  EigenResult
 Result for eigenvalue decomposition of tridiagonal matrix. More...
 
struct  LeastSquaresResult
 Result for least squares problems. More...
 
struct  LUResult
 Result for LU decomposition. More...
 
struct  QRResult
 Result for QR decomposition using Householder reflections. More...
 
struct  SVDResult
 Result for SVD decomposition. More...
 

Enumerations

enum class  LeastSquaresMethod { NormalEquations , QR , SVD }
 Method for solving least squares problems. More...
 

Functions

LUResult LU (const math_Matrix &theA, double theMinPivot=1.0e-20)
 Perform LU decomposition of matrix A with partial pivoting. Decomposes A into L*U where L is lower triangular with unit diagonal and U is upper triangular. The result stores L and U in a combined matrix.
 
LinearResult Solve (const math_Matrix &theA, const math_Vector &theB, double theMinPivot=1.0e-20)
 Solve linear system AX = B using LU decomposition.
 
LinearMultipleResult SolveMultiple (const math_Matrix &theA, const math_Matrix &theB, double theMinPivot=1.0e-20)
 Solve multiple linear systems AX = B where B is a matrix. Each column of B is a separate right-hand side.
 
LinearResult Determinant (const math_Matrix &theA, double theMinPivot=1.0e-20)
 Compute determinant of matrix A.
 
InverseResult Invert (const math_Matrix &theA, double theMinPivot=1.0e-20)
 Compute inverse of matrix A.
 
CroutResult Crout (const math_Matrix &theA, double theMinPivot=1.0e-20)
 Crout decomposition for symmetric matrices: A = L * D * L^T.
 
LinearResult SolveCrout (const math_Matrix &theA, const math_Vector &theB, double theMinPivot=1.0e-20)
 Solve symmetric linear system Ax = b using Crout decomposition.
 
InverseResult InvertCrout (const math_Matrix &theA, double theMinPivot=1.0e-20)
 Compute inverse of symmetric matrix using Crout decomposition.
 
SVDResult SVD (const math_Matrix &theA, double theTolerance=1.0e-15)
 Singular Value Decomposition: A = U * diag(S) * V^T.
 
LinearResult SolveSVD (const math_Matrix &theA, const math_Vector &theB, double theTolerance=1.0e-6)
 Solve linear system Ax = b using SVD decomposition. This is particularly useful for ill-conditioned or singular systems.
 
InverseResult PseudoInverse (const math_Matrix &theA, double theTolerance=1.0e-6)
 Compute pseudo-inverse (Moore-Penrose inverse) of matrix A. A^+ = V * diag(1/w) * U^T where singular values below threshold are set to 0.
 
double ConditionNumber (const math_Matrix &theA)
 Compute condition number of matrix using SVD. Condition number = sigma_max / sigma_min (ratio of largest to smallest singular value).
 
int NumericalRank (const math_Matrix &theA, double theTolerance=1.0e-15)
 Compute numerical rank of matrix using SVD. Rank is the number of singular values above the threshold.
 
QRResult QR (const math_Matrix &theA, double theTolerance=1.0e-20)
 QR decomposition using Householder reflections: A = Q * R.
 
LinearResult SolveQR (const math_Matrix &theA, const math_Vector &theB, double theTolerance=1.0e-20)
 Solve overdetermined system Ax = b using QR decomposition (least squares).
 
LinearMultipleResult SolveQRMultiple (const math_Matrix &theA, const math_Matrix &theB, double theTolerance=1.0e-20)
 Solve multiple right-hand sides using QR decomposition.
 
EigenResult Jacobi (const math_Matrix &theA, bool theSortDescending=true)
 Compute eigenvalues and eigenvectors of a symmetric matrix using the Jacobi iterative method.
 
EigenResult EigenValues (const math_Matrix &theA, bool theSortDescending=true)
 Compute only eigenvalues of a symmetric matrix (faster).
 
EigenResult SpectralDecomposition (const math_Matrix &theA)
 Compute spectral decomposition A = V * D * V^T.
 
std::optional< math_Matrix > MatrixPower (const math_Matrix &theA, double thePower)
 Compute matrix power A^p for symmetric positive semi-definite matrix.
 
std::optional< math_Matrix > MatrixSqrt (const math_Matrix &theA)
 Compute matrix square root of symmetric positive semi-definite matrix.
 
std::optional< math_Matrix > MatrixInvSqrt (const math_Matrix &theA)
 Compute matrix inverse square root of symmetric positive definite matrix.
 
LeastSquaresResult LeastSquares (const math_Matrix &theA, const math_Vector &theB, LeastSquaresMethod theMethod=LeastSquaresMethod::QR, double theTolerance=1.0e-15)
 Solve overdetermined linear least squares: minimize ||Ax - b||_2.
 
LeastSquaresResult WeightedLeastSquares (const math_Matrix &theA, const math_Vector &theB, const math_Vector &theW, LeastSquaresMethod theMethod=LeastSquaresMethod::QR, double theTolerance=1.0e-15)
 Solve weighted least squares: minimize ||W^{1/2}(Ax - b)||_2.
 
LeastSquaresResult RegularizedLeastSquares (const math_Matrix &theA, const math_Vector &theB, double theLambda, double theTolerance=1.0e-15)
 Solve regularized least squares (Tikhonov/Ridge regression): minimize ||Ax - b||_2^2 + lambda*||x||_2^2.
 
double OptimalRegularization (const math_Matrix &theA, const math_Vector &theB, double theLambdaMin=1.0e-10, double theLambdaMax=1.0e2, int theNbPoints=20)
 Compute optimal regularization parameter using Leave-One-Out Cross-Validation.
 
EigenResult EigenTridiagonal (const math_Vector &theDiagonal, const math_Vector &theSubdiagonal, int theMaxIterations=30)
 Eigenvalue decomposition of symmetric tridiagonal matrix using QL algorithm.
 
math_Vector GetEigenVector (const EigenResult &theResult, int theIndex)
 Get a single eigenvector from the result.
 

Enumeration Type Documentation

◆ LeastSquaresMethod

Method for solving least squares problems.

Enumerator
NormalEquations 

A^T*A*x = A^T*b (fast but less stable)

QR 

Householder QR (good balance of speed and stability)

SVD 

SVD (most stable, handles rank-deficient matrices)

Function Documentation

◆ ConditionNumber()

double MathLin::ConditionNumber ( const math_Matrix & theA)
inline

Compute condition number of matrix using SVD. Condition number = sigma_max / sigma_min (ratio of largest to smallest singular value).

A high condition number (> 1e10) indicates an ill-conditioned matrix.

Parameters
theA: input matrix
Returns
condition number (infinity if matrix is singular)

◆ Crout()

CroutResult MathLin::Crout ( const math_Matrix & theA,
double theMinPivot = 1.0e-20 )
inline

Crout decomposition for symmetric matrices: A = L * D * L^T.

This algorithm decomposes a symmetric matrix A into:

  • L: lower triangular matrix with unit diagonal
  • D: diagonal matrix

Properties:

  • Only the lower triangle of A is used
  • Faster than general LU for symmetric matrices
  • Computes inverse efficiently
  • Requires positive definiteness for stability
Parameters
theA: input symmetric matrix (only lower triangle used)
theMinPivot: minimum pivot value (smaller treated as singular)
Returns
Crout decomposition result

◆ Determinant()

LinearResult MathLin::Determinant ( const math_Matrix & theA,
double theMinPivot = 1.0e-20 )
inline

Compute determinant of matrix A.

Parameters
theA: input square matrix
theMinPivot: minimum pivot value
Returns
result containing determinant value

◆ EigenTridiagonal()

EigenResult MathLin::EigenTridiagonal ( const math_Vector & theDiagonal,
const math_Vector & theSubdiagonal,
int theMaxIterations = 30 )
inline

Eigenvalue decomposition of symmetric tridiagonal matrix using QL algorithm.

The QL algorithm with implicit Wilkinson shifts finds all eigenvalues and eigenvectors of a symmetric tridiagonal matrix T = Q * D * Q^T.

Properties:

  • All eigenvalues are real (matrix is symmetric)
  • Eigenvectors are orthonormal
  • Numerically stable with implicit shifts
Parameters
theDiagonal: diagonal elements of the tridiagonal matrix
theSubdiagonal: subdiagonal elements (one less than diagonal)
theMaxIterations: maximum iterations per eigenvalue (default 30)
Returns
EigenResult containing eigenvalues and eigenvector matrix

◆ EigenValues()

EigenResult MathLin::EigenValues ( const math_Matrix & theA,
bool theSortDescending = true )
inline

Compute only eigenvalues of a symmetric matrix (faster).

Uses the same Jacobi method, but may be optimized to skip storing eigenvectors when they are not needed.

Parameters
theA: input symmetric matrix (n x n)
theSortDescending: if true, eigenvalues are sorted in descending order
Returns
eigenvalue result (only EigenValues is set)

◆ GetEigenVector()

math_Vector MathLin::GetEigenVector ( const EigenResult & theResult,
int theIndex )
inline

Get a single eigenvector from the result.

Parameters
theResult: eigenvalue decomposition result
theIndex: 1-based index of eigenvector
Returns
eigenvector as math_Vector

◆ Invert()

InverseResult MathLin::Invert ( const math_Matrix & theA,
double theMinPivot = 1.0e-20 )
inline

Compute inverse of matrix A.

Parameters
theA: input square matrix
theMinPivot: minimum pivot value
Returns
result containing inverse matrix

◆ InvertCrout()

InverseResult MathLin::InvertCrout ( const math_Matrix & theA,
double theMinPivot = 1.0e-20 )
inline

Compute inverse of symmetric matrix using Crout decomposition.

Parameters
theA: input symmetric matrix
theMinPivot: minimum pivot value
Returns
result containing full symmetric inverse matrix

◆ Jacobi()

EigenResult MathLin::Jacobi ( const math_Matrix & theA,
bool theSortDescending = true )
inline

Compute eigenvalues and eigenvectors of a symmetric matrix using the Jacobi iterative method.

The Jacobi method applies a sequence of plane rotations (Givens rotations) to diagonalize the symmetric matrix A: A' = R^T * A * R where R is a rotation that zeroes one off-diagonal element.

After convergence, A is diagonal with eigenvalues on the diagonal, and the accumulated rotations form the eigenvector matrix.

Properties:

  • Only works for symmetric matrices
  • Eigenvalues are always real for symmetric matrices
  • Eigenvectors are orthonormal
  • Numerically stable

Complexity: O(n^3) per sweep, typically needs 5-10 sweeps.

Parameters
theA: input symmetric matrix (n x n)
theSortDescending: if true, eigenvalues are sorted in descending order
Returns
eigenvalue result

◆ LeastSquares()

LeastSquaresResult MathLin::LeastSquares ( const math_Matrix & theA,
const math_Vector & theB,
LeastSquaresMethod theMethod = LeastSquaresMethod::QR,
double theTolerance = 1.0e-15 )
inline

Solve overdetermined linear least squares: minimize ||Ax - b||_2.

Given m x n matrix A (m >= n) and m-vector b, finds n-vector x that minimizes the 2-norm of the residual r = Ax - b.

Methods:

  • NormalEquations: Solves A^T*A*x = A^T*b (fastest, may lose precision)
  • QR: Uses Householder QR decomposition (good general choice)
  • SVD: Most robust, handles rank-deficient systems
Parameters
theA: coefficient matrix (m x n, m >= n)
theB: right-hand side vector (length m)
theMethod: solution method (default: QR)
theTolerance: tolerance for rank/singularity detection
Returns
least squares result

◆ LU()

LUResult MathLin::LU ( const math_Matrix & theA,
double theMinPivot = 1.0e-20 )
inline

Perform LU decomposition of matrix A with partial pivoting. Decomposes A into L*U where L is lower triangular with unit diagonal and U is upper triangular. The result stores L and U in a combined matrix.

Parameters
theA: input square matrix
theMinPivot: minimum pivot value (smaller treated as singular)
Returns
LU decomposition result

◆ MatrixInvSqrt()

std::optional< math_Matrix > MathLin::MatrixInvSqrt ( const math_Matrix & theA)
inline

Compute matrix inverse square root of symmetric positive definite matrix.

Parameters
theA: input symmetric positive definite matrix
Returns
A^(-1/2) such that A^(-1/2) * A * A^(-1/2) = I

◆ MatrixPower()

std::optional< math_Matrix > MathLin::MatrixPower ( const math_Matrix & theA,
double thePower )
inline

Compute matrix power A^p for symmetric positive semi-definite matrix.

Uses spectral decomposition: A^p = V * D^p * V^T where D^p is the diagonal matrix with eigenvalues raised to power p.

Parameters
theA: input symmetric positive semi-definite matrix
thePower: exponent (can be fractional, e.g., 0.5 for sqrt)
Returns
A^p matrix

◆ MatrixSqrt()

std::optional< math_Matrix > MathLin::MatrixSqrt ( const math_Matrix & theA)
inline

Compute matrix square root of symmetric positive semi-definite matrix.

Parameters
theA: input symmetric positive semi-definite matrix
Returns
sqrt(A) such that sqrt(A) * sqrt(A) = A

◆ NumericalRank()

int MathLin::NumericalRank ( const math_Matrix & theA,
double theTolerance = 1.0e-15 )
inline

Compute numerical rank of matrix using SVD. Rank is the number of singular values above the threshold.

Parameters
theA: input matrix
theTolerance: relative tolerance for singular values
Returns
numerical rank

◆ OptimalRegularization()

double MathLin::OptimalRegularization ( const math_Matrix & theA,
const math_Vector & theB,
double theLambdaMin = 1.0e-10,
double theLambdaMax = 1.0e2,
int theNbPoints = 20 )
inline

Compute optimal regularization parameter using Leave-One-Out Cross-Validation.

Minimizes the LOO-CV score: sum_i (a_i^T * x_{-i} - b_i)^2 where x_{-i} is the solution with the i-th observation removed.

Parameters
theA: coefficient matrix
theB: right-hand side vector
theLambdaMin: minimum lambda to consider
theLambdaMax: maximum lambda to consider
theNbPoints: number of lambda values to try
Returns
optimal regularization parameter

◆ PseudoInverse()

InverseResult MathLin::PseudoInverse ( const math_Matrix & theA,
double theTolerance = 1.0e-6 )
inline

Compute pseudo-inverse (Moore-Penrose inverse) of matrix A. A^+ = V * diag(1/w) * U^T where singular values below threshold are set to 0.

Properties:

  • A * A^+ * A = A
  • A^+ * A * A^+ = A^+
  • (A * A^+)^T = A * A^+
  • (A^+ * A)^T = A^+ * A
Parameters
theA: input matrix (m x n)
theTolerance: tolerance for the singular value threshold
Returns
result containing pseudo-inverse matrix (n x m)

◆ QR()

QRResult MathLin::QR ( const math_Matrix & theA,
double theTolerance = 1.0e-20 )
inline

QR decomposition using Householder reflections: A = Q * R.

Decomposes an m x n matrix A (m >= n) into:

  • Q: m x m orthogonal matrix (Q^T * Q = I)
  • R: m x n upper triangular matrix

The Householder method applies orthogonal transformations to reduce A to upper triangular form. It is more numerically stable than Gram-Schmidt orthogonalization.

Uses: Least squares problems, orthogonalization, computing determinant sign.

Parameters
theA: input matrix A (m x n, m >= n)
theTolerance: tolerance for rank determination
Returns
QR decomposition result

◆ RegularizedLeastSquares()

LeastSquaresResult MathLin::RegularizedLeastSquares ( const math_Matrix & theA,
const math_Vector & theB,
double theLambda,
double theTolerance = 1.0e-15 )
inline

Solve regularized least squares (Tikhonov/Ridge regression): minimize ||Ax - b||_2^2 + lambda*||x||_2^2.

Adds regularization to stabilize ill-conditioned problems. The solution is: x = (A^T*A + lambda*I)^{-1} * A^T * b

Parameters
theA: coefficient matrix (m x n)
theB: right-hand side vector (length m)
theLambda: regularization parameter (>= 0)
theTolerance: tolerance for singularity detection
Returns
regularized least squares result

◆ Solve()

LinearResult MathLin::Solve ( const math_Matrix & theA,
const math_Vector & theB,
double theMinPivot = 1.0e-20 )
inline

Solve linear system AX = B using LU decomposition.

Parameters
theA: coefficient matrix (square)
theB: right-hand side vector
theMinPivot: minimum pivot value
Returns
result containing solution vector

◆ SolveCrout()

LinearResult MathLin::SolveCrout ( const math_Matrix & theA,
const math_Vector & theB,
double theMinPivot = 1.0e-20 )
inline

Solve symmetric linear system Ax = b using Crout decomposition.

Computes a Crout decomposition of A and uses it to solve for x. More efficient than general LU for symmetric positive definite matrices.

Parameters
theA: coefficient matrix (symmetric)
theB: right-hand side vector
theMinPivot: minimum pivot value
Returns
result containing solution vector

◆ SolveMultiple()

LinearMultipleResult MathLin::SolveMultiple ( const math_Matrix & theA,
const math_Matrix & theB,
double theMinPivot = 1.0e-20 )
inline

Solve multiple linear systems AX = B where B is a matrix. Each column of B is a separate right-hand side.

Parameters
theA: coefficient matrix (square)
theB: right-hand side matrix
theMinPivot: minimum pivot value
Returns
result containing solution matrix

◆ SolveQR()

LinearResult MathLin::SolveQR ( const math_Matrix & theA,
const math_Vector & theB,
double theTolerance = 1.0e-20 )
inline

Solve overdetermined system Ax = b using QR decomposition (least squares).

For m x n system with m > n, finds x that minimizes ||Ax - b||_2.

Algorithm:

  1. Decompose A = Q * R
  2. Compute c = Q^T * b
  3. Solve R * x = c[1:n] (back substitution)
Parameters
theA: coefficient matrix (m x n, m >= n)
theB: right-hand side vector (length m)
theTolerance: tolerance for singularity detection
Returns
result containing least squares solution

◆ SolveQRMultiple()

LinearMultipleResult MathLin::SolveQRMultiple ( const math_Matrix & theA,
const math_Matrix & theB,
double theTolerance = 1.0e-20 )
inline

Solve multiple right-hand sides using QR decomposition.

Parameters
theA: coefficient matrix (m x n, m >= n)
theB: right-hand side matrix (m x p)
theTolerance: tolerance for singularity detection
Returns
result containing solution matrix (n x p)

◆ SolveSVD()

LinearResult MathLin::SolveSVD ( const math_Matrix & theA,
const math_Vector & theB,
double theTolerance = 1.0e-6 )
inline

Solve linear system Ax = b using SVD decomposition. This is particularly useful for ill-conditioned or singular systems.

For overdetermined systems (m > n), finds the least squares solution. For underdetermined systems (m < n), finds the minimum norm solution.

Parameters
theA: coefficient matrix (m x n)
theB: right-hand side vector (length m)
theTolerance: tolerance for the singular value threshold
Returns
result containing solution vector

◆ SpectralDecomposition()

EigenResult MathLin::SpectralDecomposition ( const math_Matrix & theA)
inline

Compute spectral decomposition A = V * D * V^T.

For symmetric matrix A, decomposes into:

  • V: orthogonal matrix of eigenvectors (columns)
  • D: diagonal matrix of eigenvalues

Such that A = V * D * V^T

Parameters
theA: input symmetric matrix (n x n)
Returns
eigenvalue result with EigenValues (diagonal of D) and EigenVectors (V)

◆ SVD()

SVDResult MathLin::SVD ( const math_Matrix & theA,
double theTolerance = 1.0e-15 )
inline

Singular Value Decomposition: A = U * diag(S) * V^T.

Decomposes an m x n matrix A into:

  • U: m x n matrix of left singular vectors (orthonormal columns)
  • S: n singular values in descending order
  • V: n x n matrix of right singular vectors (orthonormal)

Properties:

  • Works for any m x n matrix (m can be less, equal, or greater than n)
  • Singular values are always non-negative
  • Provides the best low-rank approximation of a matrix
  • Useful for solving ill-conditioned linear systems
Parameters
theA: input matrix A (m x n)
theTolerance: tolerance for rank determination (relative to largest singular value)
Returns
SVD decomposition result

◆ WeightedLeastSquares()

LeastSquaresResult MathLin::WeightedLeastSquares ( const math_Matrix & theA,
const math_Vector & theB,
const math_Vector & theW,
LeastSquaresMethod theMethod = LeastSquaresMethod::QR,
double theTolerance = 1.0e-15 )
inline

Solve weighted least squares: minimize ||W^{1/2}(Ax - b)||_2.

Equivalent to minimizing sum of w_i * (a_i^T * x - b_i)^2 where w_i are the weights.

Parameters
theA: coefficient matrix (m x n)
theB: right-hand side vector (length m)
theW: weight vector (length m, positive values)
theMethod: solution method
theTolerance: tolerance for rank detection
Returns
weighted least squares result