The goal of this project is to progressively build a comprehensive mathematical library for Jai, similar to BLAS, the GNU Scientific Library, or Hipparchus (formerly Apache Commons Math for Java).
The focus is on providing functionality first and performance second, while avoiding unnecessary performance costs where possible.
Feel free to add functionality and performance upgrades.
There was a major rewrite. Almost all functions now avoid allocation; instead, results are stored in output arguments of the functions themselves. This improves some of the linear algebra functions drastically. It is basically the old-school way of doing things, and for good reason: the new (old-style) system is more general and performant.
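The output-argument style can be sketched as follows. This is a minimal illustration in Python with made-up names (the library itself is written in Jai): the caller owns the result storage and passes it in, so the routine itself never allocates.

```python
def add(result, a, b):
    """Write a + b elementwise into a caller-provided buffer.
    `result` may alias `a` or `b`, since each index is read before written."""
    for i in range(len(a)):
        result[i] = a[i] + b[i]

# Usage: one preallocated buffer can be reused across many calls.
buf = [0.0] * 3
add(buf, [1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

The point of the design is that allocation policy (stack, heap, pool) is decided entirely by the caller, not by the math routine.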
To enable future optimizations, every function is basically just a relay to specialized functions, so `foo(...)` will actually call `foo_default(...)`, `foo_dense(...)`, etc. Currently, most functions have a default implementation for `$V`/`VectorType` or `$M`/`MatrixType`. It should be easy to extend the system with specialized, high-performance functions for special matrix/vector types (e.g. sparse, tridiagonal, Hermitian, ...).
Almost all operator overloads were removed in that process and afterwards rewritten for `Dense***`/`DenseHeap***`.
Important TODOs:
- Disallow (most of) the functions in `LinearAlgebra` for `Quaternion`, since quaternions don't commute, so those functions are likely wrong. I might implement a function `is_commuting :: (z: $T) -> bool` for that.
- Implement more linear algebra. Since matrices might have complex entries, I have to be careful with implementing the algorithms (where do conjugates need to go?).
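The non-commutativity that motivates the `is_commuting` check is easy to see from the Hamilton product. A small Python sketch (the function name and tuple layout are illustrative, not the library's):

```python
def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) = w + xi + yj + zk."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

i = (0.0, 1.0, 0.0, 0.0)
j = (0.0, 0.0, 1.0, 0.0)
# i*j = k, but j*i = -k, so any derivation that silently swaps factors
# (as many real/complex linear algebra formulas do) breaks for ℍ.
```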
Generally:
- Current work focuses on making everything as generic as possible to enable different types of matrices/vectors, heap/stack allocation, and various number types.
- I'm going through [2] now to improve the algorithms, since that book considers special properties of matrices early on and also writes out every algorithm used.
- (I might use some BLAS naming conventions and functions)
- (maybe add `Octonion($T)` etc. later)
More thoughts are written down in the accompanying notes document.
Quaternion, complex, and real scalars, vectors, and matrices interoperate automatically (in most cases). Many functions specialize at compile time on the real, complex, or quaternion variant (e.g. all the elementary functions).
- Utils
  - `pool(#code);` will create and release a Pool of memory for all the allocations in the code block.
- `Complex($T)` numbers
  - operators `+`, `-`, `*`, `/`
  - `str` for pretty printing
- `Quaternion($T)` numbers
  - operators `+`, `-`, `*`, `/`
  - `str` for pretty printing
  - [more stuff]
- Elementary (n ∈ ℂ, ℍ)
  - `sign(r)`
  - `factorial(n)`
  - `binomial(from, choose)`
  - `conjugate(n)`
  - `abs_sq(n)`
  - `abs(n)`
  - `arg(n)`
  - `exp(n)`
  - `log(n)`
  - `pow(n)`
  - `sqrt(n)`
  - `phase(magnitude ∈ ℂ, angle ∈ ℂ)` [need to update for quaternion]
  - `sin`, `cos`, `tan`, `cot`, `sec`, `csc` on ℂ, ℍ
  - `asin`, `acos`, `atan`, `acot`, `asec`, `acsc` on ℂ, ℍ
  - `sinh`, `cosh`, `tanh`, `coth`, `sech`, `csch` on ℂ, ℍ
  - `asinh`, `acosh`, `atanh`, `acoth`, `asech`, `acsch` on ℂ, ℍ
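The complex variants of these functions satisfy the usual identities, which is what the compile-time specialization relies on. A quick Python illustration using the standard `cmath` module (not the library's own code):

```python
import cmath

z = 3 + 4j
r, phi = abs(z), cmath.phase(z)     # abs(n) and arg(n) of the list above
w = r * cmath.exp(1j * phi)         # phase(magnitude, angle): z = r * e^(i*phi)
u = cmath.exp(cmath.log(z))         # log is the inverse of exp (principal branch)
ident = cmath.sin(z)**2 + cmath.cos(z)**2   # sin² + cos² = 1 holds on ℂ too
```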
- polynomials
  - `solve_quadratic` → ℂ
  - `solve_quadratic_real` → ℝ
  - `polynom(x, ..a)` = a[0] x^n + a[1] x^{n-1} + ... + a[n] x^0, with a, x ∈ ℝ, ℂ
  - `synthetic_division`
  - `repeated_synthetic_division`
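For reference, synthetic division of p(x) by (x − r) is the same loop as Horner evaluation: the last value is p(r) and the intermediate values are the quotient's coefficients. A minimal Python sketch (the Jai implementation may differ):

```python
def synthetic_division(a, r):
    """Divide a[0] x^n + a[1] x^(n-1) + ... + a[n] by (x - r).
    Returns (quotient coefficients, remainder); the remainder equals p(r)."""
    out = [a[0]]
    for coeff in a[1:]:
        out.append(coeff + r * out[-1])   # Horner step
    return out[:-1], out[-1]
```

Repeated synthetic division with the same root r yields the Taylor coefficients of p around r.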
- vector: `VectorType`; real or complex vector
  The following types are implemented:
  - `DenseVector(Type, int)`
  - `DenseHeapVector(Type)`
  - `VectorView(Type, int)` (behaves like `DenseVector`)
  - `VectorHeapView(Type)` (behaves like `DenseHeapVector`)
  - `MatrixRowView(Type, int)` (behaves like `DenseVector`)
  - `MatrixRowHeapView(Type)` (behaves like `DenseHeapVector`)
  - `MatrixColumnView(Type, int)` (behaves like `DenseVector`)
  - `MatrixColumnHeapView(Type)` (behaves like `DenseHeapVector`)
  The following functions are implemented:
  - `str` for pretty printing
  - operators `==`, `+`, `-`, `*`, `/`
  - in-place functions `add`, `sub`, `neg`, `mul`, `div`
  - multiple initialization functions (ones, basis, varargs, etc.)
  - `conjugate`
  - `outer_product`
  - `reflect`
  - `norm(vec, n)` with n ∈ ℝ, specializations `norm_2`, `norm_1`, `norm_inf`
  - `cross`, for 3-dim vectors
  - `angle`
  - `permute`
  - `swap`
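The relation between the general p-norm and its specializations can be sketched like this (a Python illustration of the math, not the library's API; `abs` handles real and complex entries alike):

```python
import math

def norm(vec, p):
    """General p-norm: (sum |v_i|^p)^(1/p)."""
    return sum(abs(v) ** p for v in vec) ** (1.0 / p)

def norm_1(vec):    # sum of absolute values
    return sum(abs(v) for v in vec)

def norm_2(vec):    # Euclidean norm, the p = 2 case
    return math.sqrt(sum(abs(v) ** 2 for v in vec))

def norm_inf(vec):  # limit p → ∞: maximum absolute value
    return max(abs(v) for v in vec)
```

Specialized versions exist because p = 1, 2, ∞ avoid the general `pow` calls entirely.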
- matrix: `MatrixType`; real or complex
  The following types are implemented:
  - `DenseMatrix(Type, int, int)`
  - `DenseHeapMatrix(Type)`
  - `MatrixView(Type, int, int)` (behaves like `DenseMatrix`)
  - `MatrixHeapView(Type)` (behaves like `DenseHeapMatrix`)
  The following functions are implemented:
  - `pstr`, `str` for pretty printing
  - operators `==`, `+`, `-`, `*`, `/`
  - `row`, `column`
  - in-place functions `add`, `sub`, `neg`, `mul`, `div`
  - multiple initialization functions (1, ones, hadamard, varargs, etc.)
  - `reflector`
  - `submatrix`
  - `transpose`
  - `conjugate`
  - `conjugate_transpose` = `dagger`
  - `tensor`
  - `norm_1`, `norm_inf`, `norm_frobenius`
  - `permute_rows`, `permute_columns`, `permute`
  - `swap_columns`, `swap_rows`
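The conjugate transpose (dagger) and the Frobenius norm are defined as follows; a small Python sketch of the math on list-of-lists matrices (illustrative names, not the library's implementation):

```python
import math

def conjugate_transpose(M):
    """Dagger: (M†)[i][j] = conj(M[j][i]). For real entries this is just transpose."""
    rows, cols = len(M), len(M[0])
    return [[M[i][j].conjugate() for i in range(rows)] for j in range(cols)]

def norm_frobenius(M):
    """Square root of the sum of squared entry magnitudes."""
    return math.sqrt(sum(abs(x) ** 2 for row in M for x in row))
```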
- checks
  - `is_diagonal_unit(M)`
  - `is_(left, right)_triangular(M)`
  - `is_right_quasi_triangular(M)`
  - `is_unitary(M)`
  - `is_(left, right)_trapezoidal(M)`
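As an example of what such a structural check amounts to, here is a rough Python sketch of a right-triangular test (hypothetical name and tolerance handling; the library's Jai version may differ):

```python
def is_right_triangular(M, tol=0.0):
    """True if every entry strictly below the diagonal is (near) zero."""
    return all(abs(M[i][j]) <= tol
               for i in range(len(M))
               for j in range(min(i, len(M[i]))))
```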
- linear algebra
  - `solve_linear_2x2` = `sl2`
  - `solve_linear_(left, right)_triangular` = `sl(l, r)ta`
  - `solve_linear_right_quasi_triangular` = `slrqta`
  - `solve_linear_orthogonal_projection` = `solve_linear_unitary` = `slop`
  - `solve_linear_successive_orthogonal_projection` = `slsop`
  - `solve_linear_(left, right)_trapezoidal` = `sl(l, r)tz`, for vectors and matrices
  - `decompose_LR`
  - `gaussian_factorization_(no, full, row)_pivot`
  - `solve_linear_gaussian_factorization_(no, full, row)_pivot` = `slgf_(n, f, r)p` = `solve_LR(n, f, r)p`
  - `inverse(M)`; M quadratic matrix ∈ ℂ
  - `determinant` = `det`; ∈ ℂ
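The right-triangular solver is plain back substitution. A minimal Python sketch of the algorithm (the library's own Jai implementation may be organized differently); note that back substitution itself needs no conjugation even for complex entries, since each step only divides by a diagonal element:

```python
def solve_linear_right_triangular(R, b):
    """Solve R x = b, with R upper (right) triangular; works for real and complex."""
    n = len(b)
    x = [0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(R[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / R[i][i]
    return x
```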
- linear algebra
- LAPACK Scaling for Gaussian factorization (Algorithm 3.9.1 [1])
- Iterative Improvement for Gaussian factorization (Algorithm 3.9.2 [1])
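The planned iterative improvement step works like this: after an initial solve of A x ≈ b, solve A d = r for the residual r = b − A x and update x += d, reusing the existing factorization. A Python sketch under those assumptions (illustrative names, not Algorithm 3.9.2 verbatim):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def refine(A, b, x, solve, steps=3):
    """Iterative refinement: repeatedly correct x by the solved residual.
    `solve(A, r)` stands in for a solve that reuses A's factorization."""
    for _ in range(steps):
        r = [bi - yi for bi, yi in zip(b, matvec(A, x))]
        d = solve(A, r)
        x = [xi + di for xi, di in zip(x, d)]
    return x
```

In practice the residual should be accumulated at higher precision than the factorization for the correction to gain digits.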
Please give sources for the algorithms you write, so that we can take a look and help with debugging.
Write (at least some) tests contained in each file.
[1] Scientific Computing, Vol. I: Linear and Nonlinear Equations; Texts in Computational Science and Engineering 18; Springer
[2] Matrix Computations, 4th Edition; Gene H. Golub & Charles F. Van Loan; Johns Hopkins University Press, Baltimore; 2013