More on Matrices
Learn how we can perform some advanced matrix operations using JAX.
While the vast world of matrices and their implementations is more than even a dedicated course can cover, we'll try to do the subject some justice by concluding with another lesson covering a few important concepts.
Determinant
Like the norm, the determinant is a scalar value characterizing a square matrix. It is quite useful in defining some properties of the matrix.
For example, a matrix is singular if (and only if) its determinant is zero.
We can calculate it as:
a = linalg.det(A)
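For instance, here's a minimal sketch (the matrix values are made up for illustration) checking the singularity criterion against the 2×2 formula ad − bc:

import jax.numpy as jnp
from jax.numpy import linalg

# For a 2x2 matrix [[a, b], [c, d]], the determinant is ad - bc
A = jnp.array([[3.0, 1.0], [4.0, 2.0]])
print(linalg.det(A))  # 3*2 - 1*4 = 2.0, so A is invertible

# The second row is a multiple of the first, so S is singular
S = jnp.array([[1.0, 2.0], [2.0, 4.0]])
print(linalg.det(S))  # 0.0, so S has no inverse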
Inverse
The inverse of a square matrix A is defined as the matrix A⁻¹ that satisfies:
AA⁻¹ = A⁻¹A = I
where I is the identity matrix.
The linalg sublibrary provides the function for inverse calculation as:
A_inv = linalg.inv(A)
Note: linalg.pinv() can be used to calculate the pseudo-inverse. While the inverse is calculated only for square, full-rank matrices, the pseudo (Moore-Penrose) inverse can be calculated for any m×n matrix. Also note that linalg.pinv() doesn't support matrices with int members, so use floating-point entries.
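As a quick illustration (the matrix here is made up), the pseudo-inverse of a non-square matrix still satisfies the defining Moore-Penrose property AA⁺A = A:

import jax.numpy as jnp
from jax.numpy import linalg

# inv() isn't defined for this 2x3 matrix, but pinv() works
A = jnp.array([[1.0, 2.0, 3.0],
               [4.0, 5.0, 6.0]])
A_pinv = linalg.pinv(A)
print(A_pinv.shape)  # (3, 2)

# Moore-Penrose property: A @ A+ @ A equals A, up to rounding
print(jnp.allclose(A @ A_pinv @ A, A, atol=1e-5))  # True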
Power
Similarly, we can calculate the power of A using matrix_power() as:
A_square = linalg.matrix_power(A,2)
import jax.numpy as jnp
from jax.numpy import linalg

A = jnp.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(A)
B = linalg.matrix_power(A, -1)
C = linalg.inv(A)
d = linalg.det(A)
print(B)
print(C)  # will have the same answer as B
print("Matrix's determinant is: ", d)
Notice that A is singular here (its determinant is zero), so the values printed for its inverse aren't meaningful. Here's another example with an invertible matrix:
A = jnp.array([[1, -1, 3], [3, 5, 1], [7, 0, 9]])
print(A)
B = linalg.matrix_power(A, 2)
C = linalg.inv(A)
D = linalg.inv(C)  # allowing for rounding off, D = A
e = linalg.det(A)
print("Matrix's determinant is: ", e)
print(B)
print(C)
print(D)
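As a quick sanity check (continuing the snippet above), multiplying A by its computed inverse should give the identity matrix, up to rounding:

# A @ inv(A) should be (approximately) the 3x3 identity
print(jnp.allclose(A @ C, jnp.eye(3), atol=1e-5))  # True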
Eigenvectors and eigenvalues
If we recall, an eigenvector v of a square matrix A is defined as a nonzero vector satisfying:
Av = λv
where λ is the corresponding eigenvalue.
lambda_values, eigen_vect = linalg.eig(A)
Compared to normal NumPy, the JAX version treats every value as complex, so don’t be surprised by the j we observe in the output values.
It may be tempting to use lambda as a variable identifier. Don't do this, since lambda is a reserved keyword in Python. Instead, we can use λ itself as an identifier.
A = jnp.array([[2, 1, 5], [3, 0, 4], [2, 1, -2]])
λ, eigen_vectors = linalg.eig(A)
print("The λ values are:", λ)
print("And eigenvectors (as a matrix) are:", eigen_vectors)
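To tie the output back to the definition, here's a small sketch (continuing the snippet above) that verifies Av = λv for the first eigenpair:

v = eigen_vectors[:, 0]  # eigenvectors are the columns of the returned matrix
print(jnp.allclose(A @ v, λ[0] * v, atol=1e-4))  # True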
Definiteness
Eigenvalues of a matrix introduce another related concept.
We call a square matrix positive definite if all of its eigenvalues are positive, while a positive semidefinite matrix is one having all of its eigenvalues either zero or positive.
We can extend this concept to negative definite and semidefinite matrices as well.
If a matrix has some positive and some negative eigenvalues, it's called an indefinite matrix. These matrices are pretty helpful, as we'll see shortly in the follow-up lessons.
Note: The concept of definiteness is usually restricted to real-valued square matrices, but it can be expanded to complex matrices if needed.
def TestDefiniteness(X):
    # eig/eigvals results are complex in JAX, so compare the real parts
    eigenvalues = jnp.real(linalg.eigvals(X))
    if jnp.all(eigenvalues > 0):
        print("It's Positive Definite")
    elif jnp.all(eigenvalues >= 0):
        print("It's Positive Semidefinite")
    elif jnp.all(eigenvalues < 0):
        print("It's Negative Definite")
    elif jnp.all(eigenvalues <= 0):
        print("It's Negative Semidefinite")
    else:
        print("It's Indefinite")

# Let's test some matrices
A = jnp.array([[2.0, 3.0, 4.0], [1.2, 2.4, 5.2], [2.8, 3.6, 4.9]])
B = jnp.array([[2.4, 4.3], [6.1, 13.6]])
TestDefiniteness(A)
TestDefiniteness(B)
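One easy way to produce test cases (a standard linear algebra fact, not part of the lesson's code): for any real matrix X, XᵀX is positive semidefinite, and positive definite when the columns of X are linearly independent. Continuing the snippet above:

# X has linearly independent columns, so Xᵀ X is positive definite
X = jnp.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
TestDefiniteness(X.T @ X)  # prints: It's Positive Definite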
Singular Value Decomposition
We'll conclude the lesson by performing a singular value decomposition (SVD) on a given matrix. If you recall, an SVD of matrix A is:
A = UΣV*
If A is m×n, then the dimensions of the decomposed matrices are: U is m×m, Σ is m×n, and V* is n×n.
We can verify the dimensions in this example:
A = jnp.array([[2, 1, 5], [3, 0, 4], [2, 1, -2], [4, 0, 7]])
U, Sigma, V = linalg.svd(A)
print("Original matrix shape:", A.shape)
print("U shape:", U.shape)
print("Σ shape:", Sigma.shape)
print("V* shape:", V.shape)
If we look closely, while the shapes of U and V* are consistent with the formula, Σ is returned as a one-dimensional vector of singular values, which is consistent with NumPy.
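As a final sanity check (continuing the snippet above), we can place the singular values on the diagonal of a full m×n Σ and confirm that UΣV* reconstructs A:

# Rebuild the full (4, 3) Σ matrix from the 1-D vector of singular values
k = min(A.shape)
Sigma_full = jnp.zeros(A.shape).at[:k, :k].set(jnp.diag(Sigma))

# U @ Σ @ V* should give back A, up to rounding
print(jnp.allclose(U @ Sigma_full @ V, A, atol=1e-4))  # True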