Unit 2 of 7 · 7 min read

Linear Algebra

Vector spaces, matrices, eigenvalues, projections, SVD, LU decomposition.


Why this unit matters

Linear algebra is arguably the most frequently tested mathematical foundation in GATE DA. Eigenvalues appear in PCA, SVD, and spectral methods. Rank-nullity connects to solution existence in systems of equations. Determinants and matrix inverses underpin many closed-form solutions. Orthogonal projections are at the heart of regression. If you are comfortable with these ideas, the ML unit becomes significantly easier because you already understand the machinery beneath the algorithms.

Syllabus map

Sub-topic | Key concepts
Vector spaces | Subspaces, span, basis, dimension
Linear dependence | Testing dependence, finding basis
Matrices | Types, operations, transpose, inverse
Special matrices | Orthogonal, idempotent, projection, partition
Quadratic forms | Definiteness, relation to eigenvalues
Systems of equations | Gaussian elimination, consistency
Eigenvalues/vectors | Characteristic polynomial, diagonalisation
Rank and nullity | Rank-nullity theorem
Decompositions | LU decomposition, SVD

Vector spaces and subspaces

A vector space V over the reals is a set with addition and scalar multiplication satisfying the vector-space axioms: closure, associativity and commutativity of addition, identity elements, additive inverses, and distributivity. The key examples for GATE DA are R^n and the space of m×n matrices.

A subspace must contain the zero vector, be closed under addition, and be closed under scalar multiplication.

Span and basis. The span of a set of vectors is all linear combinations of them. A basis is a minimal spanning set, linearly independent and spanning. The number of basis vectors is the dimension of the space.

Trap. Any set containing more vectors than the dimension of the space is always linearly dependent.
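A quick numerical check of this trap, sketched here with NumPy (the library choice is ours, not part of the unit): four vectors in R^3 can have rank at most 3, so they must be dependent.

```python
import numpy as np

# Four vectors in R^3: more vectors than the dimension,
# so the set is forced to be linearly dependent.
vectors = np.array([[1, 0, 0],
                    [0, 1, 0],
                    [0, 0, 1],
                    [1, 1, 1]]).T  # columns are the vectors

rank = np.linalg.matrix_rank(vectors)
print(rank)                      # 3: cannot exceed the dimension of R^3
print(rank < vectors.shape[1])   # True: fewer pivots than columns -> dependent
```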

Linear dependence and independence

Vectors v1, v2, ..., vk are linearly independent if the only solution to c1*v1 + c2*v2 + ... + ck*vk = 0 is c1 = c2 = ... = ck = 0.

Practical test: form a matrix with the vectors as columns and row-reduce. If every column has a pivot, the vectors are independent.

GATE shortcut. If any vector in the set is a scalar multiple of another, the set is dependent; you can see this by inspection, without row reduction.
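The practical test above (vectors as columns, pivot in every column) is equivalent to checking that the rank equals the number of vectors. A minimal NumPy sketch of that test (the helper name `independent` is ours):

```python
import numpy as np

def independent(vectors):
    """Columns are independent iff the rank equals the number of vectors
    (i.e. row reduction puts a pivot in every column)."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == M.shape[1]

print(independent([[1, 2, 3], [2, 4, 6]]))             # False: second = 2 * first
print(independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True: standard basis
```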

Special matrices

Orthogonal matrix Q. Columns are orthonormal (pairwise orthogonal, each with unit length). Q^T * Q = I, so Q^(-1) = Q^T. Determinant is +1 or -1. Orthogonal transformations preserve lengths and angles.
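These properties are easy to verify numerically. A small NumPy sketch using a 2D rotation matrix (our choice of example) as the orthogonal Q:

```python
import numpy as np

theta = np.pi / 6
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: always orthogonal

print(np.allclose(Q.T @ Q, np.eye(2)))            # True: Q^T Q = I
print(np.isclose(abs(np.linalg.det(Q)), 1.0))     # True: det is +1 or -1

x = np.array([3.0, 4.0])
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # True: length preserved
```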

Idempotent matrix P. P^2 = P. The eigenvalues of an idempotent matrix are only 0 and 1. Projection matrices (H = X(X^T X)^(-1) X^T in regression) are idempotent and symmetric.

Projection matrix. Projecting a vector b onto the column space of A: projection = A(A^T A)^(-1) A^T * b. The hat matrix H in linear regression is exactly this.
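A short NumPy sketch of the hat matrix on random data (the specific X is arbitrary), confirming it is idempotent and symmetric, and that projecting twice changes nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))           # 6 observations, 2 regressors
H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix H = X (X^T X)^{-1} X^T

print(np.allclose(H @ H, H))              # True: idempotent, H^2 = H
print(np.allclose(H, H.T))                # True: symmetric

b = rng.standard_normal(6)
p = H @ b                                 # projection of b onto col(X)
print(np.allclose(H @ p, p))              # True: p is already in col(X)
```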

Quadratic form. x^T A x where A is symmetric. It is positive definite if x^T A x > 0 for all non-zero x, which happens if and only if all eigenvalues of A are positive.
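The eigenvalue criterion gives a mechanical definiteness test. A minimal sketch with a symmetric 2×2 example of our choosing:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                # symmetric
eigvals = np.linalg.eigvalsh(A)          # eigenvalues of a symmetric matrix
print(eigvals)                           # [1. 3.]
print(bool(np.all(eigvals > 0)))         # True: positive definite

x = np.array([1.0, -1.0])
print(float(x @ A @ x))                  # 2.0 > 0, as the criterion guarantees
```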

Gaussian elimination and systems of equations

A system Ax = b is consistent (has at least one solution) iff rank(A) = rank(A|b), where (A|b) is the augmented matrix.

  • Unique solution: rank(A) = rank(A|b) = n (number of unknowns).
  • Infinitely many solutions: rank(A) = rank(A|b) < n.
  • No solution: rank(A) < rank(A|b).

Rank-nullity theorem. rank(A) + nullity(A) = n (number of columns).

Nullity is the dimension of the null space, the number of free variables after row reduction.
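The three-way consistency classification can be sketched directly with rank comparisons; this NumPy helper (`classify` is our name) mirrors the bullet list above:

```python
import numpy as np

def classify(A, b):
    """Classify Ax = b by comparing rank(A), rank(A|b), and n."""
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    n = A.shape[1]
    if rA < rAb:
        return "no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

A = np.array([[1, 2],
              [2, 4]])                  # rank 1: rows are proportional
print(classify(A, [3, 6]))              # infinitely many solutions
print(classify(A, [3, 7]))              # no solution
print(classify(np.eye(2), [3, 7]))      # unique solution
```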

Eigenvalues and eigenvectors

Av = lambda * v, where v is non-zero. Lambda is an eigenvalue; v is the corresponding eigenvector.

Find eigenvalues by solving the characteristic equation: det(A - lambda * I) = 0.

Important properties:

  • Trace(A) = sum of eigenvalues.
  • det(A) = product of eigenvalues.
  • Symmetric matrices always have real eigenvalues and orthogonal eigenvectors.
  • If A is idempotent (P^2 = P), eigenvalues are 0 or 1.
  • rank(A) = number of non-zero eigenvalues (for diagonalisable matrices).

Diagonalisation. A = P * D * P^(-1) where D is diagonal with eigenvalues and P has eigenvectors as columns. This is possible iff A has n linearly independent eigenvectors.

Trap. Eigenvalues of A^2 are the squares of eigenvalues of A. Eigenvalues of A^(-1) are reciprocals of eigenvalues of A.
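Both the trace/determinant properties and the trap about powers can be verified on a small example (the matrix below is arbitrary; its eigenvalues are 2 and 5):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals = np.sort(np.linalg.eigvals(A))           # [2. 5.]

print(np.isclose(eigvals.sum(), np.trace(A)))         # True: trace = sum
print(np.isclose(eigvals.prod(), np.linalg.det(A)))   # True: det = product

# Eigenvalues of A^2 are the squares; of A^(-1) the reciprocals.
print(np.allclose(np.sort(np.linalg.eigvals(A @ A)), eigvals**2))              # True
print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))), 1 / eigvals[::-1]))  # True
```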

LU decomposition

A = LU where L is lower triangular with 1s on the diagonal and U is upper triangular. Produced by Gaussian elimination without row swaps. Useful for solving multiple systems Ax = b with the same A (factor once, forward/back-substitute many times).

With row swaps: PA = LU where P is a permutation matrix.
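Since LU is just Gaussian elimination with the multipliers recorded, it can be sketched in a few lines of NumPy. This is a Doolittle-style factorisation assuming no zero pivots (i.e. no row swaps needed); the function name is ours:

```python
import numpy as np

def lu_no_pivot(A):
    """A = L U with L unit lower-triangular, U upper-triangular.
    Assumes every pivot is non-zero (no row swaps required)."""
    A = A.astype(float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]       # record the elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]    # apply the row operation to U
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))                                  # True: factorisation recovers A
print(np.allclose(L, np.tril(L)), np.allclose(U, np.triu(U))) # True True: correct shapes
```

With L and U in hand, each new right-hand side b costs only one forward and one back substitution, which is why factoring once pays off when solving many systems with the same A.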

Singular Value Decomposition

A = U * Sigma * V^T where:

  • A is an m × n matrix.
  • U is m × m orthogonal (left singular vectors).
  • Sigma is m × n diagonal (singular values, non-negative, in decreasing order).
  • V is n × n orthogonal (right singular vectors).

Singular values sigma_i = sqrt(eigenvalues of A^T A).

Rank of A = number of non-zero singular values.

Why SVD matters for GATE DA. PCA uses SVD. Low-rank approximation (truncated SVD) is used in collaborative filtering and image compression. The pseudoinverse A^+ = V * Sigma^+ * U^T allows solving over-determined systems.

Trap. The singular values of A are not the same as the eigenvalues of A (unless A is symmetric positive semi-definite).
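The relations above (sigma_i = sqrt of eigenvalues of A^T A, rank = number of non-zero singular values) can be checked directly. A minimal NumPy sketch on a 3×2 matrix of our choosing:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])               # 3x2, rank 2

U, s, Vt = np.linalg.svd(A)
print(s)                                 # [3. 2.]: non-negative, decreasing

# Singular values are the square roots of the eigenvalues of A^T A
eig = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
print(np.allclose(s, np.sqrt(eig)))      # True

# Rank = number of non-zero singular values
print(int(np.sum(s > 1e-10)))            # 2

# Pseudoinverse A^+ = V Sigma^+ U^T (here Sigma^+ keeps the two non-zero sigmas)
print(np.allclose(np.linalg.pinv(A), Vt.T @ np.diag(1 / s) @ U[:, :2].T))  # True
```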

Worked examples

Example 1. A is a 5×3 matrix of rank 3. Find the nullity of A.

Rank-nullity: rank + nullity = number of columns = 3. Nullity = 3 - 3 = 0. The null space is trivial (only the zero vector).

Example 2. A is a 3×3 matrix with eigenvalues 1, 2, 3. Find det(A) and trace(A).

det(A) = 1 * 2 * 3 = 6. trace(A) = 1 + 2 + 3 = 6.

Example 3. P is idempotent and symmetric. Show that I - P is also idempotent.

(I - P)^2 = I - P - P + P^2 = I - 2P + P = I - P. Yes, idempotent.

Example 4. Are the vectors [1, 2, 3], [2, 4, 6], [1, 0, 1] linearly independent?

The first two are proportional (second = 2 * first), so the set is linearly dependent.

Example 5. A 4×4 matrix has singular values 5, 3, 2, 0. What is the rank of A?

Rank = number of non-zero singular values = 3.

Quick-revision summary

  • Subspace must contain zero, be closed under addition and scalar multiplication.
  • Rank-nullity: rank + nullity = number of columns.
  • Consistent system: rank(A) = rank(A|b).
  • Trace = sum of eigenvalues. det = product of eigenvalues.
  • Symmetric matrix: real eigenvalues, orthogonal eigenvectors.
  • Idempotent matrix: eigenvalues are only 0 and 1.
  • Orthogonal matrix Q: Q^T * Q = I, so Q^(-1) = Q^T.
  • SVD: A = U Sigma V^T. Rank = number of non-zero singular values.
  • Singular values are NOT generally the same as eigenvalues.

How to study this unit

  1. Begin with vector spaces (day 1): understand span, basis, dimension with concrete R^2 and R^3 examples. Avoid abstract axiomatic study at this stage.
  2. Practice Gaussian elimination on 3×3 and 4×4 systems until row reduction is automatic (day 2).
  3. Master the eigenvalue calculation: characteristic polynomial, solving for lambda, then back-substituting to find eigenvectors. Do at least 8 examples (day 3).
  4. Memorise the key trace/determinant/rank shortcuts; these appear as 1-mark MCQs constantly.
  5. Study SVD conceptually: know what U, Sigma, V^T represent and how to relate singular values to rank and to eigenvalues of A^T A.
  6. Do 5 to 8 GATE CS and GATE DA previous-year linear algebra questions. Many are shared between the two papers.
