In mathematics and especially linear algebra, an n-by-n matrix A is called invertible or non-singular if there exists another n-by-n matrix B such that
AB = BA = I_n,
where I_n denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A and is called the inverse of A, denoted by A^(−1). A square matrix that is not invertible is called singular. While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any ring.

Table of contents
1 Invertible Matrix Theorem
2 Further properties and facts
3 Generalizations

Invertible Matrix Theorem

Let A be a square n-by-n matrix over a field K (for example, the field R of real numbers). The following statements are equivalent; that is, for a given A they are either all true or all false, and A is invertible if and only if they hold:

  • A is row equivalent to the n-by-n identity matrix I_n
  • A has n pivot positions
  • det A ≠ 0
  • rank A = n
  • The equation Ax = 0 has only the trivial solution x = 0 (i.e. Nul A = {0})
  • The equation Ax = b has at most one solution for each b in K^n
  • The equation Ax = b has at least one solution for each b in K^n
  • The equation Ax = b has exactly one solution for each b in K^n
  • The columns of A are linearly independent
  • The columns of A span K^n (i.e. Col A = K^n)
  • The columns of A form a basis of K^n
  • The linear transformation x |-> Ax from K^n to K^n is one-to-one
  • The linear transformation x |-> Ax from K^n to K^n is onto
  • The linear transformation x |-> Ax from K^n to K^n is bijective
  • There is an n-by-n matrix B such that BA = I_n
  • There is an n-by-n matrix B such that AB = I_n
  • The transpose A^T is an invertible matrix
  • The number 0 is not an eigenvalue of A
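Several of these equivalent conditions can be checked numerically. The following sketch, written in Python with NumPy (the sample matrix is chosen purely for illustration), verifies the determinant, rank, null-space, and two-sided inverse conditions for one matrix:

```python
import numpy as np

# Sample 3-by-3 matrix over R (chosen for illustration; det A = 3).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
n = A.shape[0]

assert abs(np.linalg.det(A)) > 1e-12       # det A != 0
assert np.linalg.matrix_rank(A) == n       # rank A = n

# Ax = 0 has only the trivial solution x = 0.
assert np.allclose(np.linalg.solve(A, np.zeros(n)), 0.0)

# There is an n-by-n matrix B with AB = BA = I_n.
B = np.linalg.inv(A)
assert np.allclose(A @ B, np.eye(n))
assert np.allclose(B @ A, np.eye(n))
```

Because the statements are equivalent, confirming any one of them (for instance det A ≠ 0) already certifies that all the others hold.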

Further properties and facts

To check whether a given matrix is invertible, and to compute the inverse in small examples, one typically uses the Gauss-Jordan elimination algorithm. Other methods are explained under matrix inversion.
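A minimal Gauss-Jordan inversion routine might look as follows; this is a sketch in Python with NumPy (the function name and tolerance are choices of this example, not part of any standard library), using partial pivoting but no scaling or condition estimation:

```python
import numpy as np

def gauss_jordan_inverse(A, tol=1e-12):
    """Invert A by Gauss-Jordan elimination on the augmented
    matrix [A | I]; raises ValueError if A is singular."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: pick the largest entry in this column.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if abs(M[pivot, col]) < tol:
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]  # swap pivot row into place
        M[col] /= M[col, col]              # scale pivot row so pivot = 1
        for row in range(n):               # clear the column elsewhere
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                        # right half is now A^(-1)
```

For example, `gauss_jordan_inverse([[2, 1], [1, 1]])` returns `[[1, -1], [-1, 2]]`, which is indeed the inverse of that matrix.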

The inverse of an invertible matrix A is itself invertible, with

(A^(−1))^(−1) = A.

The product of two invertible matrices A and B of the same size is again invertible, with the inverse given by

(AB)^(−1) = B^(−1)A^(−1)

(note that the order of the factors is reversed). As a consequence, the set of invertible n-by-n matrices forms a group under matrix multiplication, known as the general linear group GL(n).
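Both identities are easy to verify numerically; the sketch below (in Python with NumPy, matrices chosen for illustration) checks them, including the reversed order of factors in the product rule:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])   # det A = -1, invertible
B = np.array([[2.0, 0.0], [1.0, 1.0]])   # det B =  2, invertible

# (A^(-1))^(-1) = A
assert np.allclose(np.linalg.inv(np.linalg.inv(A)), A)

# (AB)^(-1) = B^(-1) A^(-1): note the reversed order of the factors.
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
assert np.allclose(lhs, rhs)

# The naive order A^(-1) B^(-1) is generally wrong.
assert not np.allclose(lhs, np.linalg.inv(A) @ np.linalg.inv(B))
```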

As a rule of thumb, "almost all" matrices are invertible. Over the field of real numbers this can be made precise: the set of singular n-by-n matrices, considered as a subset of R^(n×n), is a null set, i.e. has Lebesgue measure zero. Intuitively, this means that if you pick a square matrix at random (with respect to any continuous probability distribution on the entries), the probability that it is singular is zero. The reason is that the singular matrices are exactly the zero set of the determinant, which is a polynomial function of the matrix entries, and the zero set of a nonzero polynomial always has measure zero.
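This can be illustrated numerically: starting from a singular matrix, an arbitrarily small generic perturbation almost surely moves it off the zero set of the determinant. The sketch below (Python with NumPy; the matrix, noise scale, and seed are choices of this example):

```python
import numpy as np

rng = np.random.default_rng(1)

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # second row = 2 * first row, so det S = 0
assert abs(np.linalg.det(S)) < 1e-12

# A tiny random perturbation almost surely lands off the zero set
# of the determinant polynomial, yielding an invertible matrix.
P = S + 1e-6 * rng.standard_normal((2, 2))
assert np.linalg.det(P) != 0.0
assert np.linalg.matrix_rank(P) == 2
```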

A square matrix with entries from some commutative ring is invertible if and only if its determinant is a unit in that ring.
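For example, over the ring Z of integers the units are exactly 1 and −1, so an integer matrix has an inverse with integer entries if and only if its determinant is ±1. The sketch below (Python with NumPy; matrices chosen for illustration) contrasts such a matrix with one whose determinant is a unit in Q but not in Z:

```python
import numpy as np

A = np.array([[2, 1],
              [1, 1]])                        # det A = 1, a unit in Z
assert int(round(np.linalg.det(A))) == 1

Ainv = np.linalg.inv(A)
assert np.allclose(Ainv, np.round(Ainv))      # inverse is again integral

B = np.array([[2, 0],
              [0, 2]])                        # det B = 4: a unit in Q, not in Z
Binv = np.linalg.inv(B)                       # invertible over the rationals...
assert not np.allclose(Binv, np.round(Binv))  # ...but not over the integers
```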


Some of the properties of inverse matrices are shared by pseudoinverses, which can be defined for any matrix, even a non-square one.
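Numerically, the Moore-Penrose pseudoinverse can be computed with NumPy's np.linalg.pinv. The sketch below (matrices chosen for illustration) shows that it exists for a non-square matrix, satisfies the defining identity A A⁺ A = A, and coincides with the ordinary inverse whenever one exists:

```python
import numpy as np

# A 3-by-2 matrix: not square, so it has no inverse, but it
# does have a Moore-Penrose pseudoinverse A+.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

A_pinv = np.linalg.pinv(A)             # shape (2, 3)
assert A_pinv.shape == (2, 3)

# A+ shares the defining identity A A+ A = A with true inverses.
assert np.allclose(A @ A_pinv @ A, A)

# For an invertible square matrix, A+ equals A^(-1).
B = np.array([[2.0, 1.0], [1.0, 1.0]])
assert np.allclose(np.linalg.pinv(B), np.linalg.inv(B))
```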