When you’re presented with a matrix in a math or physics class, you’ll often be asked to find its eigenvalues. If you aren’t sure what that means or how to do it, the task is daunting, and it involves a lot of confusing terminology that makes matters even worse. However, calculating eigenvalues isn’t too challenging if you’re comfortable with solving quadratic (or polynomial) equations, provided you learn the basics of matrices, eigenvalues and eigenvectors.

## Matrices, Eigenvalues and Eigenvectors: What They Mean

Matrices are arrays of numbers, where **A** stands in for the name of a generic matrix, like this:

( 1 3 )
**A** = ( 4 2 )

The numbers in each position vary, and there may even be algebraic expressions in their place. This is a 2 × 2 matrix, but they come in a variety of sizes and don’t always have equal numbers of rows and columns.

Dealing with matrices is different from dealing with ordinary numbers, and there are specific rules for multiplying, dividing, adding and subtracting them from one another. The terms “eigenvalue” and “eigenvector” are used in matrix algebra to refer to two characteristic quantities of a matrix. This eigenvalue problem helps you understand what the terms mean:

**A** ∙ **v** = λ ∙ **v**

**A** is a general matrix as before, **v** is some vector, and λ is a characteristic value. Look at the equation and notice that when you multiply the matrix by the vector **v**, the effect is to reproduce the same vector just multiplied by the value λ. This is unusual behavior and earns the vector **v** and quantity λ special names: the eigenvector and eigenvalue. These are characteristic values of the matrix because multiplying the matrix by the eigenvector leaves the vector unchanged apart from multiplication by a factor of the eigenvalue.
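To make the defining equation concrete, here is a short Python sketch (assuming NumPy is available) that checks **A** ∙ **v** = λ ∙ **v** numerically, using the example matrix that appears later in this article:

```python
import numpy as np

# The example matrix from later in this article; v and lam are one of
# its eigenvector/eigenvalue pairs (verified by the assertion below).
A = np.array([[0, 1],
              [-2, -3]])
v = np.array([1.0, -1.0])
lam = -1.0

# Multiplying the matrix by its eigenvector just rescales the vector.
print(A @ v)      # [-1.  1.]
print(lam * v)    # [-1.  1.]
assert np.allclose(A @ v, lam * v)
```

Multiplying a non-eigenvector by **A** would change its direction, not just its length; that is what makes eigenvectors special.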

## How to Calculate Eigenvalues

If you have the eigenvalue problem for the matrix in some form, finding the eigenvalues is straightforward (because the result will be a vector that is the same as the original one, only multiplied by a constant factor, the eigenvalue). The eigenvalues are found by solving the characteristic equation of the matrix:

det(**A** – λ**I**) = 0

Where **I** is the identity matrix, which has 1s running down the leading diagonal and 0s everywhere else. “Det” refers to the determinant of the matrix, which for a general matrix:

( a b )
**A** = ( c d )

Is given by

det **A** = ad − bc
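The 2 × 2 determinant formula is simple enough to write directly in code. This Python sketch (the function name `det2` is just an illustrative choice, not a standard library function) applies it to the article’s first example matrix:

```python
# Determinant of the 2x2 matrix ((a, b), (c, d)): det A = ad - bc.
def det2(a, b, c, d):
    return a * d - b * c

# The article's first example matrix ((1, 3), (4, 2)):
print(det2(1, 3, 4, 2))  # 1*2 - 3*4 = -10
```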

So the characteristic equation means:

( a – λ b )
det(**A** – λ**I**) = det ( c d – λ ) = (a – λ)(d – λ) − bc = 0
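If you want to check this algebra rather than expand it by hand, a computer algebra system can do it symbolically. A sketch using SymPy (assumed to be installed):

```python
import sympy as sp

# Symbols for a general 2x2 matrix and the eigenvalue.
a, b, c, d, lam = sp.symbols('a b c d lambda')
A = sp.Matrix([[a, b], [c, d]])

# Characteristic polynomial det(A - lambda*I).
char_poly = (A - lam * sp.eye(2)).det()
print(sp.expand(char_poly))

# Confirm it matches the hand-expanded form (a - lam)(d - lam) - bc.
assert sp.simplify(char_poly - ((a - lam) * (d - lam) - b * c)) == 0
```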

As an example matrix, let’s define **A** as:

( 0 1 )
**A** = ( −2 −3 )

So that means:

det(**A** – λ**I**) = (0 – λ)(−3 – λ) − (1 × −2) = 0

= −λ(−3 – λ) + 2

= λ^{2} + 3λ + 2 = 0

The solutions for λ are the eigenvalues, and you solve this like any quadratic equation. Factoring gives (λ + 1)(λ + 2) = 0, so the solutions are λ = −1 and λ = −2.
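You can confirm the worked example numerically. A quick check with NumPy (assuming it is installed); `np.linalg.eigvals` returns the eigenvalues in no guaranteed order, so they are sorted before comparing:

```python
import numpy as np

A = np.array([[0, 1],
              [-2, -3]])

# Eigenvalues of A, sorted for a stable comparison with the hand calculation.
eigenvalues = np.sort(np.linalg.eigvals(A))
print(eigenvalues)  # [-2. -1.]
assert np.allclose(eigenvalues, [-2.0, -1.0])
```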

#### TL;DR (Too Long; Didn't Read)

In simple cases, the eigenvalues are easier to find. For example, if the elements of the matrix are all zero apart from the entries on the leading diagonal (running from the top left to the bottom right), those diagonal entries are the eigenvalues. However, the method above always works.

## Finding Eigenvectors

Finding the eigenvectors is a similar process. Using the equation:

(**A** – λ**I**) ∙ **v** = 0

with each of the eigenvalues you’ve found in turn. This means:

( a – λ b ) ( v_{1} ) ( (a – λ) v_{1} + b v_{2} ) ( 0 )
(**A** – λ**I**) ∙ **v** = ( c d – λ ) ∙ ( v_{2} ) = ( c v_{1} + (d – λ) v_{2} ) = ( 0 )

You can solve this by considering each row in turn. You only need the ratio of *v*_{1} to *v*_{2}, because there are infinitely many potential solutions for *v*_{1} and *v*_{2}: any scalar multiple of an eigenvector is also an eigenvector.
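As a sketch of that row-by-row approach in Python (NumPy assumed), here is the eigenvector for λ = −1 of the article’s example matrix, found from the first row of (**A** – λ**I**) ∙ **v** = 0 by fixing *v*_{1} = 1:

```python
import numpy as np

A = np.array([[0, 1],
              [-2, -3]])
lam = -1.0

# First row of (A - lam*I) v = 0: (a - lam) v1 + b v2 = 0,
# so the ratio v2/v1 = -(a - lam)/b.
a, b = A[0, 0], A[0, 1]
ratio = -(a - lam) / b

# Fix v1 = 1; any scalar multiple is an equally valid eigenvector.
v = np.array([1.0, ratio])
print(v)  # [ 1. -1.]
assert np.allclose(A @ v, lam * v)
```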