Eigenvalue

In linear algebra an eigenvalue of a square matrix $$A$$ is a number $$\lambda$$ that satisfies the eigenvalue equation,
 * $$\text{det}(A-\lambda I)=0\ ,$$

where det denotes the determinant, $$I$$ is the identity matrix of the same dimension as $$A$$, and in general $$\lambda$$ can be complex. The left-hand side is the characteristic polynomial of $$A$$. The origin of this equation is the eigenvalue problem, which is to find the eigenvalues and associated eigenvectors of $$A$$; that is, to find a number $$\lambda$$ and a vector $$\scriptstyle\vec{v}$$ that together satisfy
 * $$A\vec{v}=\lambda\vec{v}\ .$$

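To make this concrete, here is a minimal pure-Python sketch (the matrix and its values are illustrative, not taken from the text) that finds the eigenvalues of a 2×2 matrix as the roots of its characteristic polynomial and then checks $$\scriptstyle A\vec{v}=\lambda\vec{v}$$ for one eigenpair:

```python
import math

# For a 2x2 matrix the characteristic polynomial is
#   det(A - lambda*I) = lambda^2 - trace(A)*lambda + det(A).
A = [[2.0, 1.0],
     [1.0, 2.0]]

trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(trace**2 - 4 * det)          # assumes real eigenvalues
lams = [(trace - disc) / 2, (trace + disc) / 2]
print(lams)  # [1.0, 3.0]

# For lambda = 3 an eigenvector is v = (1, 1): check A v = lambda v.
v = [1.0, 1.0]
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
assert Av == [3.0 * v[0], 3.0 * v[1]]
```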
What this equation says is that even though $$A$$ is a matrix, its action on $$\scriptstyle\vec{v}$$ is the same as multiplying the vector by the number $$\lambda$$. This means that the vector $$\scriptstyle\vec{v}$$ and the vector $$\scriptstyle A\vec{v}$$ are parallel (or anti-parallel if $$\lambda$$ is negative). For a generic vector this will not be the case, as a quick example shows. Suppose
 * $$A=\begin{pmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{pmatrix}$$ and $$\vec{v}=\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}\ .$$

Then
 * $$A\vec{v}=\begin{pmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}=\begin{pmatrix}a_{11}v_1+a_{12}v_2 \\ a_{21}v_1+a_{22}v_2 \end{pmatrix}\ ,$$ whereas
 * $$\lambda\vec{v}=\begin{pmatrix} \lambda v_1 \\ \lambda v_2 \end{pmatrix}\ .$$

In general the ratio $$( a_{11}v_1+a_{12}v_2 ) / ( a_{21}v_1+a_{22}v_2 )$$ of the components of $$\scriptstyle A\vec{v}$$ differs from the ratio $$ v_1 / v_2 $$ of the components of $$\scriptstyle\vec{v}$$, so the two vectors are not parallel and no $$\lambda$$ satisfies the equation.
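This ratio test is easy to carry out numerically. In the following sketch (the matrix and vectors are chosen purely for illustration) a generic vector fails the test, while an eigenvector of the same matrix passes it:

```python
def ratios(A, v):
    """Return the component ratio of A v and of v, as used in the text's test."""
    Av = (A[0][0] * v[0] + A[0][1] * v[1],
          A[1][0] * v[0] + A[1][1] * v[1])
    return Av[0] / Av[1], v[0] / v[1]

A = [[2.0, 1.0],
     [1.0, 2.0]]

# A generic vector: the two ratios differ, so A v is not parallel to v.
print(ratios(A, (1.0, 2.0)))   # (0.8, 0.5)

# An eigenvector of A: the ratios agree, so A v = lambda v for some lambda.
print(ratios(A, (1.0, 1.0)))   # (1.0, 1.0)
```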

The eigenvalue equation
So where did the eigenvalue equation $$\text{det}(A-\lambda I)=0$$ come from? Well, we assume that we know the matrix $$A$$ and want to find a number $$\lambda$$ and a non-zero vector $$\scriptstyle\vec{v}$$ so that $$\scriptstyle A\vec{v}=\lambda\vec{v}$$. (Note that if $$\scriptstyle\vec{v}=\vec{0}$$ then the equation is always true, and therefore uninteresting.) So now we have $$\scriptstyle A\vec{v}-\lambda\vec{v}=\vec{0}$$. It doesn't make sense to subtract a number from a matrix, but we can factor out the vector if we first multiply the right-hand term by the identity, giving us
 * $$(A-\lambda I)\vec{v}=\vec{0}\ .$$

Now we have to remember the fact that $$A-\lambda I$$ is a square matrix, and so it might be invertible. If it were invertible then we could simply multiply on the left by its inverse to get
 * $$\vec{v}=(A-\lambda I)^{-1}\vec{0}=\vec{0}$$

but we have already said that $$\scriptstyle\vec{v}$$ can't be the zero vector! The only way around this is if $$A-\lambda I$$ is in fact non-invertible. It can be shown that a square matrix is non-invertible if and only if its determinant is zero. That is, we require
 * $$\text{det}(A-\lambda I)=0\ ,$$

which is the eigenvalue equation stated above.
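This characterisation can be checked directly. For a 2×2 matrix the determinant $$\text{det}(A-\lambda I)$$ can be written out by hand, and it vanishes exactly at the eigenvalues; away from them it is non-zero and $$A-\lambda I$$ is invertible. A short sketch (the matrix is illustrative):

```python
# det(A - lambda*I) for a 2x2 matrix, expanded by hand:
# (a11 - lam)(a22 - lam) - a12*a21.
def char_det(A, lam):
    return (A[0][0] - lam) * (A[1][1] - lam) - A[0][1] * A[1][0]

A = [[2.0, 1.0],
     [1.0, 2.0]]

print(char_det(A, 1.0))  # 0.0  -> 1 is an eigenvalue
print(char_det(A, 3.0))  # 0.0  -> 3 is an eigenvalue
print(char_det(A, 2.0))  # -1.0 -> 2 is not; A - 2I is invertible
```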

A more technical approach
So far we have looked at eigenvalues in terms of square matrices. As usual in mathematics, though, we like things to be as general as possible, since anything we prove will then be true in as many different applications as possible. So instead we can define eigenvalues in the following way.

Definition: Let $$V$$ be a vector space over a field $$F$$, and let $$\scriptstyle A:V\to V$$ be a linear map. An eigenvalue of $$A$$ is an element $$\scriptstyle\lambda\in F$$ for which there exists a non-zero vector $$\scriptstyle\vec{v}\in V$$ such that
 * $$A(\vec{v})=\lambda\vec{v}\ .$$

Then $$\scriptstyle\vec{v}$$ is called an eigenvector of $$A$$ associated with $$\lambda$$.
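The role of the field $$F$$ in this definition is not cosmetic. For example, the matrix of a 90° rotation of the plane has no eigenvalues over $$\mathbb{R}$$ (no real vector is sent to a multiple of itself), but over $$\mathbb{C}$$ it has eigenvalues $$\pm i$$, in line with the earlier remark that $$\lambda$$ can in general be complex. A small sketch using Python's built-in complex arithmetic (values chosen for illustration):

```python
# 90-degree rotation matrix: no real eigenvalues, but over C it has +i and -i.
A = [[0.0, -1.0],
     [1.0,  0.0]]

lam = 1j              # candidate eigenvalue i
v = (1.0, -1j)        # candidate eigenvector in C^2

Av = (A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1])
assert Av == (lam * v[0], lam * v[1])   # A v = i v holds over C
```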