Egwald Mathematics: Linear Algebra
Systems of Linear Differential Equations
by
Elmer G. Wiens
A System of Linear Differential Equations.
A system of linear differential equations can be expressed as:
dx_{1}/dt = a_{1,1} x_{1} + a_{1,2} x_{2} + . . . + a_{1,n} x_{n}
dx_{2}/dt = a_{2,1} x_{1} + a_{2,2} x_{2} + . . . + a_{2,n} x_{n}
. . . . . . . . . . .
dx_{n}/dt = a_{n,1} x_{1} + a_{n,2} x_{2} + . . . + a_{n,n} x_{n}

where x_{i}(t) is a function of time, i = 1, . . . n, and the matrix of constant coefficients is A = [a_{i,j}]. This system of linear differential equations is called autonomous because the coefficients of A are not explicit functions of time.
Example.
dx_{1}/dt = -4 * x_{1} + 2 * x_{2}
dx_{2}/dt = 0 * x_{1} - 2 * x_{2}

Objective: obtain formulae for the functions x_{i}(t), i = 1, . . . n.
To obtain explicit formulae for the functions x_{i}(t), i = 1, . . . n, one must find the complete set of eigenvalues and eigenvectors of the matrix A. If A has repeated eigenvalues, one might also need to compute their generalized eigenvectors. The eigenvalues and eigenvectors of A can be real numbers and vectors in R^{n}, or complex numbers and vectors in C^{n}.
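One can check the eigenvalue route numerically. The following Python sketch assumes, for illustration, the 2 by 2 matrix A = [[-4, 2], [0, -2]] with eigenvalues -4 and -2 and eigenvectors (1, 0) and (1, 1), and verifies that x(t) = c_{1} * e^{µ1*t} * v_{1} + c_{2} * e^{µ2*t} * v_{2} satisfies dx/dt = A * x:

```python
import math

# Sketch: solving dx/dt = A*x for a 2-by-2 matrix with known eigenpairs.
# The matrix, eigenvalues, and eigenvectors below are illustrative.
A = [[-4.0, 2.0],
     [0.0, -2.0]]
mu = [-4.0, -2.0]                  # eigenvalues of A
v = [[1.0, 0.0], [1.0, 1.0]]       # corresponding eigenvectors

def x(t, c1, c2):
    # General solution: x(t) = c1*e^{mu1*t}*v1 + c2*e^{mu2*t}*v2.
    return [c1 * math.exp(mu[0] * t) * v[0][i] +
            c2 * math.exp(mu[1] * t) * v[1][i] for i in range(2)]

# Verify dx/dt = A*x by comparing a central difference with A times x(t).
c1, c2 = 18.0, -9.0
t, h = 0.5, 1e-6
deriv = [(p - q) / (2 * h) for p, q in zip(x(t + h, c1, c2), x(t - h, c1, c2))]
Ax = [sum(A[i][j] * x(t, c1, c2)[j] for j in range(2)) for i in range(2)]
assert all(abs(d - w) < 1e-5 for d, w in zip(deriv, Ax))
```

The check works for any choice of the constants c_{1} and c_{2}, since each eigenvector term satisfies the system separately.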
Matrix Representation.
The same problem expressed in matrix and vector form is:

dx(t)/dt = A * x(t)

where x(t)^{T} = (x_{1}(t), x_{2}(t), . . . , x_{n}(t)) is the vector of unknown functions of time, and A = [a_{i,j}] is the matrix of coefficients:

x(t) =
[ x_{1}(t) ]
[ x_{2}(t) ]
[ . . . ]
[ x_{n}(t) ]

dx(t)/dt =
[ dx_{1}(t)/dt ]
[ dx_{2}(t)/dt ]
[ . . . ]
[ dx_{n}(t)/dt ]

A =
[ a_{1,1}  a_{1,2}  . . .  a_{1,n} ]
[ a_{2,1}  a_{2,2}  . . .  a_{2,n} ]
[ . . . ]
[ a_{n,1}  a_{n,2}  . . .  a_{n,n} ]
The Eigenvectors of A Form a Basis of R^{n}.
If the eigenvectors of the matrix A of dimension n form a basis of R^{n}, then A can be diagonalized as:
A = S * D * S^{-1}
where D is a diagonal matrix formed from the eigenvalues of A, and S is a matrix whose columns are their associated eigenvectors listed in the same order as the eigenvalues in D.
Example Problem #1: A with two negative eigenvalues.
Find equations x^{T}(t) = (x_{1}(t), x_{2}(t)) for the system dx/dt = A * x, with x(0) = x0 = (x0_{1}, x0_{2}), where

x(t) =
[ x_{1}(t) ]
[ x_{2}(t) ]

dx(t)/dt =
[ dx_{1}(t)/dt ]
[ dx_{2}(t)/dt ]

A =
[ -4  2 ]
[ 0  -2 ]
In matrix form, this system of linear differential equations is:

dx/dt = A * x

where the explicit reference to time t has been dropped.
The values of dx^{T}/dt = (dx_{1}/dt, dx_{2}/dt) depend on the matrix A and on the values of x^{T} = (x_{1}, x_{2}).
In the following diagram, the values of dx/dt and x are plotted for four sets of values in the x_{1}x_{2} plane. Each vector dx/dt begins at its point x, with its direction and magnitude given by the components of dx/dt.
If one starts at t = 0 at the initial point x0^{T} = (x0_{1}, x0_{2}), one can follow the trajectory of x(t) as time t proceeds as given by dx^{T}(t)/dt = (dx_{1}(t)/dt, dx_{2}(t)/dt).
The next diagram shows the normalized vectors dx(t)/dt at points x^{T} = (x_{1}, x_{2}). These vectors, represented as arrows, provide a picture of the vector field in the x_{1}x_{2} plane generated by the system of linear differential equations.
To obtain a rough trajectory of the solution vector x(t), start at an initial point x0, (t = 0), and follow the arrows as t increases in value.
The pattern that the arrows make in the diagram depends on the values of the coefficients of the matrix A. From any initial point x0, the trajectory of x^{T}(t) = (x_{1}(t), x_{2}(t)) converges over time to the origin o^{T} = (0, 0) for this particular matrix A.
The origin is an equilibrium point for any matrix since A * o = o. For this linear differential equation system, the origin is a stable node because any trajectory proceeds to the origin over time.
Solution to dx(t)/dt = A * x(t).
The solution to a system of linear differential equations involves the eigenvalues and eigenvectors of the matrix A.
This matrix has two distinct eigenvalues, µ_{1} = -4 and µ_{2} = -2, with corresponding linearly independent eigenvectors, v_{1}^{T} = (1, 0) and v_{2}^{T} = (1, 1).
The general solution for given constants c_{1} and c_{2} is:
x(t) = c_{1} * e^{µ1*t} * v_{1} + c_{2} * e^{µ2*t} * v_{2}, or
x(t) = c_{1} * e^{-4*t} * v_{1} + c_{2} * e^{-2*t} * v_{2}.
where e^{f(t)} is the exponential of the function f(t). One uses the constant vector c^{T} = (c_{1}, c_{2}) to determine the initial point x0^{T} = (x0_{1}, x0_{2}) of a particular solution trajectory, x^{T}(t) = (x_{1}(t), x_{2}(t)). Expanding:

x(t) = c_{1} * e^{-4 * t} *
[ 1 ]
[ 0 ]
+ c_{2} * e^{-2 * t} *
[ 1 ]
[ 1 ]
Considering each variable with the initial conditions x^{T}(0) = x0^{T} = (9, -9):
x_{1}(t) = c_{1} * e^{-4 * t} + c_{2} * e^{-2 * t}; x_{1}(0) = c_{1} * e^{0} + c_{2} * e^{0} = c_{1} + c_{2} = 9
x_{2}(t) = c_{2} * e^{-2 * t}; x_{2}(0) = c_{2} * e^{0} = c_{2} = -9
Solving for the constants yields c_{1} = 18, c_{2} = -9.
x_{1}(t) = 18 * e^{-4 * t} - 9 * e^{-2 * t}
x_{2}(t) = -9 * e^{-2 * t}
Starting at the initial point x0^{T} = (9, -9), the evolution of x_{1}(t) (in blue) and x_{2}(t) (in red) appears below. Both curves converge to 0.
Their formulae involve the exponential of the two negative eigenvalues of A, µ_{1} = -4 and µ_{2} = -2, multiplied by the time variable t. As t increases in value, x_{1}(t) and x_{2}(t) are both forced to 0, because for any positive constant k, e^{-k*t} converges to 0 as the variable t increases in value. Moreover, e^{-k*t} converges to 0 more quickly as the magnitude of k increases.
The phase portrait of the solution vector x^{T}(t) = (x_{1}(t), x_{2}(t)) appears in the next diagram. The trajectories from eight initial points are curves in the x_{1}x_{2} plane that terminate at the origin.
Trajectories that begin on the yellow lines through the eigenvectors, v_{1}^{T} = (1, 0) and v_{2}^{T} = (1, 1) move directly toward the origin.
Example Problem #2. A with one negative and one positive eigenvalue.
Find equations x^{T}(t) = (x_{1}(t), x_{2}(t)) for the system dx/dt = A * x, with x(0) = x0 = (x0_{1}, x0_{2}), where

A =
[ 5  -3 ]
[ 3  -5 ]

This matrix has two distinct eigenvalues, µ_{1} = 4 and µ_{2} = -4, with corresponding linearly independent eigenvectors, v_{1}^{T} = (3, 1) and v_{2}^{T} = (1, 3).
The general solution for given constants c_{1} and c_{2} is:
x(t) = c_{1} * e^{4*t} * v_{1} + c_{2} * e^{-4*t} * v_{2}.
Considering each variable with the initial conditions x^{T}(0) = x0^{T} = (x0_{1}, x0_{2}):
x_{1}(t) = 3 * c_{1} * e^{4 * t} + c_{2} * e^{-4 * t}; x_{1}(0) = 3 * c_{1} * e^{0} + c_{2} * e^{0} = 3 * c_{1} + c_{2} = x0_{1}
x_{2}(t) = c_{1} * e^{4 * t} + 3 * c_{2} * e^{-4 * t}; x_{2}(0) = c_{1} * e^{0} + 3 * c_{2} * e^{0} = c_{1} + 3 * c_{2} = x0_{2}
The phase diagram for this system is:
A trajectory that starts along the yellow line passing through the eigenvector v_{1}^{T} = (3, 1), associated with the positive eigenvalue µ_{1} = 4, moves directly away from the origin. A trajectory that starts along the yellow line passing through the eigenvector v_{2}^{T} = (1, 3), associated with the negative eigenvalue µ_{2} = -4, moves directly toward the origin.
All other trajectories diverge from the origin. The line passing through the eigenvector v_{2}^{T} = (1, 3) is called the separatrix.
The origin in this situation as determined by the matrix A is called a saddle point.
Example Problem #3. A with two positive eigenvalues.
Find equations x^{T}(t) = (x_{1}(t), x_{2}(t)) for the system dx/dt = A * x, with x(0) = x0 = (x0_{1}, x0_{2}), where

A =
[ 5  -1 ]
[ 3  1 ]

This matrix has two distinct eigenvalues, µ_{1} = 4 and µ_{2} = 2, with corresponding linearly independent eigenvectors, v_{1}^{T} = (1, 1) and v_{2}^{T} = (1, 3).
The general solution for given constants c_{1} and c_{2} is:
x(t) = c_{1} * e^{4*t} * v_{1} + c_{2} * e^{2*t} * v_{2}.
Considering each variable with the initial conditions x^{T}(0) = x0^{T} = (x0_{1}, x0_{2}):
x_{1}(t) = c_{1} * e^{4 * t} + c_{2} * e^{2 * t} 
; x_{1}(0) = c_{1} * e^{0} + c_{2} * e^{0} = c_{1} + c_{2} = x0_{1} 
x_{2}(t) = c_{1} * e^{4 * t} + 3 * c_{2} * e^{2 * t}  ; x_{2}(0) = c_{1} * e^{0} + 3 * c_{2} * e^{0} = c_{1} + 3 * c_{2} = x0_{2} 
The phase diagram for this system is:
A trajectory that starts along the yellow line passing through either eigenvector v_{1}^{T} = (1, 1) or v_{2}^{T} = (1, 3) moves directly away from the origin. All other trajectories diverge from the origin. The origin in this situation as determined by the matrix A is called an unstable node.
General Solution: The Eigenvectors of A Form a Basis of R^{n}.
If the eigenvectors of the matrix A of dimension n form a basis of R^{n}, then A can be diagonalized as:
A = S * D * S^{-1}
where D is a diagonal matrix formed from the eigenvalues of A, and S is a matrix whose columns are their associated eigenvectors listed in the same order as the eigenvalues in D.
The system of linear differential equations can be expressed as:
dx/dt = A * x = S * D * S^{-1} * x
x(0) = x0

Define the new vector of variables y(t) as:

y(t) = S^{-1} * x(t)
y(0) = S^{-1} * x0 = y0

Since:

S^{-1} * dx/dt = S^{-1} * A * x = S^{-1} * S * D * S^{-1} * x = D * y

and dy/dt = S^{-1} * dx/dt, the decoupled system of differential equations is:

dy/dt = D * y
y(0) = y0
If {µ_{1}, µ_{2}, . . . µ_{n}} are the eigenvalues of A, then:

D =
[ µ_{1}  0  . . .  0 ]
[ 0  µ_{2}  . . .  0 ]
[ .  .  . . .  . ]
[ 0  0  . . .  µ_{n} ]
and the individual equations of the decoupled system of dimension n are:
dy_{1}/dt = µ_{1} * y_{1}; y_{1}(0) = y0_{1} 
dy_{2}/dt = µ_{2} * y_{2}; y_{2}(0) = y0_{2} 
. . . . . 
dy_{n}/dt = µ_{n} * y_{n}; y_{n}(0) = y0_{n} 
Any one-dimensional equation of the form
dz(t)/dt = k * z(t), with z(0) = z_{0}
has the solution
z(t) = e^{k*t} * z_{0}
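One can confirm this one-dimensional solution numerically; the sketch below (with k, z_{0}, and the time horizon chosen only for illustration) integrates dz/dt = k * z with forward Euler steps and compares the result with e^{k*t} * z_{0}:

```python
import math

# Sketch: forward-Euler integration of dz/dt = k*z approaches e^{k*t} * z0
# as the step size shrinks (k, z0, and the horizon T chosen for illustration).
k, z0, T = -1.5, 4.0, 2.0
n = 100_000                       # number of Euler steps
dt = T / n
z = z0
for _ in range(n):
    z += dt * k * z               # Euler step: z_{m+1} = z_m + dt * k * z_m

exact = math.exp(k * T) * z0
assert abs(z - exact) < 1e-3
```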
So the decoupled system has the solution:
y_{1}(t) = e^{µ1 * t} * y0_{1} 
y_{2}(t) = e^{µ2 * t} * y0_{2} 
. . . . . 
y_{n}(t) = e^{µn * t} * y0_{n} 
In matrix form this is:

y(t) =
[ e^{µ1*t}  0  . . .  0 ]
[ 0  e^{µ2*t}  . . .  0 ]
[ .  .  . . .  . ]
[ 0  0  . . .  e^{µn*t} ]
* y0
To get the general solution for the system dx/dt = A * x, x(0) = x0, define the vector of constants c^{T} = (c_{1}, c_{2}, . . . c_{n}) as:
c = S^{-1} * x0 = y0.
Also, let {v_{1}, v_{2}, . . . v_{n}} be the set of linearly independent eigenvectors associated with the eigenvalues {µ_{1}, µ_{2}, . . . µ_{n}}, so that:
S = [v_{1}  v_{2}  . . .  v_{n}].
Multiplying the following equation by S:
y(t) = S^{-1} * x(t) =
[ e^{µ1*t}  0  . . .  0 ]
[ 0  e^{µ2*t}  . . .  0 ]
[ .  .  . . .  . ]
[ 0  0  . . .  e^{µn*t} ]
* c
yields:
x(t) = [v_{1}  v_{2}  . . .  v_{n}] *
[ e^{µ1*t}  0  . . .  0 ]
[ 0  e^{µ2*t}  . . .  0 ]
[ .  .  . . .  . ]
[ 0  0  . . .  e^{µn*t} ]
* c
The general solution of the system of linear differential equations is:
x(t) = c_{1} * e^{µ1*t} * v_{1} + c_{2} * e^{µ2*t} * v_{2} + . . . + c_{n} * e^{µn*t} * v_{n}
x(0) = c_{1} * v_{1} + c_{2} * v_{2} + . . . + c_{n} * v_{n} = x0.
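As a numerical sketch of this recipe (the eigenvector matrix, eigenvalues, and initial point are assumed for illustration): decouple with y0 = S^{-1} * x0, scale each component by e^{µi*t}, and recouple with S:

```python
import math

# Sketch of the diagonalization route for a 2-by-2 system (values assumed):
# x(t) = S * e^{D*t} * S^{-1} * x0, with D = diag(mu1, mu2).
S = [[1.0, 1.0],                  # columns are the eigenvectors (1,0), (1,1)
     [0.0, 1.0]]
mu = [-4.0, -2.0]
x0 = [9.0, -9.0]

def mat_vec(M, w):
    return [M[0][0] * w[0] + M[0][1] * w[1],
            M[1][0] * w[0] + M[1][1] * w[1]]

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d, M[0][0] / d]]

def x(t):
    y0 = mat_vec(inv2(S), x0)                            # decouple: y0 = S^{-1} x0
    y = [math.exp(mu[i] * t) * y0[i] for i in range(2)]  # y_i = e^{mu_i t} * y0_i
    return mat_vec(S, y)                                 # recouple: x = S * y

assert x(0.0) == x0                                      # x(0) recovers x0
```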

The Matrix Exponential of a Diagonalizable Matrix
The square matrix A of dimension n is diagonalizable if:
A = S * D * S^{-1}
where D is the diagonal matrix of eigenvalues {µ_{1}, µ_{2}, . . . µ_{n}}, and S = [v_{1}  v_{2}  . . .  v_{n}], i.e., the columns of S are the linearly independent eigenvectors.
Powers of A: A^{k}.
The kth power of A can be written as:
A^{k} = (S * D * S^{-1}) * (S * D * S^{-1}) * . . . * (S * D * S^{-1}) = S * D^{k} * S^{-1}, since the interior S^{-1} * S factors cancel.
Power Series of e^{t}.
The power series expansion of e^{t} is:
e^{t} = 1 + t + t^{2}/2! + t^{3}/3! + . . .
Power Series of e^{A*t}.
The power series expansion of e^{A*t} is:
e^{A*t} = I + A*t + (A*t)^{2}/2! + (A*t)^{3}/3! + . . .
e^{A*t} = I + (S * (D*t) * S^{-1}) + (S * (D*t)^{2} * S^{-1})/2! + (S * (D*t)^{3} * S^{-1})/3! + . . .
e^{A*t} = S * (I + (D*t) + (D*t)^{2}/2! + (D*t)^{3}/3! + . . . ) * S^{-1} = S * e^{D*t} * S^{-1}
The general solution of the system of linear differential equations in terms of the matrix exponential of A is:
x(t) = S * e^{D*t} * S^{-1} * x0 = e^{A*t} * x0,

where the matrix exponential of the diagonal matrix D*t is:
e^{D*t} =
[ e^{µ1*t}  0  . . .  0 ]
[ 0  e^{µ2*t}  . . .  0 ]
[ .  .  . . .  . ]
[ 0  0  . . .  e^{µn*t} ]

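For a concrete check, the truncated power series for e^{A*t} should agree with S * e^{D*t} * S^{-1}; the Python sketch below assumes the diagonalizable matrix A = [[-4, 2], [0, -2]] with eigenvalues -4 and -2:

```python
import math

# Sketch: the series I + A*t + (A*t)^2/2! + ... should match S * e^{D*t} * S^{-1}
# for a diagonalizable matrix (example values assumed).
A = [[-4.0, 2.0], [0.0, -2.0]]
S = [[1.0, 1.0], [0.0, 1.0]]      # eigenvector columns (1,0) and (1,1)
Sinv = [[1.0, -1.0], [0.0, 1.0]]
mu = [-4.0, -2.0]
t = 0.3

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Truncated power series for e^{A*t}: term_k = (A*t)^k / k!.
series = [[1.0, 0.0], [0.0, 1.0]]
term = [[1.0, 0.0], [0.0, 1.0]]
At = [[A[i][j] * t for j in range(2)] for i in range(2)]
for k in range(1, 30):
    term = mat_mul(term, [[At[i][j] / k for j in range(2)] for i in range(2)])
    series = [[series[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# Closed form via the eigendecomposition.
eDt = [[math.exp(mu[0] * t), 0.0], [0.0, math.exp(mu[1] * t)]]
closed = mat_mul(mat_mul(S, eDt), Sinv)

assert all(abs(series[i][j] - closed[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```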
The Eigenvectors and Generalized Eigenvectors of A Form a Basis of R^{n}.
The Matrix Exponential of a Jordan Matrix.
Suppose A is a square matrix of dimension 2, with a repeated eigenvalue µ, an eigenvector v_{1}, and a generalized eigenvector v_{2}. Form the matrix S = [v_{1}  v_{2}], i.e., its columns are the linearly independent vectors v_{1} and v_{2}. Let J be the Jordan matrix:

J =
[ µ  1 ]
[ 0  µ ]

The Jordan Normal decomposition of A is:
A = S * J * S^{-1}.
The power series expansion of e^{A*t} is:
e^{A*t} = S * (I + (J*t) + (J*t)^{2}/2! + (J*t)^{3}/3! + . . . ) * S^{-1} = S * e^{J*t} * S^{-1}
where the matrix exponential of the Jordan matrix J*t is:
e^{J*t} =
[ e^{µ*t}  t * e^{µ*t} ]
[ 0  e^{µ*t} ]

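The same series check works for a Jordan block; the sketch below (with an assumed repeated eigenvalue µ = -2) confirms that the power series for e^{J*t} reproduces the closed form with off-diagonal entry t * e^{µ*t}:

```python
import math

# Sketch: for the 2-by-2 Jordan block J = [[mu, 1], [0, mu]] (mu assumed),
# the power series for e^{J*t} reproduces the closed form with entry t*e^{mu*t}.
mu, t = -2.0, 0.4
J = [[mu, 1.0], [0.0, mu]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

series = [[1.0, 0.0], [0.0, 1.0]]
term = [[1.0, 0.0], [0.0, 1.0]]
Jt = [[J[i][j] * t for j in range(2)] for i in range(2)]
for k in range(1, 30):
    term = mat_mul(term, [[Jt[i][j] / k for j in range(2)] for i in range(2)])
    series = [[series[i][j] + term[i][j] for j in range(2)] for i in range(2)]

closed = [[math.exp(mu * t), t * math.exp(mu * t)],
          [0.0, math.exp(mu * t)]]
assert all(abs(series[i][j] - closed[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```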
General Solution: The Eigenvectors and Generalized Eigenvectors of A Form a Basis of R^{n}.
The general solution of the system of linear differential equations in terms of the matrix exponential of A is:
x(t) = e^{A*t} * x0 = S * e^{J*t} * S^{-1} * x0.

Defining the vector of constants c^{T} = (c_{1}, c_{2}, . . . c_{n}) as:
c = S^{-1} * x0.
the general solution is:
x(t) = S * e^{J*t} * c ,
x(0) = S * c,

since e^{J * 0} = I, the identity matrix.
Example Problem #4. A with a repeated negative eigenvalue.
Find equations x^{T}(t) = (x_{1}(t), x_{2}(t)) for the system dx/dt = A * x, with x(0) = x0 = (x0_{1}, x0_{2}), where

A =
[ -3  1 ]
[ -1  -1 ]

This matrix has the characteristic equation:
f(µ) = µ^{2} - trace(A) * µ + det(A) = µ^{2} + 4 * µ + 4,
with a repeated eigenvalue root, µ_{1} = µ_{2} = -2.
Compute the eigenvector(s) by solving the system of equations (A - (-2) * I) * v = 0:

(A - µ_{1} * I) * v = (A + 2 * I) * v =
[ -1  1 ]
[ -1  1 ]
*
[ v_{1} ]
[ v_{2} ]
=
[ 0 ]
[ 0 ]
Using the echelon algorithm, or just noticing that equations e_{1} and e_{2} are the same, one determines that matrix A has only one eigenvector, v_{1}^{T} = (1, 1).
To find the general solution for this problem, one must also compute a generalized eigenvector v_{2}^{T} associated with v_{1}^{T}. Solve:
(A - (-2) * I) * v_{2} = v_{1}, or
(A + 2 * I) * v = v_{1}:
[ -1  1 ]
[ -1  1 ]
*
[ v_{1} ]
[ v_{2} ]
=
[ 1 ]
[ 1 ]
i.e., -1*v_{1} + 1*v_{2} = 1
      -1*v_{1} + 1*v_{2} = 1
Using the echelon algorithm form, one obtains:

v = v_{p} + s1 * v_{h1} =
[ -1 ]
[ 0 ]
+ s1 *
[ 1 ]
[ 1 ]

The particular solution v_{p}^{T} = (-1, 0) = v_{2} is the generalized eigenvector of the matrix A associated with the pair of equal eigenvalues µ_{1,2} = -2, while the homogeneous solution v_{h1} is the eigenvector v_{1}.
Therefore:
e^{J*t} =
[ e^{-2*t}  t * e^{-2*t} ]
[ 0  e^{-2*t} ]

The general solution for given constants c^{T} = (c_{1}, c_{2}) is:
x(t) = S * e^{J*t} * c
Considering each variable with the initial conditions x^{T}(0) = x0^{T} = (x0_{1}, x0_{2}):
x_{1}(t) = c_{1} * e^{-2 * t} + c_{2} * (t * e^{-2 * t} - e^{-2 * t}); x_{1}(0) = c_{1} * e^{0} + c_{2} * (0 - e^{0}) = c_{1} - c_{2} = x0_{1}
x_{2}(t) = c_{1} * e^{-2 * t} + c_{2} * t * e^{-2 * t}; x_{2}(0) = c_{1} * e^{0} + c_{2} * 0 = c_{1} = x0_{2}
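One can verify this repeated-eigenvalue solution numerically. The sketch below assumes the coefficient matrix A = [[-3, 1], [-1, -1]] with repeated eigenvalue µ = -2, and checks that x_{1}(t) = c_{1}*e^{-2t} + c_{2}*(t*e^{-2t} - e^{-2t}) and x_{2}(t) = c_{1}*e^{-2t} + c_{2}*t*e^{-2t} satisfy dx/dt = A * x:

```python
import math

# Sketch (matrix and constants assumed): with repeated eigenvalue mu = -2 and
# A = [[-3, 1], [-1, -1]], the solution
#   x1(t) = c1*e^{-2t} + c2*(t*e^{-2t} - e^{-2t}),
#   x2(t) = c1*e^{-2t} + c2*t*e^{-2t}
# should satisfy dx/dt = A*x.
A = [[-3.0, 1.0], [-1.0, -1.0]]
c1, c2 = 2.0, 3.0

def x(t):
    e = math.exp(-2.0 * t)
    return [c1 * e + c2 * (t * e - e), c1 * e + c2 * t * e]

# Compare a central difference with A times x(t).
t, h = 0.8, 1e-6
deriv = [(p - q) / (2 * h) for p, q in zip(x(t + h), x(t - h))]
Ax = [A[0][0] * x(t)[0] + A[0][1] * x(t)[1],
      A[1][0] * x(t)[0] + A[1][1] * x(t)[1]]
assert all(abs(d - w) < 1e-5 for d, w in zip(deriv, Ax))
```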
The phase diagram for this system is:
A trajectory that starts along the yellow line passing through the eigenvector v_{1}^{T} = (1, 1) moves directly toward the origin. All other trajectories converge to the origin over time, including trajectories that start along the yellow line through the generalized eigenvector v_{2}^{T} = (-1, 0). The origin in this situation, as determined by the matrix A, is called a stable degenerate node.
Example Problem #5. A with a repeated positive eigenvalue.
Find equations x^{T}(t) = (x_{1}(t), x_{2}(t)) for the system dx/dt = A * x, with x(0) = x0 = (x0_{1}, x0_{2}), where
This matrix has a repeated eigenvalue, µ_{1} = µ_{2} = 2, with only one eigenvector, v_{1}^{T} = (1, -1). To find the general solution for this problem one must compute a generalized eigenvector associated with v_{1}^{T}.
On the matrix web page, I determined that the generalized eigenvector is v_{2}^{T} = (1, 0) for this particular matrix A.
Therefore:
e^{J*t} =
[ e^{2*t}  t * e^{2*t} ]
[ 0  e^{2*t} ]

The general solution for given constants c^{T} = (c_{1}, c_{2}) is:
x(t) = S * e^{J*t} * c
Considering each variable with the initial conditions x^{T}(0) = x0^{T} = (x0_{1}, x0_{2}):
x_{1}(t) = c_{1} * e^{2 * t} + c_{2} * (t * e^{2 * t} + e^{2 * t}); x_{1}(0) = c_{1} * e^{0} + c_{2} * e^{0} = c_{1} + c_{2} = x0_{1}
x_{2}(t) = -c_{1} * e^{2 * t} - c_{2} * t * e^{2 * t}; x_{2}(0) = -c_{1} * e^{0} - c_{2} * 0 = -c_{1} = x0_{2}
The phase diagram for this system is:
A trajectory that starts along the yellow line passing through the eigenvector v_{1}^{T} = (1, -1) moves directly away from the origin. All other trajectories diverge from the origin, including trajectories that start along the yellow line through the generalized eigenvector v_{2}^{T} = (1, 0). The origin in this situation, as determined by the matrix A, is called an unstable degenerate node.
Phase Diagram of a General Two by Two Matrix
The matrices in the examples above had real eigenvalues and real eigenvectors. However, matrices can also have complex-valued eigenvalues and complex-valued eigenvectors. The patterns of the phase diagrams with complex eigenvalues differ from the ones with real eigenvalues.
This matrix has the characteristic equation:
f(µ) = µ^{2} - 0 * µ + 1
with eigenvalues:
µ_{1} = 0 + 1 * i,
µ_{2} = 0 - 1 * i
The phase diagram for this system is:
Classification of the Equilibrium Point at the Origin
The origin is an equilibrium point for any system of linear differential equations with coefficient matrix A because A * o = o. One can examine the behaviour of the solution vector x(t) near the origin o by analyzing the eigenvalues and eigenvectors of A.
The general 2 by 2 matrix:

A =
[ a  b ]
[ c  d ]

has trace(A) = a + d and det(A) = a*d - b*c. Its characteristic equation is:
f(µ) = µ^{2} - trace(A) * µ + det(A)
Solving this quadratic equation yields the pair of eigenvalues, µ_{1} and µ_{2}, where:
µ_{1} = {trace(A) + sqrt[trace(A) * trace(A) - 4 * det(A)]} / 2, and
µ_{2} = {trace(A) - sqrt[trace(A) * trace(A) - 4 * det(A)]} / 2;
det(A) = µ_{1} * µ_{2}, and
trace(A) = µ_{1} + µ_{2}.
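These formulae translate directly into code. The helper below (an illustrative sketch, real-eigenvalue case only) computes µ_{1} and µ_{2} from the trace and determinant:

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of A = [[a, b], [c, d]] from the trace and determinant
    (illustrative helper; real-eigenvalue case only)."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det       # discriminant of mu^2 - tr*mu + det
    if disc < 0:
        raise ValueError("complex eigenvalues")
    r = math.sqrt(disc)
    return (tr + r) / 2, (tr - r) / 2

mu1, mu2 = eigenvalues_2x2(-4, 2, 0, -2)
assert (mu1, mu2) == (-2.0, -4.0)
assert mu1 * mu2 == 8.0            # det(A)   = mu1 * mu2
assert mu1 + mu2 == -6.0           # trace(A) = mu1 + mu2
```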
If the discriminant:
disc(A) = trace(A) * trace(A) - 4 * det(A)
is positive, µ_{1} and µ_{2} are real and distinct. Also, if det(A) > 0, µ_{1} and µ_{2} have the same sign. Furthermore, if trace(A) is negative, µ_{1} and µ_{2} are both negative, and the origin is a stable node. But if trace(A) is positive, µ_{1} and µ_{2} are both positive, and the origin is an unstable node.
Stable Node: disc(A) > 0; det(A) > 0; trace(A) < 0; µ_{1} < µ_{2} < 0.
Unstable Node: disc(A) > 0; det(A) > 0; trace(A) > 0; µ_{1} > µ_{2} > 0.
If the determinant of A, det(A), is negative, then disc(A) > 0 and µ_{1} and µ_{2} are real and have opposite signs. The origin is a saddle point.
Saddle Point: disc(A) > 0; det(A) < 0; µ_{1} > 0 > µ_{2}.
If the discriminant:
disc(A) = trace(A) * trace(A) - 4 * det(A)
is negative, the eigenvalues of A, µ_{1} and µ_{2}, are complex conjugates:
µ_{1} = ρ + i * ω
µ_{2} = ρ - i * ω
If real(µ_{1,2}) = ρ < 0, then the origin is a stable focus, with trajectories spiralling inwards toward o.
Stable Focus — Clockwise Spiral: disc(A) < 0; trace(A) = 2 * ρ < 0; component c of A < 0.
Stable Focus — Counter-Clockwise Spiral: disc(A) < 0; trace(A) = 2 * ρ < 0; component c of A > 0.
If real(µ_{1,2}) = ρ > 0, then the origin is an unstable focus with trajectories spiralling outward from o.
Unstable Focus — Clockwise Spiral: disc(A) < 0; trace(A) = 2 * ρ > 0; component c of A < 0.
Unstable Focus — Counter-Clockwise Spiral: disc(A) < 0; trace(A) = 2 * ρ > 0; component c of A > 0.
If real(µ_{1,2}) = ρ = 0, the origin is a neutrally stable center, with trajectories that are ellipses about the origin o.
Neutrally Stable Center — Clockwise Ellipse: disc(A) < 0; trace(A) = 0; component c of A < 0.
Neutrally Stable Center — Counter-Clockwise Ellipse: disc(A) < 0; trace(A) = 0; component c of A > 0.
If the discriminant:
disc(A) = trace(A) * trace(A) - 4 * det(A)
is zero, the eigenvalues of A, µ_{1} and µ_{2}, are real and equal. If the matrix A has only one distinct eigenvector direction, the origin is a degenerate node.
In the examples below, with µ_{1,2} = -2, the origin is a degenerate stable node with trajectories that arc toward the origin o. With µ_{1,2} = 2, the origin is a degenerate unstable node with trajectories that arc away from the origin o.
Degenerate Stable Node — Clockwise Rotation: disc(A) = 0; trace(A) < 0; component c of A < 0.
Degenerate Unstable Node — Counter-Clockwise Rotation: disc(A) = 0; trace(A) > 0; component c of A > 0.
If the discriminant:
disc(A) = trace(A) * trace(A) - 4 * det(A)
is zero, the eigenvalues of A, µ_{1} and µ_{2}, are real and equal. If the matrix A has two distinct eigenvector directions, the origin is a star node.
In the examples below, with µ_{1,2} = -2, the origin is a stable star node with trajectories that move directly toward the origin o. With µ_{1,2} = 2, the origin is an unstable star node with trajectories that move directly away from the origin o.
Stable Star Node: disc(A) = 0; trace(A) < 0; component c of A = 0.
Unstable Star Node: disc(A) = 0; trace(A) > 0; component c of A = 0.
If the determinant of A, det(A), is zero, at least one of the eigenvalues is zero.
If µ_{1} = 0 and µ_{2} is not 0, there is a line of equilibrium points passing through the eigenvector corresponding to the zero eigenvalue. All other trajectories are parallel to the line through the other eigenvector. If µ_{2} < 0, the line of equilibrium points is stable. If µ_{2} > 0, the line of equilibrium points is unstable.
Stable Line of Equilibrium Points: det(A) = 0; trace(A) < 0; component c of A < 0.
Unstable Line of Equilibrium Points: det(A) = 0; trace(A) > 0; component c of A > 0.
If the determinant of A is zero and µ_{1} = 0 and µ_{2} = 0, two possibilities exist. If A is the zero matrix, every point is an equilibrium point. Otherwise, there is a line of equilibrium points passing through the eigenvector corresponding to one zero eigenvalue. All other trajectories are parallel to this line, running in opposite directions on either side of the line of equilibrium points.
Line of Equilibrium Points: det(A) = 0; trace(A) = 0; component c of A > 0.
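The whole case analysis can be collected into a single function. The sketch below follows the classification above; it is illustrative only, and the boundary cases (disc(A) = 0, det(A) = 0) are only partially distinguished:

```python
def classify_origin(a, b, c, d):
    """Classify the origin for dx/dt = A*x with A = [[a, b], [c, d]].
    An illustrative sketch of the case analysis; boundary cases with
    det(A) = 0 or equal eigenvalues are only partially distinguished."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if det < 0:
        return "saddle point"
    if det == 0:
        if (a, b, c, d) == (0, 0, 0, 0):
            return "every point is an equilibrium"
        return "line of equilibrium points"
    if disc > 0:
        return "stable node" if tr < 0 else "unstable node"
    if disc < 0:
        if tr < 0:
            return "stable focus"
        if tr > 0:
            return "unstable focus"
        return "neutrally stable center"
    # disc == 0 and det > 0: repeated eigenvalue; degenerate or star node
    return "stable degenerate/star node" if tr < 0 else "unstable degenerate/star node"

assert classify_origin(-4, 2, 0, -2) == "stable node"
assert classify_origin(5, -3, 3, -5) == "saddle point"
assert classify_origin(0, 1, -1, 0) == "neutrally stable center"
```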