Source
%matplotlib widget
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.integrate import odeint
Support material¶
Background¶
The problem of solving differential equations is not just a mathematical concern. Many physical equations that describe the evolution and equilibrium of very diverse systems are differential equations. There are two types of differential equations:
Ordinary Differential Equations (ODEs), in which the unknown function depends on a single variable, usually time $t$.
Partial Differential Equations (PDEs), in which it also depends on other variables, usually space.
In this course, we are only interested in ordinary differential equations; PDEs are usually solved by the finite element method, which requires a course of its own. The notions discussed and the observations made here nevertheless largely carry over to partial differential equations.
Problem formulation¶
An ODE can be described by several characteristics. In particular:
Its order.
Its number of equations: depending on the problem, only one here but many ODEs can be presented as systems of equations.
The explicit time dependence. If this one does not appear, the equation is said to be autonomous.
Linearity: if the unknown function and its derivatives appear only multiplied by constants, then the equation is linear.
On the contrary, for example if they appear to a power other than 1 or in a sine, then the equation is non-linear. Here the equation is linear.
Depending on these characteristics, an equation will or will not have an exact known solution. Only a minority of them have such a solution. Although they are not the subject of this course, the exact solutions will serve as a reference to test the approximate solutions.
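As an illustration, the undamped harmonic oscillator used later in this notebook has a known exact solution. A minimal symbolic check with SymPy (not among the notebook's imports) that $x(t) = \cos(\omega_0 t)$ satisfies $\ddot{x} + \omega_0^2 x = 0$:

```python
import sympy as sp

t, w0 = sp.symbols("t omega_0", positive=True)

# Candidate exact solution of the undamped oscillator x'' + w0**2 * x = 0
candidate = sp.cos(w0 * t)

# Substitute it into the ODE: the residual should reduce to zero
residual = sp.diff(candidate, t, 2) + w0**2 * candidate
print(sp.simplify(residual))  # prints 0
```

Such a reference solution is exactly what we will compare the numerical schemes against below.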
To address ODEs on a systematic basis, a generic formulation is usually used that takes the form:

$$\dot{X}(t) = f\big(X(t), t\big)$$

Where:

$X$ is the unknown function.

$f$ is the function that defines the ODE.
This formulation is both simple and surprising: it implies that any ODE can be written as a first-order equation. Of course, this advantage has a price: lowering the order increases the number of equations, through the creation of intermediate functions.
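As a sketch of this order reduction, a second-order equation such as the damped oscillator $\ddot{x} + 2\zeta\omega_0\dot{x} + \omega_0^2 x = 0$ (the coefficients below are illustrative, not from the notebook) becomes a first-order system by introducing the intermediate function $\dot{x}$ as a second state variable:

```python
import numpy as np

# Reduce x'' + 2*zeta*w0*x' + w0**2*x = 0 to dX/dt = f(X, t)
# with the state vector X = [x, x_dot].
w0, zeta = 1.0, 0.1  # arbitrary coefficients for this sketch

def f_damped(X, t):
    x, x_dot = X  # unpack the state vector
    # First row: definition of the intermediate function x_dot.
    # Second row: the original second-order equation solved for x''.
    return np.array([x_dot, -2.0 * zeta * w0 * x_dot - w0**2 * x])

print(f_damped(np.array([1.0, 0.0]), 0.0))  # [ 0. -1.]
```

One second-order equation has thus become two first-order equations, matching the generic formulation above.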
Phase space, vector flow and integral curves¶
This reformulation is not just a mathematical trick. It also informs us about the physical nature of the problem being modeled. Indeed, the vector $X$ describes the state variables of the system. Thus, if $X$ and $t$ are known, the state of the system is fully determined. This has several essential consequences:

The solution of an ODE is related to the choice of the initial conditions $X_0 = X(t_0)$.

One can therefore plot the solutions of an equation in the space of the state variables, called the phase space.

The ODE describes the evolution of $X$ at any point of the phase space and at any time. By analogy, we can see $X$ as a position and $f(X, t)$ as the velocity of a particle in the phase space. The vector field defined by $f$ thus behaves like the velocity field associated with a fluid flow, so it is called the vector flow.

The solution of an ODE is called an integral curve of the flow and behaves like one of its streamlines.
Source
w0 = 1.0
def f(X, t):
    """
    Undamped harmonic oscillator ODE.
    """
    out = np.zeros_like(X)
    out[0] = X[1]             # dx/dt = x_dot
    out[1] = -(w0**2) * X[0]  # dx_dot/dt = -w0**2 * x
    return out
rmax = 10.0
nr, ntheta = 11, 10
r = np.linspace(0.0, rmax, nr)
theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)
r, theta = np.meshgrid(r, theta)
x = r * np.cos(theta)
dotx = r * np.sin(theta)
x = x.flatten()
dotx = dotx.flatten()
X = np.array([x, dotx]).T
dotX = np.zeros_like(X)
t = 0.0
for p in range(len(X)):
    dotX[p] = f(X[p], t)
amin = 0.0
amax = rmax
na = 6
a = np.linspace(amin, amax, na)
tv = np.linspace(0.0, 2.0 * np.pi / w0, 200)
fig, axes = plt.subplots(figsize=(8, 6))
axes.set_aspect("equal")
label = "Integral curve"
for i in range(na):
    # Exact solution, kept here as a reference:
    # xs = a[i] * np.cos(w0 * tv)
    # dotxs = -a[i] * w0 * np.sin(w0 * tv)
    Xs = odeint(f, [a[i], 0.0], tv)
    xs, dotxs = Xs.T
    plt.plot(xs, dotxs, "r-", label=label)
    label = None
plt.quiver(
x,
dotx,
dotX[:, 0],
dotX[:, 1],
width=0.005,
headwidth=4.0,
headlength=4.0,
pivot="mid",
label="Vector flow",
)
plt.scatter(x, dotx, marker="o", c="b", s=2)
plt.grid()
plt.title("Undamped Free harmonic oscillator: Phase space")
plt.xlabel("Position, $x$ [m]")
plt.ylabel(r"Speed, $\dot x$ [m/s]")
plt.legend(loc="best")
plt.show()
Approximate solutions¶
Background¶
The solution of an ODE is not known in the general case. However, we can compute numerical solutions. These are fundamentally different from exact solutions. Suppose that we look for the solution starting from a point $X_0$ and over a given time interval:

The numerical solutions are approximate, i.e. they imply a certain amount of error, which must be made negligible for the problem being addressed.

They are discrete in time, which means that the solution is described only at certain instants $t_k$, with $k = 0, \ldots, n_t$. The distance between these instants is called the time step and is noted $dt$. The values of $X$ associated with these instants are noted $X_k = X(t_k)$.
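This time discretization amounts to building a regular grid of instants; a minimal sketch with NumPy (the interval bounds below are illustrative):

```python
import numpy as np

t0, t1, nt = 0.0, 1.0, 10        # interval bounds and number of steps
dt = (t1 - t0) / nt              # constant time step
t = t0 + np.arange(nt + 1) * dt  # t_k = t0 + k * dt, k = 0..nt
print(len(t))                    # nt + 1 instants, including t0
```

The solvers below all receive such an array `t` and return one value of $X$ per instant.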
Problem formulation¶
To solve this problem, we can use integration schemes to determine $X_{k+1}$ from $X_k$. From a starting point $X_0$, they allow determining all the values $X_k$ step by step over the desired duration. From a mathematical point of view, the problem is therefore to determine:

$$X_{k+1} = X_k + \int_{t_k}^{t_{k+1}} f(X, t)\, dt$$

This integral is of course not known in general and must be approximated efficiently.
Naive solution: Forward Euler¶
The simplest approach to solve this problem is to evaluate $f(X_k, t_k)$, the slope of $X$ at $t_k$, and to follow this direction until $t_{k+1}$. This leads to the following integration scheme:

$$X_{k+1} = X_k + f(X_k, t_k)\, dt$$

This approach means that:

$$\int_{t_k}^{t_{k+1}} f(X, t)\, dt \approx f(X_k, t_k)\, dt$$

This is equivalent to approximating the integral by the method of left rectangles (or left Riemann sum). This method of integration is called the explicit Euler method or the forward Euler method. It is attributed to Leonhard Euler. This method has the merit of being easy to write:
def Euler(func, X0, t):
    """
    Euler solver.
    """
    dt = t[1] - t[0]
    nt = len(t)
    X = np.zeros([nt, len(X0)])
    X[0] = X0
    for i in range(nt - 1):
        X[i + 1] = X[i] + func(X[i], t[i]) * dt
    return X
Source
# SETTINGS
w0 = 2.0 * np.pi
T0 = 2.0 * np.pi / w0
nt = 10 # NUMBER OF TIME STEPS
dt = T0 / 4 / nt # TIME STEP
x0 = 1.0
X0 = np.array([x0, 0.0])
# NUMERICAL SOLUTION
t = np.arange(nt + 1) * dt
X = Euler(f, X0, t)
# EXACT SOLUTION
ts = np.linspace(t.min(), t.max(), 1000)
Xs = np.array([x0 * np.cos(w0 * ts), -x0 * w0 * np.sin(w0 * ts)]).T
fig, ax = plt.subplots(figsize=(8, 6))
# ax.set_aspect("equal")
plt.plot(ts, Xs[:, 0], "-k", label="Exact Solution: $X$")
plt.plot(t, X[:, 0], "or--", label="Approx. Sol: $X_k$")
plt.quiver(
t,
X[:, 0],
np.ones_like(t) * dt,
X[:, 1] * dt,
color="b",
angles="xy",
scale_units="xy",
zorder=2,
width=0.005,
headwidth=4.0,
headlength=4.0,
label=r"Slope: $\dot X_k = f(X_k, t_k)$",
)
plt.legend()
plt.xticks(t, [f"$t_{{{k}}}$" for k in range(len(t))])
plt.grid()
plt.xlabel("Time, $t$ [s]")
plt.ylabel("Position, $x$ [m]")
plt.show()
The above example shows that the error between the exact solution and the approximate solution is clearly visible and is probably not negligible. We can reduce the time step to limit this error.
Source
# SETTINGS
w0 = 2.0 * np.pi
T0 = 2.0 * np.pi / w0
x0 = 1.0
X0 = np.array([x0, 0.0])
fig, ax = plt.subplots(figsize=(8, 6))
# ax.set_aspect("equal")
plt.plot(ts, Xs[:, 0], "-k", label="Exact Solution: $X$")
for nt in [2, 5, 10, 20]:
dt = T0 / 4 / nt # TIME STEP
t = np.arange(nt + 1) * dt
X = Euler(f, X0, t)
plt.plot(t, X[:, 0], "+-", label=f"dt={dt:.2e} s")
plt.legend()
plt.xticks(t, [f"$t_{{{k}}}$" for k in range(len(t))])
plt.grid()
plt.xlabel("Time, $t$ [s]")
plt.ylabel("Position, $x$ [m]")
plt.show()
Reducing $dt$ does give a better approximation. We say that the algorithm converges to the solution when $dt$ tends to 0. This rate of convergence can be shown by measuring the error:
$$e(dt) = \sqrt{\frac{1}{n_t} \sum_{k=1}^{n_t} \big(x(t_k) - x_k\big)^2}$$

Where $x$ is the exact solution and $e$ the error. Let's plot this error as a function of $dt$:
Source
# SETTINGS
w0 = 2.0 * np.pi
T0 = 2.0 * np.pi / w0
x0 = 1.0
X0 = np.array([x0, 0.0])
nt_values = np.array([10, 50, 100, 500, 1000])
err = np.zeros(len(nt_values))
dt_values = T0 / 4 / nt_values
for i in range(len(dt_values)):
    nt = nt_values[i]  # use the matching number of steps
    dt = dt_values[i]
    t = np.arange(nt + 1) * dt
    X = Euler(f, X0, t)
    res = ((x0 * np.cos(w0 * t) - X[:, 0]) ** 2)[1:]
    res /= len(res)
    err[i] = np.sqrt(res.sum())
fig, ax = plt.subplots(figsize=(8, 6))
plt.plot(dt_values, err, "o-", label="Forward Euler")
plt.legend()
plt.yscale("log")
plt.xscale("log")
plt.grid()
plt.ylabel("Error on position, $e(dt)$ [m]")
plt.xlabel("Time step, $dt$ [s]")
plt.title("Forward Euler convergence to solution")
plt.show()
We see that Euler's method converges to the exact solution when $dt$ decreases. We can therefore choose the time step to limit the error to a value of our choice.
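Since forward Euler is a first-order method, its error scales roughly like $e(dt) \approx C\,dt$. A hypothetical sketch of how a target error can be translated into a time step (the measured values below are assumptions for illustration, not notebook output):

```python
# Assumed measurement from a single reference run (illustrative values):
dt_ref, e_ref = 1e-2, 5e-3

# First-order model of the error: e(dt) ~ C * dt
C = e_ref / dt_ref

# Time step expected to reach a chosen target error
e_target = 1e-4
dt_needed = e_target / C
print(dt_needed)  # 0.0002
```

In practice the constant $C$ depends on the problem, so such an estimate should be confirmed by rerunning the solver at the predicted step.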
Runge-Kutta 4 method¶
There are many improvements of the forward Euler method. The Runge-Kutta methods are a widely used family in the literature, in particular the Runge-Kutta 4 (RK4) algorithm, which is written as follows:
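In equations, one RK4 step combines four slope evaluations (this matches the implementation below term by term):

$$
\begin{aligned}
k_1 &= f(X_k,\; t_k) \\
k_2 &= f\!\left(X_k + \tfrac{dt}{2}\,k_1,\; t_k + \tfrac{dt}{2}\right) \\
k_3 &= f\!\left(X_k + \tfrac{dt}{2}\,k_2,\; t_k + \tfrac{dt}{2}\right) \\
k_4 &= f\!\left(X_k + dt\,k_3,\; t_k + dt\right) \\
X_{k+1} &= X_k + \tfrac{dt}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)
\end{aligned}
$$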
def RK4(func, X0, t):
    """
    Runge Kutta 4 solver.
    """
    dt = t[1] - t[0]
    nt = len(t)
    X = np.zeros([nt, len(X0)])
    X[0] = X0
    for i in range(nt - 1):
        k1 = func(X[i], t[i])
        k2 = func(X[i] + dt / 2.0 * k1, t[i] + dt / 2.0)
        k3 = func(X[i] + dt / 2.0 * k2, t[i] + dt / 2.0)
        k4 = func(X[i] + dt * k3, t[i] + dt)
        X[i + 1] = X[i] + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return X
Built-in methods¶
Built-in integration methods are also available; the odeint function of SciPy is an example. We propose to compare it with the two previous methods.
from scipy.integrate import odeint
Source
# SETTINGS
w0 = 2.0 * np.pi
T0 = 2.0 * np.pi / w0
x0 = 1.0
X0 = np.array([x0, 0.0])
nt_values = np.array([10, 50, 100, 500, 1000])
methods = {"Euler": Euler, "RK4": RK4, "odeint": odeint}
err = {meth: np.zeros(len(nt_values)) for meth, _ in methods.items()}
dt_values = T0 / 4 / nt_values
for i in range(len(dt_values)):
    nt = nt_values[i]  # use the matching number of steps
    dt = dt_values[i]
    t = np.arange(nt + 1) * dt
    for key, integrator in methods.items():
        X = integrator(f, X0, t)
        res = ((x0 * np.cos(w0 * t) - X[:, 0]) ** 2)[1:]
        res /= len(res)
        err[key][i] = np.sqrt(res.sum())
fig, ax = plt.subplots(figsize=(8, 6))
for key, e in err.items():
plt.plot(dt_values, e, "o-", label=key)
plt.legend()
plt.yscale("log")
plt.xscale("log")
plt.grid()
plt.ylabel("Error on position, $e(dt)$ [m]")
plt.xlabel("Time step, $dt$ [s]")
plt.title("Convergence of multiple algorithms")
plt.show()
The graph above shows that the error levels and convergence rates of the three methods are very different. We can first compare the forward Euler and RK4 methods: the latter evaluates the ODE four times per iteration and can therefore be considered roughly four times slower than forward Euler. However, its error level is several orders of magnitude lower and its convergence rate is much better. Concerning odeint, we notice that this algorithm has a much lower error level than Euler but a comparable convergence rate. In fact, it uses an error-control system which we will not detail here. We should remember that odeint is fast and reliable in many cases.
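As a sketch of this error control, odeint accepts rtol and atol tolerance parameters that bound the local error of its adaptive internal steps. A minimal example on the same oscillator (the tolerance values are chosen arbitrarily here):

```python
import numpy as np
from scipy.integrate import odeint

w0 = 2.0 * np.pi

def f(X, t):
    # Undamped harmonic oscillator, X = [x, x_dot]
    return np.array([X[1], -(w0**2) * X[0]])

t = np.linspace(0.0, 1.0, 101)
# Tighten the relative and absolute tolerances: odeint adapts its
# internal steps to meet them, independently of the output grid t.
X = odeint(f, [1.0, 0.0], t, rtol=1e-10, atol=1e-10)

# Compare against the exact solution x(t) = cos(w0 * t)
err = np.abs(X[:, 0] - np.cos(w0 * t)).max()
print(err)  # small, near the requested tolerance
```

This is why, unlike our fixed-step Euler and RK4 solvers, the accuracy of odeint is governed by the tolerances rather than by the spacing of the output instants.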