Bellaard.com

Finite difference approximations

By: Gijs Bellaard

Suppose you have some sufficiently smooth black-box function \(f: \mathbb{R} \rightarrow \mathbb{R} \) of which you want to know the \(m\)'th derivative at the point \(x\). Because \(f\) is a black box, there is no way to just differentiate it and get your answer. This is where finite differences come into play: they are a way to approximate derivatives using only evaluations of \(f\).

Approximating the first derivative

Let's start simple by approximating the first derivative. Pick some small number \(h\). We introduce the following shorthand: \[f = f(x) \] \[f_i = f(x+ih)\] \[[k_0 \; k_1 \; k_2 \; \ldots] = k_0f + k_1f'h + k_2\frac{1}{2}f''h^2 + \cdots\] Write down the Taylor series of both \(f_1\) and \(f_{-1}\) centered around \(x\): \[\begin{matrix} f_1 &= [1 &1 &1 &1 &\dots]\\ f_{-1} &= [1 &-1 &1 &-1 &\dots] \end{matrix}\] Subtracting the second from the first gives: \[ f_1 - f_{-1} = [0 \; 2 \; 0 \; 2 \; \ldots] = 2f'h + \mathcal{O}(h^3)\] Dividing by \(2h\) gives: \[ \frac{f_1 - f_{-1}}{2h} = f' + \mathcal{O}(h^2)\] For small \(h\) the \(\mathcal{O}(h^2)\) term is insignificant, so we may write: \[ f' \approx \frac{f_1 - f_{-1}}{2h}\] So, with just two evaluations of \(f\) we can approximate \(f'\) with an error of \(\mathcal{O}(h^2)\)!
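A minimal Python sketch of this central difference (the test function \(\sin\) and the step size \(h = 10^{-5}\) are illustrative choices, not from the text):

```python
import math

def central_diff(f, x, h=1e-5):
    # Central difference: (f(x+h) - f(x-h)) / (2h), error O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

# Example: the derivative of sin at x = 0 is cos(0) = 1.
print(central_diff(math.sin, 0.0))
```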

Approximating the first derivative even better

By taking even more evaluations into account we can improve the order of the error. We write down the Taylor series once again: \[ \begin{matrix} f_{-2} &= [1 & -2 & 4 & -8 & 16 & -32 & \ldots]\\ f_{-1} &= [1 & -1 & 1 & -1 & 1 & -1 & \ldots]\\ f_1 &= [1 & 1 & 1 & 1 & 1 & 1 & \ldots]\\ f_2 &= [1 & 2 & 4 & 8 & 16 & 32 & \ldots] \end{matrix} \] We see that if we want to isolate the first derivative we should again combine the Taylor series in a specific way: \[ f_{-2} - 8f_{-1} + 8f_1 - f_{2} = [0 \; 12 \; 0 \; 0 \; 0 \; -48 \; \ldots] = 12f'h + \mathcal{O}(h^5)\] And so: \[ f' \approx \frac{f_{-2} - 8f_{-1} + 8f_1 - f_{2}}{12h}\] Thus, with just four evaluations of \(f\) we can approximate \(f'\) with an error of \(\mathcal{O}(h^4)\).
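The four-point rule translates just as directly into Python (again, the test function \(\exp\) and step size are my own illustrative choices):

```python
import math

def central_diff4(f, x, h=1e-3):
    # Four-point rule: (f_{-2} - 8 f_{-1} + 8 f_1 - f_2) / (12h), error O(h^4).
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12 * h)

# Example: the derivative of exp at x = 0 is exp(0) = 1.
print(central_diff4(math.exp, 0.0))
```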

Approximating any derivative in any way

In full generality we have \(n>m\) samples \( f_{s_1}, f_{s_2}, \ldots, f_{s_n} \), which correspond with \(n\) Taylor-series vectors of the form \( [s_i^0 \; s_i^1 \; \ldots \; s_i^{n-1}] \). These are linearly combined to get a vector of all zeros except for the \((m+1)\)'th element, which should be \(m!\); the combination then equals \(m! \cdot \frac{1}{m!} f^{(m)}h^m = f^{(m)}h^m\) plus higher-order terms. The error is almost always \(\mathcal{O}(h^{n-m})\); it may be better by "coincidence", as can be seen in the previous two examples.
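This procedure can be sketched in Python. The function name `fd_coefficients` and the use of exact rational arithmetic are my own choices, not from the article; the code solves the Vandermonde-style system described above by Gauss-Jordan elimination:

```python
from fractions import Fraction
from math import factorial

def fd_coefficients(samples, m):
    # Solve  sum_j c_j * s_j^k = m! * [k == m]  for k = 0..n-1,
    # giving weights c_j with  f^(m)(x) ≈ sum_j c_j f(x + s_j h) / h^m.
    n = len(samples)
    A = [[Fraction(s) ** k for s in samples] for k in range(n)]
    b = [Fraction(factorial(m)) if k == m else Fraction(0) for k in range(n)]
    # Gauss-Jordan elimination in exact rational arithmetic.
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                factor = A[r][col] / A[col][col]
                A[r] = [a - factor * p for a, p in zip(A[r], A[col])]
                b[r] -= factor * b[col]
    return [b[r] / A[r][r] for r in range(n)]

# Recover the central-difference weights from earlier:
print(fd_coefficients([-1, 1], 1))  # → [Fraction(-1, 2), Fraction(1, 2)]
```

Running it on the four-point stencil from the previous section reproduces the weights \((1, -8, 8, -1)/12\).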

So, for example, to approximate the \(m=4\)'th derivative with the points \( -4,-3,1,2,5 \) we would want to solve the following linear system: \[ x \begin{bmatrix}1&-4&16&-64&256\\1&-3&9&-27&81\\1&1&1&1&1\\1&2&4&8&16\\1&5&25&125&625\end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 0 & 24 \end{bmatrix} = f^{(4)}h^4 \] of which the solution is \(x = \left(\begin{matrix} 16 & -27 & 54 & -48 & 5 \end{matrix}\right)/180 \), and so: \[ f^{(4)} \approx \frac{16f_{-4}-27f_{-3}+54f_{1}-48f_{2}+5f_{5}}{180h^{4}}\] Note that there is also no reason to choose the \(s_i\) as integers; approximating the first derivative with \(s_1 = -1.5\) and \(s_2 = \pi\) is totally valid.
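As a quick sanity check (my own, not from the article), the stencil above reproduces the fourth derivative of \(x^4\), which is \(4! = 24\) everywhere; since all derivatives beyond the fourth vanish for this polynomial, the stencil is exact up to floating-point rounding:

```python
def d4(f, x, h):
    # Worked stencil from above, built on the points -4, -3, 1, 2, 5:
    # f'''' ≈ (16 f_{-4} - 27 f_{-3} + 54 f_1 - 48 f_2 + 5 f_5) / (180 h^4).
    return (16*f(x - 4*h) - 27*f(x - 3*h) + 54*f(x + h)
            - 48*f(x + 2*h) + 5*f(x + 5*h)) / (180 * h**4)

# The fourth derivative of x^4 is 24 at every point.
print(d4(lambda x: x**4, 0.0, 0.1))
```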