Some Solutions to Midterm Practice

1 Integrating factors

Omitted. (You may refer to the lecture notes or the Wikipedia page on integrating factors.)

2 Spectral method for PDE

2.1

Omitted.

2.2

Omitted.

2.3

Omitted.

2.4

Using separation of variables, say u(t, x) = T(t) X(x), we have the following.

$$T'(t) X(x) - T(t) X''(x) = 0 \quad \Rightarrow \quad \frac{T'(t)}{T(t)} = \frac{X''(x)}{X(x)} = \lambda$$

Solving \(X'' = \lambda X\) gives \(X(x) = A \cos (\sqrt{-\lambda} x) + B \sin (\sqrt{-\lambda} x)\) for \(\lambda < 0\). (For \(\lambda > 0\) only the trivial solution satisfies the boundary conditions, and \(\lambda = 0\) gives a constant, which is the \(n = 0\) case below.) From the boundary condition, it follows that \(X'(0) = X'(2\pi) = 0\). This means \(B = 0\) and \(\lambda = -\frac{n^2}{4}\) for \(n = 0, 1, 2, \dots\).

For each possible solution \(X(x)\), we can solve \(T' = \lambda T\) and get \(T(t) = C e^{-n^2t/4}\). Hence, a general solution can be given by the following series.

$$u(t, x) = \sum_{n=0}^{\infty} a_n e^{-n^2t/4} \cos (\frac{n}{2} x)$$

Then, by the initial condition, we solve for \(a_n\).

$$u_0(x) = u(0, x) = \sum_{n=0}^{\infty} a_n \cos (\frac{n}{2} x)$$ $$a_0 = \frac{1}{2\pi} \int_{0}^{2\pi} u_0(x) \, dx , \quad a_n = \frac{1}{\pi} \int_{0}^{2\pi} u_0(x) \cos(\frac{n}{2} x) \, dx \quad \text{ for } n = 1, 2, ...$$
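For a quick numerical sanity check, the following Python sketch (assuming NumPy and SciPy are available, and taking the illustrative initial condition \(u_0(x) = e^{\cos x}\), which is not the one specified in the exercise) computes the coefficients \(a_n\) by quadrature and verifies that the truncated series reproduces \(u_0\) at \(t = 0\).

#+BEGIN_SRC python
import numpy as np
from scipy.integrate import quad

u0 = lambda x: np.exp(np.cos(x))   # illustrative initial condition (hypothetical)

def coeff(n):
    """Coefficient a_n of the half-integer cosine series on [0, 2*pi]."""
    val, _ = quad(lambda x: u0(x) * np.cos(n * x / 2), 0, 2 * np.pi)
    return val / (2 * np.pi) if n == 0 else val / np.pi

def u(t, x, nmax=40):
    """Truncated series u(t, x) = sum_n a_n exp(-n^2 t / 4) cos(n x / 2)."""
    return sum(coeff(n) * np.exp(-n**2 * t / 4) * np.cos(n * x / 2)
               for n in range(nmax + 1))

x = np.linspace(0, 2 * np.pi, 9)
print(np.max(np.abs(u(0.0, x) - u0(x))))   # small truncation error at t = 0
#+END_SRC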

3 Computing Fourier series

3.1

Please refer to your Homework 1.

3.2

\begin{align} c_k & = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-ikx} \, dx \\ & = \frac{1}{2\pi} \int_{-\pi}^{0} \pi e^{-ikx} \,dx + \frac{1}{2\pi} \int_{0}^{\pi} x e^{-ikx} \, dx \\ & = \begin{cases} \frac{e^{-i\pi k} + i\pi k - 1}{2\pi k^2} & \text{ if } k \not= 0 \\ \frac{3\pi}{4} & \text{ if } k = 0 \end{cases} \end{align}
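If you want to double-check the closed form, here is a short Python sketch (assuming NumPy and SciPy) that compares it against the defining integral computed by quadrature.

#+BEGIN_SRC python
import numpy as np
from scipy.integrate import quad

def f(x):
    # f(x) = pi on (-pi, 0) and f(x) = x on (0, pi), as in the exercise
    return np.pi if x < 0 else x

def c_numeric(k):
    """c_k = (1/2pi) * integral of f(x) exp(-ikx) over [-pi, pi], by quadrature."""
    re, _ = quad(lambda x: f(x) * np.cos(k * x), -np.pi, np.pi, points=[0.0])
    im, _ = quad(lambda x: -f(x) * np.sin(k * x), -np.pi, np.pi, points=[0.0])
    return (re + 1j * im) / (2 * np.pi)

def c_formula(k):
    if k == 0:
        return 3 * np.pi / 4
    return (np.exp(-1j * np.pi * k) + 1j * np.pi * k - 1) / (2 * np.pi * k**2)

for k in range(-3, 4):
    assert abs(c_numeric(k) - c_formula(k)) < 1e-7
#+END_SRC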

3.3

\begin{align} c_k & = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-ikx} \, dx \\ & = \frac{1}{2\pi} \int_{-\pi}^{0} (-x-\pi) e^{-ikx} \,dx + \frac{1}{2\pi} \int_{0}^{\pi} (-x+\pi) e^{-ikx} \, dx \\ & = \frac{1}{2\pi} \int_{-\pi}^{\pi} -x e^{-ikx} \, dx + \frac{1}{2\pi} \int_{-\pi}^{0} -\pi e^{-ikx} \, dx + \frac{1}{2\pi} \int_{0}^{\pi} \pi e^{-ikx} \, dx \\ & = \begin{cases} -i/k & \text{ if } k \not= 0 \\ 0 & \text{ if } k = 0 \end{cases} \end{align}

4 Computing Fourier transform

4.1

Assume \(a > 0\). To solve this one, you should solve the next one first.

From the next exercise, we have the following by inverse Fourier transform.

$$e^{-a|x|} = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{2a}{a^2+k^2} e^{ikx} \, dk$$

Then, multiplying both sides by \(\frac{\pi}{a}\) and replacing \(x\) with \(-x\), we have the following.

$$\frac{\pi}{a} e^{-a|x|} = \frac{\pi}{a} e^{-a|-x|} = \int_{-\infty}^{\infty} \frac{e^{-ikx}}{a^2+k^2} \, dk$$

Hence, exchanging the roles of \(x\) and \(k\), the Fourier transform of \(f(x) = (a^2+x^2)^{-1}\) is given by the following.

$$\hat{f}(k) = \int_{-\infty}^{\infty} \frac{e^{-ikx}}{a^2+x^2} \, dx = \frac{\pi}{a} e^{-a|k|}$$

You can also have a look at this answer from StackExchange.

4.2

You should have solved this one in one of the homework assignments. Nevertheless, we include a solution here to be self-contained.

\begin{align} \hat{f}(k) & = \int_{\mathbb{R}} e^{-a|x|} e^{-ikx} \, dx \\ & = \int_{-\infty}^{0} e^{-a(-x)} e^{-ikx} \, dx + \int_{0}^{\infty} e^{-a(x)} e^{-ikx} \, dx \\ & = \int_{-\infty}^{0} e^{-(ik-a)x} \, dx + \int_{0}^{\infty} e^{-(ik+a)x} \, dx \\ & = \left[ -\frac{e^{-(ik-a)x}}{ik-a} \right]_{-\infty}^{0} + \left[ -\frac{e^{-(ik+a)x}}{ik+a} \right]_{0}^{\infty} \\ & = -\frac{1}{ik-a} + \frac{1}{ik+a} \\ & = \frac{2a}{a^2+k^2} \end{align}
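As a sanity check, the following Python sketch (assuming NumPy and SciPy, and picking an arbitrary value \(a = 1.5\)) verifies this transform pair by quadrature over a large finite window; the transform of \((a^2+x^2)^{-1}\) in the previous exercise then follows from the duality argument given there.

#+BEGIN_SRC python
import numpy as np
from scipy.integrate import quad

a = 1.5   # any fixed a > 0

def ft_numeric(k, L=60.0):
    """Fourier transform of exp(-a|x|) at frequency k, by quadrature on [-L, L]."""
    f = lambda x: np.exp(-a * abs(x))
    re, _ = quad(lambda x: f(x) * np.cos(k * x), -L, L, points=[0.0], limit=400)
    im, _ = quad(lambda x: -f(x) * np.sin(k * x), -L, L, points=[0.0], limit=400)
    return re + 1j * im

for k in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    exact = 2 * a / (a**2 + k**2)          # the closed form derived above
    assert abs(ft_numeric(k) - exact) < 1e-6
#+END_SRC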

4.3

\begin{align} \hat{f}(k) & = \int_{\mathbb{R}} f(x) e^{-ikx} \, dx \\ & = \int_{-a}^{0} (-x) e^{-ikx} \, dx + \int_{0}^{a} x e^{-ikx} \, dx \\ & = \int_{0}^{a} x e^{ikx} \, dx + \int_{0}^{a} x e^{-ikx} \, dx \\ & = 2 \int_{0}^{a} x \cos kx \, dx \\ & = 2a \frac{\sin ka}{k} + 2 \frac{\cos ka}{k^2} - 2 \frac{1}{k^2} \end{align}

5 Discrete Fourier transform and numerical PDE

5.1

We divide the interval \([0, 2\pi]\) into \(N = 8\) subintervals of the same size \(h = \frac{\pi}{4}\), which results in 9 node points, say \(x_0, ..., x_8\). (But under the periodic boundary condition, we identify \(x_8\) with \(x_0\), so let us omit \(x_8\).) Further, we take \(u_0 = u(x_0)\), …, \(u_7 = u(x_7)\) and use the spectral method to approximate these function values.

Using the central difference scheme, we have the following approximation.

\[ \frac{u_{j-1} -2u_j + u_{j+1}}{h^2} \approx x_j e^{x_j} := f_j \quad \text{ for } j = 0, ..., 7 \]

This results in the following linear system, writing \(\vec{u} = (u_0, ..., u_7)^T\) and \(\vec{f} = (f_0, ..., f_7)^T\).

$$D^2 \vec{u} = \vec{f}, \quad \text{ where } D^2 = \frac{1}{h^2} \begin{pmatrix} -2 & 1 & & & 1 \\ 1 & -2 & 1 \\ & \ddots & \ddots & \ddots \\ & & 1 & -2 & 1 \\ 1 & & & 1 & -2 \end{pmatrix}$$

Note that \(\vec{\omega}_k\) for \(k = 0, 1, ..., 7\) form an eigenbasis of \(D^2\) in \(\mathbb{C}^8\) with eigenvalues \(-\lambda_k^2\) respectively.

$$\vec{\omega}_k = (e^{ikx_0}, e^{ikx_1}, ..., e^{ikx_7}), \quad -\lambda_k^2 := \frac{-4 \sin^2 \frac{kh}{2}}{h^2}$$

In other words, the discrete Fourier transform diagonalizes \(D^2\). So, we could express \(\vec{u}\) and \(\vec{f}\) using \(\{\vec{\omega}_k\}_{k=0}^{7}\).

$$\vec{u} = \sum_{k=0}^{7} \hat{u}_k \vec{\omega}_k, \quad \vec{f} = \sum_{k=0}^{7} \hat{f}_k \vec{\omega}_k$$

In particular, the coefficients \(\hat{f}_k\) can be determined by the discrete Fourier transform.

$$\hat{f}_k = \frac{1}{8} \sum_{j=0}^{7} f_j e^{-ikx_j}, \quad \text{ for } k = 0, 1, ..., 7$$

Then, putting everything back into the matrix equation \(D^2 \vec{u} = \vec{f}\), we have a (much) simpler system of linear equations.

$$-\lambda_k^2 \hat{u}_k = \hat{f}_k, \quad \text{ for } k = 0, 1, ..., 7$$

Note that \(\hat{u}_k\) are uniquely determined for \(k = 1, 2, ..., 7\), but \(\hat{u}_0\) can be arbitrary since \(-\lambda_0^2 = 0\). (For the \(k = 0\) equation to be consistent, we need the compatibility condition \(\hat{f}_0 = 0\).)

Let \(\hat{u}_0 = c\) be arbitrary; then we have the following solution.

\[ \vec{u} = \sum_{k=0}^{7} \hat{u}_k \vec{\omega}_k = c\begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix} + \sum_{k=1}^{7} \hat{u}_k \vec{\omega}_k, \qquad \text{where } \hat{u}_k = \hat{f}_k / (-\lambda_k^2) \text{ for } k = 1, ..., 7 \]
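A minimal NumPy sketch of this procedure is given below (assuming NumPy; note that numpy.fft.fft is unnormalized, so it is divided by \(N = 8\) to match the convention above, and the free constant \(c\) is simply set to \(0\)). Since \(\hat{f}_0 \neq 0\) for \(f(x) = x e^x\) on this grid, the sketch solves the compatible system with the mean of \(\vec{f}\) removed.

#+BEGIN_SRC python
import numpy as np

N = 8
h = np.pi / 4
x = h * np.arange(N)                       # nodes x_0, ..., x_7 on [0, 2*pi)
f = x * np.exp(x)                          # f_j = x_j * exp(x_j)

f_hat = np.fft.fft(f) / N                  # DFT with the 1/N convention used above
k = np.arange(N)
lam2 = (4.0 / h**2) * np.sin(k * h / 2)**2 # lambda_k^2, so -lambda_k^2 are the eigenvalues of D^2

u_hat = np.zeros(N, dtype=complex)
u_hat[1:] = f_hat[1:] / (-lam2[1:])        # k = 1, ..., 7
u_hat[0] = 0.0                             # the free constant c, chosen as 0 here

u = (N * np.fft.ifft(u_hat)).real          # u_j = sum_k u_hat_k exp(i k x_j)

# Check: D^2 u reproduces f up to its mean (the k = 0 compatibility condition).
D2u = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / h**2
print(np.max(np.abs(D2u - (f - f.mean()))))   # close to machine precision
#+END_SRC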

5.2

First, we notice that the transformation of the points is essentially the same in each coordinate of \(\mathbb{R}^N\). So, without loss of generality, we assume \(N = 1\).

Now, we express the \(n\) points, or \(n\) real values, as a vector \(\vec{p}^{(t)}\) in \(\mathbb{R}^n\).

$$\vec{p}^{(t)} = (p^{(t)}_1, p^{(t)}_2, ..., p^{(t)}_n)^T$$

Then the operation of taking mid-points can be formulated using a matrix operator.

$$\vec{p}^{(t)} = L \vec{p}^{(t-1)}, \quad \text{ where } L := \frac{1}{2} \begin{pmatrix} 1 & 1 \\ & 1 & 1 \\ & & \ddots & \ddots \\ & & & 1 & 1 \\ 1 & & & & 1 \end{pmatrix}$$

Note that \(L\) is a circulant matrix, so it can be diagonalized by the discrete Fourier transform. (If you don't know the idea, you should have a look at this tutorial!)

More concretely, one could check that \(\{ \vec{\omega}_k \}_{k=0}^{n-1}\) forms a basis of \(\mathbb{C}^n\) and \(\vec{\omega}_k\) are eigenvectors of \(L\) with eigenvalue \(\lambda_k\).

$$\vec{\omega}_k = (e^{i(0)k\frac{2\pi}{n}}, e^{i(1)k\frac{2\pi}{n}}, ..., e^{i(n-1)k\frac{2\pi}{n}})^T, \quad \lambda_k = \frac{1 + e^{ik \frac{2\pi}{n}}}{2}$$

$$\left( L\vec{\omega}_k \right)_j = \frac{1}{2} ( e^{i(j-1)k \frac{2\pi}{n}} + e^{ijk \frac{2\pi}{n}} ) = \left( \frac{1 + e^{ik \frac{2\pi}{n}}}{2} \right) e^{i(j-1)k \frac{2\pi}{n}}$$

The vector \(\vec{p}^{(0)}\) can now be expressed using the basis \(\{ \vec{\omega}_{k} \}_{k=0}^{n-1}\).

$$\vec{p}^{(0)} = \sum_{k=0}^{n-1} a_k \vec{\omega}_k$$

In particular, we have \(a_0 = 0\) by the given assumption that the center of mass of the points is at the origin.

$$0 = \sum_{j=1}^{n} p_j^{(0)} = \sum_{j=1}^{n} \sum_{k=0}^{n-1} a_k e^{i(j-1)k \frac{2\pi}{n}} = \sum_{k=0}^{n-1} a_k \sum_{j=1}^{n} e^{i(j-1)k \frac{2\pi}{n}} = n a_0$$

Further, we have \(|\lambda_k| = \left| \frac{1 + e^{ik \frac{2\pi}{n}}}{2} \right| = \left| \cos \frac{k\pi}{n} \right| < 1\) for \(k = 1, 2, ..., n-1\). Now, the convergence is clear.

$$\vec{p}^{(t)} = L^t \sum_{k=1}^{n-1} a_k \vec{\omega}_k = \sum_{k=1}^{n-1} a_k \lambda_k^t \vec{\omega}_k \rightarrow \vec{0} \text{ as } t \rightarrow \infty$$

This shows that each coordinate of the points will converge to zero.
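The following Python sketch (assuming NumPy, with \(n = 7\) points in one dimension and a random centered starting configuration) simulates the midpoint iteration and illustrates both the eigenvalue bound and the convergence to the origin.

#+BEGIN_SRC python
import numpy as np

n = 7
rng = np.random.default_rng(0)
p = rng.standard_normal(n)
p -= p.mean()                              # center of mass at the origin, so a_0 = 0

# Midpoint map: (L p)_j = (p_j + p_{j+1}) / 2 with periodic indexing.
L = 0.5 * (np.eye(n) + np.roll(np.eye(n), 1, axis=1))

lam = (1 + np.exp(2j * np.pi * np.arange(n) / n)) / 2
print(np.sort(np.abs(lam)))                # lambda_0 = 1, all others strictly inside the unit circle

for t in range(200):
    p = L @ p
print(np.max(np.abs(p)))                   # the points have collapsed towards 0
#+END_SRC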

5.3

Note that the autocorrelation is essentially a discrete convolution of \(\bar{v}\) and \(v\).

$$r_j = \sum_{m=1}^{N} \bar{v}_{j-m} v_{m}$$

Then, we could apply the discrete Fourier transform on \(\vec{r}\).

\begin{align} \hat{r}_k & = \frac{1}{N} \sum_{j=0}^{N-1} r_j e^{-ikj \frac{2\pi}{N}} \\ & = \frac{1}{N} \sum_{j=0}^{N-1} \sum_{m=1}^{N} \bar{v}_{j-m} v_{m} e^{-ikj \frac{2\pi}{N}} \\ & = \frac{1}{N} \sum_{j=0}^{N-1} \sum_{m=1}^{N} \bar{v}_{j-m} e^{-ik(j-m) \frac{2\pi}{N}} v_{m} e^{-ikm \frac{2\pi}{N}} \end{align}

Under periodic indexing and the periodicity of the exponential function, we simply have the following.

$$\hat{r}_k = N \cdot \left[ \frac{1}{N} \sum_{j=1}^{N} \bar{v}_{j} e^{-ikj \frac{2\pi}{N}} \right] \left[ \frac{1}{N} \sum_{m=1}^{N} v_{m} e^{-ikm \frac{2\pi}{N}} \right] = N \cdot \left[ \mathrm{DFT}(\bar{v}) \right]_k \cdot \left[ \mathrm{DFT}(v) \right]_k$$

In other words, the discrete Fourier transform of the autocorrelation is just the entrywise product of the discrete Fourier transform of the signal and that of the complex conjugate of the signal. This also gives an efficient \(\mathcal{O} (N \log N)\) algorithm for computing the autocorrelation (using the fast Fourier transform), which is more efficient than the naive \(\mathcal{O} (N^2)\) method that computes the summation directly from the definition.
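Here is a short NumPy sketch of that algorithm (assuming NumPy; numpy.fft.fft is unnormalized, so the relation reads fft(r) = fft(conj(v)) * fft(v) in that convention), compared against the naive \(\mathcal{O}(N^2)\) summation.

#+BEGIN_SRC python
import numpy as np

rng = np.random.default_rng(1)
N = 64
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Naive O(N^2) autocorrelation with periodic indexing: r_j = sum_m conj(v)_{j-m} v_m.
r_naive = np.array([sum(np.conj(v[(j - m) % N]) * v[m] for m in range(N))
                    for j in range(N)])

# O(N log N) version via the convolution theorem.
r_fft = np.fft.ifft(np.fft.fft(np.conj(v)) * np.fft.fft(v))

print(np.max(np.abs(r_naive - r_fft)))     # the two agree up to rounding error
#+END_SRC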

6 Iterative methods

6.1

Obviously, we have the following.

\[ B = \begin{pmatrix} 0 & -\frac{b}{a} \\ -\frac{c}{d} & 0 \end{pmatrix} \]

So, the eigenvalues of \(B\) are given by the solutions of \(\lambda^2 - \frac{bc}{ad} = 0\).

Now, if \(A\) is symmetric, then \(b = c\), so \(bc = b^2 \geq 0\). Further, if \(A\) is positive definite, its eigenvalues are positive, so its determinant is positive, that is, \(ad - bc > 0\), and its diagonal entries \(a, d\) are positive as well. In other words, we have \(|\lambda|^2 = \left| \frac{bc}{ad} \right| = \frac{bc}{ad} < 1\).

To solve \(A\vec{x} = \vec{b}\), the Jacobi method iterates \(\vec{x} \leftarrow B \vec{x} + M^{-1} \vec{b}\), where \(M = \operatorname{diag}(a, d)\). We have just shown \(\rho(B) < 1\). Hence, the iteration converges.
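A small Python sketch of this (assuming NumPy, with hypothetical entries \(a = 4\), \(b = c = 1\), \(d = 3\), making \(A\) symmetric positive definite) checks the spectral radius and runs the iteration.

#+BEGIN_SRC python
import numpy as np

a, b, c, d = 4.0, 1.0, 1.0, 3.0            # hypothetical SPD example
A = np.array([[a, b], [c, d]])
rhs = np.array([1.0, 2.0])

M = np.diag(np.diag(A))                    # Jacobi splitting: M = diag(a, d)
B = np.linalg.solve(M, M - A)              # iteration matrix B = M^{-1}(M - A)
print(max(abs(np.linalg.eigvals(B))))      # spectral radius sqrt(bc/(ad)) < 1

x = np.zeros(2)
for _ in range(100):
    x = B @ x + np.linalg.solve(M, rhs)    # x <- B x + M^{-1} b
print(np.max(np.abs(A @ x - rhs)))         # residual is essentially zero
#+END_SRC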

6.2

Omitted.

6.3

Let \(A\vec{x} = \vec{b}\). Suppose we split \(A\) into \(M + (A-M)\), that is, we have \(M \vec{x} = (M-A) \vec{x} + \vec{b}\). So, an iterative scheme will iterate \(\vec{x} \leftarrow M^{-1} (M-A) \vec{x} + M^{-1} \vec{b}\).

Jacobi method

\[ M = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad \text{ and } \quad B = \begin{pmatrix} 0 & -2 & 2 \\ -1 & 0 & -1 \\ -2 & -2 & 0 \end{pmatrix} \]

From the characteristic polynomial of \(B\), which is \(-\lambda^3\), we see that the eigenvalues of \(B\) are all zero. Thus, the spectral radius of \(B\) is actually zero, which implies (fast) convergence.

Gauss-Seidel method

\[ M = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 2 & 2 & 1 \end{pmatrix} \quad \text{ and } \quad B = \begin{pmatrix} 0 & -2 & 2 \\ 0 & 2 & -3 \\ 0 & 0 & 2 \end{pmatrix} \]

From the characteristic polynomial of \(B\), which is \(-\lambda^3 + 4\lambda^2 - 4\lambda\), we see that \(2\) is an eigenvalue. Thus, the spectral radius is larger than \(1\), which means the method diverges.
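The two spectral radii can be checked numerically with the sketch below (assuming NumPy; the matrix \(A\) is the one these splittings correspond to, reconstructed from \(M\) and \(B\) above).

#+BEGIN_SRC python
import numpy as np

# System matrix consistent with the splittings above (reconstructed from M and B).
A = np.array([[1.0, 2.0, -2.0],
              [1.0, 1.0,  1.0],
              [2.0, 2.0,  1.0]])

def iteration_matrix(M):
    return np.linalg.solve(M, M - A)       # B = M^{-1}(M - A)

M_jacobi = np.diag(np.diag(A))             # diagonal part of A (here the identity)
M_gs = np.tril(A)                          # lower-triangular part of A

for name, M in [("Jacobi", M_jacobi), ("Gauss-Seidel", M_gs)]:
    rho = max(abs(np.linalg.eigvals(iteration_matrix(M))))
    print(name, rho)                       # ~0 for Jacobi (up to rounding), 2 for Gauss-Seidel
#+END_SRC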

7 Extra questions

7.1

Omitted.

7.2

Note that the convolution for periodic functions is given by the following.

\[ (f \ast g) (x) = \int_{[0, 2\pi]} f(x-y) g(y) \, dy\]

Then, by definition, we have the following.

\begin{align} \widehat{f \ast g} (n) & = \frac{1}{2\pi} \int_{[0, 2\pi]} (f \ast g) (x) e^{-inx} \,dx \\ & = \frac{1}{2\pi} \int_{[0, 2\pi]} \int_{[0, 2\pi]} f(x-y) g(y) e^{-inx} \, dy \, dx \\ & = \frac{1}{2\pi} \int_{[0, 2\pi]} g(y) e^{-iny} \int_{[0, 2\pi]} f(x-y) e^{-in(x-y)} \, dx \, dy \\ & = \frac{1}{2\pi} \int_{[0, 2\pi]} g(y) e^{-iny} \, dy \int_{[0, 2\pi]} f(x) e^{-inx} \, dx \\ & = \frac{1}{2\pi} ( 2\pi \hat{g}(n) )( 2\pi \hat{f}(n) ) \\ & = 2\pi \, \hat{f}(n) \, \hat{g}(n) \end{align}

7.3

The two equalities follow immediately from the definitions of the Fourier series and the Fourier transform. More importantly, you should understand the meaning behind the zeroth Fourier coefficient and \(\hat{f}(0)\).

7.4

Suppose we discretize the domain into \(N\) intervals of size \(h\), so we get \(N+1\) node points, say \(x_0\), \(x_1\), …, \(x_N\). Then a homogeneous Dirichlet boundary condition means \(u(x_0) = u(x_N) = 0\).

Using the central difference scheme, we have \[ u''(x_j) \approx \frac{u(x_{j-1}) - 2u(x_j) + u(x_{j+1})}{h^2} \quad \text{ for } j = 1, ..., N-1 . \]

Hence, we have the following matrix equation.

\[ \begin{pmatrix} u''(x_1) \\ u''(x_2) \\ \vdots \\ u''(x_{N-1}) \end{pmatrix} \approx D^2 \begin{pmatrix} u(x_1) \\ u(x_2) \\ \vdots \\ u(x_{N-1}) \end{pmatrix}, \quad \text{ where } D^2 = \frac{1}{h^2} \begin{pmatrix} -2 & 1 \\ 1 & -2 & 1 \\ & \ddots & \ddots & \ddots \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{pmatrix} \in \mathbf{M}_{(N-1) \times (N-1)} (\mathbb{R}) \]

Unfortunately, \(D^2\) is not a circulant matrix. So, we cannot diagonalize it with the discrete Fourier transform.
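For concreteness, here is a short NumPy sketch (assuming NumPy and an illustrative choice \(N = 8\), \(h = 1/8\), since the exercise does not fix the domain here) that assembles this Dirichlet matrix; the wrap-around corner entries of the periodic matrix are absent, so it is tridiagonal but not circulant.

#+BEGIN_SRC python
import numpy as np

N = 8
h = 1.0 / N                                # illustrative step size
main = -2.0 * np.ones(N - 1)
off = np.ones(N - 2)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

# The first and last rows have no wrap-around 1's, so D2 is not circulant.
print(D2[0])
print(D2[-1])
#+END_SRC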

7.5

Suppose we discretize the domain into \(N\) intervals of size \(h\), so we get \(N+1\) node points, say \(x_0\), \(x_1\), …, \(x_N\). Then a homogeneous Neumann boundary condition means \(u'(x_0) = u'(x_N) = 0\).

By a central difference approximation, we have $$\frac{u(x_1) - u(x_{-1})}{2h} = 0 \quad \text{ and } \quad \frac{u(x_{N+1}) - u(x_{N-1})}{2h} = 0,$$ which implies \(u(x_{-1}) = u(x_1)\) and \(u(x_{N+1}) = u(x_{N-1})\). Using the central difference scheme again, we now have \[ u''(x_j) \approx \frac{u(x_{j-1}) - 2u(x_j) + u(x_{j+1})}{h^2} \quad \text{ for } j = 0, ..., N . \]

Hence, we have the following matrix equation.

\[ \begin{pmatrix} u''(x_0) \\ u''(x_1) \\ \vdots \\ u''(x_N) \end{pmatrix} \approx D^2 \begin{pmatrix} u(x_0) \\ u(x_1) \\ \vdots \\ u(x_N) \end{pmatrix}, \quad \text{ where } D^2 = \frac{1}{h^2} \begin{pmatrix} -2 & 2 \\ 1 & -2 & 1 \\ & \ddots & \ddots & \ddots \\ & & 1 & -2 & 1 \\ & & & 2 & -2 \end{pmatrix} \in \mathbf{M}_{(N+1) \times (N+1)} (\mathbb{R}) \]

Again, \(D^2\) is not a circulant matrix. So, we cannot diagonalize it with the discrete Fourier transform.

Footnotes:

1. This file is written in Org mode, exported to HTML and styled with org-spec.

Created: 2018-03-12 Mon 12:01

Emacs 25.1.1 (Org mode 8.2.10)
