On the periodic-plus-smooth decomposition of an image, part 7: improved implementation of Moisan's algorithm

In the previous instalment of this series, we implemented Moisan's (2011) efficient algorithm to compute the periodic-plus-smooth decomposition of an image. This algorithm relies heavily on the discrete Fourier transform and already improves significantly over our previous conjugate gradient-based implementation. In the present post, we will show that the performance of this implementation can be further improved with very little effort. This post is the seventh and last in a series of seven parts:

  1. Introduction
  2. Defining the decomposition
  3. The energy as a quadratic form
  4. Implementing the linear operators
  5. Minimizing the energy, the clumsy way
  6. Minimizing the energy, the clever way
  7. Improved implementation of Moisan's algorithm

The code discussed in this series is available as a Python module on GitHub.

Computing the DFT of the intensity gaps

In the previous instalment of this series, we showed that the DFT $\hat{s}$ of the smooth component $s$ of an $m\times n$ image $u$ can be deduced from the DFT $\hat{v}$ of the image $v$ which, according to Moisan (2011), “captures the intensity gaps of $u$ across its borders”: $v=v^\mathrm h+v^\mathrm v$, with

$$v^\mathrm h_{ij}= \left\{ \begin{array}{ll} u_{i, n-1}-u_{i, 0} & \text{if }j=0,\\ u_{i, 0}-u_{i, n-1} & \text{if }j=n-1,\\ 0 & \text{otherwise}, \end{array} \right.$$

and

$$v^\mathrm v_{ij}=\left\{\begin{array}{ll} u_{m-1, j}-u_{0, j} & \text{if }i=0,\\ u_{0, j}-u_{m-1, j} & \text{if }i=m-1,\\ 0 & \text{otherwise}. \end{array}\right.$$
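
For reference, $v$ can also be assembled explicitly as a full $m\times n$ array. The short, standalone sketch below is purely illustrative (the function name is ours, not part of the module); taking np.fft.fft2 of the resulting array is the “two-dimensional DFT” route discussed below.

import numpy as np

def boundary_image(u):
    """Assemble v = v_h + v_v explicitly (illustrative sketch)."""
    u = np.asarray(u, dtype=np.float64)
    v = np.zeros_like(u)
    # Horizontal intensity gaps across the left/right borders
    v[:, 0] += u[:, -1]-u[:, 0]
    v[:, -1] += u[:, 0]-u[:, -1]
    # Vertical intensity gaps across the top/bottom borders
    v[0, :] += u[-1, :]-u[0, :]
    v[-1, :] += u[0, :]-u[-1, :]
    return v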

In our first implementation of Moisan's algorithm, we computed $\hat{v}$ as a two-dimensional DFT. While correct and simple to implement, this is unnecessarily expensive. Indeed, we readily find that

$$\hat{v}_{\alpha\beta}^\mathrm h=\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}v^\mathrm h_{ij}\exp\Bigl[-2\pi\mathrm i\Bigl(\frac{\alpha i}m+\frac{\beta j}n\Bigr)\Bigr]$$

$$=\sum_{i=0}^{m-1}\bigl(u_{i, n-1}-u_{i, 0}\bigr)\Bigl\{\exp\Bigl[-2\pi\mathrm i\Bigl(\frac{\alpha i}m\Bigr)\Bigr]-\exp\Bigl[-2\pi\mathrm i\Bigl(\frac{\alpha i}m+\frac{\beta(n-1)}n\Bigr)\Bigr]\Bigr\}$$

$$=\Bigl(1-\exp\frac{2\pi\mathrm i\beta}n\Bigr)\sum_{i=0}^{m-1}\bigl(u_{i, n-1}-u_{i, 0}\bigr)\exp\Bigl[-2\pi\mathrm i\Bigl(\frac{\alpha i}m\Bigr)\Bigr],$$

and the remaining sum turns out to be the one-dimensional DFT of the sequence $\bigl(u_{i, n-1}-u_{i, 0}\bigr)_{i=0,\ldots, m-1}$. A similar factorization holds for $\hat{v}^\mathrm v_{\alpha\beta}$, with the roles of rows and columns (and of $m$ and $n$) exchanged. In other words, $\hat{v}$ can be computed from two one-dimensional DFTs of sizes $m$ and $n$, rather than one two-dimensional DFT of size $m\times n$.
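
As a quick sanity check, the following standalone snippet (an arbitrary random image; all names are illustrative) verifies numerically that the factored expression coincides with the full two-dimensional DFT of $v^\mathrm h$.

import numpy as np

rng = np.random.default_rng(0)  # arbitrary test image
m, n = 36, 48
u = rng.random((m, n))

# Two-dimensional route: assemble v_h explicitly, then apply fft2
v_h = np.zeros((m, n))
v_h[:, 0] = u[:, -1]-u[:, 0]
v_h[:, -1] = u[:, 0]-u[:, -1]
v_h_dft_2d = np.fft.fft2(v_h)

# Factored route: prefactor times the 1D DFT of the horizontal gaps
prefactor = 1.0-np.exp(2j*np.pi*np.fft.fftfreq(n, 1.))
v_h_dft_1d = prefactor[None, :]*np.fft.fft(u[:, -1]-u[:, 0])[:, None]

print(np.allclose(v_h_dft_2d, v_h_dft_1d))  # True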

This leads to the following implementation of the function per (compare with the implementation of _per in the previous post).

def per(u, inverse_dft=True):
    """Compute the periodic component of the 2D image u.

    This function returns the periodic-plus-smooth decomposition of
    the 2D array-like u.

    If inverse_dft is True, then the pair (p, s) is returned
    (p: periodic component; s: smooth component).

    If inverse_dft is False, then the pair

        (numpy.fft.fft2(p), numpy.fft.fft2(s))

    is returned.

    This function implements Algorithm 1.
    """
    u = np.asarray(u, dtype=np.float64)

    m, n = u.shape

    # Precompute the factors 1-exp(2iπα/m), in FFT frequency order
    arg = 2.*np.pi*np.fft.fftfreq(m, 1.)
    cos_m, sin_m = np.cos(arg), np.sin(arg)
    one_minus_exp_m = 1.0-cos_m-1j*sin_m

    # Precompute the factors 1-exp(2iπβ/n), in FFT frequency order
    arg = 2.*np.pi*np.fft.fftfreq(n, 1.)
    cos_n, sin_n = np.cos(arg), np.sin(arg)
    one_minus_exp_n = 1.0-cos_n-1j*sin_n

    # Horizontal intensity gaps: 1D DFT of the (m,) vector of gaps across
    # the left/right borders, times the factor 1-exp(2iπβ/n)
    w1 = u[:, -1]-u[:, 0]
    w1_dft = np.fft.fft(w1)
    v_dft = w1_dft[:, None]*one_minus_exp_n[None, :]

    # Vertical intensity gaps: same treatment, with rows and columns swapped
    w2 = u[-1, :]-u[0, :]
    w2_dft = np.fft.fft(w2)
    v_dft += one_minus_exp_m[:, None]*w2_dft[None, :]

    # Denominator 2cos(2πα/m)+2cos(2πβ/n)-4; its (0, 0) coefficient is set
    # to 1 to avoid a division by zero (s_dft[0, 0] is zeroed out below)
    denom = 2.0*(cos_m[:, None]+cos_n[None, :]-2.0)
    denom[0, 0] = 1.0
    s_dft = v_dft/denom
    s_dft[0, 0] = 0.0

    if inverse_dft:
        s = np.fft.ifft2(s_dft)
        return u-s, s
    else:
        u_dft = np.fft.fft2(u)
        return u_dft-s_dft, s_dft

We can test the new implementation.

import numpy as np

from skimage.io import imread, imsave

u = imread(DATA_DIR+'hut-648x364.png')
u = u.astype(np.float64)

p_exp, s_exp = _per(u, inverse_dft=True)
p_act, s_act = per(u, inverse_dft=True)
print('Error in L2-norm:')
print('  - on p: {}'.format(np.linalg.norm(p_act-p_exp)))
print('  - on s: {}'.format(np.linalg.norm(s_act-s_exp)))
print()
print('Maximum absolute error')
print('  - on p: {}'.format(np.max(np.abs(p_act-p_exp))))
print('  - on s: {}'.format(np.max(np.abs(s_act-s_exp))))
print()
print('Maximum relative error')
print('  - on p: {}'.format(np.max(np.abs(2*(p_act-p_exp)/(p_act+p_exp)))))
print('  - on s: {}'.format(np.max(np.abs(2*(s_act-s_exp)/(s_act+s_exp)))))
Error in L2-norm:
  - on p: 5.806665557170608e-10
  - on s: 5.806574416763999e-10

Maximum absolute error
  - on p: 4.272036107027685e-12
  - on s: 4.272491077379704e-12

Maximum relative error
  - on p: 8.610429016084161e-11
  - on s: 7.129469799120442e-08

This validates the new implementation. Let us now check how much we gained, speed-wise.

import timeit
t1 = timeit.timeit('p, s = _per(u, inverse_dft=True)',
                   number=100, globals=globals())
t2 = timeit.timeit('p, s = per(u, inverse_dft=True)',
                   number=100, globals=globals())
print('Timings:')
print('  - _per  : {}'.format(t1))
print('  - per   : {}'.format(t2))
print('  - ratio : {}'.format(t1/t2))
Timings:
  - _per  : 4.946549984680452
  - per   : 3.9524611252226656
  - ratio : 1.2515113565859002

So the new implementation is about 1.3× faster than the old one! Do you think we can do better? Wait and see!

Moisan's algorithm for real images

In our previous implementations, we overlooked an important fact: $u$ is (often) a real image, so that its DFT can be computed with the numpy.fft.rfft2 function rather than numpy.fft.fft2, thereby taking advantage of the Hermitian symmetry of the spectrum of real signals. This is what the function rper below does.
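
Before looking at rper, here is a quick, purely illustrative reminder of the shape conventions of NumPy's real-to-complex transforms: only the non-negative frequencies are stored along the last axis, which is where the savings come from.

import numpy as np

u = np.zeros((364, 648))           # a real-valued image (shape is illustrative)
print(np.fft.fft2(u).shape)        # (364, 648): full complex spectrum
print(np.fft.rfft2(u).shape)       # (364, 325): only n//2+1 columns along the last axis
print(np.fft.rfftfreq(648).shape)  # (325,): matching non-negative frequencies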

def rper(u, inverse_dft=True):
    """Compute the periodic component of the 2D, real image u.

    This function returns the periodic-plus-smooth decomposition of
    the 2D array-like u. The image must be real.

    If inverse_dft is True, then the pair (p, s) is returned
    (p: periodic component; s: smooth component).

    If inverse_dft is False, then the pair

        (numpy.fft.rfft2(p), numpy.fft.rfft2(s))

    is returned.

    This function implements Algorithm 1.
    """
    u = np.asarray(u, dtype=np.float64)
    m, n = u.shape

    arg = 2.*np.pi*np.fft.fftfreq(m, 1.)
    cos_m, sin_m = np.cos(arg), np.sin(arg)
    one_minus_exp_m = 1.0-cos_m-1j*sin_m

    arg = 2.*np.pi*np.fft.rfftfreq(n, 1.)
    cos_n, sin_n = np.cos(arg), np.sin(arg)
    one_minus_exp_n = 1.0-cos_n-1j*sin_n

    w1 = u[:, -1]-u[:, 0]
    # Use the complex fft here: irfft2 needs all modes along the first axis
    w1_dft = np.fft.fft(w1)
    v1_dft = w1_dft[:, None]*one_minus_exp_n[None, :]

    # Along the last axis, only the non-negative frequencies are needed
    w2 = u[-1, :]-u[0, :]
    w2_dft = np.fft.rfft(w2)
    v2_dft = one_minus_exp_m[:, None]*w2_dft[None, :]

    k_dft = 2.0*(cos_m[:, None]+cos_n[None, :]-2.0)
    k_dft[0, 0] = 1.0
    s_dft = (v1_dft+v2_dft)/k_dft
    s_dft[0, 0] = 0.0

    if inverse_dft:
        s = np.fft.irfft2(s_dft, u.shape)
        return u-s, s
    else:
        u_dft = np.fft.rfft2(u)
        return u_dft-s_dft, s_dft

And we can again test this new implementation.

p_act, s_act = rper(u, inverse_dft=True)

print('Error in L2-norm:')
print('  - on p: {}'.format(np.linalg.norm(p_act-p_exp)))
print('  - on s: {}'.format(np.linalg.norm(s_act-s_exp)))
print()
print('Maximum absolute error')
print('  - on p: {}'.format(np.max(np.abs(p_act-p_exp))))
print('  - on s: {}'.format(np.max(np.abs(s_act-s_exp))))
print()
print('Maximum relative error')
print('  - on p: {}'.format(np.max(np.abs(2*(p_act-p_exp)/(p_act+p_exp)))))
print('  - on s: {}'.format(np.max(np.abs(2*(s_act-s_exp)/(s_act+s_exp)))))
Error in L2-norm:
  - on p: 5.810100441650991e-10
  - on s: 5.809921679853175e-10

Maximum absolute error
  - on p: 4.1807243312143156e-12
  - on s: 4.181406578613701e-12

Maximum relative error
  - on p: 8.68682621178305e-11
  - on s: 7.30527816192651e-08

This is again quite satisfactory! Let us now time the new implementation.

t3 = timeit.timeit('p, s = rper(u, inverse_dft=True)',
                   number=100, globals=globals())
print('Timings:')
print('  - _per : {}'.format(t1))
print('  - per  : {}'.format(t2))
print('  - rper : {}'.format(t3))
print('Ratios:')
print('  - _per/per  : {}'.format(t1/t2))
print('  - _per/rper : {}'.format(t1/t3))
Timings:
  - _per : 4.946549984680452
  - per  : 3.9524611252226656
  - rper : 2.1560062129698423
Ratios:
  - _per/per  : 1.2515113565859002
  - _per/rper : 2.2943115631687854

… and we are now about 2.3× faster!

Conclusion

This is the end of the story: we now have an efficient implementation of Moisan's algorithm. The optimizations presented in this post make it noticeably faster, without any loss of clarity.

If you are interested in my implementation of Moisan's algorithm, go to the GitHub repository of the moisan2011 Python module!
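
For completeness, here is a minimal usage sketch, assuming the module exposes the rper function discussed in this post (the import path and file name are illustrative).

import numpy as np
from skimage.io import imread

# Assumption: the moisan2011 module exposes rper as defined in this post
from moisan2011 import rper

u = imread('hut-648x364.png').astype(np.float64)  # illustrative file name
p, s = rper(u, inverse_dft=True)
print(np.max(np.abs(u-p-s)))  # p and s should sum up to u (up to round-off)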

Comments

Please send your comments to sebastien [dot] brisard [at] univ [dash] eiffel [dot] fr. Your comments will be inserted below. Your email address will not appear.