Non-linear tomographic reconstruction

Presentation by Joshua Greenhalgh

The following seminar was given by cohort one student Joshua Greenhalgh about his current research in tomography.

Tomography is the technique of reconstructing an object from a series of line integrals through it. Johann Radon laid down the mathematical framework in 1917 with the Radon transform, and now that the techniques have been developed they are used with great success in applications such as CT scanning.
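In its standard form (a textbook parametrisation rather than notation taken from the talk), the Radon transform collects the line integrals of an attenuation map f over all projection angles θ and detector offsets s:

    Rf(θ, s) = ∫ f(s·cosθ − t·sinθ, s·sinθ + t·cosθ) dt

Each value of Rf(θ, s) is one line integral; the collection over all (θ, s) is the sinogram that a scanner measures.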

By rotating an object through a full range of angles, recording a projection at each angle, and reconstructing with filtered back projection (FBP), the internal structure of the object can be determined. Joshua notes a handful of problems with using FBP (a short FBP sketch follows the list below):

  • The method is heavily grounded in mathematical theory, and hence does not translate easily to physical systems
  • The source and detector are restricted to simple locations, which limits reconstruction to simple geometries
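
For a concrete picture of the forward and inverse steps, here is a minimal FBP sketch using scikit-image’s radon and iradon functions (the Shepp–Logan phantom, angle grid, and default ramp filter are illustrative choices, not those from the talk):

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    # Synthetic object and a full sweep of projection angles.
    phantom = shepp_logan_phantom()
    theta = np.linspace(0.0, 180.0, phantom.shape[0], endpoint=False)

    # Forward problem: line integrals at each angle (the sinogram).
    sinogram = radon(phantom, theta=theta)

    # Inverse problem: filtered back projection (ramp filter by default).
    reconstruction = iradon(sinogram, theta=theta)

    print("RMS error:", np.sqrt(np.mean((reconstruction - phantom) ** 2)))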

Linear Reconstruction

Iterative methods are used instead of the direct methods previously discussed to combat the limitations of FBP. These make use of the solution of Beer’s law, which in turn is used to compute the values of the pixels. This requires solving a set of linear equations, using techniques such as the Algebraic Reconstruction Technique (ART) or its variants (e.g. SART). Beer’s law describes how the intensity of attenuated light varies along a line path; here it is interpreted in terms of the number of x-ray photons detected after passing through the sample. The linear model assumes that every photon has the same probability of being attenuated – this is not true in reality. The probability depends, among other things, on the energy of the photon, and a real source emits a whole spectrum of energies. This false assumption results in so-called artifacts – imperfections in the reconstruction of the sample.
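
To make the linearisation concrete: taking −ln(I/I₀) of Beer’s law turns each measured ray into one linear equation in the pixel attenuations, and ART solves the resulting system one ray at a time. Below is a toy Kaczmarz/ART sketch on a made-up 2×2 image with hand-picked intersection lengths – not Joshua’s projector or data:

    import numpy as np

    def art(A, b, n_sweeps=50, relax=1.0):
        # Kaczmarz / ART: repeatedly project the current estimate onto the
        # hyperplane defined by each ray's measurement.
        x = np.zeros(A.shape[1])
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                a_i = A[i]
                x += relax * (b[i] - a_i @ x) / (a_i @ a_i) * a_i
        return x

    # Toy 2x2 image (pixels ordered p00, p01, p10, p11) and five rays:
    # two rows, two columns, one diagonal. Entries are intersection lengths.
    A = np.array([[1.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0, 1.0]])
    mu_true = np.array([0.2, 0.5, 0.3, 0.1])  # attenuation per pixel
    b = A @ mu_true                           # b_i = -ln(I_i / I_0) for ray i
    print(art(A, b))                          # approaches mu_true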

Nonlinear Reconstruction

The dependence of the radiation throughput on photon energy introduces a nonlinearity into the system of equations to be solved. The system in the nonlinear case has many more unknowns than in the linear case, while the amount of known data stays the same – the system is highly underdetermined. Consider a realistic example where a reconstruction comprises 512×512 pixels – the matrix to solve would have of the order of 10¹⁰ elements and could require up to 1TB of memory to store, even in a sparse format. Even if this matrix could be stored, calculations would be slow to say the least! To get around this, Joshua uses GPUs to perform forward and backward projections (via matrix multiplication) without any need to store the matrices, giving far more efficient calculations.
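
To give a flavour of the matrix-free idea, the sketch below applies the projection as a function instead of storing A, and evaluates a polychromatic Beer’s law by summing over an assumed source spectrum. The ray lists, energy bins, and spectral weights are invented for illustration; on a GPU the same structure could run with a library such as CuPy in place of NumPy:

    import numpy as np

    def forward_project(mu, rays):
        # Apply A @ mu without ever storing A: each ray is a list of
        # (pixel_index, intersection_length) pairs.
        sino = np.zeros(len(rays))
        for i, ray in enumerate(rays):
            for j, length in ray:
                sino[i] += length * mu[j]
        return sino

    def polychromatic_intensity(mu_per_energy, rays, spectrum):
        # Polychromatic Beer's law: attenuate each energy bin separately,
        # then sum the detected intensities weighted by the source spectrum.
        total = np.zeros(len(rays))
        for weight, mu_E in zip(spectrum, mu_per_energy):
            total += weight * np.exp(-forward_project(mu_E, rays))
        return total

    # Toy data: 4 pixels, 2 rays, 2 energy bins.
    rays = [[(0, 1.0), (1, 1.0)], [(2, 1.0), (3, 1.0)]]
    mu_per_energy = [np.array([0.40, 0.60, 0.20, 0.30]),   # attenuation at E1
                     np.array([0.20, 0.30, 0.10, 0.15])]   # attenuation at E2
    spectrum = [0.7, 0.3]                                   # source weights
    print(polychromatic_intensity(mu_per_energy, rays, spectrum))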

Optimisation Methods

There are four methods Joshua considers to solve the system of equations (a gradient-descent sketch follows the list):

  • Gradient Descent
  • Conjugate Gradient Method
  • Levenberg-Marquardt-Fletcher Method
  • Newton’s Method
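
As a sketch of the first of these, here is plain gradient descent on the nonlinear least-squares objective ½‖model(μ) − data‖², using a finite-difference Jacobian and a made-up two-pixel exponential forward model (illustrative only; the real problem would use the matrix-free projections described above):

    import numpy as np

    def gradient_descent(residual_fn, x0, step=0.1, n_iter=2000, eps=1e-6):
        # Minimise 0.5 * ||residual_fn(x)||^2 with a finite-difference
        # Jacobian (only sensible for small toy problems like this one).
        x = np.array(x0, dtype=float)
        for _ in range(n_iter):
            r = residual_fn(x)
            J = np.empty((r.size, x.size))
            for j in range(x.size):
                dx = np.zeros_like(x)
                dx[j] = eps
                J[:, j] = (residual_fn(x + dx) - r) / eps
            x -= step * (J.T @ r)   # gradient of 0.5*||r||^2 is J^T r
        return x

    # Toy nonlinear forward model: detected intensity I = exp(-A @ mu).
    A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
    mu_true = np.array([0.3, 0.6])
    data = np.exp(-A @ mu_true)
    residual = lambda mu: np.exp(-A @ mu) - data
    print(gradient_descent(residual, x0=np.zeros(2)))   # approaches mu_true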

The structure generated by these methods gives a good spatial match when compared with the real sample, but the energy scales are a poor approximation. In order to home in on the correct scales, a good initial guess is required. One way of ensuring a good start is to regularise the solution – that is, to constrain it within some bounds that encode prior beliefs about the solution. An example is the total variation of the attenuation: if we know the sample to be mostly uniform, the attenuation should be roughly constant throughout most of the sample, with its derivative non-zero in only a few places.
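
As an illustration of such a prior, a total-variation penalty can be added to the data-fidelity term; the weight lam and the toy images below are arbitrary choices for the sketch:

    import numpy as np

    def total_variation(image):
        # Anisotropic TV: sum of absolute differences between neighbouring
        # pixels. It is small for piecewise-constant images.
        return (np.abs(np.diff(image, axis=0)).sum()
                + np.abs(np.diff(image, axis=1)).sum())

    def regularised_objective(mu_image, residual_fn, lam=0.1):
        # Data fidelity plus lam * TV: minimising this favours reconstructions
        # that are mostly flat with a few sharp edges.
        r = residual_fn(mu_image)
        return 0.5 * np.sum(r ** 2) + lam * total_variation(mu_image)

    flat = np.ones((8, 8))
    noisy = flat + 0.1 * np.random.default_rng(0).standard_normal((8, 8))
    residual = lambda m: (m - noisy).ravel()   # stand-in data term
    print(total_variation(flat), total_variation(noisy))
    print(regularised_objective(flat, residual), regularised_objective(noisy, residual))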

Written by Alex Wright