3.1 Image Reconstruction and Restoration Technique

There are two basic techniques used to recover spatial information in images while preserving the signal-to-noise ratio (SNR):

  • Reconstruction, which attempts to recreate the image as it appeared before being convolved with the instrumental Point Spread Function (PSF).
  • Deconvolution, which tries to remove the effects of the PSF imposed on the "ideal" image by enhancing the high-frequency components that were suppressed by the optics and the detector.

The primary aim of these techniques is to recover image resolution while preserving the SNR. These goals are unfortunately not fully compatible. For example, non-linear image restoration procedures that enhance high frequencies in the image, such as the Richardson-Lucy (Richardson 1972; Lucy 1974; Lucy & Hook 1991) and maximum-entropy (Gull & Daniell 1978; Weir & Djorgovski 1990) methods, directly exchange signal-to-noise for resolution, and thus perform best on bright objects with ample signal-to-noise.

Implementations of the Richardson-Lucy method are available online (e.g., skimage.restoration.richardson_lucy in scikit-image). However, this technique is unable to handle large dithers, and typical computing capabilities limit it to combining either small regions of many images or the entire image of only a few dithers. Furthermore, it cannot accommodate geometric distortion or the changing shape of the PSF across the field of view. Like all non-linear techniques, it produces final images with noise properties that are difficult to quantify; in particular, it has a strong tendency to clump noise into the shape of the input PSF.
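As an illustration only (not the scikit-image implementation itself), the core Richardson-Lucy iteration can be sketched in a few lines of NumPy. The sketch assumes a shift-invariant PSF that is the same shape as the image, centered on the array, normalized to unit sum, and uses periodic boundary conditions:

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=20):
    """Minimal Richardson-Lucy sketch: shift-invariant PSF the same
    shape as the image, centered on the array, periodic boundaries."""
    otf = np.fft.rfft2(np.fft.ifftshift(psf / psf.sum()), s=image.shape)
    estimate = np.full(image.shape, image.mean())
    for _ in range(n_iter):
        # blur the current estimate with the PSF
        blurred = np.fft.irfft2(np.fft.rfft2(estimate) * otf, s=image.shape)
        ratio = image / np.maximum(blurred, 1e-12)
        # correlate the ratio with the PSF (conjugate OTF) and update
        correction = np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(otf),
                                   s=image.shape)
        estimate = estimate * correction
    return estimate
```

Each iteration conserves the total flux but progressively amplifies high spatial frequencies, which is the signal-to-noise-for-resolution trade described above.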

The rest of this section focuses on a family of linear reconstruction techniques whose two opposite extremes are the interlacing and shift-and-add techniques; the Drizzle algorithm spans the continuum between them.

Interlacing

If the dithers are particularly well-placed, one can simply interlace the pixels from the images onto a finer grid. In the interlacing method, pixels from the independent input images are placed in alternate pixels on the output image according to the alignment of the pixel centers in the original images. 

For example, the image in the lower right of Figure 3.1 was restored by interlacing a 3 × 3 array of dithered images. However, due to occasional small positioning errors by the telescope, and non-uniform shifts in pixel space across the detector caused by geometric distortion of the optics, true interlacing of images is generally not feasible.
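Under the idealized assumption that every dither offset is an exact integer number of fine (1/factor) pixels, interlacing reduces to strided assignment. The function below is a minimal sketch; the offsets and factor argument names are our own:

```python
import numpy as np

def interlace(images, offsets, factor):
    """Interlace dithered images onto a factor-times-finer grid.

    Assumes each image is offset from the first by an exact integer
    number of fine (1/factor) pixels, given as (dy, dx) in fine pixels.
    """
    ny, nx = images[0].shape
    fine = np.zeros((ny * factor, nx * factor))
    for img, (dy, dx) in zip(images, offsets):
        # each input pixel lands on exactly one fine output pixel
        fine[dy::factor, dx::factor] = img
    return fine
```

Because each input pixel maps to exactly one output pixel, no additional convolution with the pixel footprint is introduced.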

Figure 3.1: The Drizzle 'Eye Chart' Illustrating Convolution and Sub-Sampling 


The effects of image convolution and subsampling: the upper left image represents a "true" image, as seen by a telescope of infinite aperture.
The upper right image has been convolved with the HST/WFPC2 PSF. Sampling it with the WF2 CCD, as seen in the lower left image, causes a further loss of spatial information. The lower right image has been reconstructed using the Drizzle algorithm.

Shift-and-Add

Another standard simple linear technique for combining shifted images, descriptively named "shift-and-add", has been used for many years to combine dithered infrared data onto finer grids. Each input pixel is block-replicated onto a finer subsampled grid, shifted into place, and added to the output image. 

Shift-and-add has the advantage of being able to easily handle arbitrary dither positions. However, it convolves the image yet again with the original pixel, thus adding to the blurring of the image and to the correlation of noise in the image. Furthermore, it is difficult to use shift-and-add in the presence of missing data (e.g., from cosmic rays) and geometric distortion.
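A minimal sketch of shift-and-add, assuming integer fine-pixel offsets and periodic boundaries for simplicity (the factor argument name is our own):

```python
import numpy as np

def shift_and_add(images, offsets, factor):
    """Shift-and-add onto a factor-times-finer grid.

    Each input pixel is block-replicated into a factor x factor patch,
    shifted by its (dy, dx) offset in fine pixels, and accumulated.
    """
    ny, nx = images[0].shape
    out = np.zeros((ny * factor, nx * factor))
    for img, (dy, dx) in zip(images, offsets):
        fine = np.kron(img, np.ones((factor, factor)))  # block replication
        out += np.roll(fine, (dy, dx), axis=(0, 1))     # periodic shift
    return out / len(images)
```

Note that a single bright input pixel spreads over an entire factor × factor block of output pixels — this is the extra convolution with the original pixel footprint described above.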

Drizzle

In response to the limitations of the two techniques described above, an improved method known formally as variable-pixel linear reconstruction, and more commonly referred to as Drizzle, was developed by Andy Fruchter and Richard Hook (Fruchter & Hook 1997), initially for the purpose of combining dithered images of the Hubble Deep Field North (HDF-N). This algorithm can be thought of as a continuous set of linear functions that vary smoothly between the optimum linear combination technique (interlacing) and shift-and-add. This often allows an improvement in resolution and a reduction in correlated noise compared with images produced by shift-and-add alone.

The degree to which the algorithm departs from interlacing and approaches shift-and-add depends upon how well the PSF is subsampled by the shifts in the input images. In practice, the behavior of the Drizzle algorithm is controlled by a parameter called pixfrac, which can be set to any value between 0 and 1 and represents the amount by which input pixels are shrunk before being mapped onto the output image plane.
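The effect of pixfrac can be illustrated with a deliberately simplified, axis-aligned sketch of the idea; unlike the real Drizzle implementation, it ignores rotation, geometric distortion, and per-pixel weight maps, and the scale argument (the output subsampling factor) is our own naming:

```python
import numpy as np

def drizzle_sketch(images, shifts, scale, pixfrac):
    """Axis-aligned illustration of Drizzle: shrink each input pixel
    by pixfrac, map it onto a grid 'scale' times finer, and share its
    value among output pixels in proportion to overlap area."""
    ny, nx = images[0].shape
    out = np.zeros((ny * scale, nx * scale))
    wht = np.zeros_like(out)
    for img, (sy, sx) in zip(images, shifts):
        for iy in range(ny):
            for ix in range(nx):
                # center of the shrunken "drop" in output-pixel units
                cy = (iy + 0.5 + sy) * scale
                cx = (ix + 0.5 + sx) * scale
                h = 0.5 * pixfrac * scale  # drop half-width
                y0, y1, x0, x1 = cy - h, cy + h, cx - h, cx + h
                for oy in range(max(0, int(y0)),
                                min(ny * scale, int(np.ceil(y1)))):
                    for ox in range(max(0, int(x0)),
                                    min(nx * scale, int(np.ceil(x1)))):
                        # overlap area between drop and output pixel
                        ov = (max(0.0, min(y1, oy + 1) - max(y0, oy)) *
                              max(0.0, min(x1, ox + 1) - max(x0, ox)))
                        out[oy, ox] += img[iy, ix] * ov
                        wht[oy, ox] += ov
    return np.where(wht > 0, out / wht, 0.0)
```

With scale = 1 and pixfrac = 1 each drop covers exactly one output pixel and the input is returned unchanged; shrinking pixfrac toward 0 shrinks the drops toward the point samples of pure interlacing.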

A key to understanding the use of pixfrac is to realize that a CCD image can be thought of as the true image convolved first by the optics, then by the pixel response function (ideally a square the size of a pixel), and then sampled by a delta-function at the center of each pixel. A CCD image is thus a set of point samples of a continuous two-dimensional function.

Hence the natural value of pixfrac is 0, which corresponds to pure interlacing. Setting pixfrac to values greater than 0 causes additional broadening of the output PSF by convolving the original PSF with pixels of non-zero size. Thus, setting pixfrac to its maximum value of 1 is equivalent to shift-and-add, the other extreme of linear combination, in which the output image PSF has been smeared by a convolution with the full size of the original input pixels.
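This convolve-then-sample picture can be demonstrated with a toy numerical model built on a grid factor times finer than the CCD pixels. The sketch below assumes a shift-invariant PSF, periodic boundaries, and a grid size divisible by factor; it exploits the fact that a square pixel response followed by delta-function sampling at pixel centers is equivalent to averaging over non-overlapping blocks:

```python
import numpy as np

def ccd_sample(true_fine, psf_fine, factor):
    """Toy CCD model on a factor-times-finer 'truth' grid:
    convolve with the optical PSF, then apply a square pixel
    response and sample at pixel centers."""
    # optical blur (circular convolution via FFT)
    otf = np.fft.rfft2(np.fft.ifftshift(psf_fine / psf_fine.sum()),
                       s=true_fine.shape)
    blurred = np.fft.irfft2(np.fft.rfft2(true_fine) * otf,
                            s=true_fine.shape)
    ny, nx = blurred.shape
    # box pixel response + delta-function sampling == block average
    return blurred.reshape(ny // factor, factor,
                           nx // factor, factor).mean(axis=(1, 3))
```

The output of this model is exactly the set of point samples described above, from which the linear techniques in this section attempt to recover the underlying continuous function.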

The Drizzle algorithm has also been designed to handle large dithers, where geometric distortion causes non-uniform subsampling across the field, and to take into account missing data resulting from cosmic rays and bad pixels. Other useful discussions of the reconstruction of Nyquist-sampled images from undersampled data, as well as the merits of various types of dither patterns, are presented by Lauer (1999a, 1999b), Arendt, Fixsen, & Moseley (2000), and Anderson & King (2000). A discussion at a level comparable to these papers is beyond the scope of the present documentation, so we refer interested readers to them instead.