Blur in image data hampers the utilisation of image acquisition methods
in many fields of application. Blur describes the fact that information
belonging to a single object point is spread over a certain region
in the image, instead of being sharply localised. Common sources
of blur include motion of objects during the recording of an image,
motion of the camera during recording, defocussing, aberration and
other optical imperfections of the camera, and atmospheric perturbations.
Methods that can remove or reduce blur, and thereby ease the
interpretation of blurred images, are called deblurring or
deconvolution methods. However, deconvolution is a highly ill-posed
problem. Error amplification in deblurring procedures can easily
deteriorate the sharpened image or even render it completely unusable.
The vast variety of deconvolution methods developed over decades of
research in the field can be classified according to several criteria:
-
Spatial structure of blur:
If all parts of an image are blurred essentially in the same way (i.e.
the so-called point-spread function that describes how a sharp object
point is pictured in the observed image looks the same at all locations
of the image), one has spatially invariant blur. In this case the
mathematical model for blur is based on a simple convolution, which
is advantageous for the mathematical and algorithmic treatment.
Otherwise, one speaks of spatially variant blur.
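The spatially invariant case can be sketched in a few lines: blurring is then a plain 2-D convolution of the sharp image with the point-spread function. The zero padding at the image boundary is an assumption of this sketch; other boundary treatments are equally common.

```python
import numpy as np

def convolve2d(image, psf):
    """Blur 'image' by direct 2-D convolution with the point-spread
    function 'psf' (zero padding at the image boundary)."""
    H, W = image.shape
    kh, kw = psf.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    # Convolution equals correlation with the flipped kernel.
    flipped = psf[::-1, ::-1]
    out = np.zeros((H, W))
    for i in range(kh):
        for j in range(kw):
            out += flipped[i, j] * padded[i:i + H, j:j + W]
    return out
```

Applied to a single bright pixel, this reproduces the (flipped) point-spread function — exactly the statement that one object point is spread over a region of the image.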
-
Knowledge about the blurring process:
In some contexts, the blurring process is known and can be used as
input information for the deblurring algorithm. Such algorithms are
called non-blind. In contrast, blind deconvolution algorithms aim at
estimating the point-spread function along with the sharp image from
the observed degraded image. Some blind deconvolution methods
proceed by first estimating the point-spread function and applying
a non-blind method afterwards, while other approaches integrate
both steps into a common procedure. So-called semi-blind methods
assume a partial knowledge of the blurring process, typically that
the point-spread function belongs to a specific class of functions
with one or a few parameters (such as motion or defocus blur with an
unknown scale parameter).
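As an illustration of a simple non-blind method, the classical Wiener filter inverts a known, spatially invariant blur in the Fourier domain. The constant k below is an assumed stand-in for the noise-to-signal power ratio; it is what keeps the error amplification mentioned above in check.

```python
import numpy as np

def wiener_deconvolve(blurred, psf_full, k=1e-3):
    """Non-blind Wiener deconvolution under circular (periodic) boundary
    conditions. 'psf_full' is the point-spread function embedded in an
    array of the image size, centred at index (0, 0)."""
    G = np.fft.fft2(blurred)
    Hf = np.fft.fft2(psf_full)
    # Regularised inverse filter: conj(H) / (|H|^2 + k) damps those
    # frequencies that the blur has almost annihilated, instead of
    # amplifying the noise they carry.
    F = np.conj(Hf) * G / (np.abs(Hf) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```

For a blind method, Hf would itself be unknown and would have to be estimated from the degraded image before (or jointly with) this inversion step.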
-
Discretisation level:
As digital images are based on discrete measurements taken from a
continuous reality (described by the laws of optics), there are two
different modelling approaches. First, there are discrete models,
which describe a discrete blurry image as a degradation of a discrete
sharp image and can therefore capitalise on matrix algebra and
other space-discrete theories; here, the blur process must be
reformulated in a discrete manner.
Secondly, there are continuous models, which
better capture the continuous nature of the optical blur process
itself and its symmetries, and typically involve (integro-)differential
equations and the calculus of variations. In this class of models,
discretisation is postponed until the numerical evaluation.
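To sketch the discrete viewpoint (with a hypothetical 1-D helper, not taken from the publications below): a spatially invariant blur becomes multiplication with a banded Toeplitz matrix A, so the discrete model reads blurred = A @ sharp and matrix algebra applies directly.

```python
import numpy as np

def blur_matrix(n, psf):
    """Build the n-by-n matrix A such that A @ u equals the 1-D
    convolution of the signal u with 'psf' (zero boundary), i.e. the
    discrete counterpart of a spatially invariant blur."""
    r = len(psf) // 2
    A = np.zeros((n, n))
    for i in range(n):
        for k, w in enumerate(psf):
            j = i - (k - r)        # convolution index
            if 0 <= j < n:
                A[i, j] = w        # constant along diagonals: Toeplitz
    return A
```

A spatially variant blur would simply break the Toeplitz structure: each row of A could then hold a different point-spread function.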
Our work is focussed on variational deconvolution approaches.
In this class of approaches, the sharpening of the image
is pursued by minimising an energy ansatz combining the blur model
(i.e. the observed image should be recovered when
blurring the sought image with the known point-spread function)
with regularity assumptions on the sought image. Both kinds of
conditions are encoded in penalty terms.
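A prototypical energy of this kind (the notation is ours; the exact penalisers vary between the papers cited below) combines a data term enforcing the blur model with a regularisation term:

```latex
E(u) = \int_\Omega \bigl( h * u - f \bigr)^2 \,\mathrm{d}x
     + \alpha \int_\Omega \Psi\bigl( |\nabla u|^2 \bigr) \,\mathrm{d}x
```

Here f is the observed image, h the point-spread function, u the sought sharp image, Ψ a (possibly edge-preserving) penaliser, and α > 0 the smoothness weight balancing the two terms.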
-
Systematic investigation of edge-preserving and
edge-enhancing regularisation terms, continuation strategy for
smoothness weight
[1, 2]
-
Use of robust data terms for uncertain and spatially
variant blurs
[1, 3]
-
Inequality constraints (positivity, interval constraints)
in robust deblurring, and robust deblurring of matrix-valued
images with positive definiteness constraint
[5]
-
Parallelisation of deconvolution algorithms derived from variational
models, in cooperation with partners at
Interstate University of Applied Sciences of Technology Buchs NTB,
Switzerland
[7]
-
Applications of variational and other image deblurring techniques
to problems arising in astrophotography, superresolution, and
cryptography
[4, 6, 8]
[1]
D. Theis:
Dekonvolution digitaler Bilder.
(Deconvolution of digital images, in German.)
Diploma thesis, Saarland University, Dept. of Computer Science,
Saarbrücken, 2004.
[2]
M. Welk, D. Theis, T. Brox, J. Weickert:
PDE-based deconvolution with forward-backward diffusivities and diffusion
tensors.
In R. Kimmel, N. Sochen, J. Weickert (Eds.):
Scale-Space and PDE Methods in Computer Vision.
Lecture Notes in Computer Science, Vol. 3459, 585–597.
Springer, Berlin, 2005.
[3]
M. Welk, D. Theis, J. Weickert:
Variational deblurring of images with uncertain and spatially
variant blurs.
In W. Kropatsch, R. Sablatnig, A. Hanbury (Eds.): Pattern Recognition.
Lecture Notes in Computer Science, Vol. 3663, 485–492.
Springer, Berlin, 2005.
[4]
A. Steinel:
Bildverbesserung von Sternspuraufnahmen.
(Image enhancement for star trail photographs, in German.)
Bachelor thesis, Saarland University, Dept. of Computer Science,
Saarbrücken, 2006.
[5]
M. Welk, J. G. Nagy:
Variational deconvolution of multi-channel images with inequality
constraints.
In J. Martí, J. M. Benedí, A. M. Mendonça, J. Serrat (Eds.):
Pattern Recognition and Image Analysis. Part I.
Lecture Notes in Computer Science, Vol. 4477, 386–393.
Springer, Berlin, 2007.
[6]
N. Persch:
Multiframe superresolution image synthesis by variational optic flow
compensation and Wiener filtering.
Bachelor thesis, Saarland University, Dept. of Computer Science,
Saarbrücken, 2007.
[7]
R. Nadig, T. Spirig:
Parallelisierung auf dem Cell-Prozessor.
(Parallelisation using a Cell processor, in German.)
Diploma thesis, Interstate University of Applied Sciences of Technology Buchs NTB,
Switzerland, 2007.
[8]
M. Backes, T. Chen, M. Dürmuth, H. Lensch, M. Welk:
Tempest in a teapot: compromising reflections revisited.
Proc. 30th IEEE Symposium on Security and Privacy, Oakland, USA,
315–327. IEEE Computer Society, 2009.
Martin Welk/19 May 2009