## Introduction

Several approaches and algorithms have been proposed for image deconvolution, an important topic in imaging science. The purpose is to improve the quality of images degraded by blurring and noise. The ill-posedness of the problem is the basic reason for the profusion of methods: many different formulations have been introduced, based on different statistical models of the noise affecting the data, and many different priors are considered for regularizing the problem, for instance, in a Bayesian approach. As a result, deconvolution is usually formulated in one of several possible variational forms, and for a given formulation, several different minimization algorithms, in general iterative, have been proposed.

For the specific application to microscopy, we mention a few methods, without claiming to be exhaustive. Under the assumption of Gaussian white noise, the approach based on Tikhonov regularization theory (Tikhonov & Arsenin, 1977; Engl *et al*., 1996) and the methods proposed for computing non-negative minimizers of the corresponding functional, such as the method of Carrington (1990) or the iterative method proposed in van der Voort & Strasters (1995), are worth mentioning. On the other hand, in the case of Poisson statistics (noise dominated by photon counting), the classical Richardson–Lucy (RL) algorithm (Richardson, 1972; Lucy, 1974), derived from a maximum likelihood approach (Shepp & Vardi, 1982), is routinely used. A quantitative comparison of these methods is given in van Kempen *et al*. (1997) and van Kempen & van Vliet (2000b). These methods have been applied mainly to wide-field and confocal microscopy, but the cases of 4Pi and two-photon excitation microscopy have also been investigated (Schrader *et al*., 1998; Mondal *et al*., 2008). We also remark that in some applications, the images are affected by both Poisson noise and Gaussian read-out noise. For this case, a more refined model was proposed by Snyder *et al*. (1993), together with a related expectation maximization (EM) algorithm. However, as confirmed by a recent analysis (Benvenuto *et al*., 2008), this refinement of the model does not significantly improve the restoration, so the assumption of Poisson statistics is, in general, satisfactory.
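To make the RL iteration concrete, the sketch below implements its multiplicative update in a minimal one-dimensional setting. The function name `richardson_lucy`, the flat initialization and the 1D geometry are our illustrative choices (the microscopy setting is 3D), but the update x ← x · Hᵀ(y/Hx) is the classical one:

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=50, eps=1e-12):
    """Classical RL update x <- x * H^T(y / (H x)), where H is
    convolution with a normalized point spread function (PSF)."""
    psf = psf / psf.sum()            # normalize the PSF (flux preservation)
    psf_mirror = psf[::-1]           # adjoint of convolution = correlation
    x = np.full_like(y, y.mean())    # flat, strictly positive initial guess
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blurred, eps)   # guard against division by 0
        x = x * np.convolve(ratio, psf_mirror, mode="same")
    return x
```

Because every factor in the update is non-negative, the iterates remain non-negative whenever the initial guess is; this is the property that makes RL attractive under a non-negativity constraint.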

As is known, early stopping of the RL method (RLM) provides a regularization effect (Bertero & Boccacci, 1998), even if in some cases the restoration is not satisfactory. To address this issue, a regularization of the Poisson likelihood by means of the Tikhonov functional was proposed in Conchello & McNally (1996). This approach is known as the RL–Conchello algorithm (van Kempen & van Vliet, 2000a). In a more recent paper, Dey *et al*. (2006) investigate the regularization based on the total variation (TV) functional, introduced by Rudin *et al*. (1992) for image denoising. They also propose an iterative algorithm derived from the one-step-late (OSL) method of Green (1990). OSL is introduced as a modified EM algorithm, but its structure is that of a scaled gradient method, with a scaling that is not automatically positive. For this reason, convergence can be proved only for sufficiently small values of the regularization parameter (Lange, 1990).

In this paper, we consider the regularization of the Poisson likelihood in the framework of a Bayesian approach. In general, regularization can be obtained by imposing *a priori* information about properties of the object to be restored, and the Bayesian approach enables integration of this available prior knowledge with the likelihood using Bayes' law; then, the estimate of the desired object is obtained by maximizing the resulting posterior function. This approach is called the *maximum a posteriori* (MAP) method and can be reduced to the solution of a minimization problem by taking the negative logarithm of the posterior function.

In general, the prior information can be introduced by regarding the object as a realization of a Markov random field (MRF) (Geman & Geman, 1984); the probability distribution of the object is then obtained using the equivalence of MRFs and Gibbs random fields (GRFs) (Besag, 1974). In particular, the potential function of the Gibbs distribution can be chosen so as to bring out desired statistical properties of the object. The combination of MRF priors and MAP estimation has been extensively studied in single photon emission computed tomography (SPECT) and positron emission tomography (PET) image reconstruction (Geman & McClure, 1985; Green, 1990). Recently, it was demonstrated that such a combination also provides a powerful framework for three-dimensional (3D) image restoration in fluorescence microscopy (Vicidomini *et al*., 2006; Mondal *et al*., 2007).

A simple and well-known regularization is based on the assumption that objects are made of smooth regions, separated by sharp edges. This is called edge-preserving regularization and requires non-quadratic potential functions. Therefore, in this paper, we consider the regularization of the negative logarithm of the Poisson likelihood by means of different edge-preserving potential functions proposed by different authors (see, for instance, Geman & Geman, 1984; Charbonnier *et al*., 1997).
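For concreteness, two edge-preserving potentials considered later (the hyper-surface and Geman–McClure potentials named in ‘Numerical results’) have the following standard forms in the literature; the scaling conventions shown here are one common choice and need not coincide with those adopted in the following sections:

```latex
% Hyper-surface (Charbonnier) potential: approximately quadratic for
% |t| << \delta, approximately linear for |t| >> \delta; convex.
\psi_{\mathrm{HS}}(t) \;=\; 2\sqrt{\delta^2 + t^2} \;-\; 2\delta ,
% Geman--McClure potential: bounded, hence strongly edge-preserving,
% but non-convex.
\psi_{\mathrm{GM}}(t) \;=\; \frac{t^2}{\delta^2 + t^2} ,
% with t a local gradient magnitude of the object and \delta > 0 a
% threshold separating noise-induced gradients from true edges.
```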

In view of the application to fluorescence microscopy, we must consider the minimization of this functional on the convex set of the non-negative images (the non-negative orthant) and, in order to overcome the difficulties of the OSL algorithm, we investigate the applicability of the split-gradient method (SGM) to this problem. SGM, proposed by Lantéri *et al*. (2001, 2002), is a general approach to designing iterative algorithms for the constrained minimization of regularized functionals in the case of Gaussian noise, of Poisson noise, and of a mixture of the two (Lantéri & Theys, 2005). Its general structure is that of a scaled gradient method, with a scaling that is always strictly positive. From this point of view, therefore, SGM is superior to the OSL method.
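A minimal sketch of the SGM fixed-point update (step length 1) may help. Writing the gradient of the regularized functional as ∇J = U − V with U, V ≥ 0, the Karush–Kuhn–Tucker conditions suggest the multiplicative update x ← x · V/U, whose scaling is positive by construction. The code below illustrates this for the Kullback–Leibler data term plus a quadratic smoothness penalty in 1D; the function name `sgm_tikhonov`, the periodic boundary handling via `np.roll` and the 1D geometry are our illustrative choices, not the implementation used in the paper:

```python
import numpy as np

def sgm_tikhonov(y, psf, beta=0.01, n_iter=100, eps=1e-12):
    """SGM-style update x <- x * (V0 + beta*VR) / (U0 + beta*UR) for the
    KL data term plus the smoothness penalty
    J_R(x) = (1/2) * sum_i (x_{i+1} - x_i)^2 (periodic boundaries)."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    x = np.full_like(y, y.mean())                               # positive start
    U0 = np.convolve(np.ones_like(y), psf_mirror, mode="same")  # H^T 1
    for _ in range(n_iter):
        Hx = np.maximum(np.convolve(x, psf, mode="same"), eps)
        V0 = np.convolve(y / Hx, psf_mirror, mode="same")       # H^T (y/Hx)
        # grad J_R at point i: 2*x_i - x_{i-1} - x_{i+1} = UR - VR
        UR = 2.0 * x                                            # positive part
        VR = np.roll(x, 1) + np.roll(x, -1)                     # negative part
        x = x * (V0 + beta * VR) / np.maximum(U0 + beta * UR, eps)
    return x
```

With the OSL method, by contrast, the denominator would be U₀ + β∇J_R, which can become non-positive for large β; here the denominator is a sum of non-negative terms, so the iterates stay strictly positive.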

Finally, we point out that, thanks to SGM, the edge-preserving regularizations investigated in this paper can also be easily applied to the least-squares problem (i.e. additive Gaussian noise); hence, we provide an approach to image deconvolution both for microscopy techniques in which Poisson noise is dominant, such as confocal microscopy, and for techniques in which Gaussian noise is more appropriate, such as wide-field microscopy.

The paper is organized as follows. In ‘The edge-preserving approach’, we briefly recall the maximum likelihood approach in the case of Poisson noise, its reduction to the minimization of the Csiszár I-divergence (also called Kullback–Leibler divergence) and the RL algorithm; moreover, we introduce the edge-preserving potentials considered in this paper. In ‘Split-gradient method (SGM)’, we present SGM in the simple case of step length 1. This form can be easily obtained (Bertero *et al*., 2008) by applying the method of successive approximations to a fixed-point equation derived from the Karush–Kuhn–Tucker (KKT) conditions for the constrained minimizers of the functional, based on a suitable splitting of the gradient into a positive and a negative part. The convergence of this simplified version of SGM has not been proved, even if it has always been verified in numerical experiments. However, convergence can be obtained by a suitable search of the step length (Lantéri *et al*., 2002) or by applying a recently proposed scaled gradient projection (SGP) method (Bonettini *et al*., 2007). The relationship between the simplified version of SGM and OSL is also shown. Moreover, we determine the splitting of the gradient for the edge-preserving potentials introduced in the previous section. In ‘Numerical results’, we present the results of our numerical experiments in the case of confocal microscopy, in which photon counting noise is dominant and therefore the assumption of Poisson noise is appropriate. We compare the effect of the different potential functions using a simple 3D phantom consisting of spheres with different intensities and conclude that high-quality restorations can be obtained with the hyper-surface potential (Charbonnier *et al*., 1994) and the Geman–McClure potential (Geman & McClure, 1985), both being superior to the quadratic potential (the Tikhonov regularization).
The estimates of the parameters derived from these simulations are used for the deconvolution of real images. In particular, the improvement provided by the deconvolution method is demonstrated by comparing an image obtained with a high numerical aperture (NA) against the restored image obtained from a low-NA image of the same object. In ‘Concluding remarks’, we give conclusions.