[11] Although the inclusion of the object equation in the second term of (8) can be considered a physical regularization of the ill-posed data equation in the first term of (8), the inversion results may be improved by taking into account a priori information about the contrast profile. The standard way to include this a priori information is to modify the functional by introducing an extra penalty term, viz.,

*F*_{n} = *F*_{S} + *F*_{D,n} + γ^{2} *F*_{n}^{R}.

As is known from the literature, the addition of the regularization term *F*_{n}^{R} to the cost functional has a very positive effect on the quality of the reconstruction. The drawback is the presence of the positive weighting parameter γ^{2} in the cost functional, which, with present knowledge, can only be determined through considerable numerical experimentation and a priori information about the desired reconstruction [*van den Berg and Kleinman*, 1995]. Further, numerical experiments have shown that the results improve when we let the parameter γ^{2} decrease as the number of iterations increases. In fact, a good choice seems to be to take this parameter proportional to the value of the cost functional *F*_{n−1} of the previous iteration. This numerical experimentation has led us to the idea of a multiplicative regularization technique, see *van den Berg et al.* [1999], viz.,

*F*_{n} = (*F*_{S} + *F*_{D,n}) *F*_{n}^{R}.

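The difference between the additive and the multiplicative choice can be made concrete by treating the error terms as plain numbers. The following is a minimal numerical sketch, not part of the original papers; the function names are illustrative only.

```python
def cost_additive(F_S, F_D, F_R, gamma2):
    # Additive regularization: the fixed weight gamma^2 must be tuned by hand.
    return F_S + F_D + gamma2 * F_R

def cost_multiplicative(F_S, F_D, F_R):
    # Multiplicative regularization: the data/object error F_S + F_D itself
    # acts as an automatically decreasing weight on the regularization factor.
    return (F_S + F_D) * F_R

# Early in the iteration the error is large, so F_R carries a large weight;
# near convergence the effective weight shrinks with the residual error.
early = cost_multiplicative(F_S=2.0, F_D=1.0, F_R=1.5)   # effective weight 3.0
late = cost_multiplicative(F_S=0.02, F_D=0.01, F_R=1.5)  # effective weight 0.03
```

This is precisely the mechanism exploited below: no weighting parameter γ^{2} has to be chosen, because the residual error plays its role.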
Minimization of this functional with respect to changes in the contrast will change the minimizer χ_{n} given in (15) to χ_{n}^{R}. Our aim is not to change the updating procedure of the contrast sources *w*_{j,n}. At the beginning of each iteration we have to replace the quantity χ_{n−1} in (10) and (12) by χ_{n−1}^{R}, but the remainder of the contrast source updating procedure is unchanged, provided we keep the regularization factor equal to one during this part of the iteration. Then only the updating of the contrast (for given *w*_{j} = *w*_{j,n}) has to be modified. Instead of taking the previous iterate of the contrast as the starting value, as was done in our previous papers [*van den Berg et al.*, 1999; *van den Berg and Abubakar*, 2001], we now take the analytic value of (15). From this point we make an additional minimization step,

χ_{n}^{R} = χ_{n} + β_{n} *d*_{n},

where χ_{n} is now given by (15) and *d*_{n} is the conjugate gradient direction

*d*_{n} = *g*_{n}^{R} + [⟨*g*_{n}^{R}, *g*_{n}^{R} − *g*_{n−1}^{R}⟩ / ⟨*g*_{n−1}^{R}, *g*_{n−1}^{R}⟩] *d*_{n−1}, with *d*_{0} = 0.

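A conjugate gradient direction of the Polak-Ribière type can be sketched as follows; this is a minimal sketch assuming complex-valued gradient arrays and the standard Euclidean inner product, with the preconditioned gradient *g*_{n}^{R} supplied by the surrounding algorithm.

```python
import numpy as np

def cg_direction(g, g_prev, d_prev):
    """Polak-Ribiere conjugate gradient direction d_n = g_n + beta_PR * d_{n-1}.

    g, g_prev : current and previous (preconditioned) gradients, complex arrays
    d_prev    : previous direction; pass zeros at the first iteration
    """
    denom = np.vdot(g_prev, g_prev).real
    if denom == 0.0:
        return g.copy()                       # first step: gradient direction
    beta_pr = np.vdot(g, g - g_prev).real / denom
    return g + beta_pr * d_prev
```

Note that when the gradient does not change between iterations, the Polak-Ribière factor vanishes and the direction reduces to the gradient itself.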
We remark that we now prefer a line minimization around the minimum of the cost functional *F*_{D,n} (the physical cost criterion). In view of (15) we take *g*_{n}^{R} as

being a preconditioned gradient of the regularization factor *F*_{n}^{R} with respect to changes in the contrast around the point χ = χ_{n}. In view of the previous minimization step, the gradient of *F*_{D,n} with respect to changes in the contrast around the point χ = χ_{n} vanishes. Hence, the gradient with respect to the contrast, in contrast to the previous approaches of the CSI method, contains only a contribution from the additionally imposed regularization. This simplifies the algorithm. In general, the real parameter β_{n} is found from a line minimization as the minimizer of

*F*_{n}(β) = [*F*_{S} + *F*_{D,n}(χ_{n} + β *d*_{n})] *F*_{n}^{R}(χ_{n} + β *d*_{n}).

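The search for the real parameter β_{n} is a one-dimensional minimization and can be carried out with any scalar minimizer. As a sketch (with an arbitrarily chosen bracket and the multiplicative cost passed in as a black-box callable, neither of which is prescribed by the text):

```python
def line_minimize(cost, a=-1.0, b=1.0, tol=1e-8):
    """Golden-section search for the real beta minimizing cost(beta) on [a, b]."""
    gr = (5 ** 0.5 - 1) / 2                    # golden-ratio conjugate, ~0.618
    c, d = b - gr * (b - a), a + gr * (b - a)
    while b - a > tol:
        if cost(c) < cost(d):
            b, d = d, c                        # keep [a, d]; old c becomes new d
            c = b - gr * (b - a)
        else:
            a, c = c, d                        # keep [c, b]; old d becomes new c
            d = a + gr * (b - a)
    return 0.5 * (a + b)

# Toy unimodal cost with its minimum shifted away from beta = 0:
beta_n = line_minimize(lambda beta: 0.1 + (beta - 0.3) ** 2)
```

In practice a few evaluations suffice, since the starting point χ_{n} already minimizes the physical cost *F*_{D,n} and β_{n} only has to balance in the regularization.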
The structure of this minimization procedure is such that it minimizes the regularization factor with a large weighting parameter at the beginning of the optimization process, because the value of *F*_{S} + *F*_{D,n} is still large, and that it gradually minimizes more and more the error in the data and object equations as the regularization factor *F*_{n}^{R} settles at a nearly constant value close to one. If noise is present in the data, the data error term *F*_{S} will remain at a large value during the optimization, and therefore the weight of the regularization factor will remain significant. Hence, the noise will at all times be suppressed in the reconstruction process, and we automatically fulfill the need for stronger regularization when the data contain noise, as suggested by *Chan and Wong* [1998] and *Rudin et al.* [1992]. After we have obtained a new estimate χ_{n}^{R} for the contrast, we update the contrast sources, starting with the value χ_{n−1} = χ_{n−1}^{R} from the previous iteration.
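This self-balancing behavior can be illustrated with a toy model of the error term; the geometric decay rate and the noise floor below are invented purely for the illustration and are not taken from the papers cited.

```python
def effective_weight(n, noise_floor):
    # Hypothetical data/object error F_S + F_{D,n}: decays geometrically with
    # the iteration count n, but cannot drop below the noise level in the data.
    return noise_floor + 0.5 ** n

# Noise-free data: the regularizer's effective weight fades away, so the
# late iterations concentrate on fitting the data and object equations.
clean = [effective_weight(n, 0.0) for n in range(12)]

# Noisy data: F_S stalls at the noise floor, so the regularization factor
# keeps a significant weight and continues to suppress the noise.
noisy = [effective_weight(n, 0.1) for n in range(12)]
```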