Although the inclusion of the object equation in the second term of (8) can be considered a physical regularization of the ill-posed data equation in the first term of (8), the inversion results may be improved by taking into account a priori information about the contrast profile. The standard way to include this a priori information is to modify the cost functional by introducing an extra penalty term, viz.,
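Schematically, and consistent with the weighting parameter γ2 and the regularization term FnR introduced in the discussion that follows, the additively regularized functional has the form below; this is a sketch of its structure, and the precise arguments and normalizations are fixed by the paper's own numbered equation:

$$
F_n(\chi, w_j) \;=\; F_S(w_j) \;+\; F_{D,n}(\chi, w_j) \;+\; \gamma^2\, F_n^{R}(\chi).
$$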
As is known from the literature, the addition of the regularization term FnR to the cost functional has a very positive effect on the quality of the reconstruction. The drawback is the presence of the positive weighting parameter γ2 in the cost functional, which, with present knowledge, can only be determined through considerable numerical experimentation and a priori information about the desired reconstruction [van den Berg and Kleinman, 1995]. Furthermore, numerical experiments have shown that the results improve when we let the parameter γ2 decrease as the number of iterations increases. In fact, a good choice seems to be to take this parameter proportional to the value of the cost functional Fn−1 of the previous iteration. This numerical experimentation has led us to the idea of the multiplicative regularization technique [see van den Berg et al.], viz.,
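The multiplicative structure referred to here can be sketched as follows; this is a schematic reconstruction from the surrounding description (the total data-and-object error multiplying the regularization factor FnR), and the exact arguments and normalizations are those of the paper's own equation:

$$
F_n(\chi, w_j) \;=\; F_n^{R}(\chi)\,\bigl[\,F_S(w_j) \;+\; F_{D,n}(\chi, w_j)\,\bigr].
$$

In this form the current value of FS + FD,n acts as an automatically chosen weighting for the regularization factor, replacing the hand-tuned parameter γ2.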
Minimization of this functional with respect to changes in the contrast will change the minimizer χn given in (15) to χnR. Our aim is not to change the updating procedure of the contrast sources wj,n. At the beginning of each iteration we have to replace the quantity χn−1 in (10) and (12) by χn−1R, but the remainder of the contrast-source updating procedure remains unchanged, provided we keep the regularization factor equal to one during this part of the iteration. Then only the updating of the contrast (for given wj = wj,n) has to be modified. Instead of taking the previous iterate of the contrast as the starting value, as done in our previous papers [van den Berg et al., 1999] and [van den Berg and Abubakar, 2001], we now take the analytic value of (15) as the starting value. From this point we make an additional minimization step,
where χn is now given by (15) and dn is the conjugate gradient direction
We remark that we now prefer a line minimization around the minimum of the cost functional FD,n (a physical cost criterion). In view of (15) we take gnR as
being a preconditioned gradient of the regularization factor FnR with respect to changes in the contrast around the point χ = χn. In view of the previous minimization step, the gradient of FD,n with respect to changes in the contrast around the point χ = χn vanishes. Hence, the gradient with respect to the contrast, unlike in the previous versions of the CSI method, contains only a contribution from the additionally imposed regularization. This simplifies the algorithm. In general, the real parameter βn is found from a line minimization as the minimizer of
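As a concrete illustration of the update step just described, the sketch below forms a Polak–Ribière conjugate-gradient direction from the (preconditioned) regularization gradient and finds βn by a crude sampled line minimization. The function names and the toy quadratic cost are hypothetical stand-ins, not the paper's operators; in the actual algorithm βn follows from the line minimization of the multiplicative cost functional.

```python
import numpy as np

def polak_ribiere_direction(g, g_prev, d_prev):
    """Polak-Ribiere conjugate-gradient search direction (real-valued sketch)."""
    denom = np.vdot(g_prev, g_prev).real
    gamma = np.vdot(g, g - g_prev).real / denom if denom > 0 else 0.0
    return g + gamma * d_prev

def line_minimize(cost, chi, d, betas=np.linspace(-1.0, 1.0, 201)):
    """Find beta minimizing cost(chi + beta*d) by sampling a grid of betas
    (a crude stand-in for the analytic line minimization in the paper)."""
    values = [cost(chi + b * d) for b in betas]
    return betas[int(np.argmin(values))]

# Toy illustration with a quadratic "cost functional" in the contrast.
chi_n = np.array([0.5, -0.2])
target = np.array([1.0, 0.3])
cost = lambda chi: np.sum((chi - target) ** 2)

g = -(chi_n - target)  # steepest-descent direction (negative gradient)
d = polak_ribiere_direction(g, g_prev=g, d_prev=np.zeros_like(g))
beta = line_minimize(cost, chi_n, d)
chi_update = chi_n + beta * d  # the additional minimization step
assert cost(chi_update) < cost(chi_n)
```

For the quadratic toy cost the sampled search recovers the exact minimizer β = 1; in practice the line minimization can be carried out analytically when the cost is polynomial in β.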
The structure of this minimization procedure is such that the regularization factor is minimized with a large weighting parameter at the beginning of the optimization process, because the value of FS + FD,n is then still large, and that the errors in the data and object equations are minimized more and more as the regularization factor FnR settles at a nearly constant value close to one. If noise is present in the data, the data error term FS will remain large during the optimization, and therefore the weight of the regularization factor will be more significant. Hence, the noise will at all times be suppressed in the reconstruction process, and we automatically fulfill the need for stronger regularization when the data contain noise, as suggested by Chan and Wong and by Rudin et al. After we have obtained a new estimate χnR for the contrast, we update the contrast sources starting with χn−1 = χn−1R from the previous iteration.
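The iteration described in this section can be summarized schematically as below; `update_sources`, `analytic_contrast`, and `regularized_step` are hypothetical placeholders for the contrast-source update of (10) and (12), the analytic minimizer (15), and the additional conjugate-gradient step, respectively. Only the ordering of the steps is taken from the text.

```python
def mr_csi_outer_loop(update_sources, analytic_contrast, regularized_step,
                      chi_init, w_init, n_iter=16):
    """Schematic outer loop of the multiplicatively regularized CSI iteration.

    The three callables are placeholders for the paper's operators; this
    sketch only encodes the order of the steps described in the text."""
    chi_R, w = chi_init, w_init
    for n in range(n_iter):
        # Update the contrast sources using chi_{n-1}^R; the regularization
        # factor is kept equal to one during this part of the iteration.
        w = update_sources(chi_R, w)
        # Analytic contrast minimizer, eq. (15), taken as the starting value.
        chi_n = analytic_contrast(w)
        # Additional minimization step chi_n^R = chi_n + beta_n * d_n.
        chi_R = regularized_step(chi_n, w)
    return chi_R, w
```

A single pass with dummy operators shows the data flow: the source update sees the regularized contrast of the previous iteration, and the regularized step starts from the analytic minimizer rather than from the previous contrast iterate.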