Direct 3D imaging through spatial coherence of light

Wide-field imaging is widely adopted due to its fast acquisition, cost-effectiveness, and ease of use. Its extension to direct volumetric applications, however, is burdened by the trade-off between resolution and depth of field (DOF) dictated by the numerical aperture of the system. We demonstrate that such a trade-off is not intrinsic to wide-field imaging, but stems from the spatial incoherence of light: images obtained through spatially coherent illumination are shown to have resolution and DOF independent of the numerical aperture. This fundamental discovery enabled us to demonstrate an optimal combination of coherent resolution-DOF enhancement and incoherent tomographic sectioning for scanning-free, wide-field 3D microscopy on a multicolor histological section.


Introduction
Wide-field imaging is amongst the most common imaging modalities for the observation and characterization of absorbing specimens, as done, for instance, in bright-field microscopy [1]. Some of the reasons behind its widespread use across many diverse applications are its ease of use, cost-effectiveness, fast acquisition, and its direct imaging capability (namely, the availability of the output image in real time, with no need for inverse computation techniques on the collected intensity). Although conventional devices work extremely well with 2D samples, having negligible thickness along the optical axis (z), their use with 3D samples is significantly complicated by the well-known dependence of both resolution and depth of field (DOF) on the numerical aperture (NA) of the imaging device: this dependence results in a strong trade-off between image resolution and DOF, and imposes the need to z-scan the whole sample in order to collect the complete volumetric profile. The operation of z-scanning requires that either the imaging device or the sample itself be mechanically shifted along the optical axis, so as to change the plane at focus and perform multiple acquisitions of different transverse planes [2,3]. The intrinsically long acquisition required by moving components limits in vivo applicability and comes with further disadvantages, such as the need for precise stabilization, which requires large and heavy devices, costly high-precision mechanical parts, and high maintenance costs, precluding the use of scanning microscopes in low-budget applications. The limitations of axial scanning become particularly relevant in large-NA devices, where the higher resolution comes at the expense of a narrower DOF. This has detrimental effects on the number of axial measurements necessary to characterize the entire sample, so that a common option to keep the measurement time low is to under-sample along the optical axis, with a consequent loss of information. In 3D imaging, resolution, axial sampling, and acquisition speed are thus in direct conflict.
Several approaches have been proposed in the literature to address this problem; optical coherence tomography (OCT) is one of the most notable examples [4]. However, in all cases, the limitations imposed by the NA of the imaging system persist; in OCT, for example, small NAs are required, at the expense of resolution and signal-to-noise ratio (SNR), to address the loss of intensity implied by large-NA optics [5].
A recent and rapidly developing approach to scanning-free wide-field 3D microscopy is light-field (LF) imaging, where direct images of thick samples containing heavily defocused planes are acquired and then refocused in post-processing [6][7][8][9][10]. Directional information about light from the sample is acquired by a microlens array and employed, in post-processing, to perform software z-scans with features similar to those of typical mechanical scans. LF devices thus enable scanning-free single-shot acquisition of a 3D sample, but their fast acquisition comes at the expense of a dramatic loss of resolution, well beyond the diffraction limit [11]. In fact, due to its geometric-optics-based working principle, the maximum achievable DOF is defined by the circle of confusion (CoC), namely, by the projection of the lens aperture onto the acquired defocused planes [12]. The resolution of the refocused images is thus not determined by the Airy disk, as is typically the case in microscopy, but is rather dominated by the geometrical effects of defocusing, as typically occurs in photography. In addition, the lenslets introduce an even stronger trade-off between resolution and DOF, consisting in a loss of resolution at focus as the volumetric performance improves [13].
In this work, we demonstrate that the resolution versus DOF trade-off of defocused images is governed by the spatial coherence properties of light, and is naturally relieved when the sample is illuminated with spatially coherent light (i.e., the coherence area on the sample is comparable to or larger than the sample details, as explained in the Methods and Results). Coherent imaging is thus found to entail a much slower image degradation with defocusing, a result that reveals the direct 3D imaging potential of spatially coherent light: by combining the extremely large DOF of coherent imaging with the strong localization capability of incoherent imaging, we design a direct, scanning-free, wide-field 3D microscope and demonstrate its working principle by means of both test and histological samples. In particular, we characterize the properties of direct coherent wide-field imaging and show 3D reconstruction compatible with absorbing non-fluorescent dyes routinely used for histochemistry [14].
An important aspect to remark is that the required coherence exclusively relates to the transverse coherence of the field illuminating the sample, regardless of both the temporal and the spatial coherence of the source [15]: although our findings apply to temporally and spatially coherent sources such as lasers, as well as to collimated beams, none of these properties are necessary for our purposes, and our results are thus not confined to these scenarios. On the contrary, NA-independent resolution and DOF are shown to be obtained with virtually any source of spatially and temporally incoherent light, such as an LED, since the required spatial coherence can always be acquired through propagation (Van Cittert-Zernike theorem [16]).
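As a back-of-the-envelope illustration of this scaling (our own sketch under the usual far-field Van Cittert-Zernike approximation; function names and example numbers are not from the paper):

```python
# Van Cittert-Zernike estimate: an incoherent source of diameter D, observed at
# distance z, produces a transverse coherence width on the sample of roughly
# lambda * z / D. Illustrative sketch, not the paper's code.
def coherence_width(wavelength_m: float, source_distance_m: float,
                    source_diameter_m: float) -> float:
    """Approximate transverse coherence width (m) on the illuminated plane."""
    return wavelength_m * source_distance_m / source_diameter_m

def is_coherent_regime(feature_size_m: float, **source_kwargs) -> bool:
    """Illumination acts coherently when the coherence area covers the feature."""
    return coherence_width(**source_kwargs) >= feature_size_m

# Example (hypothetical numbers): a 100-um emitter 10 cm away, 500 nm light,
# gives a ~0.5 mm coherence width, far larger than micrometric sample details.
w = coherence_width(500e-9, 0.1, 100e-6)
```

Shrinking the source or moving it farther away both widen the coherence area, which is why a small LED at a moderate distance already suffices.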
Our 3D microscope is, in fact, based on a conventional bright-field imaging device, integrated with an array of LEDs implementing a dedicated coherent illumination strategy. Conversely, typical bright-field illumination is obtained with extended sources, shining spatially incoherent light on the sample [1,17]. Spatial coherence, on the other hand, is used in a plethora of non-real-time imaging modalities relying on post-processing of the acquired data, aimed at recovering rich phase information about the sample; techniques such as holography [18,19] and ptychography [20], for example, can achieve super-resolution, wavefront reconstruction, and correction of optical aberrations. Most notably, techniques based on computational illumination from LED arrays [21,22] have demonstrated high-resolution 3D amplitude and phase reconstruction [23] by exploiting sequential multi-angle plane-wave illumination and recursive phase-retrieval algorithms. However, all such coherent imaging techniques are indirect, due to the time-consuming algorithms they require for data analysis, and are thus not suitable for real-time imaging [24,25].
In this work, we show that the spatial coherence of light can be exploited in direct wide-field imaging to obtain a breakthrough improvement of the image resolution over a large DOF. This result is supported by the discovery of the completely different physical mechanisms regulating resolution loss in defocused images obtained through spatially coherent and spatially incoherent illumination. In fact, while the peculiarities of focused images, whether coherent or incoherent, are well known [26,27], the properties of coherent defocused images have so far been mostly unexplored, with the only exception of the very special case of collimated-light illumination [23,27,28]. Here, the introduction of a dedicated formalism and an unbiased image quantifier enables us to study the properties of coherent images and to compare them with those of conventional incoherent imaging. One of the main results we shall present is that neither the NA nor the design of the imaging system affects the quality of defocused coherent images; in fact, the NA-dependent trade-off between resolution and DOF defined, in incoherent defocused images, by the CoC naturally disappears when illuminating the sample with spatially coherent light. We shall profit from this effect to perform the typical tomographic reconstruction of LF imaging and retain its multicolor capability, but with enhanced resolution both at focus (where we recover Rayleigh-limited resolution) and in refocused planes. No phase retrieval or time-consuming post-processing of the acquired images is required in our approach, paving the way toward 3D real-time imaging.

Methods
As mentioned in the Introduction, we refer to coherent imaging whenever the coherence area of the illumination [29], on the sample, is larger than the spatial features of the sample one wishes to resolve [15]. According to the size of the details composing a given object, an imaging system might thus behave coherently for object details smaller than the coherence area, and incoherently for larger details. In this respect, we should highlight that the size of the coherence area with respect to the whole field of view (FOV) of the image does not play any role. The transition from one regime to the other will be discussed in detail later in the paper. For the sake of simplicity, we shall for now disregard the effects of partial coherence, and only consider coherent systems as having a coherence area larger than any object detail, and incoherent systems as having a negligible (point-like) coherence area on the sample.

Resolution and DOF in coherent imaging
In Figs. 1 a) and b), we report two typical examples of incoherent and coherent imaging, respectively, as obtained by changing the illumination in the same exact imaging system. Incoherent illumination is typically achieved by placing extended natural sources at a small distance from the sample, as reported in the upper part of panel a). The upper part of panel b) suggests one of many possible ways of obtaining coherent illumination from an incoherent source: since the coherence area on the sample scales proportionally with the ratio between the source distance and the source diameter, the desired coherence can easily be obtained by reducing the source size [16]. An obvious alternative would be to employ laser light illumination, but the presented results are not limited to this scenario. In the lower part of panels a) and b), we report the corresponding incoherent and coherent images, both focused (left panels) and defocused (central and right panels), of a two-dimensional sample (a double-slit mask). Defocused images have very distinctive features depending on the spatial incoherence or coherence of the light on the sample: while incoherent images tend to quickly blur upon defocusing, coherent images do not blur. This is even more apparent in panel c), reporting a section, in the (x, z) plane, of the 3D cubes obtained by mechanically z-scanning the two-slit mask, in both cases of incoherent (left panel) and coherent (right panel) imaging. Whereas, in incoherent imaging, z-scanning quickly gives rise to flat intensity distributions as the object is moved out of focus, in coherent imaging, the transmissive details contain rich spatial modulations and stay well separated from each other over a much longer axial range than in the corresponding incoherent image. Transmissive details thus appear "resolved" at a much larger distance from the plane at focus, before being completely altered by diffraction. A much longer DOF (or, equivalently, a higher resolution of defocused images) is thus expected in coherent imaging, with image degradation not due to blurring. Upon quantitatively describing these effects, we shall find that resolution and DOF of defocused coherent images are actually completely independent of the NA of the imaging system.
The differences between coherent and incoherent systems can be traced back to the different underlying image formation processes, as formally expressed by the intensity distributions describing the images [27]:

I_coh(ρ) = |(A ∗ P)(ρ)|²,   I_inc(ρ) = (|A|² ∗ |P|²)(ρ),   (1)

where A(ρ) is the complex transmission function of the object, P(ρ) is the Green's function describing the field propagation through the optical system, and f ∗ g denotes the convolution between two complex-valued functions f and g. Unlike the incoherent image formation process, which is linear in the optical intensity, coherent imaging is non-linear with respect to the object A(ρ). Therefore, although the same quantities are involved in both intensity distributions of Eq. (1), those contributing to the incoherent image formation are real and positive, whereas coherent imaging is sensitive to both the amplitude and phase of the complex functions describing both the field distribution within the sample and its propagation through the imaging system [19][20][21]23]. In fact, upon neglecting optical aberrations, the coherent (i.e., complex) PSF P(ρ) of Eq. (1) can be decomposed into two contributions:

P = P₀ ∗ D_{z−f},   (2)

where P₀ is the complex PSF describing the focused coherent image and determining the well-known Airy disk [30], and D_{z−f} represents the field propagation over a distance z − f, with z and f the axial coordinates of the object point and of the plane at focus, respectively. Depending on the placement of the sample and the numerical aperture of the device, the quality of the output image can thus be dominated either by the effects of out-of-focus propagation or by the Airy disk, with the two effects blending into each other only when the object is placed close to (but not perfectly at) focus. The corresponding transition between the focused and defocused image is well known in incoherent imaging: at focus, both the resolution (λ/NA) and the DOF (λ/NA²) are determined by wave optics (Airy disk), with λ the illumination wavelength. However, as the object is moved outside of the natural DOF of the focused device, the PSF P is dominated by geometrical-optics effects and reduces to the circle of confusion (namely, the projection of the lens aperture onto the defocused image plane) [12], which induces a typically circular blurring with a radius proportional to both the defocusing |z − f| and the effective lens radius.
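The two formation laws of Eq. (1), together with the defocused PSF of Eq. (2), can be checked numerically. The following is a minimal 1D sketch of ours (not the paper's code), assuming an ideal hard-edged pupil and a paraxial angular-spectrum propagator; all names and parameter values are illustrative.

```python
import numpy as np

# 1D sketch of Eq. (1): a coherent image is |A * P|^2, an incoherent one is
# |A|^2 * |P|^2, with the defocused PSF of Eq. (2) built as P = P0 * D_{z-f}
# via an angular-spectrum (paraxial Fresnel) propagator.
N, dx, wl = 2048, 0.1e-6, 500e-9            # samples, pixel pitch (m), wavelength (m)
x = (np.arange(N) - N // 2) * dx            # transverse coordinate
fx = np.fft.fftfreq(N, dx)                  # spatial frequencies

def propagate(field, z):
    """Free-space propagation over distance z (paraxial transfer function)."""
    H = np.exp(-1j * np.pi * wl * z * fx**2)
    return np.fft.ifft(np.fft.fft(field) * H)

def psf(z, NA=0.5):
    """Defocused coherent PSF: ideal-lens pupil cut-off, then propagation by z."""
    pupil = (np.abs(fx) <= NA / wl).astype(complex)
    p0 = np.fft.ifft(pupil)                  # focused PSF (1D Airy-disk analogue)
    return propagate(p0, z)

# Double-slit mask: two 1-um slits centered at +/- 2 um.
A = ((np.abs(x - 2e-6) < 1e-6) | (np.abs(x + 2e-6) < 1e-6)).astype(complex)

def image(A, z, coherent):
    P = psf(z)
    if coherent:                             # |A * P|^2
        return np.abs(np.fft.ifft(np.fft.fft(A) * np.fft.fft(P)))**2
    return np.real(np.fft.ifft(np.fft.fft(np.abs(A)**2) *
                               np.fft.fft(np.abs(P)**2)))  # |A|^2 * |P|^2
```

Evaluating `image` at a large defocus (e.g. z = 50 um) reproduces the behaviour of Fig. 1: the incoherent image flattens into a CoC-like plateau, while the coherent image keeps strongly modulated diffraction fringes.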
The different physics regulating coherent and incoherent imaging helps develop an intuition about the different behaviour observed in Fig. 1, but does not suffice to quantitatively compare the resolution and DOF of coherent and incoherent imaging. In fact, the image quality estimators typically used for characterizing imaging performance, from two-point resolution criteria, such as Rayleigh's and Abbe's [26,31], to more advanced ones, such as modulation transfer functions [32], all rely on the linearity of the (incoherent) image formation and on the positivity of the PSF, and thus fail in assessing the performance of coherent imaging. For instance, the Rayleigh criterion prescribes that, because of the broadening effect of the incoherent PSF, the image of two "points" is the superposition of two disks (Airy disks at focus, CoCs out of focus). The resolution is then easily defined by arbitrarily setting an acceptable threshold on when the two disks are perceived as separated. But these methods cannot be applied as effectively to a non-linear process such as coherent image formation, since coherence induces the appearance of spurious spatial frequency components. Therefore, neither an approach based on modulation transfer functions, which requires the harmonic content to be unaltered, nor the two-point visibility, which requires a minimum relative separation between the images of two points, can be used.
To quantify the performance of coherent and incoherent imaging systems, we thus introduce a general-purpose quality estimator, which we shall refer to as image fidelity: a positive functional F[I(ρ)] that compares the intensity distribution I(ρ) of the image produced by an imaging system directly with the original intensity profile of the object, I_obj = |A|², rescaled by the magnification M of the imaging system in its plane at focus. Both I and I_obj are normalized quantities, so that the definition of the fidelity is consistent and saturates to unity in the ideal case of perfect imaging (I = I_obj). Being completely independent of any detail of the image formation process, the fidelity enables image quality evaluation through any imaging device, as long as the shape of the reference object is known: resolution and DOF shall thus be defined as the minimum object size and the maximum axial range producing a "faithful" image, as identified by a threshold set on the fidelity. Both these definitions apply equally well to focused and defocused images, thus enabling us to study how resolution changes with defocusing. Since incoherent imaging is only sensitive to the intensity transmitted by the sample, our study will now be restricted to non-diffusive objects and will disregard phase information, namely, we shall consider field transmission profiles with arg(A) = 0 uniformly across the sample, so that A = |A| ≥ 0.

Fig. 2. Image fidelity in coherent and incoherent imaging. The colored areas (blue for coherent illumination, orange for incoherent illumination) highlight the regions in which an x-sized object placed at an axial coordinate z can be imaged with a fidelity larger than 95%, with z = 0 identifying the position of the object at focus. The black dashed line is the fidelity equivalent of the Rayleigh criterion, intended as the minimum object size that can be imaged faithfully. The orange dashed curve is the NA-dependent geometrical circle of confusion at 95% fidelity, obtained by evaluating the fidelity on the geometrical-optics approximation. The blue dashed line represents the curve of 95% fidelity obtained in the wave-optics regime (namely, by considering the free-space coherent propagation of the field) in the case of infinite NA of the imaging system. The imaging system is a 20× microscope with NA = 0.5, illuminated with monochromatic light with wavelength 500 nm. The object is a mask with a single transmissive detail with Gaussian shape of width x.
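Since the explicit expression of the fidelity functional is not reproduced here, the sketch below uses a simple stand-in of our own, an overlap of L2-normalized intensity profiles, which shares the key property of saturating to unity only for perfect imaging; the Gaussian object mirrors the single transmissive detail used for Fig. 2.

```python
import numpy as np

# Illustrative stand-in for the image-fidelity functional (not the paper's
# exact definition): the overlap of L2-normalized intensity profiles, which
# equals 1 if and only if the image is proportional to the reference object.
def fidelity(image: np.ndarray, obj: np.ndarray) -> float:
    """Overlap of normalized intensity profiles; 1 iff image matches obj."""
    i = image / np.linalg.norm(image)
    o = obj / np.linalg.norm(obj)
    return float(np.dot(i, o))

x = np.linspace(-5, 5, 501)
obj = np.exp(-x**2)              # Gaussian transmissive detail, as in Fig. 2
perfect = 3.0 * obj              # perfect image, up to an overall intensity scale
blurred = np.exp(-x**2 / 9)      # defocus-broadened rendition of the same detail
```

With this stand-in, `fidelity(perfect, obj)` is 1 regardless of the intensity scale, while the blurred rendition falls below the 95% threshold used throughout the paper.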

Resolution limits
The plot reported in Fig. 2 employs the fidelity to offer a quantitative interpretation of the coherent and incoherent z-scans reported in Fig. 1c). The colored areas in Fig. 2 highlight how far from the plane at focus (abscissas) an x-sized object (ordinates) can be placed while still producing an image with fidelity higher than 95%. The orange area refers to spatially incoherent illumination, whereas the blue area refers to the coherent case. The physical regimes leading to the dashed curves that delimit the high-fidelity regions associated with coherent and incoherent imaging offer a clear perspective on the physical mechanisms regulating the two image formation processes and enable quantifying the resolution versus DOF trade-off in the two cases. Such boundaries can be interpreted as resolution limit curves, giving the functional dependence of the resolution on the displacement of the sample from focus, at the threshold of the image fidelity above which an image is considered resolved. These curves are obtained from the analytical expression of the image fidelity, written in terms of the parameters on which the image depends, which in our case are: the size x of the features of the sample, the axial coordinate z where the sample is located, and the axial location f of the plane focused by the imaging system. The image fidelity associated with I(ρ) = I(ρ; z − f, x) is thus a two-variable function: F[I](z − f, x). Since the quality of the image, upon mechanical z-scanning, only depends on the relative distance between the object and the focused plane, we shall set for simplicity f = 0 and interpret z as the relative defocusing distance.
By studying the analytical expression of F[I](z, x), exact expressions of relevant image quantifiers can be extracted. For instance, F[I](0, x) gives the image fidelity in the plane at focus, with Rayleigh-limited resolution, as a function of the object size. By inversion, one obtains, both for coherent and incoherent imaging,

x_foc = c_foc(F) λ/NA,

where c_foc(F) is a coefficient depending on the threshold image fidelity, amounting to 0.157 for F = 0.95 (Fig. 2). Apart from the multiplying constant, which only depends on the arbitrary choice of a threshold on the fidelity, the equation corresponds to the well-known diffraction-limited resolution of focused imaging systems (dashed black line in Fig. 2), as determined by the Airy disk. Therefore, the analysis in terms of fidelity recovers the well-known fact that the optical performance of focused coherent and incoherent systems is analogous.
The differences between the two illumination strategies emerge when investigating defocused images in two different physical regimes. The geometrical-optics regime is explored by considering the fidelity in the limit λ → 0. If this physical regime is investigated in the incoherent imaging case, the implicit curves F_geom[I_inc] = F, in the (z, x) plane, have an explicit expression, which, unsurprisingly, prescribes the well-known circle of confusion of geometrical optics:

x_geom = c_geom(F) NA |z|,

with c_geom(0.95) = 1.97. As shown in Fig. 2, the CoC-defined trend perfectly traces the boundary of the fidelity area. Hence, the fidelity analysis confirms that wave optics has negligible effects on the optical performance of an incoherent system when the sample is moved away from perfect focus. If the same physical limit is explored in the case of coherent imaging, the obtained analytical expression does not describe any physically relevant situation and has no counterpart in the shape of the fidelity region. Instead, interesting results are obtained, in the coherent case, by investigating the opposite regime, namely, by neglecting geometrical effects. This is done by letting the radius of the limiting aperture R → ∞, so as to completely ignore the influence of the imaging device. This condition is equivalent to considering an imaging system where the image formation process is solely governed by diffraction, from the object plane up to the plane at focus; in fact, in Eq. (2), P(ρ) → D_z(ρ), indicating that no CoC exists in this case. Upon setting a threshold F on the fidelity of a coherent system with infinite NA, we obtain the resolution limit curves

x_diff = c_diff(F) √(λ|z|),

with c_diff(0.95) = 0.396. Rather surprisingly, this square-root scaling of the resolution with defocusing perfectly reproduces the boundary of coherent imaging out of the plane at focus, as reported by the blue dashed line in Fig. 2.
As in the previous case, exploring the same physical limit in the case of incoherent illumination yields no interesting conclusion. The optical performance of coherent and incoherent imaging is thus defined by two entirely different processes: the geometrical CoC (hence, the system NA) is basically the only factor limiting the resolution of defocused incoherent imaging; on the contrary, the aperture size and the optical design of the imaging system play no role in coherent imaging, where the sole cause of image degradation is diffraction in the free-space propagation from the object to the observation plane. The different physical phenomena governing image degradation (geometrical optics, as opposed to diffraction and wave propagation) have surprising effects on the image quality. Resolution and DOF of coherent imaging are found to be independent of the NA of the imaging system, and their trade-off is greatly relieved with respect to incoherent imaging, as defined by the square-root law (dashed blue line) compared to the linear dependence (dashed orange line).
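The three limit curves discussed above can be written down directly with the quoted 95%-fidelity coefficients; the helper below is our own illustrative sketch, using the Fig. 2 parameters (500 nm light, NA = 0.5).

```python
import numpy as np

# The three 95%-fidelity resolution-limit curves, with the coefficients quoted
# in the text (c_foc = 0.157, c_geom = 1.97, c_diff = 0.396). Resolution x is
# in meters, defocus z in meters. Illustrative sketch, not the paper's code.
wl, NA = 500e-9, 0.5

def x_focus(c=0.157):
    """Rayleigh-like focused resolution: x = c * wl / NA, defocus-independent."""
    return c * wl / NA

def x_incoherent(z, c=1.97):
    """Incoherent defocused limit: geometric circle of confusion, linear in |z|."""
    return c * NA * np.abs(z)

def x_coherent(z, c=0.396):
    """Coherent defocused limit: diffraction only, square-root in |z|, no NA."""
    return c * np.sqrt(wl * np.abs(z))

# Crossover defocus beyond which the coherent curve lies below the incoherent
# one: c_geom * NA * z = c_diff * sqrt(wl * z)  =>  z* = (c_diff/(c_geom*NA))^2 * wl
z_star = (0.396 / (1.97 * NA))**2 * wl
```

For these parameters the crossover `z_star` is sub-wavelength, so the coherent curve resolves finer detail than the incoherent CoC at essentially any appreciable defocus.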

Coherent 3D imaging with incoherent sectioning capability
The newly discovered properties of direct coherent images can be integrated with the strong axial localization capability of incoherent imaging to achieve scanning-free 3D wide-field imaging of absorbing samples with enhanced volumetric resolution. In fact, spatially coherent illumination enables a (NA-independent) square-root scaling of the transverse resolution, thus offering high lateral resolution over a long DOF; at the same time, the axial sectioning typical of spatially incoherent illumination entails, within the wide DOF accessed through coherence, a precise sectioning capability, as enabled by large-NA tomographic systems [33]. We should clarify that, in this context, the concepts of DOF and axial resolution are rather distinct: while the DOF represents the axial length of the volume where object details of a given size can be faithfully imaged, the axial resolution represents the axial sectioning capability, that is, how finely transverse planes within the DOF can be isolated along the axis. The underlying principle for achieving high-resolution 3D imaging within a direct wide-field coherent system is similar to LF imaging: in both cases, information about the propagation direction of light enables scanning-free volumetric reconstruction. However, while LF imaging acquires the required directional information by means of the microlens array, our proposal does so through spatially coherent illumination of the sample from different locations. Our approach will be shown to come with two major advantages: a much larger DOF, as defined by the square-root (as opposed to linear) scaling of the resolution with defocusing, and Rayleigh-limited images at focus.
In the proposed scheme, 3D information about the sample is acquired by accessing the 4D function I(ρ₀, ρ), where ρ₀ is the transverse coordinate of a point-like emitter enabling spatially coherent illumination of the sample, and ρ is the transverse coordinate of the collected image. Sampling of the complete 4D function is performed by sequentially sweeping an illumination plane made of point-like emitters centered in ρ₀, and collecting, for each coordinate ρ₀, the resulting intensity

I(ρ₀, ρ) = |((L_{ρ₀} A) ∗ P)(ρ)|²,   (9)

where A and P are the same object transmittance and coherent PSF as in Eq. (1), and L_{ρ₀} is the Green's function propagating the field from the point-like source centered in ρ₀ to the sample plane. As we shall discuss in the "Results" section, the wide freedom in the choice of L_{ρ₀} (hence, of the illumination scheme) enables great customization of the optical performance of the proposed 3D imaging system. Specifically, in order to encode 3D information into I(ρ₀, ρ), illuminating the sample from many different angles is not necessary. In previous works (see, e.g., Ref. [23]), in fact, L_{ρ₀} has always been arranged in such a way as to have an illumination distance between a source at coordinate ρ₀ and the sample such that the latter can be considered to be illuminated by tilted plane waves, as conventionally done in tomographic systems. However, as we shall discuss in the "Results" section, our complete formal analysis, and the consequent understanding of coherent imaging, demonstrate that neither angular illumination nor collimated light is in any way necessary to encode 3D information into I(ρ₀, ρ). Most importantly, understanding the underlying physics of coherent and incoherent imaging is the key to achieving scanning-free direct 3D imaging, with no need for time-consuming phase-retrieval algorithms. The intensity distribution described by Eq. (9) is easily recognized as a coherent image, as in Eq. (1), with the only difference that the object is now replaced by the expression L_{ρ₀} A, emphasizing the role of the illumination scheme and the wide freedom in its design. The acquired 4D intensity can thus be expected to have most of the features we have attributed to coherent images, such as the decoupling of the lateral resolution and DOF. However, the large DOF entails a lack of axial localization: thick 3D samples are imaged with high transverse resolution, but without any axial localization. To address the issue, we shall integrate the proposed technique with the properties of incoherent imaging, in which the tomographic properties are defined by the angular acceptance of the lens. The analogy with incoherent systems can easily be understood by considering the image resulting from the sum of the coherent images obtained from different illumination coordinates, namely,

∫ I(ρ₀, ρ) dρ₀ ∝ (|A|² ∗ |P|²)(ρ).   (11)

This equality can be analytically verified by plugging the expression of the plane-wave illumination into Eq. (9) and integrating the result; however, any other illumination scheme presented in this work yields the same result. In fact, from a more intuitive standpoint, we can recognize that integrating over the entire illumination plane is equivalent to shining uniform incoherent light onto the sample, which is exactly the typically sought-after experimental condition of uniform illumination in conventional systems (e.g., Köhler illumination). It is thus not surprising that the integration must yield exactly the same results as conventional (incoherent) imaging: Rayleigh-limited resolution at focus, CoC blurring out of focus, and dependence on the NA, with the only difference that the uniform illumination is achieved in post-processing. However, the mere integration reported in Eq. (11) is a rather poor way of employing the much larger amount of information contained within I(ρ₀, ρ): due to the shallow DOF of incoherent imaging, the features of the sample are rapidly lost as it is moved away from perfect focus, more so for large NAs. On the contrary, a Radon transformation of I(ρ₀, ρ), here expressed as a line integral,

R_θ(x′) = ∫_{ℓ(x′)} I(x₀, x) dℓ,   (12)

enables localizing the object within the much larger DOF characterizing coherent imaging. In Eq. (12), ℓ(x′) are two lines of equation sin θ(z) x₀ + cos θ(z) x = x′, defined in the spaces (x₀, x) and (y₀, y). In fact, the Radon transform R_θ(x′) isolates a specific axial coordinate z by integrating over the whole dataset I(ρ₀, ρ) at a z-dependent angle θ(z); this allows one to perform, in post-processing, a software z-scan similar to the hardware scan done by manually moving the focus of a conventional (incoherent imaging) device. The relation between the integration angle and the reconstructed axial plane can be understood by considering that the object point at coordinate x′, once illuminated by the source lit at transverse coordinate x₀, is mapped onto the detector coordinate x, which depends on both x₀ and x′. As anticipated, the geometrical locus of the points of the sensor x corresponding to the same object coordinate x′ is a line in the (x₀, x) space, whose equation involves two coefficients α and β depending on both the defocusing distance z and the particular illumination scheme; as in conventional LF imaging, they are obtained through ray tracing in a geometrical-optics context. The same holds, with the same coefficients α and β, for the other two coordinates (y₀, y). Therefore, for an object imaged at an axial displacement z from the focused plane, the most accurate reconstructed image is the one obtained by performing the integration in Eq. (12) along lines with a tilting θ = arctan(−α/β). As we shall discuss in the "Results" section, the 3D performance of the proposed coherent 3D imaging technique can be characterized in terms of the image fidelity F[R_θ]. However, volumetric reconstruction shall require the image fidelity to be evaluated over a three-parameter space: since the focus of the system is fixed at a given coordinate, both the relative position z of the x-sized object and the reconstruction coordinate can be moved independently with respect to the plane at focus.
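On discrete data, the refocusing integral of Eq. (12) reduces to a shift-and-sum along tilted lines. The toy below is our own construction: a hypothetical linear source-to-sensor mapping stands in for the ray-traced coefficients α and β, and shows how integrating at the matched tilt localizes a detail, while the untilted sum plays the role of the blurred incoherent image of Eq. (11).

```python
import numpy as np

# Toy shift-and-sum refocusing over a discretized I(x0, x) dataset: summing
# along tilted lines in the (x0, x) plane selects one axial plane, as in
# Eq. (12). The per-plane tilt ("slope") stands in for the ray-traced
# coefficients; this is an illustration of the principle, not the paper's code.
def refocus(I, slope):
    """Sum the rows I[i0, :], each shifted in proportion to the source index i0."""
    n0, nx = I.shape
    out = np.zeros(nx)
    for i0 in range(n0):
        shift = int(round(slope * (i0 - n0 // 2)))
        out += np.roll(I[i0], -shift)
    return out / n0

# Synthetic dataset: a point-like detail whose sensor position walks linearly
# with the illumination coordinate (2 pixels per source step, a made-up value).
n0, nx, true_slope = 21, 201, 2
I = np.zeros((n0, nx))
for i0 in range(n0):
    I[i0, nx // 2 + true_slope * (i0 - n0 // 2)] = 1.0

sharp = refocus(I, true_slope)   # integration tilt matched to the object plane
blurred = refocus(I, 0)          # untilted sum: the incoherent image of Eq. (11)
```

With the matched tilt, all contributions stack onto a single pixel; with no tilt, the same total intensity is smeared across the sensor, mimicking the CoC blur of incoherent imaging.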

Results
The introduction of the image fidelity has enabled us to directly compare the performance of coherent and incoherent systems and to discover that, in coherent imaging, the degradation of the image resolution with defocusing is not related to geometrical blurring mechanisms such as the CoC. On the contrary, the degradation of the image quality is governed almost entirely by diffraction from the object plane to the imaged plane; this leads to a square-root scaling √|z| of the resolution with the distance from focus z, as opposed to the linear scaling characterizing the CoC. In coherent imaging, the axial range in which the object is resolved thus scales quadratically with the object size, rather than linearly. Therefore, in addition to being independent of the NA of the imaging device, coherent imaging yields a quantitative DOF advantage over conventional (incoherent) imaging.
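The quadratic-versus-linear scaling can be made concrete with a back-of-the-envelope comparison (unit prefactors assumed for illustration; these are not the paper's exact constants): if coherent resolution degrades as a ~ √(λ|z|), a feature of size a stays resolved up to z_max ~ a²/λ, whereas a CoC growing as |z|·NA gives z_max ~ a/NA.

```python
wavelength = 0.5e-6   # 500 nm (green light), in meters
NA = 0.5

def dof_coherent(a):
    # invert a ~ sqrt(wavelength * |z|):  z_max ~ a**2 / wavelength
    return a**2 / wavelength

def dof_incoherent(a):
    # invert the CoC law a ~ |z| * NA:  z_max ~ a / NA
    return a / NA

for a in (1e-6, 2e-6, 4e-6):   # 1, 2, 4 um features
    print(f"a = {a*1e6:.0f} um: coherent ~{dof_coherent(a)*1e6:.0f} um, "
          f"incoherent ~{dof_incoherent(a)*1e6:.0f} um")
```

Doubling the feature size doubles the incoherent DOF but quadruples the coherent one, which is the quadratic advantage stated above.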
In Fig. 3, we report the experimental demonstration of this prediction and show that coherent imaging indeed follows the predicted NA-independent square-root trend. Although it is a well-known fact in microscopy, and in bright-field imaging in general, that source collimation implies DOF augmentation [30], we should highlight that the square-root trend we experimentally demonstrate is the result of an entirely different physical phenomenon and cannot be understood in terms of source collimation, but only in terms of its spatial coherence. The conventional (incoherent imaging) explanation of the DOF improvement through source collimation, in fact, is related to the divergence of the illuminating beam becoming smaller than the acceptance angle of the optics, so that the optical properties of the system are no longer dictated by the NA of the imaging device, but rather by the effective NA defined by the illumination itself. However, this effect is profoundly different from the DOF extension enabled by spatial coherence, where collimation is by no means a requirement. The presented DOF advantage, in fact, is maintained even with a quasi-infinite illumination NA, as one could obtain by bringing the illumination stage in extreme proximity to the sample and employing smaller sources, such as quantum dots and single-molecule LEDs. The incoherent effect of DOF improvement, instead, exists in the regime in which the illumination collimation defines the effective NA of the system, but the coherence area on the sample is not wide enough for the system to behave in a coherent manner. To gain more insight into the role played by the NA of the illumination system, we shall now study the transition from incoherent to coherent imaging and consider the general case of a finite-sized source, matching the experimental conditions of Fig. 3. In Fig.
4, we plot the 95%-fidelity curve (solid black line) of the image of a transmissive mask (an a-sized slit) as a function of its distance from a σ-sized incoherent emitter; strictly speaking, z is the distance of the object from the plane at focus, but its variation naturally changes the object-to-source distance as well. When the object is so close to the source that the spatial coherence acquired through propagation towards the sample is smaller than a, the resolution versus DOF trade-off is determined by the numerical aperture of the illumination, as expected in a conventional system; this is demonstrated by the overlap of the evaluated fidelity (black line) with the one obtained in the geometrical optics approximation (red dashed line), for large values of z. In this regime, imaging is thus incoherent. However, as the object is moved farther away from the source, the coherence area on the object becomes proportionally larger, until coherence effects become dominant: the fidelity trend (black line) detaches from the geometrical optics prediction and overlaps with the coherent trend (dashed blue line), which is completely NA-independent. The yellow region highlights the transition from incoherent to coherent imaging, and shows that coherent effects enter into play, as predicted, when the coherence area becomes comparable to the details one wishes to resolve. Although this result is quite intuitive, its implications are very relevant. First, it demonstrates that the maximum DOF that direct imaging can achieve is ultimately limited, at least in the realm of classical optics, by the spatial coherence of the illumination on the sample; this is in contrast with the approximately infinite DOF one might incorrectly expect by interpreting the case of perfectly collimated illumination in terms of conventional incoherent imaging, along the lines of the discussion above. Second, it leads to a better appreciation of the implications of the approach employed for obtaining the results of Fig.
3, namely, the reduction of the illumination stage area, showing, on the one hand, the effect of the NA on the CoC of incoherent imaging (blue and red points) and, on the other hand, the transition from incoherent to coherent imaging (orange points). The trend of the blue and red points indicates that decreasing the illumination NA by a factor of 2 increases the DOF by the same factor, as expected for incoherent imaging; however, in both cases, the illumination area (i.e., the radius of the area of lit LEDs) is not yet small enough for the coherence area to be comparable with the desired resolution. By further shrinking the illumination, a point is reached where the linear trend is lost and the NA-independent square-root trend emerges, as a consequence of the larger coherence acquired by the illumination. The transition is thus explained in terms of spatial coherence.
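The onset of the coherent regime can be estimated with the standard van Cittert–Zernike argument, on which the discussion above implicitly relies: the transverse coherence length on the sample grows with the source-to-object distance and shrinks with the source size. A sketch with an order-of-magnitude prefactor (our choice; the 1/2 and 2 thresholds are borrowed from the transition band of Fig. 4):

```python
import math

def coherence_length(wavelength, source_size, distance):
    """van Cittert-Zernike estimate of the transverse coherence length
    acquired on the sample (prefactor conventions vary; order of magnitude)."""
    return wavelength * distance / (math.pi * source_size)

def imaging_regime(feature_size, lc):
    # coherent effects dominate once the coherence area covers the features
    if lc > 2.0 * feature_size:
        return "coherent"
    if lc < 0.5 * feature_size:
        return "incoherent"
    return "transition (partial coherence)"

# numbers matching Fig. 4: green light, sigma = 0.3 mm source, 11 cm away
lc = coherence_length(0.5e-6, 0.3e-3, 0.11)   # roughly 60 um
```

With these numbers, 10 µm details are imaged coherently, while 0.5 mm details remain in the incoherent regime.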
Let us now show why localized coherent illumination is so convenient for performing 3D scanning-free imaging. As detailed in the "Methods" section, 3D information about the sample is gathered by measuring the 4D function I(ρ₀, ρ), obtained by sequentially illuminating the object from different point sources on the illumination plane. The image dataset I(ρ₀, ρ) can then be Radon-transformed to isolate axial planes of the sample, as prescribed by Eq. (12). In particular, an extremely interesting result, reported in Eq. (14), is obtained by applying the Radon reconstruction to the plane z on which an object with transmission function A is placed: there, P₀ is the coherent PSF of the imaging system in its focus, and D_δ is propagation in vacuum by a distance δ, as in Eq. (2). The properties of the reconstructed image are easily understood by noticing that Eq. (14) is exactly the expression of a coherent image (see Eqs. (1) and (2)), as observed by the same imaging device, but affected by an equivalent defocusing δ(z). More specifically, coherent illumination gives rise to images having a resolution at focus defined by the Rayleigh limit and an out-of-focus resolution scaling with the square root of the defocusing, √z, whereas the images reconstructed by our method have the same resolution at focus, but an out-of-focus resolution scaling with √δ(z). As reported in Fig.
5a), different illumination schemes are thus possible, each one characterized by a specific scaling of the resolution with the defocusing; the optical performance of the device can thus be greatly enlarged, offering a wide
flexibility in view of a variety of different applications. In fact, the plots indicate that the scaling of the lateral resolution as a function of the defocusing is a pure square root only in the case of plane-wave illumination, namely, when z₀ → ∞ in the first scheme, or when the middle scheme is adopted. In the other cases, the scaling remains defined by a square-root law, but the actual dependence also involves the illumination distance (z₀ in the first scheme, and the analogous distance in the third one). Fig. 5b) demonstrates that the great (NA-independent) DOF extension typical of coherent imaging is integrated with very accurate (NA-dependent) axial localization, due to the incoherent imaging properties brought in by the reconstruction process. The reported software z-scan shows that a double-slit object placed outside of the native DOF of the microscope can be localized extremely well around the plane where the most accurate reconstruction occurs. Also, at a glance, one immediately recognizes that the depth of the reconstruction is not what one would expect from coherent imaging, but rather exactly the native (NA-defined) incoherent DOF of the device. As shown on the right-hand side of the axial scan, however, the reconstructed image is exactly the same image that coherent imaging would give, with an object displaced by an equivalent defocusing δ(z).
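The equivalent-defocusing picture lends itself to a simple numerical check: the reconstruction behaves like a coherent image of the object propagated in vacuum by δ(z). A minimal angular-spectrum sketch of such a free-space propagator (one-dimensional scalar field; names and sampling are our own illustrative choices):

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Angular-spectrum free-space propagation of a 1D complex field
    sampled with pitch dx; evanescent components decay for dz > 0."""
    fx = np.fft.fftfreq(field.size, d=dx)
    kz = 2 * np.pi * np.sqrt((1.0 / wavelength)**2 - fx**2 + 0j)
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))

# a 10 um slit in green light, pushed 100 um out of focus
dx, wavelength = 0.5e-6, 0.5e-6
x = (np.arange(1024) - 512) * dx
slit = (np.abs(x) < 5e-6).astype(complex)
blurred = propagate(slit, 100e-6, wavelength, dx)
```

The propagated intensity spreads over roughly √(λ·δ) ≈ 7 µm beyond the slit edges, which is the diffractive origin of the square-root resolution law discussed in the text.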
All the aforementioned properties are summarized by Fig. 5c). The blue and yellow areas identify the resolution performance of coherent and incoherent illumination, respectively, with their characteristic square-root and linear (CoC) trends. The colored "V"-shaped regions, instead, show the optical properties of the images reconstructed, through software z-scanning, by the proposed 3D imaging modality, and correspond to five different axial positions of the object. As expected from our findings, the depth of the reconstructions, which represents the axial resolution of the 3D imaging technique, coincides with the DOF of incoherent imaging, namely, it is the same one would obtain by focusing the equivalent incoherent imaging system (i.e., by lighting the whole array of small sources at once) on the correct object plane; the only difference with respect to the image obtained by mechanical z-scanning within an equivalent incoherent imaging system is in the minimum resolution, which in our approach lies on the square-root curve defined by coherent imaging. On the other hand, when the object is placed at focus (z = 0), the image reconstructed with our method is exactly the same as in incoherent imaging, both in its minimum Rayleigh-limited resolution and in its CoC-defined axial localization.
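The axial localization described above can be emulated in a few lines: scan the reconstruction coordinate and keep the plane where the image is sharpest. Here gradient energy is used as a cheap sharpness score, a stand-in of our own choosing for the fidelity-based analysis behind the V-shaped curves of Fig. 5c):

```python
import numpy as np

def blur(img, sigma, dx):
    """Gaussian blur standing in for the defocus-dependent degradation."""
    u = np.arange(-50, 51) * dx
    k = np.exp(-0.5 * (u / max(sigma, 1e-9)) ** 2)
    return np.convolve(img, k / k.sum(), mode="same")

def localize(stack, z_grid):
    """Return the reconstruction coordinate z_r at which the image is
    sharpest; the score peaks when the software z-scan hits the plane
    where the object actually sits."""
    scores = [float(np.sum(np.diff(img) ** 2)) for img in stack]
    return z_grid[int(np.argmax(scores))]
```

Feeding in a synthetic stack whose blur grows with the distance from the true object plane, the maximum of the score singles out that plane, mimicking the localization shown in Fig. 5b).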
We shall now employ these results to experimentally demonstrate the high-resolution volumetric multicolor capability of the proposed technique (Fig. 6). Coherent illumination from localized emitters is obtained through an array of commercial RGB LEDs, placed far enough from the sample plane for the coherence area on the sample to be comparable with the details of interest. The sample is a 10 μm-thick mouse brain section, where cell nuclei and cytoplasm have been labeled by hematoxylin and eosin, respectively. The acquired 3D information enables a clear compensation of the sub-optimal placement of the microscope slide, whose part closest to focus is 10 μm away from the focused plane, and which is mounted with a tilt of about 10 degrees. Unlike the color-independent CoC, the square-root scaling of the resolution of coherent imaging has only a weak dependence on the wavelength (∝ √λ), thus giving rise to images characterized by negligible chromatic aberration.
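The tilt compensation of Fig. 6 can be sketched as an "all-in-focus" composition over the software z-stack: for every patch of the field of view, keep the reconstruction plane where that patch is sharpest. Patch size and the gradient-energy score below are our illustrative choices, not the paper's processing pipeline:

```python
import numpy as np

def all_in_focus(stack, patch=32):
    """Compose one focused image from a software z-stack by selecting,
    patch by patch, the plane with the highest local gradient energy;
    on a tilted sample each patch picks a different reconstruction depth."""
    stack = np.asarray(stack, dtype=float)   # shape (n_z, H, W)
    n_z, H, W = stack.shape
    out = np.empty((H, W))
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            tiles = stack[:, y:y + patch, x:x + patch]
            score = (np.diff(tiles, axis=1) ** 2).reshape(n_z, -1).sum(axis=1)
            out[y:y + patch, x:x + patch] = tiles[int(np.argmax(score))]
    return out
```

On a toy two-plane stack in which the left and right halves of the field are sharp in different planes, the routine stitches the two sharp halves into a single focused image.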

Discussion
We have found that the lateral resolution and DOF of defocused images obtained through spatially coherent illumination are decoupled from the numerical aperture of the imaging system. Such independence is particularly convenient for designing 3D imaging devices exploiting the transverse coherence of light: the resulting overall DOF of the technique becomes independent of the optical components used for image acquisition, and is instead entirely defined by the coherent illumination scheme, as shown in Fig. 5. Despite being based on the same image reconstruction principle as LF imaging, our technique features a much more convenient resolution scaling, with the additional benefit of retaining Rayleigh-limited resolution at focus. In fact, compared to conventional LF devices, which achieve DOF extension at the expense of the lateral resolution, 3D imaging systems based on spatially coherent illumination have a DOF that scales quadratically with the desired resolution, thus always yielding an advantage over the linear scaling typical of LF [34]. Furthermore, since a large NA has no effect on the resolution and DOF of the system, large apertures can be used to obtain optimal sectioning capability upon refocusing, enabling a strong suppression of the background from neighbouring planes, as in high-NA tomographic systems.
Since our proposal only requires transverse spatial coherence, such systems work with temporally incoherent sources, which induce negligible to modest radiation damage, as required by in vivo biological applications. Although image reconstruction through the Radon transform does not recover the phase content, as opposed to computational techniques based on coherence [19,21,23], it carries the enormous advantage of being performable in real time with current GPU architectures and FPGAs [35], or through the use of holographic screens [36]. The proposed 3D wide-field imaging technique can thus be used both for direct and for real-time imaging.
The extreme simplicity and low cost of the optical design, also compared to LF imaging, have a high potential to open up the possibility of employing 3D imaging in new scenarios, in low-budget applications, as well as in public healthcare in developing countries.

Fig. 1 .
Fig. 1. Comparison of resolution versus DOF in incoherent and coherent imaging. Panel a). Incoherent imaging setup (top) and corresponding focused and defocused images (bottom). In the setup, a transmissive sample (a double-slit mask) is illuminated by the incoherent wavefronts coming from an extended source and is imaged by a conventional imaging system. The focused image (bottom left) is obtained by placing the sample at the working distance of the imaging system; the defocused images (bottom center and right) are obtained by z-scanning the sample. Panel b). Coherent imaging setup (top) and corresponding focused and defocused images (bottom). In the setup, the same transmissive sample as in panel a) is illuminated by spatially coherent light from a small incoherent source, and is imaged by the same conventional imaging system. The focused (bottom left) and defocused (bottom center and right) coherent images are obtained by placing the sample at exactly the same positions as in panel a). Panel c). Axial section of the 3D cube obtained by z-scanning the sample in the setups of panels (a) and (b), namely, in the case of incoherent (left panel) and coherent (right panel) illumination, demonstrating the much larger DOF of coherent imaging.

Fig. 3 .
Fig. 3. Experimental comparison of resolution and DOF in incoherent and coherent imaging. Comparison of the resolution versus DOF trade-off in two cases of incoherent imaging (red and blue points), corresponding to two different illumination NAs, and in the case of coherent imaging (orange points), as obtained by single-LED illumination. For both coherent and incoherent imaging, an image is considered to be resolved when the corresponding fidelity is at least 75%. The areas of 75% fidelity (colored regions) are delimited by the smallest slit width imaged with fidelity over 75% (circles) and the largest slit width imaged with fidelity lower than 75% (bullets). All data are taken with a conventional 20× microscope, whose details are reported in the Supplementary document. The analysis is based on triple-slit masks from a USAF test target, having slit width a and center-to-center slit distance d = 2a. The four images, reported as a quality reference for the targeted fidelity, are taken at the same axial distance z = 250 μm and correspond to the four different reported values of the slit width.

Fig. 4 .
Fig. 4. Effect of partial coherence in the transition from coherent to incoherent imaging. Log-log plot of the 95%-fidelity curve (black continuous line) evaluated in the case of partial coherence due to the finite transverse size of the light source. The dashed red and blue curves are the 95%-fidelities evaluated, respectively, in the geometrical optics approximation and in wave optics with infinite NA. All curves are obtained by considering a microscope with NA = 0.5, illuminated by a green light emitter placed at 11 cm from its plane at focus and having a Gaussian intensity profile of width σ = 0.3 mm; the object is a Gaussian slit of width a. The two dashed orange lines identify, for each defocusing z, the values of the slit width a corresponding to 1/2 (lower curve) and 2 (upper curve) times the coherence area; the orange colored area identifies the values of a for which partial coherence enters into play, giving rise to the transition from incoherent to coherent imaging.

Fig. 5 .
Fig. 5. 3D imaging capability of a high-NA microscope exploiting spatial coherence. Panel a). Schematic representation of three possible illumination schemes for performing 3D imaging through coherent illumination. The plots on the right-hand side report the corresponding scaling of the resolution as a function of the defocusing z, as prescribed by the coherent scaling √δ(z) (blue curves); the different scalings obtained within a given scheme for varying values of the object-to-illumination distance (z₀ in the first case, and the analogous distance in the third) are also reported (dashed blue lines). Panel b). Axial section of a stack of reconstructed images of a double-slit mask placed at z = 1 mm (left), showing the capability of localizing the sample. The two images reconstructed at the focused distance z_r = 0 and at the correct sample location z_r = z = 1 mm are reported in the top right panels, and compared with the corresponding two reference images reported in the bottom right panels: the defocused images collected by the same imaging device in the cases of incoherent (left) and coherent (right) illumination (same images reported in Fig. 1). The illumination scheme adopted for these simulations is the upper one of panel a), with an illumination distance z₀ = 110 mm. Panel c). Characterization of the resolution as a function of the defocusing for five different object positions z, showing the sectioning capability and the overall resolution versus DOF performance of the proposed approach. The yellow and blue areas are a reference for the performance of incoherent and coherent imaging, respectively, the latter representing the maximum achievable DOF of the proposed technique. The five "V"-shaped areas show the sectioning capability enabled by the software z-scanning and characterize the performance of the reconstructions for the five different object placements; the software z-scan for a focused object (yellow) is shown to give the same resolution versus DOF performance as incoherent imaging.

Fig. 6 .
Fig. 6. Multicolor 3D reconstruction of a histological mouse brain section exploiting spatially coherent illumination. The resolution versus DOF advantage granted by spatially coherent illumination is used to reconstruct the true-color wide-field image of a histological mouse brain section marked at two wavelengths. Due to both a slight tilting of the sample holder and an axial misplacement, conventional incoherent illumination yields an unfocused image. By exploiting sequential coherent illumination from an array of RGB LEDs, volumetric information is obtained and employed, through software axial scans, to reconstruct the different portions of the sample at different z coordinates. The 3D information is then used to compensate for the tilt of about 10 degrees and obtain a single wide-field image at focus. The optical microscope is a conventional 20×, 0.75 NA wide-field device.