Keywords:

  • curvature sensor;
  • dynamic range;
  • pyramid sensor;
  • Shack-Hartmann sensor;
  • shearing interferometry

Abstract

During the past decade, there has been a remarkable expansion in the application of wavefront-related technologies to the human eye. The ability to measure the wavefront aberrations (WA) of an individual eye has greatly improved our understanding of the optical properties of the human eye. The development of wavefront sensors has in turn generated an intensive effort to revise methods of correcting vision. Wavefront sensors have offered the promise of a new generation of visual correction methods that can correct high-order aberrations beyond defocus and astigmatism, that is, wavefront-guided excimer laser platforms and adaptive optics, thus improving visual performance or fundus imaging at unprecedented spatial resolution. On the other hand, current wavefront technologies suffer from some inaccuracies that may limit wider adoption in the clinical environment. Several innovative approaches have been developed to overcome the limits of standard wavefront sensing techniques. Curvature sensing, pyramid sensing and interferometry currently represent the most reliable methods for revising and improving the measurement and reconstruction of the WA of human eyes. This review describes the advantages and disadvantages of current wavefront sensing technologies and summarises recent knowledge on innovative methods for sensing the WA of human eyes. In the near future, we expect to benefit from these new wavefront sensor elements, including their application in the personalised correction of optical aberrations and adaptive optics imaging of the eye.

Optical aberrations of the eye include low-order and high-order aberrations. These aberrations degrade visual performance and blur the retinal image. The lower-order aberrations, defocus and astigmatism, are widely known and corrected routinely in clinical practice. The presence of higher-order ocular aberrations, beyond defocus and astigmatism, has been known to researchers since the 19th century, but only in the 1990s were wavefront sensors developed that allow routine estimation of these ocular aberrations.1,2

During the past decade, the ability to measure the monochromatic aberrations of the eye in a clinical environment has made large population studies possible, allowing us to describe more accurately the image-forming properties of the eye. This has resulted in a new approach to defining and reporting the optical aberrations of human eyes.3,4 The imperfections in the optics of the eye are now measured and expressed as wave aberration errors of the eye. Wave aberrations define how the phase of light is affected as it passes through the eye's optical system and are usually described mathematically by a series of polynomials, that is, the Zernike polynomials.4 Several metrics derived from the WA are routinely used to describe the optical quality of the eye accurately,5,6 such as the point spread function (PSF) or the modulation transfer function (MTF). Moreover, advances in the knowledge and measurement of the eye's aberrations have generated intensive efforts to improve retinal imaging and vision on an individual basis, through the development of adaptive optics (AO) systems and wavefront-guided excimer laser platforms.7
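
The link between a wave aberration map and image-quality metrics such as the PSF can be made concrete with a short numerical sketch: the PSF is the squared magnitude of the Fourier transform of the generalised pupil function built from the wavefront. The grid size, pupil sampling and wavelength below are illustrative assumptions, not values from any particular instrument.

```python
import numpy as np

def psf_from_wavefront(W_um, pupil, wavelength_um=0.55, pad=4):
    """PSF from a wave aberration map.

    W_um  : wavefront error in micrometres on a square grid
    pupil : boolean aperture mask of the same shape
    The PSF is |FFT(P * exp(i*2*pi*W/lambda))|^2, normalised to unit sum.
    """
    phase = 2 * np.pi * W_um / wavelength_um
    field = pupil * np.exp(1j * phase)
    n = W_um.shape[0]
    padded = np.zeros((pad * n, pad * n), dtype=complex)
    padded[:n, :n] = field                 # zero-padding for finer PSF sampling
    psf = np.abs(np.fft.fftshift(np.fft.fft2(padded))) ** 2
    return psf / psf.sum()

# Example: an aberration-free (diffraction-limited) eye over a circular pupil
n = 64
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
pupil = (x**2 + y**2) <= 1.0
psf = psf_from_wavefront(np.zeros((n, n)), pupil)
```

With zero aberration the energy is normalised and the PSF peaks on-axis; feeding in a non-zero Zernike map instead would spread that peak, which is exactly what metrics such as the Strehl ratio quantify.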

To date, several wavefront sensing techniques have been developed for the measurement of the wavefront error in human eyes. In this review, emphasis is given to principles rather than to details of individual instruments, most of which can be found in the specialised literature.8,9 First, the current standard techniques of wavefront sensing are described, principally the most popular methods. Subsequently, innovative methods for sensing the WA of human eyes are discussed, with details on the curvature sensor, the pyramid sensor and interferometry, which hold the promise of being the most valuable techniques for measuring the WA of human eyes in the near future.

STANDARD METHODS FOR SENSING THE WAVE ABERRATIONS OF HUMAN EYES

A variety of psychophysical and objective techniques are currently available for obtaining a comprehensive assessment of the eye's optical aberrations. Most wavefront sensors are based on the same principle: indirect measurement of local wavefront slopes, followed by reconstruction of the complete wavefront by integrating these slopes.

In general, the apparatus includes a source for generating the beam that produces the wavefront exiting the eye, and an imaging device for receiving the wavefront to determine aberrations. In the objective alignment method, alignment is performed by the clinician and can be achieved by centring the optical axis of the measurement system on the subject's pupil or on the corneal reflex; in both cases the WA is calculated with respect to the pupil centre. In the subjective method, the patient adjusts the position of his/her pupil until two alignment fixation points at different optical distances, co-axial with the optical axis of the measurement device, are superimposed.

Currently, the most popular systems to measure the WA of the eye use an objective approach and are represented by the Shack-Hartmann and the laser ray-tracing sensors.

The Shack-Hartmann (S-H) method10 is the most widely employed for measuring the optical quality of the eye and is also widely used in current AO systems for vision science.11 The device was introduced into ophthalmology in 199412 and uses a grid of micro-lenslets, each of which independently observes a laser spot projected onto the retina at a unique angular location. The resulting light, aberrated by the eye's optics, is focused as an array of spots onto a charge-coupled device (CCD) detection array. The distance of each spot from its ideal position is measured and related to the local distortions in the pupil due to the optics of the eye.

The principle of operation of the Shack-Hartmann aberrometer is schematised in Figure 1. The Shack-Hartmann system employs an array of small apertures, each corresponding to the face of a tiny lenslet that focuses the emerging pencil of rays onto an image sensor. The micro-lenslet array subdivides the reflected wavefront of light emerging from the eye into a large number of smaller wavefronts, each of which is focused to a small spot on the sensor. The spatial displacement of each spot relative to the optical axis (x, y) of the corresponding lenslet is a direct measure of the local slope of the incident wavefront as it passes through the entrance aperture of that lenslet. Integration of these slope measurements by subsequent computer processing of the captured image reveals the overall shape of the aberrated wavefront.

Figure 1. Principle of the Shack-Hartmann (S-H) aberrometer. The S-H sensor is optically conjugate to the eye's entrance pupil. The light exiting the eye (aberrated wavefront) is focused on the CCD camera array. The S-H sensor subdivides the wavefront into a few hundred small beams, which are focused onto the CCD camera by an array of microlenses. The displacement of each image relative to the grid of optical axes is determined by the local slope of the wavefront in the x- and y-directions (Δx and Δy) at the face of the corresponding lenslet. Integration of these slope measurements reveals the shape of the aberrated wavefront according to the expression Δs = Fθ, where Δs is the spot displacement, θ is the wavefront slope and F is the focal length of the lenslet. The measurement accuracy of the S-H sensor is strictly related to the measurement precision of Δs.
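
The relation Δs = Fθ can be turned into a minimal reconstruction sketch covering the chain that every S-H processor performs: spot displacement → local slope → least-squares fit of the wavefront. The lenslet pitch, focal length and pure-tilt test wavefront below are hypothetical values chosen for illustration.

```python
import numpy as np

# Hypothetical Shack-Hartmann geometry (illustrative values only)
F = 5.0e-3                                   # lenslet focal length [m]
d = 0.2e-3                                   # lenslet pitch [m]
n = 8                                        # lenslets per side
xc = (np.arange(n) - n / 2 + 0.5) * d        # lenslet centre coordinates
X, Y = np.meshgrid(xc, xc)

# Test wavefront: a pure horizontal tilt W(x, y) = a*x, so dW/dx = a
a = 1.0e-4                                   # tilt slope [rad]

# Each spot is displaced by ds = F * (local slope)
ds_x = F * a * np.ones_like(X)
ds_y = np.zeros_like(Y)

# Step 1: recover the local slopes from the measured displacements
slope_x = ds_x / F
slope_y = ds_y / F

# Step 2: least-squares fit of a planar wavefront W = cx*x + cy*y,
# whose x- and y-derivatives are the constants cx and cy
G = np.vstack([
    np.column_stack([np.ones(X.size), np.zeros(X.size)]),   # d/dx equations
    np.column_stack([np.zeros(X.size), np.ones(X.size)]),   # d/dy equations
])
b = np.concatenate([slope_x.ravel(), slope_y.ravel()])
cx, cy = np.linalg.lstsq(G, b, rcond=None)[0]
```

Real instruments use Zernike derivatives rather than a plane as the fitting basis, but the least-squares structure of the slope-to-wavefront step is the same.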

During the past decade, this technique has been used in several areas of clinical research, for example, studies of myopia, dry eye, keratoconus, cataract, refractive surgery, contact lenses and intraocular lenses,13–16 giving initial results on the WA distribution in normal and diseased eye populations. Despite this extensive use, a major limitation of the S-H method is that analysis of S-H data does not consider the quality of the individual spots formed by the lenslet array. Only the displacement of the spots is needed to compute the local slope of the wavefront over each lenslet aperture; however, experience has shown that the quality of the spot images can vary greatly over the pupil of a human eye.17 The S-H sensor works well for measuring normal eyes, though it is recommended that measurements be repeated at least three times to assess the repeatability and accuracy of a single test.18 It reveals some inaccuracies, however, with highly aberrated eyes, for example, eyes suffering from keratoconus. If the wavefront shape within a single lenslet varies significantly, the spot pattern formed by that lenslet can be blurred and cause an error in the wavefront reconstruction, thus reducing the ‘dynamic range’ of the system. In the context of wavefront sensing, the term dynamic range represents the maximum wavefront slope that can be measured reliably. The dynamic range of the S-H device is further limited by the optical parameters of the S-H microlenses, namely, the lenslet spacing (or number of lenslets across the pupil) and the focal length of the lenslet array. It has been demonstrated mathematically that the maximum number of Zernike coefficients that a reconstruction algorithm can reliably calculate is approximately the same as the number of lenslets.
In a population of normal eyes, the majority of high-order aberrations are typically captured by Zernike modes up to the 8th order, corresponding to 42 coefficients in total.2,19 This indicates that at least 42 lenslets are needed to reliably measure high-order aberrations (HOA) in these eyes; the number is much higher for highly aberrated eyes. The dynamic range is therefore limited by the maximum distance a single spot may be displaced within its lenslet subaperture (equal to one-half of the lenslet diameter), as well as by the maximum displacement a single spot may undergo on the corresponding CCD camera. When a highly aberrated wavefront is incident upon the lenslet array, spots on the CCD camera can overlap or cross over one another (the ‘crossover’ effect) and it may be difficult or impossible to map each spot to its corresponding lenslet. In this case, a high-precision reconstruction of the wavefront is not achievable, as a conventional centroid algorithm can fail to find the correct centres of the spots if they are severely blurred, partially overlap, or fall outside the virtual subaperture (that is, the region located directly behind the lenslet) on the CCD array.
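
These two limits can be expressed with a back-of-the-envelope calculation. The lenslet diameter, focal length and pixel size below are invented for illustration; the relations θ_max = (d/2)/f for the dynamic range and θ_min ≈ pixel/f for the sensitivity are the general ones discussed above.

```python
# Illustrative S-H parameters (invented for this example)
d = 0.4e-3        # lenslet diameter [m]
f = 24e-3         # lenslet focal length [m]
pixel = 6e-6      # CCD pixel size [m]; assume ~1-pixel centroid precision

# Dynamic range: the spot may move at most half a subaperture
# before crossing into its neighbour's region
theta_max = (d / 2) / f       # maximum measurable wavefront slope [rad]

# Sensitivity: the smallest slope giving a detectable spot shift
theta_min = pixel / f         # minimum measurable wavefront slope [rad]

# Halving f raises theta_max (more dynamic range) but raises
# theta_min too (less sensitivity): the trade-off described in the text
theta_max_short = (d / 2) / (f / 2)
theta_min_short = pixel / (f / 2)
```

The ratio θ_max/θ_min depends only on d and the pixel size, which is why changing the focal length alone trades dynamic range against sensitivity rather than improving both.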

Several methods have been proposed to increase the dynamic range of the S-H sensor. For example, precompensating for the low-order aberrations (defocus or astigmatism) in the system can relax the focal length requirement; this precompensation can be performed either with auxiliary optics or with a software algorithm.20 Alternative ways to increase the dynamic range of the sensor are to use lenslets with a larger diameter and/or a shorter focal length. Assuming that the lenslet diameter, as discussed above for the number of lenslets, is determined by the required number of Zernike coefficients, one way to minimise the crossover effect is to shorten the focal length of the lenslet. If the focal length is too short, however, ‘measurement sensitivity’ decreases and small amounts of the WA can no longer be measured. The measurement accuracy in eyes with highly aberrated corneal optics can be increased by projecting onto the retina a tight and well-defined spot, which, for example, can be achieved by restricting the illuminating beam diameter. A better approach may be to increase the magnification of the pupil at the lenslet array; however, this requires a larger CCD camera to capture the spot array pattern.

The main difference between the various commercially available S-H sensors lies in the optical parameters of the microlenses.21 The design and manufacture of new optical elements such as microlens arrays are expensive and do not allow active adaptation of the system to the wavefront under test. For this reason, different methods have been suggested to improve the dynamic range of the S-H sensor by modifying its internal structure; in one of these, for example, the lenslet array has been replaced by, or supplemented with, a liquid crystal display.22,23

The laser ray-tracing (R-T) technique was developed in 1997.24 In this apparatus, a narrow beam of light is projected onto the retina sequentially and the distance from the retinal reference position, or ideal spot location (that of an eye free from imperfections), is determined and used to calculate the specific aberrations of the eye.25 During a rapid scan over several points (about 256), the light reflected off the retina passes back through the optics of the eye and forms an image of the retinal spot on the light detector, the location of which is determined by the slope of the WA of the eye under measurement. The position of this image outside the eye is measured to infer the wavefront slope for each point in the pupil. A major disadvantage of the R-T technique is the error caused by distortions in the spot intensity distribution due to retinal spatial non-uniformities that influence retinal reflectance. This phenomenon may induce an error in the aberration measurement, especially in highly aberrated eyes.

Direct comparison between the laser R-T and the S-H sensors in measuring the optical aberrations of artificial and normal eyes revealed similar results for the two techniques.26 The main difference between the S-H and the R-T sensors is the method used to acquire the spot image. In contrast with laser R-T, where the incident beam is scanned sequentially over the entrance pupil to measure light going into the eye, the S-H sensor measures light coming out of the eye using a parallel process to acquire multiple spots over the exit pupil. Sequential acquisition of wavefront aberrations has the advantage of avoiding the ‘overlapping’ optical phenomena that alter the WA reconstruction in highly aberrated eyes, whereas simultaneous acquisition requires short acquisition periods to achieve high accuracy in assessing the wavefront error. If the eye moves during measurement, the correlation between measured retinal locations is lost and a precise reconstruction is no longer achievable.

The objective Tscherning aberrometer and skiascopy are less commonly used objective methods for the clinical measurement of the WA. The objective Tscherning aberrometer uses a grid pattern of laser beams entering the eye, generated using a screen with a large number of holes placed in front of the laser source.27 The multispot pattern reflected from the retina is imaged onto the CCD camera of the device and is analogous to that of the S-H sensor, with spots displaced by the optical imperfections of the tested eye. The distortions in the multispot pattern are then used to calculate the WA. As with the S-H sensor, the Tscherning technique may suffer from the ‘crossover’ effect. To date, the only commercially available Tscherning sensor is almost exclusively used in the preoperative evaluation of eyes undergoing customised ablation with a commercial laser platform.28

Skiascopy uses the principle embodied in retinoscopy, with projection of a slit beam through the pupil onto the retina and observation of the corresponding retinal image along meridians over a 360-degree area. The reflected light is captured by an array of high-density rotating photo-detectors. This system measures longitudinal aberrations, unlike other systems that compute the aberration from local slopes (that is, transverse aberration). The basic weakness of the system is therefore its inability to measure the full wavefront gradient, in that it is only sensitive to radial deflections. This technique showed low repeatability of HOA measurements in clinical studies.9,29 Currently, it is used to drive wavefront-guided treatments in combination with a commercial excimer laser device.30

In contrast with objective methods, subjective or psychophysical techniques for wavefront sensing have not been widely used. Among them, the cross-cylinder technique and the spatially resolved refractometer (SSR) are the best characterised.31 In the cross-cylinder technique32 the aberrations are inferred from the distortion of a grid that is shadowed onto the patient's retina with a ±5.0 dioptre cross cylinder while the patient fixates a distant point. After a few studies characterising the distribution of the eye's WA with aging,33 this system is no longer used in the clinical environment.

The principle of the SSR34 is relatively simple. The system has two light sources: a fixed source, the light from which passes through the centre of the pupil and serves as a reference; and a moveable source, the light from which is directed to different locations in the pupil. For each location of the moveable source, the patient's task is to adjust the position of the moveable light spot on the retina until it is aligned with the reference spot formed by the fixed light source. The change in the angle of incidence of the moveable light required to align the spots at different locations in the pupil is a measure of the local wavefront slope. The main advantage of this type of sensor is its large dynamic range. The subjective method of measurement has the disadvantage that performance depends on the patient's ability to complete the task precisely. More importantly, the measurement process is very time-consuming (about 12 to 15 minutes), which makes this method inappropriate for use in a clinical environment. On the other hand, when applied to normal eyes, the SSR provided results similar to those obtained with the S-H or laser R-T sensors.26

All the techniques described here, apart from the objective Tscherning method, use infrared radiation in the approximate range 780 to 860 nm. There are several advantages to using near infrared instead of visible radiation:

  1. It is comfortable because the eye is not sensitive to infrared radiation.
  2. The source cannot influence accommodation.
  3. Pupil dilation is usually not required because pupillary responses are not sensitive to infrared.
  4. Fundus reflectance is higher than in visible radiation.

In addition, the amount of light required both to image the retina and to measure the aberrations is reduced, which is important as it must remain within maximum exposure safety limits.

Although infrared radiation penetrates deeper into the fundus than visible wavelengths, no significant difference in focus behind the photoreceptor layer has been demonstrated for different wavelengths.35,36 This has been related to the fact that most of the light returning from the retina is waveguided within the photoreceptors, so that the effective site of retinal reflection is approximately at the outer limiting membrane.37 In the clinical setting, Llorente and colleagues38 found, using the S-H and laser R-T sensors, that aberrations changed little from infrared to visible radiation (between 780 nm and 570 to 580 nm), except for the defocus term. The difference in defocus can be predicted by the Indiana chromatic eye model.39 The surfaces and the internal refractive index distribution of the cornea and lens are responsible for the measured shift in defocus (the refractive index is a function of wavelength and monochromatic aberrations also depend on the wavelength considered), with the cornea-air interface being responsible for most of the shift. The chromatic effect on HOA is small in comparison with the change in defocus and is therefore difficult to measure experimentally. A significant difference was found in the tails of the PSF images, with the infrared images presenting a larger scattering halo, probably as a result of a greater contribution of retinal and choroidal scattering at that wavelength.35

INNOVATIVE METHODS FOR SENSING THE WAVE ABERRATIONS OF HUMAN EYES

In recent years, owing to the increasing interest in and application of wavefront sensing techniques in the ophthalmic community, alternative and innovative methods have been designed and developed specifically for use in vision science, some of which are commercially available. In the following, we examine three emerging wavefront sensing techniques: curvature sensing, pyramid sensing and interferometry, with particular attention to shearing interferometric techniques.

The first two techniques are based on ‘phase diversity’. They depend on comparisons between phases in adjacent areas in the image or pupil plane of an optical system. The principle of the phase-diversity technique is that the propagation of a wavefront can be described using the ‘intensity transport equation’ (ITE).40 Phase diversity is normally implemented in the image plane, using two images recorded under different defocus conditions to reconstruct the wavefront, but the two measurement planes may equally well be placed symmetrically about the pupil plane. Distortions in the wavefront in the measurement plane alter the local intensities of the wavefront as it propagates: a convex distortion causes the wavefront to converge and hence become more intense, and a concave distortion has the opposite effect. The change in intensity is a measure of the local wavefront curvature and may be used to reconstruct the wavefront.

There are several algorithms that may be used to reconstruct the wavefront phase by solving the ITE, either iterative41 or non-iterative (analytical). Analytical solutions can be faster and more accurate than iterative methods for defocus-based phase-diversity wavefront sensing. The Green's function (GF)42 and the Gureyev-Nugent (GN) algorithms43,44 are two examples of analytical solutions to the ITE. The GF solution to the ITE uses the difference of two intensity images and a pre-computed Green's function matrix multiplier to reconstruct the phase,45 which makes it an extremely fast method of phase retrieval. On the other hand, assumptions must be made about the nature of the wavefront under reconstruction: the wavefront and its first derivative (the slope) must be continuous everywhere within the pupil, which means that the performance of the algorithm can be less accurate if the input wavefront is discontinuous or the illumination is not uniform.46,47 The GN algorithm uses a modal decomposition of the wavefront to reconstruct the unknown phase. It accomplishes this by projecting Zernike modes onto the difference image (formed by a pair of intensity images with equal and opposite amounts of defocus applied), thus reducing the boundary-value problem for the ITE to a system of linear algebraic equations. The method can also be used in the case of non-uniform illumination and without the need to distinguish the boundary phase data from the intensity derivative inside the aperture.44
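
The modal idea behind the GN approach can be illustrated with a heavily simplified sketch: in the interior of a uniformly illuminated pupil the ITE links the axial intensity change to the Laplacian of the wavefront, so the difference image can be projected onto the Laplacians of a set of modes and solved as a small linear system. The two radial modes and the synthetic signal below are assumptions chosen for illustration, and the boundary term handled by the full algorithm is deliberately omitted.

```python
import numpy as np

# Interior form of the intensity transport equation (uniform illumination):
#   intensity difference  ∝  -Laplacian(W)
# Projecting the difference image onto mode Laplacians reduces the
# reconstruction to linear algebra (boundary term omitted in this sketch).

n = 64
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r2 = x**2 + y**2
inside = r2 < 0.8                       # pupil interior, away from the edge

# Two radial modes with known analytic Laplacians:
#   W1 = r^2  ->  Lap(W1) = 4          (defocus-like)
#   W2 = r^4  ->  Lap(W2) = 16 r^2     (spherical-aberration-like)
L = np.column_stack([
    4.0 * np.ones(inside.sum()),
    16.0 * r2[inside],
])

c_true = np.array([0.3, -0.1])          # assumed modal coefficients
dI = L @ c_true                         # synthetic difference-image signal

# Solve the linear system for the modal coefficients
c_fit, *_ = np.linalg.lstsq(L, dI, rcond=None)
```

Note that modes whose Laplacian vanishes (for example, pure astigmatism) produce no interior signal, which is precisely why the boundary data matter in a complete implementation.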

The curvature sensor (C-S) was originally developed for astronomical observation, based on a technique introduced by Roddier in 1988.48 His original idea was to couple a C-S element and a bimorph deformable mirror (DM) directly, without the need for intermediate calculations, in an AO astronomical telescope.

Experimental work by Diaz-Douton and associates49 has demonstrated the feasibility of a curvature sensor for ocular wavefront measurement. The principle of the C-S relies on the local changes in intensity in the planes perpendicular to the light's propagation direction as it travels along its optical path, as depicted in Figure 2. Advantages of the C-S in comparison with the S-H sensor are its higher dynamic range and lower cost; disadvantages may be the longer measurement time and the fact that a larger defocusing distance is needed to measure the wavefront with higher resolution, which reduces the sensitivity of the sensor. This means that the C-S may not be sufficiently accurate for sensing high-order aberrations. This problem could be overcome by designing a C-S in which the defocusing distance can be adjusted to suit varying clinical conditions.
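
Under the geometrical-optics approximation used in Roddier-style curvature sensing, the normalised intensity difference scales with the wavefront curvature through a gain factor f(f − l)/l (quoted here up to a proportionality constant), so shrinking the defocusing distance l raises the sensitivity. The focal length and distances below are invented for illustration.

```python
# Roddier-style curvature signal (geometrical-optics approximation):
#   (I1 - I2) / (I1 + I2)  ∝  [f (f - l) / l] * local wavefront curvature
# Illustrative numbers only; f and l are not from any specific instrument.

f = 0.1                                  # lens focal length [m]
gains = {l: f * (f - l) / l for l in (0.02, 0.01, 0.005)}

for l, gain in gains.items():
    print(f"l = {l*1e3:4.1f} mm  ->  sensitivity gain = {gain:.3f}")
```

The monotone growth of the gain as l shrinks is the trade-off described above: an adjustable defocusing distance lets the operator trade spatial resolution for sensitivity as the clinical case requires.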

Figure 2. Schematic of the curvature sensing (C-S) method. F is the focal distance of the lens, l is the defocusing distance, I1 and I2 are the light intensity distributions. In the geometrical optics approximation, a local wavefront curvature makes one image brighter (I1) and the other one dimmer (I2). The change in intensity is a measure of the wavefront curvature and may be used to reconstruct the wavefront. The sensitivity of C-S is inversely proportional to the defocusing distance l.

In their experimental setting, Diaz-Douton and associates49 compared the performance of a C-S sensor (using a light source of λ = 780 nm) with a S-H sensor, demonstrating very similar results in artificial test eyes: the average difference in root-mean-square (RMS) wavefront error between the two systems was 0.006 µm, which is of the order of the variability of repeated measurements taken with either technique.

The other approach based on phase diversity for measuring the WA of the eye is the pyramid sensor (P-S), which was developed by Ragazzoni and Farinato50 and implemented for the first time within an AO astronomical telescope. In this system, a transparent pyramid dissects the stellar image into four parts. Each beam is deflected so that these beams form four images, with different intensities, of the telescope pupil on the same CCD detector.

Similar to the Foucault knife-edge test,51 in a P-S sensor the aberration-induced distortions are sensed by placing a four-facet pyramidal refractive element with its tip aligned to the optical axis, as schematised in Figure 3. The wavefront gradients along two orthogonal directions are retrieved from the intensity distribution among the four pupil images resulting from this operation.
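
This gradient retrieval reduces to simple arithmetic on the four pupil images. The quadrant labelling and normalisation below are conventions chosen for this sketch; real instruments differ in signs and calibration.

```python
import numpy as np

# Pyramid-sensor slope signals. Assume the four pupil images are arranged
#     I1 | I2
#     ---+---
#     I3 | I4
# Normalised left/right and top/bottom differences give signals
# proportional to the local wavefront gradients along x and y.

def pyramid_slopes(I1, I2, I3, I4):
    total = I1 + I2 + I3 + I4
    sx = ((I1 + I3) - (I2 + I4)) / total     # left minus right columns
    sy = ((I1 + I2) - (I3 + I4)) / total     # top minus bottom rows
    return sx, sy

# Flat wavefront: equal intensity in all four pupils -> zero slope signal
I = np.ones((8, 8))
sx, sy = pyramid_slopes(I, I, I, I)

# A horizontal tilt steers light toward one side of the pyramid tip,
# unbalancing the left/right pupil pair
sx2, _ = pyramid_slopes(1.2 * I, 0.8 * I, 1.2 * I, 0.8 * I)
```

Because the signals are computed point-by-point across the pupil images, the spatial sampling is set by the detector, not by a lenslet array, which is one source of the flexibility discussed below.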

Figure 3. The basic principle of operation of the pyramid wavefront sensor. The light exiting the eye is dissected into four parts by a transparent pyramid element placed in the focal plane of lens L1. Introducing four different tilts, the pyramid splits the wavefront into four parts. A second lens L2 is used to conjugate the exit pupil plane with the CCD sensor plane. If the eye suffers from aberrations, the local wavefront tilt can be computed from the relative point-to-point intensity differences between the four pupil images.

The first application of the P-S sensor in ophthalmology was accomplished by Iglesias and co-workers,52 who demonstrated the feasibility of this method for measuring the WA of artificial and normal eyes, while pointing out the need to deal with the negative influence of spurious reflections from the anterior corneal surface. In recent experimental work, Chamot and colleagues53 developed an AO system for application in vision science (using a light source of λ = 635 nm) with a P-S sensor in the measurement arm of the device and a piezoelectric DM as the wavefront corrector element. The DM was optically conjugated to a steering mirror positioned in the wavefront sensor arm immediately after the tip of the diffractive pyramidal element. This approach was the same as that used by Ragazzoni,54 that is, the beam-circulation strategy allowed dynamic control of the measurement sensitivity of the system. A scientific CCD camera, operating in 4 × 4 binning mode, recorded the four pupil images formed behind the pyramidal element during a full rotation cycle. For each CCD read-out, the software registered the position of each of the four pupil intensity images and computed two maps proportional to the vertical and horizontal wavefront gradients. The P-S thus retrieved wavefront gradient information by measuring intensity differences between regions of the CCD readout frames. Tests were performed on artificial and in vivo eyes, achieving results similar to those obtained by other AO systems with a S-H sensor and further demonstrating the feasibility and accuracy of the P-S sensing technique in an AO system for ophthalmic applications. The measurement time for recording a CCD frame containing the four pupil images was 10 milliseconds.

A major strength of the P-S system is that the pupil sampling and the sensing sensitivity can be adjusted separately. A further advantage of the pyramid sensor is the easy adaptability of the system to the range of aberrations one can expect in the optics of the human eye: the dynamic range of the sensor can be modified easily. Moreover, at small amplitudes the sensitivity of a P-S sensor can be higher than that of a S-H sensor. It is also possible (at least in principle) to place several pyramids in the focal plane, to combine the light from several faint light sources on a single detector.

Different types of interferometric techniques have been suggested for ocular wavefront sensing. Here, a collection of various methods used for biomedical or clinical purposes in recent years is provided.

A novel adaptive wavefront correction system uses an all-optical feedback interferometer consisting of a Mach-Zehnder interferometer and an ‘optically addressed spatial light modulator’ (OA-SLM).55–57 The idea of feedback interferometry was originally proposed by Fisher and Warde.58 In this system, the output fringe intensity from the interferometric element is fed back optically to the OA-SLM, which is placed in one arm of the interferometer. With this system, the authors claimed to achieve real-time correction of aberrated wavefronts, without electronic calculations, thanks to a reliable reconstruction of the eye's WA by the interferometric element. Such a system is relatively simple and inexpensive compared with other opto-electronic systems, but it has not yet been shown to work reliably in human eyes. Moreover, to work properly, the phase-modulating side of the SLM, which receives the aberrated wavefront to be corrected, must be imaged exactly onto its reverse (write) side without any misalignment; exact alignment of the feedback optics is not an easy task in some practical situations.

To date, the most promising interferometric technique for measuring the WA in human eyes appears to be ‘shearing interferometry’, or shearography, a well-known technique for measuring surface displacements and for testing optical surfaces or laser beams. In principle, it can be employed to measure the WA of the light exiting the eye. A brief description of this technique is given below, along with some experimental set-ups that have been applied to the measurement of the WA of human eyes.

Shearing interferometry offers the advantage that it yields an analysed wavefront without the use of a reference wave, thus eliminating the need for a separate ideal reference wavefront. The wavefront under test interferes with a modified version of itself, either by translation (lateral shear), magnification (radial shear) or rotation (rotational shear).59 The phase information obtained is proportional to the gradient of the test wavefront in the direction of shear. Shearing may be induced by different optical setups: wedge plates, polarising prisms, gratings, diffractive optical elements (DOEs) et cetera.60–62

Although numerous techniques have been developed for either lateral or radial shearing, the most valuable for phase measurements appears to be phase-shifting.63 Briefly, the method involves illuminating the object (for example, the retina) with a single beam of coherent light. In the simplest case, a grating is situated in front of the object to provide two sheared images of the object, which is then imaged onto a CCD array sensor. A shearing device in the imaging system results in two superimposed images: the relative separation, or shearing distance, is normally chosen to be a small fraction of the field of view. Therefore, any pixel in the sensor device receives light from two points on the object surface and phase changes at the pixel then depend directly on the relative displacement of the two points. The resultant pattern is formed in such a way that two wavefronts coming from different points (x + Sx, y) or (x, y + Sy) of the initial surface (x, y) will interfere in the image plane, where Sx and Sy are the shears in the x and y directions, respectively, as shown in Figure 4. To avoid error propagation, reconstruction methods based on least-squares estimation require the wavefront derivative in two or more orthogonal directions.64 Thus, two or more measurements must be made separately along two orthogonal directions of shear. This may be accomplished sequentially, using prisms mounted on translation or rotation stages, or, better, simultaneously, using DOEs or SLMs.65
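A minimal numerical sketch of this least-squares step is given below (Python; the synthetic defocus-like wavefront, the one-sample shear and the dense solver are assumptions chosen only for clarity, not parameters of any cited instrument). The wavefront is reconstructed from its x- and y-shear differences:

```python
import numpy as np

N, s = 16, 1                          # grid size and shear in samples (illustrative)
y, x = np.mgrid[0:N, 0:N]
W = 0.01 * ((x - N / 2)**2 + (y - N / 2)**2)   # synthetic defocus-like wavefront

# 'Measured' shear differences along the two orthogonal directions
Dx = W[:, s:] - W[:, :-s]             # W(x + s, y) - W(x, y)
Dy = W[s:, :] - W[:-s, :]             # W(x, y + s) - W(x, y)

# Build the linear system: each difference constrains two wavefront samples
idx = lambda i, j: i * N + j
rows, cols, vals, b = [], [], [], []
r = 0
for i in range(N):
    for j in range(N - s):            # x-shear equations
        rows += [r, r]; cols += [idx(i, j + s), idx(i, j)]; vals += [1, -1]
        b.append(Dx[i, j]); r += 1
for i in range(N - s):
    for j in range(N):                # y-shear equations
        rows += [r, r]; cols += [idx(i + s, j), idx(i, j)]; vals += [1, -1]
        b.append(Dy[i, j]); r += 1
A = np.zeros((r, N * N))
A[rows, cols] = vals

# Least-squares solution; piston (the mean) is unobservable and is fixed afterwards
W_hat = np.linalg.lstsq(A, np.array(b), rcond=None)[0].reshape(N, N)
W_hat += W.mean() - W_hat.mean()
print(np.max(np.abs(W_hat - W)))      # near-zero reconstruction residual
```

In practice, sparse solvers or Fourier-domain integration are used for realistic grid sizes; the dense solver here only illustrates the structure of the problem.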


Figure 4. Co-ordinate system of the original (W, dashed circle) and two sheared wavefronts, W1 and W2. The sheared wavefronts can be generated by a grating mounted on a translation stage or by a DOE. The interference pattern is generated in the overlap area (in grey) of the two sheared wavefronts. Sx and Sy are the amounts of shear in the x and y directions, respectively; ρ is the shear vector and θ is the shear angle. Reconstruction of the phase of the original wavefront is achieved using the ‘phase-shifting’ method.


More recently, a new family of shearing interferometers has been developed, based on the interference of more than one replica of the analysed wavefront with different directions of shear. These techniques are commonly called ‘multiple shearing interferometry’ (MSI).66 The principle of this technique relies on generating several replicas of the wavefront under evaluation using a conventional grating or a DOE (Figure 5). The resulting interferogram is recorded onto a CCD camera and the acquired image is then processed using two-dimensional Fourier analysis to recover the phase. Using this method, one can exploit three or more derivatives along three or more directions of shear to obtain the wavefront. Wavefront sensors of this family can have a large dynamic range: such interferometers are able to detect phase distortions of several tens of waves but also of very small fractions of a wave (λ/100). At the same time, sensitivity and dynamic range can be continuously adapted to the analysed aberrations.67


Figure 5. Diffractive grating and corresponding interference pattern, generated at the CCD plane, of four replicas of a test wavefront in a Cartesian geometry, with definitions of the shear directions (shear angle θ = 45°). Currently, diffractive gratings are mostly employed for generating multiple shearing interferograms. They are manufactured by means of laser lithographic systems and subsequent reactive-ion etching to obtain the desired grating geometry. Analysing the interferogram is a matter of locating the minima or maxima of the fringes, assigning proper phase values to them and fitting Zernike polynomials to those phase values.


To date, MSI has been obtained with different techniques: three-wave lateral shearing interferometry,68 multiple-wave lateral shearing interferometry69 and the modified Hartmann mask. It is worth noting that the use of multiple holes or reticles, arranged in a bidimensional grating, can be seen as a sort of S-H sensor with the microlens array replaced by slits, thus creating a type of Hartmann mask for sensing an optical wavefront.70 A Hartmann mask uses a mask of holes, usually arranged in a regular square grid. These holes break the incoming light into beams, which are deflected according to the local distortions of the sensed wavefront.71,72
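The slope measurement behind a single Hartmann hole can be sketched numerically. In the toy example below (Python), the hole-to-detector distance, pixel pitch, spot width and slope are hypothetical values chosen only to illustrate that the spot centroid displacement divided by the propagation distance recovers the local wavefront slope:

```python
import numpy as np

# Hypothetical geometry: the spot cast by one Hartmann hole onto a detector
L = 0.01                     # hole-to-detector distance: 10 mm (illustrative)
slope = 1e-4                 # local wavefront slope to be recovered (radians)
n, pix = 64, 5e-6            # detector window (pixels) and pixel pitch (m)

y, x = np.mgrid[0:n, 0:n] * pix
x0 = n * pix / 2 + L * slope           # spot centre, deflected by L * slope
spot = np.exp(-((x - x0)**2 + (y - n * pix / 2)**2) / (2 * (3 * pix)**2))

cx = (spot * x).sum() / spot.sum()     # spot centroid along x
est = (cx - n * pix / 2) / L           # recovered slope = deflection / distance
print(est)                             # ≈ 1e-4
```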

The Hartmann test in monochromatic light can be seen as a form of shearing interferometry.73 The Talbot interferometer exploits this principle.74 Strictly speaking, it is constructed with two gratings of the same period, in which moiré fringes are generated by superimposing the Fourier image of the first grating on the second (hence, it is a form of moiré deflectometry). If a phase object is placed in front of the first grating, the light deflected by the object shifts the Fourier images and the resultant moiré fringes map the deflection.75 Alternatively, a single periodic grating can be used for phase distortion analysis by exploiting the Talbot effect, or self-imaging phenomenon.76 The Talbot image can be detected directly by a CCD placed at the Talbot distance from the periodic pattern, as schematised in Figure 6. The diffraction pattern can be observed only at a very short distance behind the grating and disappears as distance increases; however, it reappears at specific periodic distances from the grating, where the so-called Talbot images are formed. These Talbot images are exact replicas of the light intensity observed just behind the grating. The distance Δz at which a Talbot image is generated can be obtained by considering the fundamental frequency of the grating and the effect of diffraction; in one dimension, Δz = md²/λ, where m is an integer, d is the period of the grating and λ is the wavelength of the light source. When m is an even number (for example, 2), the Talbot image is the same pattern as the light distribution appearing just behind the grating; when m is an odd number (for example, 1), negative, that is, phase-reversed, Talbot images are produced. Distortion of the fringe pattern reflects the local tilt of the wavefront; thus, the distortion of the Talbot image can be used to detect wavefront tilt, similarly to Hartmann sensors.
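As a quick numerical illustration of the relation Δz = md²/λ, the sketch below (Python) computes the first two Talbot distances; the grating period and near-infrared wavelength are assumptions chosen for illustration, not parameters of any cited sensor:

```python
# Talbot distances dz = m * d**2 / lam for a one-dimensional grating
d = 100e-6       # grating period: 100 micrometres (illustrative)
lam = 780e-9     # near-infrared wavelength (illustrative)

for m in (1, 2):
    dz = m * d**2 / lam
    kind = "phase-reversed" if m % 2 else "direct"
    print(f"m = {m}: {kind} Talbot image at {dz * 1e3:.2f} mm")
```

For these values the phase-reversed image (m = 1) forms at about 12.8 mm and the direct image (m = 2) at about 25.6 mm behind the grating.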


Figure 6. Operation schematic of a Talbot sensor: an aberrated wavefront is incident on the two-dimensional grating (d is the spacing of the grating and b is the width of the slit). Δz is the Talbot distance and Δs is the local phase shift of the fundamental frequency component of the grating image. The Talbot distance can be calculated by considering the fundamental frequency of the grating, the wavelength of the light source used and the effect of diffraction. At the CCD plane, the distortion of the Talbot image can be processed to detect wavefront tilt.


It becomes clear that the distorted pattern produced by a periodic Hartmann mask can be considered as a distorted fringe pattern. As a consequence, Fourier transform techniques that have been applied successfully to interferogram analysis also apply to Hartmann data analysis. For instance, consider a standard Hartmann mask made of a two-dimensional periodic array of holes. After propagation, the detector records a distorted array of spots. The Fourier transform of this irradiance distribution is a two-dimensional periodic array of harmonics, each widened by the distortion. The technique consists of selecting one of the harmonics through a filtering window and taking the inverse Fourier transform. The phase of this transform maps the local phase distortion of the periodic array and is therefore a measure of the wavefront slope in the direction of the harmonic (modulo 2π). The technique works with any two-dimensional periodic mask and can considerably increase the dynamic range and/or the spatial resolution of Hartmann sensors. In addition, Fourier analysis can be applied to the S-H system,77,78 in which the transverse resolution remains limited by the number of microlenses. Indeed, most current centroiding methods require the absolute displacement of Hartmann spots to be smaller than half the average spot interval. With Fourier transform techniques, this condition is no longer required: phase unwrapping requires only that the difference between two adjacent spot displacements be smaller than half their average distance.
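The harmonic-filtering procedure just described can be sketched as follows (Python). A periodic carrier with a smooth sinusoidal phase distortion stands in for the distorted spot pattern, and all sizes are illustrative: the FFT of the pattern is windowed around one harmonic, inverse-transformed, and the phase is unwrapped to recover the distortion (known only modulo 2π):

```python
import numpy as np

N, p = 256, 16                         # image size and mask period in pixels (illustrative)
y, x = np.mgrid[0:N, 0:N]

# Smooth phase distortion standing in for the wavefront-slope signal
phi = np.sin(2 * np.pi * x / N)        # hypothetical distortion (radians)

# Periodic pattern distorted by phi along x (fringe-like stand-in for spots)
I = 1 + np.cos(2 * np.pi * x / p + phi)

# Select the +1 harmonic with a rectangular window in the Fourier domain
F = np.fft.fftshift(np.fft.fft2(I))
kx, c = N // p, N // 2                 # carrier frequency (bins) and DC position
win = np.zeros_like(F)
win[:, c + kx - kx // 2 : c + kx + kx // 2] = 1.0
analytic = np.fft.ifft2(np.fft.ifftshift(F * win))

# Remove the carrier and unwrap the phase along the direction of the harmonic
rec = np.unwrap(np.angle(analytic) - 2 * np.pi * x / p, axis=1)
print(np.max(np.abs(rec - phi)))       # small recovery residual
```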

Recently, different research groups have developed wavefront sensors for ophthalmic applications based on the Talbot effect. In an experimental device, Sekine and co-workers79 used a two-dimensional grating for sensing the optical wavefront, with the CCD placed in the plane of the first-order Talbot image to maximise the contrast of the grating image. They obtained Talbot images from both model and in vivo eyes and were able to successfully reconstruct wavefront shapes, with no discernible difference from those obtained with an S-H sensor. Warden and colleagues80 demonstrated accurate results in both model and human eyes using a recently commercialised Talbot wavefront sensor.81 In this work, a series of measurements in model eyes demonstrated the high accuracy of the Talbot sensor in comparison with two commercially available S-H sensors, especially for high-order aberrations. Similarly, a high correlation was found between subjective manifest refraction and calculated refraction in a group of 68 eyes using the Talbot aberrometer. Further details on the system's configuration can be found in a patent dated 2004.81 Another commercial instrument based on the Talbot effect is derived from a patent dated 199982 and was recently commercialised in the United States. There is little information on this system's performance in a clinical environment.

Amplitude gratings in a Talbot sensor have the disadvantage of low optical efficiency, with the amount of transmitted light reduced by 50 per cent or more. The light intensity at the CCD plane may therefore be very faint, requiring a high-sensitivity detector. Alternatively, phase gratings could be considered for the next generation of Talbot sensors.79

CONCLUSIONS


With the introduction of customised methods for the correction of vision and of adaptive optics in ophthalmology, there is a special need to improve the accuracy of the devices used to measure the WA of the individual eye. Continuing efforts are being made to improve the sensitivity and accuracy of wavefront sensors in measuring the WA of normal and diseased eyes, by exploiting new methods and techniques. Among these, curvature sensing, pyramid sensing and shearing interferometry have recently been applied to clinical visual science and hold the promise of achieving a high degree of accuracy in WA measurement and reconstruction.

A major advantage of the new techniques should be their application in fast adaptive optics systems for imaging the retina at high resolution in real time. Further applications range from accurate wavefront-guided excimer laser correction to the design and fabrication of personalised contact lenses or intraocular lenses to optimise individual visual performance. Realising these predictions will require joint research efforts by the ophthalmic and engineering communities to bring these innovative technologies into everyday clinical practice.

REFERENCES
