Many mutants have been isolated from the model plant Arabidopsis thaliana, and recent genetic resources, such as T-DNA knockout lines, have accelerated the identification of new mutants. However, current phenotypic analysis in mutant screens depends mainly on qualitative descriptions from visual observation of morphological traits. We propose a novel method of phenotypic analysis based on precise three-dimensional (3D) measurement by a laser range finder (LRF) and automatic data processing. We measured the 3D surfaces of young plants of two Arabidopsis ecotypes and successfully defined two new traits quantitatively: the direction of the blade surface and epinasty of the blade. The proposed method yields quantitative and precise descriptions of plant morphology compared with conventional 2D measurement, and will open a way to find new traits in mutant pools or natural ecotypes based on 3D data.
Recent progress in plant genome research and the substantial generation of genetic resources have enabled more systematic analysis of plant gene function than was previously possible, especially through the study of mutants. At the RIKEN Genomic Sciences Center, more than 50 000 activation-tagged mutant lines of Arabidopsis have been produced for saturated mutagenesis of the genome (Nakazawa et al., 2003). To determine the function of a gene, systematic analysis of phenotypes is vital. However, conventional phenotypic screening depends mainly on visual observation of qualitative traits, and is insufficient for detecting subtle changes in quantitative traits, such as leaf curvature, growth rate, and mode of ramification, which are difficult to measure or describe accurately by human observation. The conventional studies that quantify morphological traits for automatic mutant screening depend on two-dimensional (2D) image processing (Boyes et al., 2001; Reuzeau et al., 2003; Xiong et al., 1999). In these studies, quantitative traits can be extracted from a 2D image; however, as plant shape is 3D, exact values of morphological traits cannot be recovered from a 2D projection. A 3D reconstructed model of transgenic mouse embryos using episcopic fluorescence image capturing has been reported as an example of solid phenotypic analysis (Weninger and Mohun, 2002). Other research on screening mutant mice has applied an automatic image processing algorithm to 2D slice images (Paulus et al., 2000) generated from a 3D X-ray computed tomography (CT) data set, known as 3D CT. These studies of mutants based on 3D measurement allow more precise description than is possible with conventional 2D measurement. However, they do not incorporate automatic processing to extract 3D-specific traits.
Besides mutation research, plant scientists in agriculture and developmental biology have turned their attention to the 3D quantification of plant morphological parameters. The spatial distribution of roots, as well as root length, has been quantified using 3D CT data from chestnut trees and maples (Pierret et al., 1999, 2000). The leaf area of cabbage has been calculated from 3D data obtained by stereo-photogrammetry, which estimates a 3D model from stereo images (Kanuma et al., 1998). Developmental parameters of cotton and bean have been extracted from 3D data from a sonic digitizer to visualize developmental processes by animation (Room et al., 1996). Some of these studies utilized automatic algorithms to extract 3D morphological parameters. However, as they were not intended for large-scale use, they do not include systematic data processing to handle many individuals uniformly. Our aim is 3D phenotypic analysis with automatic processing at all steps, allowing precise descriptions of traits toward computational mutant screening. Here we present a new method combining accurate measurement of 3D plant surfaces and automatic extraction of morphological traits (parameters). In this report, we use an LRF as the 3D measuring device; the LRF is popular among 3D digitizers owing to its ease of handling and appropriately precise resolution. As examples of automatic data processing to extract traits, we describe reconstruction of a 3D model and segmentation of plant parts. Moreover, this report presents examples of quantitative 3D-specific traits: direction of the blade surface and epinasty of the blade. The results of the quantitative analyses elucidate the differences in these traits between the two ecotypes Columbia (Col-0) and Tsu-0 (Figure 1a,b), where the two ecotypes stand in for the wild type and mutants in a prospective mutant screening.
Acquisition of 3D data
After 3D measurement using an LRF and a texture data scanner, a range image and an RGB color image are generated, depicted in Figure 2(a,b), respectively. Each image has 67 354 pixels in 238 rows and 283 columns; the resolution between pixels is 0.045 mm. The intensity of a pixel in the range image represents the depth of the measured object from a reference point. In this spatial coordinate system, locations in a general 2D image are coordinates on a 2D plane, described in terms of X and Y; likewise, the intensity of a pixel in the range image is equivalent to a coordinate on the Z axis in 3D space. Thus rows, columns, and pixel intensities in a range image indicate coordinates on the X, Y, and Z axes. Hence a range image, although it is a 2D representation, is equivalent to 3D point data, and is therefore called a 2.5D image. The advantage of the 2.5D image is that classical 2D image processing algorithms, whose computational costs are substantially lower than those of 3D algorithms, can be applied to it. Practical operation on the 2.5D image is explained in the Automatic segmentation and classification section.
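The correspondence between range-image pixels and 3D points described above can be sketched as follows. This is a minimal sketch in Python/NumPy, assuming depth values are already in millimetres; the function name is ours, and the original processing was implemented in MATLAB.

```python
import numpy as np

def range_image_to_points(depth, pitch=0.045):
    """Convert a 2.5D range image to an (N, 3) array of 3D points.

    depth : 2D array of depth values (mm).
    pitch : spacing between pixels in mm (0.045 mm in the
            measurements reported here).
    Rows map to X, columns to Y, and pixel intensity to Z.
    """
    rows, cols = depth.shape
    # Grid coordinates scaled by the pixel pitch.
    x = np.arange(rows)[:, None] * pitch * np.ones((1, cols))
    y = np.ones((rows, 1)) * np.arange(cols)[None, :] * pitch
    return np.stack([x.ravel(), y.ravel(), depth.ravel()], axis=1)
```

Each pixel thus becomes one 3D vertex, which is the starting point for the polygon model reconstruction described next.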
Polygon model reconstruction
Objects in digital 3D space need to be modeled in terms of a data structure. A polygon model is a representation of the geometric surface of a 3D shape as an arbitrary network of connected polygons forming bounded planar surfaces. Figure 2(c) depicts triangular polygon models approximating a spherical surface. Each model is specified by information on edges, vertices, and polygons. The number of polygons (vertices) in each model is 72 (49), 162 (100), and 648 (361), in order from left to right. As shown in the figure, subdivision into finer polygons better approximates the desired surface, in this case a sphere. Triangular polygons can be generated from three vertices in 3D space, depending on the relationship between neighboring points in a range image. Figure 2(d,e) shows snapshots of a reconstructed polygon model mapped with pseudo-color and RGB color, respectively. The polygon models are composed of 35 198 triangular polygons with 18 294 vertices. The 3D axes are depicted in red, green, and blue in the lower right of each snapshot to show the directions of X, Y, and Z. The Z-axis, corresponding to the depth in the range image, does not point directly to the front but is on a slant. In Figure 2(d), the number of triangular polygons is so great that the modeled surface of the leaves tends to be uneven. If a smoothing algorithm were applied, the surface of the leaves would become flatter; however, this study retains the original rough polygon model for phenotypic analysis in order to investigate accurate trait extraction. Figure 2(e) shows that the color of each triangular polygon is assigned as the average of the colors at the three vertices constituting the triangle. While color and texture are not analyzed in this report, the color image would enable automatic image-processing analysis of traits such as discoloration (color) and variegation (texture).
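The construction of triangular polygons from neighboring range-image points can be sketched as below, assuming row-major vertex numbering over the range-image grid; the noise filtering described in Experimental procedures is omitted, and the function name is ours.

```python
import numpy as np

def triangulate_grid(rows, cols):
    """Build a triangle index list for a range-image grid.

    Each 2x2 block of neighboring pixels yields two triangles;
    vertex index = row * cols + col. A sketch of the triangulation
    step only.
    """
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            v00 = r * cols + c        # upper-left vertex of the cell
            v01 = v00 + 1             # upper-right
            v10 = v00 + cols          # lower-left
            v11 = v10 + 1             # lower-right
            tris.append((v00, v10, v01))  # first triangle of the cell
            tris.append((v01, v10, v11))  # second triangle of the cell
    return np.array(tris)
```

A consistent vertex ordering within each triangle fixes the front face of every polygon, as required by the VRML models described in Experimental procedures.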
By digital image processing, the traits of color and texture can be described quantitatively in the same way as morphological traits.
Automatic segmentation and classification
Extracting traits of true leaves requires automatic segmentation and classification of the whole polygon model into plant parts. We segmented the true leaves into two blades and two petioles in the polygon model by using the corresponding range image. Classification of the segmented parts used knowledge of the arrangement of the plant during 3D measurement. Figure 3(a) shows the corners, marked with squares, detected by a k-curvature algorithm in the binary contour image derived from a range image. Corners are selected where the angle of curvature is below 120°. Next, landmarks at the bases of blades and the joints of petioles are automatically identified for classification using knowledge of where the leaves are placed. Figure 3(b) shows the upper and lower blades and petioles classified with assigned gray levels. When the corner landmarks are detected correctly, the correct classification is obtained.
Automatic extraction of 3D traits
Phenotypic analysis using a reconstructed 3D polygon model of Arabidopsis enables us to quantify morphological traits comprehensively. In this report, we give two examples of 3D-specific quantitative traits: the direction of the blade surface and epinasty of the blade. Moreover, differences in these traits between two ecotypes are presented.
Direction of the blade surface
First, we focus on the direction of the blade surface, an important trait in terms of the light environment: to gain energy by photosynthesis, the blades of a plant must face the direction of sunlight. The shape of a blade surface is approximated by triangular polygons as outlined in Figure 2(c,d), so the direction of the blade surface can be defined via the normal vector of each triangular polygon. The normal vector is the vector perpendicular to a polygonal face, and its direction determines the orientation of the face. A quantity for the direction of the blade surface could then be defined by the angles between the vector of incident light and the normal vectors. However, as the incident light in this study does not originate from a point source, we substitute the weighted average of the normal vectors for the incident-light vector. Thus we quantify the direction of the blade surface as the angles between the normal vectors of the polygonal faces (normal vectors) and the weighted average of the normal vectors (mean normal vector), as shown in Figure 4(a). Conventional 2D image analysis cannot deal with such normal vectors; the direction of the blade surface is therefore a 3D-specific trait. We measured the direction of the blade surface of two wild-type ecotypes, Tsu-0 and Col-0. Figure 4(b) shows a polygon model of the upper blade of ecotype Tsu-0 with pseudo-colors reflecting the angles between the normal vectors and the mean normal vector; the pseudo-color changes from red to blue as the angle varies from 0 to 90°. The upper blade model of Tsu-0 is almost entirely red, meaning that most polygonal faces have normals nearly parallel to the mean normal vector. On the other hand, Figure 4(c) presents a pseudo-colored polygon model of the upper blade of ecotype Col-0; the pseudo-color at the edges of the blade tends to be green.
In other words, the blade of ecotype Col-0 appears more curved than that of Tsu-0. To analyze the direction of the blade surface, Figure 4(d) depicts a histogram of the frequency density of angles, weighted by the area of each polygon. The data in the histogram include not only the upper blades (B1) but also the lower blades (B2). To discuss the direction of the blade surface as a specific quantitative trait, we define the weighted average of the angle distribution as the trait of mean direction of the blade surface (MDBS). The weighted averages of the angle distributions in the histogram are 25.2° for Col-0 and 18.8° for Tsu-0, as means over 16 true leaves. The statistical significance of the difference in this trait between the two ecotypes can be tested with a one-tailed two-sample t-test, which shows that the MDBS is significantly greater for Col-0 than for Tsu-0 (P < 0.001). This indicates that the surface direction of Col-0, compared with Tsu-0, tends to be more inclined against the mean surface direction.
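The MDBS computation just described, with an area-weighted mean normal, per-polygon angles, and their area-weighted average, can be sketched as follows. The function name is ours and this is only an illustrative sketch; the original analysis was implemented in MATLAB.

```python
import numpy as np

def mdbs(vertices, triangles):
    """Mean direction of the blade surface (MDBS), in degrees.

    vertices : (N, 3) array of vertex positions.
    triangles : (M, 3) array of vertex indices.
    The trait is the area-weighted mean of the angles between each
    polygon normal and the area-weighted mean normal.
    """
    p0, p1, p2 = (vertices[triangles[:, i]] for i in range(3))
    cross = np.cross(p1 - p0, p2 - p0)
    area = 0.5 * np.linalg.norm(cross, axis=1)       # triangle areas
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    mean_n = np.average(normals, axis=0, weights=area)
    mean_n /= np.linalg.norm(mean_n)                 # mean normal vector
    cosang = np.clip(normals @ mean_n, -1.0, 1.0)
    angles = np.degrees(np.arccos(cosang))           # per-polygon angles
    return np.average(angles, weights=area)
```

For a perfectly flat blade every normal coincides with the mean normal and the MDBS is 0°; a more curved blade yields a larger value, matching the Col-0 versus Tsu-0 comparison above.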
Epinasty of a blade
The trait chosen to quantify epinasty and hyponasty of leaves (Keller and Volkenburgh, 1997; Nakazawa et al., 2003) was the global curvature of the leaf blade. Epinasty is the result of more vigorous growth on the adaxial surface of a leaf blade, causing a downward curvature; conversely, hyponasty is an upward curvature of the blade. These traits can be found in hormonal mutants, such as auxin response mutants, and also as the result of mutations in transcription factors (Nakazawa et al., 2001, 2003). To obtain the global curvature of a leaf blade, we extracted data from the cross-section of the blade along its transverse axis. Figure 5(a) depicts the 3D contour of a blade of Tsu-0, corresponding to the segmented upper blade in Figure 3(b). Using the 3D contour of the blade, we can calculate the 3D principal axes of the blade: the estimation of 2D principal axes (Costa and Cesar, 2001) extends naturally to 3D, as shown in the figure. The three axes are called the 1st, 2nd, and 3rd principal axes in order of descending eigenvalue; in this report, the 1st and 2nd axes are named the longitudinal and transverse axes. The longitudinal axis represents the direction in which the blade is most elongated; the transverse axis, perpendicular to the longitudinal axis, is the transverse direction of the blade. Along the transverse axis, we cut the polygonal blade model to obtain cross-section data. Figure 5(c) shows standardized cross-section data along the respective transverse axes for Tsu-0 (blue) and Col-0 (red); the corresponding cross-sections along the longitudinal axis are depicted in Figure 5(b). The cross-section data along the transverse axis in Figure 5(c) are approximated by a quadratic polynomial to calculate the macroscopic leaf curvature. In Figure 5(d), the estimated curve for each cross-section is depicted with a legend giving the quadratic coefficient.
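The extension of 2D principal-axis estimation to 3D can be sketched as an eigendecomposition of the covariance matrix of the contour points; this is an illustrative sketch under that standard formulation, and the function name is ours.

```python
import numpy as np

def principal_axes(points):
    """Return the three principal axes of a 3D point set, ordered by
    descending eigenvalue, as rows of a 3x3 matrix. The 1st and 2nd
    axes correspond to the longitudinal and transverse directions of
    the blade.
    """
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)                 # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # re-sort to descending
    return eigvecs[:, order].T
```

Projecting the blade points onto the 2nd and 3rd axes then gives the (v, w) coordinates used for the cross-section fit below.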
We defined the estimated second-order (quadratic) coefficient as the quantitative trait of epinasty or hyponasty of a leaf blade: if the coefficient is negative, the leaf is defined as epinastic; if positive, hyponastic. All the coefficients in Figure 5(d) are negative, so the true leaves of both Tsu-0 and Col-0 are classified as epinastic. The averages of the quadratic coefficients for 16 true leaves are −0.14 for Col-0 and −0.072 for Tsu-0. A one-tailed two-sample t-test indicates that the absolute mean value of the estimated coefficient for Col-0 is significantly larger than that for Tsu-0 (P < 0.001). This result is consistent with the result for the direction of the blade surface, as the true leaves of Col-0 tend to be more curved than those of Tsu-0.
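The quadratic-coefficient trait can be sketched with a least-squares polynomial fit; np.polyfit here stands in for the original MATLAB estimation, and the function name is ours.

```python
import numpy as np

def epinasty_coefficient(v, w):
    """Fit w = b0 + b1*v + b2*v**2 to cross-section data by least
    squares and return b2. Negative b2 indicates an epinastic
    (downward-curved) blade; positive b2, a hyponastic one.

    v : positions along the transverse (2nd principal) axis.
    w : heights along the 3rd principal axis.
    """
    b2, b1, b0 = np.polyfit(v, w, 2)  # polyfit returns highest power first
    return b2

# Synthetic epinastic cross-section: w = -0.1 * v**2
v = np.linspace(-1.0, 1.0, 21)
w = -0.1 * v**2
print(epinasty_coefficient(v, w))  # recovers -0.1 to numerical precision
```

Fitting such a single quadratic per cross-section captures the macroscopic curvature while smoothing over the local unevenness of the rough polygon model.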
In order to measure morphological aspects of A. thaliana precisely, we set up a 3D laser scanner and developed computer programs to quantify morphological traits from the measured 3D information. Conventional studies quantifying morphological traits in the phenotypic analysis of Arabidopsis are based on 2D measurements; consequently, 3D-specific traits are not quantified correctly, since the 3D quantity is projected onto a 2D plane. The method proposed here is based on 3D measurement and gives more accurate descriptions of 3D-specific morphological traits than conventional 2D measurement. Computational digital analysis then enables us to quantify conventionally qualitative traits, such as epinasty of a blade. In addition to such quantitative description, the method enables the definition of new 3D traits, such as the direction of the blade surface. Thus, the 3D measurement and automatic processing discussed here enable quantitative description and the discovery of new traits in plant morphology.
The method has several problems that need to be resolved to develop it further. There are three major problems in 3D measurement using LRFs, in the areas of measurement range, developmental recording, and occlusion. First, the problem of measurement range arises because, at a given resolution, the measured volume is confined within narrow limits around a small object, for example an Arabidopsis plant at an early growth stage. Under our measurement conditions, the depth of focus is often exceeded owing to the varying heights of Arabidopsis plants, even at the same growth stage. The depth of focus in 3D measurement currently depends on manual adjustment, but automated focusing is required if this method is to be applied to mutant screening. If the resolution of the measurement device were flexibly adjustable, we could deal with more comprehensive phenotypes, from microscopic traits such as trichomes to macroscopic traits such as flowers. The second problem is the developmental record of the 3D measurement: as described above, the developmental stages of Arabidopsis measurable by the LRF are limited to young stages when the seedlings are less than 2 cm. Third, the occlusion problem means that data at dead angles cannot be measured because of the acquisition mechanism of LRFs. If leaves overlap each other, 3D data for the hidden rear leaf cannot be generated. This problem is inherent to LRFs, which capture only the object's surface; if other 3D measurement devices that capture inner structure, such as CT, were adopted, the occlusion problem might be resolved. However, every 3D measurement device has both merits and demerits. A more appropriate measurement device will be investigated in future research, taking into consideration varying conditions, ease of handling, and cost.
The difficulties in the automatic image processing of this report concern two main topics: threshold determination and automatic segmentation. Thresholds are often required in noise-reduction processing to generate an accurate 3D polygon model. At present, the thresholds are given empirically, that is, manually optimized; in future they should be determined automatically. As for automatic segmentation, the currently proposed algorithm segments true leaves into blades and petioles by detecting corners on the plant contours in a range image. Several mutants might show obscure borders between blades and petioles, so more sophisticated segmentation algorithms might be required.
In order to accomplish fully automatic mutant screening, continuous research effort into the exploitation and extraction of morphological traits will be required. This report focuses on only two 3D-specific morphological traits; to achieve automatic mutant screening, all extractable morphological traits should be considered. Thus, it is necessary to assemble algorithms for extracting multiple traits. Various morphological traits should be investigated for all standard wild types under uniform experimental conditions. If the traits of wild types are collected as reference data, mutants can be detected automatically by quantifying their differences from the standard wild types. Moreover, beyond the viewpoint of reverse genetics, differences in morphological traits between ecotypes are also applicable to forward genetics, which hunts for the genes responsible for traits by quantitative trait mapping. Thus, extracted quantitative traits will be of great value in gene research. The color image registered with a range image can provide color and texture information highlighting important traits such as discoloration (color) or variegation (texture). Toward color/texture image processing, it will be necessary to establish a normalization process to equalize background color. Further, morphological and color/texture traits extracted with 3D measurement can serve as a resource in a web database. As a channel to supply Arabidopsis information on the web, the Arabidopsis 2010 project (Chory et al., 2000) mentions a clickable virtual model of Arabidopsis. While our 3D polygon models need to be compressed or parameterized before they can be used in a powerful user interface, they are applicable to such a clickable virtual model. Thus, not only the extracted traits as end products but also the 3D models of the materials deserve to be prepared as a web resource.
Plant growth conditions
Arabidopsis thaliana seeds were treated at 4°C for 2 days and then transferred to continuous red light at 22°C for 2 h. After the red-light treatment, seeds were sown in soil and transferred to a plant growth room (16 h light/8 h dark at 22°C). The average light intensity was 70 µmol m−2 sec−1. Plants were transplanted to 1.5 ml microtubes before laser scanning. For 3D measurement, Arabidopsis plants were selected at the stage when two true leaves had developed, because of the bounds of the measurement range; accordingly, the two ecotypes Col-0 and Tsu-0 in Figure 1 were photographed 13 days after sowing. For the statistical analysis of traits between Col-0 and Tsu-0, 16 true leaves from 8 individuals were used.
System configuration of 3D measurement
The system configuration for 3D shape measurement is furnished with automatic control toward high-throughput mutant screening. It consists of an LRF with a texture data scanner, a light used for the color texture of the polygon model, and a control computer. The computer automatically controls the shutter of the camera and the light source. Automatic processing for one measurement takes about 45 sec in total: about 30 sec for laser scanning, a few seconds for capturing a color image, and 10 sec for pre-processing and generation of data files. The angle of elevation of the camera is fixed at 90° above the Arabidopsis plant. The LRF is based on the VOXELAN HEW-50HS (Hamano Engineering Co. Ltd., Kawasaki, Japan), which has a color image scanner accurately coinciding with the coordinates of a range image. The dimensions of the measured volume are 25, 10, and 23.4 mm in width, depth, and height, respectively. The resolution between data points is 0.045 mm. The laser in the LRF has a peak wavelength of 670 nm; this value overlaps with the absorbance spectra of phytochromes (Grimm and Rüdiger, 1986), so laser scanning might cause plant movement by light activation. To avoid this possibility, our proposed system uses a laser whose peak is shifted up from 670 nm.
Polygon model reconstruction
The range image and color image are generated after 3D measurement by the LRF. In the range image, the intensity of a pixel represents the depth from a reference point to the measured object. Each pixel in the range image is converted to a 3D point using its row index, column index, and intensity. Three neighboring pixels in a range image then constitute a triangular polygon on three vertices in 3D space; the front face of a polygon is defined by a counter-clockwise ordering of vertex indices. However, the reconstructed polygon model cannot express the correct surface of Arabidopsis because of various sources of noise. We therefore applied two noise-reduction methods. First, the depth intensity in a range image is cut off at a given threshold. Second, we set an empirical threshold on the difference between the maximum and minimum depths of the vertices comprising a triangular polygon, so that unsuitably noisy polygons are removed. All software processing was developed in MATLAB 6.5 (MathWorks Inc., Natick, MA, USA). Polygon models are generated in the standard 3D file format Virtual Reality Modeling Language 2.0 (VRML). The VRML browser glview 4.4 (http://home.snafu.de/hg/) is used as rendering software for the polygon model figures. For smooth visualization of the polygon models, the Gouraud shading model is selected in the VRML browser.
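The second noise-reduction step, removing triangles whose vertex depths differ by more than an empirical threshold, can be sketched as follows; the function name and the threshold value in the example are ours.

```python
import numpy as np

def filter_noisy_triangles(vertices, triangles, depth_thresh):
    """Remove triangles whose vertex depths (Z coordinates) differ by
    more than depth_thresh, following the second noise-reduction step
    in the text. depth_thresh is an empirical threshold (mm).

    vertices : (N, 3) array; triangles : (M, 3) vertex indices.
    """
    z = vertices[triangles, 2]                          # (M, 3) depths
    keep = z.max(axis=1) - z.min(axis=1) <= depth_thresh
    return triangles[keep]
```

Triangles spanning a large depth jump typically bridge a leaf edge and the background, so discarding them cleans the blade silhouette before trait extraction.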
Automatic segmentation and classification
To identify blades and petioles, several landmarks on the 3D data constituting a polygon model need to be located. This report adopts the bases of blades and the joints of petioles as landmarks. To allow easy processing, the true leaves are positioned in a vertical direction in the range image; consequently, the cotyledons tend to spread in a horizontal direction. To locate the landmarks, we apply 2D image processing algorithms to the range image. First, we transform the range image into a binary image to pick out the pixels belonging to the plant rather than the background. Next, we extract the contour of the whole shape of the Arabidopsis plant; a contour pixel is defined as one whose eight neighbors contain a background pixel. Lastly, we detect corners of sharp curves whose angle falls below a threshold by the k-curvature method (Costa and Cesar, 2001). Among all the extracted corners, we assign landmarks using knowledge of the installed arrangement of the true leaves. Landmarks at the bases of the blades are defined as the first corners detected from the tip of each true leaf by a contour-following algorithm (Jain, 1989); similarly, landmarks at the joints of the petioles are defined as the second corners detected. Pairs of landmarks are connected by straight lines, and the region inside the contour is segmented by these lines. In order from the top of the range image, the segmented regions are classified as upper blade, upper petiole, lower petiole, and lower blade.
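The k-curvature corner detection can be sketched as below: for each contour point, the angle between the vectors to its k-th predecessor and k-th successor is computed, and points whose angle falls below the threshold (120° in this study) are reported as corners. This is an illustrative sketch with names of our own choosing.

```python
import numpy as np

def k_curvature_corners(contour, k=5, angle_thresh=120.0):
    """Detect corners on a closed contour by the k-curvature method.

    contour : (N, 2) array of ordered contour points.
    For each point i, the angle between the vectors from point i to
    points i-k and i+k is computed; indices whose angle is below
    angle_thresh (degrees) are returned as corners.
    """
    n = len(contour)
    corners = []
    for i in range(n):
        a = contour[(i - k) % n] - contour[i]  # toward k-th predecessor
        b = contour[(i + k) % n] - contour[i]  # toward k-th successor
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle < angle_thresh:
            corners.append(i)
    return corners
```

On a straight contour segment the angle is close to 180°, so only genuinely sharp features such as blade bases and petiole joints fall below the threshold.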
Automatic extraction of 3D traits
The direction of the blade surface, the first example of a 3D-specific trait, is defined by the angles between the normal vectors of the triangular polygons (normal vectors) and the weighted average of the normal vectors (mean normal vector), where the weight is the area of each polygon. A normal vector can be calculated from the positions of the vertices comprising a triangular polygon. As an Arabidopsis surface model is composed of triangular polygons, an angle for the direction of the blade surface can be obtained for each individual triangular polygon. If the normal vector of a triangular polygon and the mean normal vector point in the same direction, the angle is 0°. A polygon model of the blade alone is generated with a pseudo-color corresponding to each angle, from red for 0° to blue for 90°. To promote a better understanding, a histogram of the angles used to determine the direction of the blade surface is depicted. The histogram shows the frequency density of triangular polygons at each angle in steps of 1°, where the frequency density is weighted by the area of each polygon. The quantitative trait for the mean direction of the blade surface, called MDBS, is defined as the weighted average of the angles. The second example of a 3D-specific trait, epinasty or hyponasty, is calculated as the macroscopic leaf curvature of the blade of a true leaf. First, we segment and extract the blade parts of the two true leaves. Next, we calculate the 3D principal axes using the contour of the extracted blade. Then, we cut the polygonal surface along the minor principal axis to obtain a cross-section. The cross-section data of a blade are approximated by a quadratic polynomial using least-squares estimation. The quadratic polynomial can be denoted as w = β0 + β1v + β2v² (β2 ≠ 0), where w and v are values on the third principal axis and the second principal (transverse) axis, respectively.
The variables β0, β1, and β2 are the coefficients of the approximating quadratic polynomial. The estimated second-order coefficient β2 is defined as the quantitative trait of epinasty or hyponasty.
We thank Dr Ichiroh Kanaya for advice about 3D measurement by an LRF. We are also grateful to Masako Fukuda for plant growth management. We acknowledge generous support from RIKEN GSC.