Automatic quantification of morphological traits via three-dimensional measurement of Arabidopsis

Authors

  • Eli Kaminuma,

    Corresponding author
    1. Genomic Knowledge Base Research Team, Bioinformatics Group, RIKEN Yokohama Institute, Genomic Sciences Center, 1-7-22 Suehiro-cho, Tsurumi-ku, Yokohama, Kanagawa 230-0045, Japan,
      For correspondence (fax +81 45 503 9553; e-mail eli@gsc.riken.go.jp).
  • Naohiko Heida,

    1. Genomic Knowledge Base Research Team, Bioinformatics Group, RIKEN Yokohama Institute, Genomic Sciences Center, 1-7-22 Suehiro-cho, Tsurumi-ku, Yokohama, Kanagawa 230-0045, Japan,
  • Yuko Tsumoto,

    1. Plant Function Exploration Team, Plant Functional Genomics Research Group, RIKEN Yokohama Institute, Genomic Sciences Center, 1-7-22 Suehiro-cho, Tsurumi-ku, Yokohama, Kanagawa 230-0045, Japan,
    2. Department of Biology, Ochanomizu University, 2-1-1 Ohtsuka, Tokyo 112-8610, Japan,
  • Naoki Yamamoto,

    1. Department of Biology, Ochanomizu University, 2-1-1 Ohtsuka, Tokyo 112-8610, Japan,
  • Nobuharu Goto,

    1. Department of Biology, Miyagi College of Education, Aramaki, Aoba-ku, Sendai 980-0845, Japan, and
  • Naoki Okamoto,

    1. NEC Informatec Systems Ltd, 3-2-1, Sakado, Takatsu, Kawasaki, Kanagawa 213-0012, Japan
  • Akihiko Konagaya,

    1. Genomic Knowledge Base Research Team, Bioinformatics Group, RIKEN Yokohama Institute, Genomic Sciences Center, 1-7-22 Suehiro-cho, Tsurumi-ku, Yokohama, Kanagawa 230-0045, Japan,
  • Minami Matsui,

    1. Plant Function Exploration Team, Plant Functional Genomics Research Group, RIKEN Yokohama Institute, Genomic Sciences Center, 1-7-22 Suehiro-cho, Tsurumi-ku, Yokohama, Kanagawa 230-0045, Japan,
  • Tetsuro Toyoda

    1. Genomic Knowledge Base Research Team, Bioinformatics Group, RIKEN Yokohama Institute, Genomic Sciences Center, 1-7-22 Suehiro-cho, Tsurumi-ku, Yokohama, Kanagawa 230-0045, Japan,


Summary

Many mutants have been isolated from the model plant Arabidopsis thaliana, and recent important genetic resources, such as T-DNA knockout lines, accelerate the identification of new mutants. However, present phenotypic analysis in mutant screens depends mainly on qualitative descriptions from visual observation of morphological traits. We propose a novel method of phenotypic analysis based on precise three-dimensional (3D) measurement by a laser range finder (LRF) and automatic data processing. We measured the 3D surfaces of young plants of two Arabidopsis ecotypes and successfully defined two new traits, the direction of the blade surface and epinasty of the blade, in quantitative terms. The proposed method yields more quantitative and precise descriptions of plant morphologies than conventional 2D measurement allows. It will open a way to discover new traits in mutant pools or natural ecotypes on the basis of 3D data.

Introduction

Recent progress in plant genome research and the substantial generation of genetic resources have enabled more systematic analysis of plant gene function than was previously possible, especially through the study of mutants. At the RIKEN Genomic Sciences Center, more than 50 000 activation-tagged mutant lines of Arabidopsis have been produced for saturation mutagenesis of the genome (Nakazawa et al., 2003). To find the function of a gene, systematic analysis of phenotypes is vital. However, conventional phenotypic screening depends mainly on visual observation of qualitative traits, and is insufficient for detecting subtle changes in quantitative traits, such as leaf curvature, growth rate, and mode of ramification, which are difficult to measure or describe accurately by human observation. Several studies that quantify morphological traits for automatic mutant screening rely on two-dimensional (2D) image processing (Boyes et al., 2001; Reuzeau et al., 2003; Xiong et al., 1999). In these studies, quantitative traits can be extracted from the 2D image. However, as plant shape is 3D, exact values of morphological traits cannot be recovered from a 2D image. A 3D reconstructed model of transgenic mouse embryos using episcopic fluorescence image capturing has been reported as an example of three-dimensional phenotypic analysis (Weninger and Mohun, 2002). Other research on screening mutant mice has applied an image-processing algorithm to 2D slice images (Paulus et al., 2000) generated from a 3D X-ray computed tomography (CT) data set, known as 3D CT. These mutant studies based on 3D measurement have the advantage of allowing a more precise description than is possible with conventional 2D measurement. However, they do not incorporate automatic processing to extract 3D-specific traits.
Beyond mutation research, plant scientists in agriculture and developmental biology have turned their attention to the 3D quantification of plant morphological parameters. The spatial distribution of roots, as well as root length, has been quantified using 3D CT data from chestnut trees and maples (Pierret et al., 1999, 2000). The leaf area of cabbage has been calculated from 3D data obtained by stereo-photogrammetry, which estimates a 3D model from stereo images (Kanuma et al., 1998). Developmental parameters of cotton and bean have been extracted from 3D data from a sonic digitizer to visualize developmental processes by animation (Room et al., 1996). Some of these studies employed automatic algorithms to extract 3D morphological parameters. However, as they were not intended for large-scale use, they do not include systematic data processing to handle many individuals uniformly. Our aim is 3D phenotypic analysis with automatic processing at every step, allowing precise descriptions of traits toward computational mutant screening. Here we present a new method combining accurate measurement of 3D plant surfaces with automatic extraction of morphological traits (parameters). In this report, we use an LRF as the 3D measuring device; the LRF is popular among 3D digitizers owing to its ease of handling and appropriately precise resolution. As examples of automatic data processing, we describe reconstruction of a 3D model and segmentation of plant parts. Moreover, we present two quantified 3D-specific traits: the direction of the blade surface and epinasty of the blade. The quantitative analyses elucidate the differences in these traits between the two ecotypes Columbia (Col-0) and Tsu-0 (Figure 1a,b), where the two ecotypes serve as stand-ins for the wild type and mutants in a prospective mutant screen.

Figure 1.

Pictures of the two ecotypes of A. thaliana.

(a) Col-0.

(b) Tsu-0.

Results

Acquisition of 3D data

After 3D measurement using an LRF and a texture data scanner, a range image and an RGB color image are generated; they are depicted in Figure 2(a,b), respectively. Each image has 67 354 pixels in 238 rows and 283 columns, with a resolution between pixels of 0.045 mm. The intensity of a pixel in the range image represents the depth of the measured object from a reference point. In this spatial coordinate system, locations in a general 2D image are coordinates on a 2D plane, described in terms of X and Y; likewise, the intensity of a pixel in the range image is equivalent to a coordinate on the Z axis in 3D space. Thus rows, columns, and pixel intensities in a range image indicate coordinates on the X, Y, and Z axes. Hence a range image, although it is a 2D representation, is equivalent to 3D point data, and is therefore called a 2.5D image. The advantage of the 2.5D image is that classical 2D image-processing algorithms, whose computational costs are substantially lower than those of full 3D processing, can be applied to it. Practical operation on the 2.5D image is explained in the Automatic segmentation and classification section.
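The conversion from a 2.5D range image to 3D point data described above can be sketched as follows. This is an illustrative Python/NumPy version (the paper's own processing was implemented in MATLAB); the pixel pitch and the depth scale per gray level are hypothetical stand-ins for the scanner's calibration constants.

```python
import numpy as np

# Hypothetical calibration constants: the in-plane pixel pitch (mm) is given
# in the text as 0.045 mm; the depth scale per intensity level is assumed.
PIXEL_PITCH = 0.045
DEPTH_SCALE = 0.045

def range_image_to_points(range_img, background=0):
    """Convert a 2.5D range image to an (N, 3) array of 3D points.

    Columns and rows give X and Y; pixel intensity gives Z (depth).
    Background pixels (intensity == background) are discarded.
    """
    rows, cols = np.nonzero(range_img != background)
    depth = range_img[rows, cols].astype(float)
    return np.column_stack([cols * PIXEL_PITCH,      # X from column index
                            rows * PIXEL_PITCH,      # Y from row index
                            depth * DEPTH_SCALE])    # Z from intensity

# A 238 x 283 range image as in the text, with one foreground pixel
img = np.zeros((238, 283), dtype=np.uint8)
img[100, 50] = 200
pts = range_image_to_points(img)
```

Because the range image is a regular grid, this conversion is a constant-time lookup per pixel, which is what makes 2.5D processing so much cheaper than general 3D processing.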

Figure 2.

Acquired data from the laser surface scanner and 3D reconstructed model of a plant of the ecotype Tsu-0.

(a) Range image: the intensity of pixels indicates the depth of a measured object from a reference point of the LRF. As the depth increases, the level of the gray scale darkens.

(b) RGB color image acquired at the same coordinates as the pixels in the range image. Color data are used as photographic texture for the triangular polygonal data. Note that the tip of the left cotyledon has the brown seed coat attached.

(c) An example of polygon models with triangular meshes for a spherical surface. Three polygon models are shown, from coarse to high density of triangular meshes.

(d) Snapshot of a polygon model with pseudo-color shading. The back and flanking sides are not captured by the LRF mounted directly above, so the model surface covers only the side facing the scanner. Unevenness of the surface shape is observable.

(e) Snapshot of the polygon model with the RGB color image mapped on. Scale bar = 1 mm in (a,b). 3D axes are shown in (d,e): green and red bars indicate the horizontal and vertical axes of the range or color image, respectively, and the blue bar indicates the depth. The viewpoints in (d,e) are at 18° azimuth and 15° elevation in 3D space.

Polygon model reconstruction

Objects in digital 3D space need to be modeled in terms of a data structure. A polygon model is a representation of the geometric surface of a 3D shape as an arbitrary network of connected polygons forming bounded planar surfaces. Figure 2(c) depicts triangular polygon models approximating a spherical surface. Each model is specified by information on edges, vertices, and polygons. The number of polygons (vertices) in each model is 72 (49), 162 (100), and 648 (361), from left to right. As the figure shows, subdivision into finer polygons better approximates the desired surface, in this case a sphere. Triangular polygons are generated from triples of vertices in 3D space according to the neighborhood relationships of points in the range image. Figure 2(d,e) shows snapshots of a reconstructed polygon model mapped with pseudo-color and RGB color, respectively. The polygon model comprises 35 198 triangular polygons with 18 294 vertices. The 3D axes are depicted in red, green, and blue at the lower right of each snapshot to show the directions of X, Y, and Z. The Z-axis, corresponding to the depth in the range image, does not point directly toward the front but is on a slant. In Figure 2(d), the unsmoothed triangular polygons make the modeled leaf surface appear uneven. If a smoothing algorithm were applied, the leaf surface would become flatter; however, this study uses the original rough polygon model for phenotypic analysis, in order to investigate trait extraction from the measured surface as is. Figure 2(e) shows that the color of each triangular polygon is assigned as the average of the colors at its three vertices. While color and texture are not analyzed in this report, the color image would enable automatic image-processing analysis of traits such as discoloration (color) and variegation (texture).
By digital image processing, the traits of color and texture can be described quantitatively in the same way as morphological traits.
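The reconstruction of triangular polygons from neighboring range-image pixels can be sketched as follows. This Python version is an illustration of the idea rather than the authors' MATLAB code: each 2x2 block of valid pixels yields up to two triangles, and the vertex indices are ordered counter-clockwise so that the front face points toward the scanner.

```python
import numpy as np

def triangulate_range_image(range_img, background=0):
    """Build triangular polygons from neighboring pixels of a range image.

    Each pixel that is not background becomes a vertex; each 2x2 block of
    valid pixels yields two triangles. Returns a list of index triples,
    ordered counter-clockwise to define the front face.
    """
    h, w = range_img.shape
    valid = range_img != background
    index = -np.ones((h, w), dtype=int)
    index[valid] = np.arange(valid.sum())        # vertex id per valid pixel
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            a, b = index[r, c], index[r, c + 1]
            d, e = index[r + 1, c], index[r + 1, c + 1]
            if min(a, b, d) >= 0:
                tris.append((a, d, b))           # upper-left triangle
            if min(b, d, e) >= 0:
                tris.append((b, d, e))           # lower-right triangle
    return tris

# Demo: a 3x3 image with a single 2x2 block of valid pixels -> 2 triangles
img = np.zeros((3, 3), dtype=int)
img[:2, :2] = 5
tris = triangulate_range_image(img)
```

Relying on the pixel grid for connectivity is what keeps reconstruction cheap: no 3D nearest-neighbor search is needed, only a scan over the image.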

Automatic segmentation and classification

To extract traits of true leaves, automatic segmentation and classification of the whole polygon model into plant parts is required. We segmented true leaves into two blades and two petioles in the polygon model by using the corresponding range image. Classification of the segmented parts used knowledge of the arrangement of the plant during 3D measurement. Figure 3(a) shows the corners, marked with squares, detected by a k-curvature algorithm in the binary contour image derived from a range image. Corners are selected where the angle of curvature is below 120°. Next, landmarks at the bases of blades and the joints of petioles are automatically specified for classification using knowledge of where the leaves are placed. Figure 3(b) shows the upper and lower blades and petioles classified and rendered with assigned gray levels. When the corner landmarks are identified correctly, the correct classification is obtained.

Figure 3.

Segmentation and classification processing using a range image.

(a) Corner detection using a k-curvature algorithm on a binary contour image derived from a range image. Black squares indicate detected corners. Corners are defined as sharp curves with angles below 120°. CO, cotyledon; and TR, true leaf.

(b) Two blades and two petioles segmented and classified with individually assigned gray levels. Regions are identified using landmarks of detected corners. B1, upper blade; B2, lower blade; P1, upper petiole; and P2, lower petiole.

Automatic extraction of 3D traits

Phenotypic analysis using a reconstructed 3D polygon model of Arabidopsis enables us to quantify morphological traits comprehensively. In this report, we give two examples of 3D-specific quantitative traits: the direction of the blade surface and epinasty of the blade. Differences in these traits between the two ecotypes are then presented.

Direction of the blade surface

First, we focused on the direction of the blade surface, an important trait in terms of the light environment: to gain energy by photosynthesis, the blades of plants have to face the direction of sunlight. The shape of a blade surface is approximated by triangular polygons as outlined in Figure 2(c,d), so the direction of the blade surface can be defined via the normal vector of each triangular polygon. Note that the normal vector is a vector perpendicular to a polygonal surface; its direction determines the orientation of the surface. A quantity for the direction of the blade surface could be defined by the angles between the vector of light incidence and the normal vectors. However, because in this study the incident light does not originate from a point source, we substitute the weighted average of the normal vectors for the incident-light direction. Thus, we quantify the direction of the blade surface as the angles between the normal vectors of the polygonal surfaces (normal vectors) and their weighted average (mean normal vector), as shown in Figure 4(a). Conventional 2D image analysis cannot deal with such normal vectors; the direction of the blade surface is therefore a 3D-specific trait. We measured the direction of the blade surface in two wild-type ecotypes, Tsu-0 and Col-0. Figure 4(b) shows a polygon model of the upper blade of ecotype Tsu-0 with pseudo-colors reflecting the angles between the normal vectors and the mean normal vector; the pseudo-color changes from red to blue as the angle varies from 0 to 90°. The upper blade model of Tsu-0 is almost entirely red, meaning that the normals of most polygonal faces of the blade are nearly parallel to the mean normal vector. In contrast, Figure 4(c) presents a pseudo-colored polygon model of the upper blade of ecotype Col-0, in which the pseudo-color at the edges of the blade tends toward green.
In other words, the blade of ecotype Col-0 appears more curved than that of Tsu-0. To analyze the direction of the blade surface, Figure 4(d) depicts a histogram of the frequency density of the angles, plotted against angle on the horizontal axis; the frequency density is weighted by the area of each polygon. The data in the histogram include not only the upper blades (B1) but also the lower blades (B2). To treat the direction of the blade surface as a specific quantitative trait, we define the weighted average of the angle distribution as the mean direction of the blade surface (MDBS). The weighted averages of the angle distributions are 25.2° for Col-0 and 18.8° for Tsu-0 over 16 true leaves. The statistical significance of the difference between the two ecotypes was tested with a one-tailed two-sample t-test, which showed that MDBS is significantly greater for Col-0 than for Tsu-0 (P < 0.001). This indicates that the blade surface of Col-0 tends to be more inclined away from the mean surface direction than that of Tsu-0.
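The MDBS computation described above can be sketched as follows: per-triangle normals and areas, an area-weighted mean normal, angles between each normal and the mean, and finally the area-weighted average of those angles. This is a Python/NumPy illustration (the paper's processing was in MATLAB), not the authors' exact code.

```python
import numpy as np

def mdbs(vertices, triangles):
    """Mean direction of the blade surface (MDBS), in degrees.

    vertices:  iterable of (x, y, z) points.
    triangles: iterable of (i, j, k) vertex-index triples.
    """
    v = np.asarray(vertices, float)
    t = np.asarray(triangles, int)
    # Cross product of two edges gives a normal whose length is twice the area
    cross = np.cross(v[t[:, 1]] - v[t[:, 0]], v[t[:, 2]] - v[t[:, 0]])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    # Area-weighted mean normal vector
    mean_n = np.average(normals, axis=0, weights=areas)
    mean_n /= np.linalg.norm(mean_n)
    # Angle of each face normal to the mean normal, then weighted average
    angles = np.degrees(np.arccos(np.clip(normals @ mean_n, -1.0, 1.0)))
    return np.average(angles, weights=areas)
```

For a perfectly flat blade every face normal coincides with the mean normal and MDBS is 0°; larger values indicate faces tilted away from the mean surface direction, as reported for Col-0 relative to Tsu-0.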

Figure 4.

Direction of the blade surface.

(a) Direction of the blade surface is quantified as angle θ between a normal vector of a polygon face (normal vector N) and the weighted average of normal vectors (mean normal vector M), where the weight is the area of a polygon.

(b) Pseudo-colored polygon model of the adaxial surface of an upper blade of ecotype Tsu-0.

(c) Pseudo-colored polygon model of the adaxial surface of an upper blade of ecotype Col-0.

(d) Histogram of the area-weighted number of polygons against the angle between the normal vector and the mean normal vector. Polygons used in the histogram are extracted from blades of true leaves only. The bins on the horizontal axis are in 1° steps. Data in the histogram are normalized by the number of polygons and the weighted area of polygons in each blade. Blue and red lines indicate Tsu-0 and Col-0, respectively; solid and dotted lines denote upper and lower blades, respectively. Each angle along the horizontal axis is mapped to the corresponding pseudo-color as depicted below the histogram.

Epinasty of a blade

The trait chosen to quantify epinasty and hyponasty of leaves (Keller and Volkenburgh, 1997; Nakazawa et al., 2003) was the global curvature of the leaf blade. Epinasty results from more vigorous growth on the adaxial surface of a leaf blade, causing a downward curvature; hyponasty, in contrast, is an upward curvature of the blade. Such traits are found in hormonal mutants, for example auxin-response mutants, and also result from mutations in transcription factors (Nakazawa et al., 2001, 2003). To obtain the global curvature of a leaf blade, we extracted data from the cross-section of a blade along the transverse axis. Figure 5(a) depicts the 3D contour of a blade of Tsu-0, corresponding to the segmented upper blade in Figure 3(b). From the 3D contour of the blade, we can calculate the 3D principal axes: estimation of the 2D principal axes (Costa and Cesar, 2001) extends naturally to 3D, as shown in the figure. The 3D axes are called the 1st, 2nd, and 3rd principal axes in order of descending eigenvalue; the 1st and 2nd axes are termed the longitudinal and transverse axes in this report. The longitudinal axis represents the direction in which the blade is most elongated; the transverse axis, perpendicular to it, is the transverse direction of the blade. Along the transverse axis, we cut the polygonal blade model to obtain cross-section data. Figure 5(c) shows standardized cross-section data of blades along the respective transverse axes of Tsu-0 (blue) and Col-0 (red); similar curvatures of cross-sections along the longitudinal axis are depicted in Figure 5(b). The cross-section data along the transverse axis in Figure 5(c) are approximated by a quadratic polynomial to calculate the macroscopic leaf curvature. In Figure 5(d), the estimated curve at each cross-section is depicted with the quadratic coefficient in the legend.
We defined the second-order (quadratic) coefficient as the quantitative trait of epinasty or hyponasty of a leaf blade: if the coefficient is negative, the leaf is classified as epinastic; if positive, as hyponastic. All the coefficients in Figure 5(d) are negative, so the true leaves of both Tsu-0 and Col-0 are classified as epinastic. The averages of the quadratic coefficients over 16 true leaves are −0.14 for Col-0 and −0.072 for Tsu-0. A one-tailed two-sample t-test indicates that the absolute mean value of the estimated coefficients is significantly larger for Col-0 than for Tsu-0 (P < 0.001). This result is consistent with the analysis of the direction of the blade surface, as Col-0 true leaves tend to be more curved than those of Tsu-0.
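The extension of 2D principal-axis estimation to 3D mentioned above amounts to an eigen-decomposition of the covariance matrix of the contour points. A minimal Python/NumPy sketch of that step (not the authors' implementation) is:

```python
import numpy as np

def principal_axes(points):
    """3D principal axes of a set of contour points.

    Centers the points on their centroid and eigen-decomposes the 3x3
    covariance matrix. Returns (eigenvalues, axes) with eigenvalues in
    descending order, so axes[0] and axes[1] correspond to the longitudinal
    (1st) and transverse (2nd) principal axes, and axes[2] to the 3rd.
    """
    p = np.asarray(points, float)
    centered = p - p.mean(axis=0)          # relocate centroid at the origin
    cov = np.cov(centered.T)               # 3x3 covariance matrix
    evals, evecs = np.linalg.eigh(cov)     # eigh returns ascending order
    order = np.argsort(evals)[::-1]        # reorder to descending eigenvalues
    return evals[order], evecs[:, order].T  # rows are the 1st, 2nd, 3rd axes
```

Cutting the blade along axes[1] and recording heights along axes[2] then yields the (v, w) cross-section data to which the quadratic is fitted.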

Figure 5.

Global leaf curvature for epinasty and hyponasty.

(a) Contour of the upper blade of Tsu-0 on real 3D axes. The centroid of the leaf contour is relocated at the origin of the 3D axes. Components of the 3D axes are denoted by X, Y, and Z. Green and red lines signify the longitudinal axis and the transverse axis, respectively; these are defined as the 1st and 2nd principal axes. The axes' lengths are drawn proportionally to the respective eigenvalues; the length of the 3rd principal axis is too small to be visible. The three principal axes and their components are denoted by U, V, and W and u, v, and w.

(b) Normalized section data along the longitudinal axis of a blade. The joint with the petiole is at the positive end of the longitudinal axis. Data for each leaf are standardized by the respective eigenvalue for the longitudinal axis.

(c) Normalized section data along the transverse axis of upper and lower blades. Data for each leaf are standardized by the respective eigenvalue of the transverse axis.

(d) Estimated curves of blades based on only the quadratic coefficient of the polynomial; the estimated coefficient is given in parentheses. In (b–d), blue and red data denote Tsu-0 and Col-0, respectively; solid and dotted curves indicate the upper and lower blades, respectively. In (b,c), the data on the 3rd axis are offset to pass through 0 at the origin of the longitudinal or transverse axis.

Units of the axes in (a–d) are mm.

Discussion

In order to measure morphological aspects of A. thaliana precisely, we set up a 3D laser scanner and developed computer programs to quantify morphological traits from the measured 3D information. Conventional studies quantifying morphological traits in the phenotypic analysis of Arabidopsis are based on 2D measurements; consequently, 3D-specific traits are not correctly quantified, because the 3D quantity is projected onto a 2D plane. The method proposed here is based on 3D measurement and gives more accurate descriptions of 3D-specific morphological traits than conventional 2D measurement. Computational digital analysis then enables us to quantify traits that were conventionally qualitative, such as epinasty of a blade. Beyond such quantitative description, the method enables the definition of new 3D traits such as the direction of the blade surface. Thus, 3D measurement combined with automatic processing permits both quantitative description and the discovery of new traits in plant morphology.

The method has several problems that need to be resolved before it can be developed further. There are three major problems in 3D measurement using LRFs: the range of measurement, developmental recording, and occlusion. First, the problem of measurement range arises because, at a given resolution, the measured volume is confined within narrow limits suited only to a small object, for example Arabidopsis at an early growth stage. Thus, under our measurement conditions, the depth of focus is often exceeded owing to the varying heights of Arabidopsis plants, even at the same growth stage. The depth of focus in 3D measurement currently depends on manual adjustment; automated focusing will be required if this method is to be applied to mutant screening. If the resolution of the measurement device were flexibly adjustable, we could cover more comprehensive phenotypes, from microscopic traits such as trichomes to macroscopic traits such as flowers. The second problem is the developmental recording of 3D measurements: as described above, the developmental stage of Arabidopsis measurable by the LRF is limited to young seedlings less than 2 cm tall. Third, the occlusion problem means that data at dead angles cannot be measured, owing to the acquisition mechanism of LRFs. If leaves overlap each other, the hidden parts of a rear leaf cannot be captured in the 3D data. This problem is inherent to LRFs, which capture only the object's surface; if a 3D measurement device that captures inner structure, such as CT, were adopted, the occlusion problem might be resolved. However, every 3D measurement device has both merits and demerits. A more appropriate measurement device will be investigated in future research, taking into consideration varying conditions, ease of handling, and cost.
The difficulties in the automatic image processing of this report concern two main topics: threshold determination and automatic segmentation. Thresholds are often required in noise reduction to generate an accurate 3D polygon model. At present, the thresholds are set empirically, that is, manually optimized; in future they should be determined automatically. As for automatic segmentation, the proposed algorithm segments true leaves into blades and petioles by detecting corners on the plant contours in a range image. Some mutants might show obscure borders between blades and petioles, so more sophisticated segmentation algorithms might be required.

To accomplish fully automatic mutant screening, continued research into the exploration and extraction of morphological traits will be required. This report focuses on only two 3D-specific morphological traits; to achieve automatic mutant screening, all extractable morphological traits should be considered, and it is therefore necessary to build up a catalog of algorithms for extracting multiple traits. Various morphological traits should be investigated for all standard wild types under uniform experimental conditions. If traits of wild types are collected as reference data, mutants can be detected automatically by quantifying the differences with reference to the standard wild types. Moreover, beyond the viewpoint of reverse genetics, differences in morphological traits between ecotypes are also applicable to forward genetics, which hunts for genes responsible for traits by quantitative trait mapping. Thus, extracted quantitative traits will be of great value in gene research. The color image acquired together with the range image provides color and texture information that can highlight important traits such as discoloration (color) and variegation (texture); toward color/texture image processing, it will be necessary to establish a normalization step to equalize background color. Furthermore, the morphological and color/texture traits extracted by 3D measurement can be offered as a resource in a web database. As a channel for supplying Arabidopsis information on the web, the Arabidopsis 2010 project (Chory et al., 2000) mentions a clickable virtual model of Arabidopsis. While our 3D polygon models would need to be compressed or parameterized before use in such an interface, they are applicable to the clickable virtual model. Thus, not only the extracted traits but also the underlying 3D models deserve to be prepared as a web resource.

Experimental procedures

Plant growth conditions

Arabidopsis thaliana seeds were treated at 4°C for 2 days and then exposed to continuous red light at 22°C for 2 h. After red-light treatment, seeds were sown in soil and transferred to a plant growth room (16 h light/8 h dark at 22°C). Average light intensity was 70 µmol m−2 sec−1. Plants were transplanted to 1.5 ml microtubes before laser scanning. For 3D measurement, Arabidopsis plants were selected at the stage when two true leaves had developed, because of the limits of the measurement range. The two ecotypes Col-0 and Tsu-0 in Figure 1 were accordingly photographed 13 days after sowing. For statistical analysis of traits between Col-0 and Tsu-0, 16 true leaves from eight individuals were used.

System configuration of 3D measurement

The system configuration for 3D shape measurement is furnished with automatic control toward high-throughput mutant screening. It consists of an LRF with a texture data scanner, a light used for the color texture of the polygon model, and a control computer. The computer automatically controls the shutter of the camera and the light source. Automatic processing for one measurement takes about 45 sec in total: about 30 sec for laser scanning, a few seconds for capturing a color image, and 10 sec for pre-processing and generation of data files. The elevation angle of the camera is fixed at 90°, directly above the Arabidopsis plant. The LRF is based on the VOXELAN HEW-50HS (Hamano Engineering Co. Ltd, Kawasaki, Japan), whose color image scanner coincides accurately with the coordinates of the range image. The dimensions of the measured volume are 25, 10, and 23.4 mm in width, depth, and height, respectively, and the resolution between data points is 0.045 mm. The laser of the standard LRF has a peak wavelength of 670 nm, which overlaps the absorbance spectra of phytochromes (Grimm and Rüdiger, 1986), so laser scanning might cause light-activated plant movement. To avoid this possibility, our system uses a laser whose peak wavelength is shifted up from 670 nm.

Polygon model reconstruction

The range image and color image are generated after 3D measurement by the LRF. In the range image, the intensity of a pixel represents the depth from a reference point to the measured object. Each pixel in the range image is converted to a 3D point using its row index, column index, and intensity. Moreover, three neighboring pixels in a range image constitute a triangular polygon on three vertices in 3D space. The front face of a polygon is defined by a counter-clockwise ordering of vertex indices. However, the reconstructed polygon model cannot express the correct surface of Arabidopsis because of various sources of noise, so we applied two noise-reduction methods. First, the depth intensity in the range image is cut off at a given threshold. Second, we set an empirical threshold on the difference between the maximum and minimum depths of the vertices of a triangular polygon, so that unacceptably noisy polygons are removed. All software processing was developed in MATLAB 6.5 (MathWorks Inc., Natick, MA, USA). Polygon models are generated in the standard 3D file format Virtual Reality Modeling Language 2.0 (VRML). The VRML browser GLview 4.4 (http://home.snafu.de/hg/) is used to render the polygon models for the figures, with Gouraud shading selected for smooth visualization.
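The second noise-reduction rule, removing triangles whose vertices span too large a depth range, can be sketched as follows in Python/NumPy (the original code was MATLAB, and the threshold value is an empirical, data-set-dependent assumption, as the text notes):

```python
import numpy as np

def remove_noisy_triangles(vertices, triangles, max_depth_span):
    """Noise reduction on a reconstructed polygon model.

    Drops any triangle whose three vertices span more than max_depth_span
    in depth (the Z coordinate), following the empirical-threshold rule
    described in the text. The threshold itself must be tuned per data set.
    """
    v = np.asarray(vertices, float)
    t = np.asarray(triangles, int)
    z = v[t, 2]                                   # depth of each triangle's 3 vertices
    keep = (z.max(axis=1) - z.min(axis=1)) <= max_depth_span
    return t[keep]

# Demo: the second triangle spans 5 units of depth and is removed
verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 5]])
tris = [(0, 1, 2), (0, 1, 3)]
kept = remove_noisy_triangles(verts, tris, max_depth_span=1.0)
```

Such spiky triangles typically arise at silhouette edges, where the laser grazes the leaf boundary and adjacent pixels jump between the plant and the background depth.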

Automatic segmentation and classification

To identify blades and petioles, several landmarks on the 3D data constituting the polygon model need to be located; this report adopts the bases of blades and the joints of petioles as landmarks. To simplify processing, plants are arranged so that the true leaves run in the vertical direction of the range image; consequently, the cotyledons tend to spread horizontally. To locate the landmarks, we apply 2D image-processing algorithms to the range image. First, we transform the range image to a binary image to pick out the pixels belonging to the plant rather than the background. Next, we extract the contour of the whole Arabidopsis plant: a pixel belongs to the contour if any of its eight neighbors is a background pixel. Lastly, we detect corners, that is, sharp curves with angles below a threshold, by the k-curvature method (Costa and Cesar, 2001). Among all extracted corners, we assign landmarks using knowledge of the installed arrangement of the true leaves: landmarks at the bases of blades are the first corners detected from the tip of each true leaf by a contour-following algorithm (Jain, 1989), and landmarks at the joints of petioles are the second corners detected. Pairs of landmarks are connected by straight lines, and the region inside the contour is segmented by these lines. In order from the top of the range image, the segmented regions are classified as upper blade, upper petiole, lower petiole, and lower blade.
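The k-curvature corner detection used above can be sketched as follows. This is an illustrative Python version of the general method, not the authors' MATLAB code; the choice of k is a tuning assumption, while the 120° threshold comes from the text.

```python
import numpy as np

def k_curvature_corners(contour, k=5, max_angle_deg=120.0):
    """Detect corners on a closed contour by the k-curvature method.

    At each contour point, the angle between the vectors to the points k
    positions behind and k positions ahead is computed; points where this
    angle falls below the threshold (120 degrees in the text) are corners.
    contour: ordered list of (x, y) points along a closed curve.
    """
    c = np.asarray(contour, float)
    n = len(c)
    corners = []
    for i in range(n):
        back = c[(i - k) % n] - c[i]          # vector to point k behind
        fwd = c[(i + k) % n] - c[i]           # vector to point k ahead
        cosang = np.dot(back, fwd) / (np.linalg.norm(back) * np.linalg.norm(fwd))
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle < max_angle_deg:             # straight edges give ~180 degrees
            corners.append(i)
    return corners

# Demo: a 10x10 square contour sampled at unit steps (40 points);
# its four 90-degree corners should be detected, straight edges should not.
square = [(i, 0) for i in range(10)] + [(10, i) for i in range(10)] \
       + [(10 - i, 10) for i in range(10)] + [(0, 10 - i) for i in range(10)]
corners = k_curvature_corners(square, k=5)
```

In practice, points within k of a true corner are often flagged as well, so a non-maximum-suppression step is commonly added to keep one corner per cluster.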

Automatic extraction of 3D traits

The direction of the blade surface, the first example of a 3D-specific trait, is defined by the angles between the normal vectors of the triangular polygons (normal vectors) and the weighted average of those vectors (mean normal vector), where each weight is the area of the polygon. A normal vector can be calculated from the positions of the vertices of a triangular polygon. As an Arabidopsis surface model is composed of triangular polygons, an angle for the direction of the blade surface can be obtained from each polygon. If the normal vector of a triangular polygon and the mean normal vector point in the same direction, the angle is 0°. A polygon model of the blade alone is generated with a pseudo-color corresponding to each angle, ranging from red at 0° to blue at 90°. To aid interpretation, a histogram of the angles determining the direction of the blade surface is depicted. The histogram shows the frequency density of triangular polygons at each angle in steps of 1°, where the frequency density is weighted by polygon area. A quantitative trait for the mean direction of the blade surface, termed MDBS, is defined as the weighted average of the angles.

The second example of a 3D-specific trait, epinasty or hyponasty, is calculated as the macroscopic leaf curvature of the blade of a true leaf. First, we segment and extract the blade regions of two true leaves. Next, we calculate the 3D principal axes using the contour of the extracted blade. Then, we cut the polygonal surface along the minor principal axis to obtain a cross-section. Data from the cross-section of a blade are approximated by a quadratic polynomial using least-squares estimation. The quadratic polynomial is denoted w = β0 + β1v + β2v² (β2 ≠ 0), where w and v are values on the third principal axis and the second principal (transverse) axis, respectively, and β0, β1, and β2 are the coefficients of the approximating polynomial. The estimated second-order coefficient β2 is defined as the quantitative trait of epinasty or hyponasty.
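The two trait definitions above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the function names are hypothetical, and the cross-section extraction (principal axes, cutting plane) is assumed to have been done beforehand so that only the area-weighted angle average and the quadratic fit are shown.

```python
import numpy as np

def mdbs(vertices, triangles):
    """Mean direction of the blade surface (MDBS), as defined in the text:
    the area-weighted average of the angles between each triangle's normal
    and the area-weighted mean normal vector."""
    v = vertices[triangles]                                   # (T, 3, 3)
    cross = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)               # polygon areas
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    mean_n = np.average(normals, axis=0, weights=areas)       # mean normal
    mean_n /= np.linalg.norm(mean_n)
    cosang = np.clip(normals @ mean_n, -1.0, 1.0)
    angles = np.degrees(np.arccos(cosang))                    # 0 deg = aligned
    return np.average(angles, weights=areas)

def epinasty_coeff(v, w):
    """Fit w = b0 + b1*v + b2*v**2 to cross-section data by least squares;
    the second-order coefficient b2 quantifies epinasty or hyponasty."""
    b2, b1, b0 = np.polyfit(v, w, 2)   # polyfit returns highest degree first
    return b2
```

A perfectly flat blade gives MDBS = 0° (every normal equals the mean normal), and a parabolic cross-section w = v² gives β2 = 1, consistent with the definitions above.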

Acknowledgements

We thank Dr Ichiroh Kanaya for advice about 3D measurement by an LRF. We are also grateful to Masako Fukuda for plant growth management. We acknowledge generous support from RIKEN GSC.

Supplementary Material

The following material is available from http://www.blackwellpublishing.com/products/journals/suppmat/TPJ/TPJ2042/TPJ2042sm.htm

Appendix S1 calc_MDBS.exe – MS-DOS program to calculate the MDBS trait.

Appendix S2 calc_epinasty.exe – MS-DOS program to calculate the epinastic trait.

Appendix S3 Tsu_ul_nocol.wrl – Input sample of a blade polygon model in VRML format.

Appendix S4 mglinstaller.exe – MATLAB compiler run-time library installer.

Appendix S5 Tsu_col.wrl – Whole-plant polygon model in VRML format, for reference. This file cannot be used as the input file.
