Keywords:

  • experimental ecology;
  • ImageJ;
  • mesocosm;
  • microbial ecology;
  • microcosm;
  • Python;
  • R;
  • technology;
  • Tetrahymena thermophila

Summary

  1. Experimental laboratory systems (ELS) are widely applied research tools used to test theoretical predictions in ecology and evolution. Combining ELS with automated image analysis could significantly boost information acquisition because of the ease with which abundance and morphological data are collected. Despite these advantages, image analysis has not yet been widely adopted, presumably because of the difficulties of technical implementation.
  2. The tools needed to integrate image analysis into ELS are now readily available: digital camera equipment can be purchased at limited cost, and free software solutions exist that allow sophisticated image processing and analysis. Here, we give a concise description of how to integrate these pieces into a largely automated image analysis workflow. We provide researchers with the necessary background on the principles of image analysis, explaining how to standardize image acquisition and how to validate the results to reduce bias.
  3. Three cross-platform and open-source software solutions for image analysis are compared: ImageJ, the EBImage package in R, and Python with the SciPy/scikit-image libraries. The relative strengths and limitations of each solution are compared and discussed. In addition, a set of test images and three scripts are provided in the Online Supplementary Material to illustrate the use of image analysis and help biologists implement it in their own systems.
  4. To demonstrate the reliability and versatility of a validated image analysis workflow, we introduce our own Tetrahymena thermophila ELS. Examples from evolutionary ecology then show the advantages of image analysis for studying different ecological questions at both the population and individual levels.
  5. Experimental laboratory systems that integrate the advantages of image analysis gain in application range and versatility compared with regular ELS. Such improvements are necessary to understand complex processes such as eco-evolutionary feedbacks, community dynamics and individual behaviour in ELS.

Introduction

To understand the complexity of nature, ecologists and evolutionary biologists have developed various, complementary approaches ranging from comparative analyses, field observations and experiments, to laboratory experiments and theory, each with particular strengths and limitations. Experimental laboratory systems (ELS) have long been recognized as valuable research tools (Holyoak & Lawler 2005). Well-known model organisms allow the study of both ecological and evolutionary responses (Benton et al. 2007) as well as their feedback loops (Yoshida et al. 2003). The advantages of ELS depend directly on key features such as control over the environment and the population, easy replication to increase statistical power, and a high level of repeatability (Fraser & Keddy 1997; Jessup et al. 2004). Balancing these features against logistic constraints (money, manpower and time invested to collect data) determines the efficiency of the system.

Digital image analysis is a rapidly advancing field in computer science with high potential for data collection in many academic disciplines (Burger & Burge 2008), including biology (Weeks & Gaston 1997). The potential of image analysis for data collection in ELS has been shown repeatedly (Hooper et al. 2006; Lukas, Kucerova & Stejskal 2009; Mallard, Le Bourlot & Tully 2012). Yet it is still not widely applied in experimental ecology and evolution. Aside from some pioneering studies (Kirk 1997; Laakso, Loytynoja & Kaitala 2003; Hooper et al. 2006; Fjerdingstad et al. 2007; Tully & Ferrière 2008), most researchers still rely on manual organism counts and cumbersome manual measurements of phenotypes (e.g. Drake & Griffen 2009; Vasseur & Fox 2009; Beveridge, Petchey & Humphries 2010; Bowler & Benton 2010; DeLong & Hanson 2011). In microbial ecology, by contrast, image analysis is part of the toolbox used to characterize cell phenotypes and evaluate patterns in biofilms (Daims & Wagner 2007; Schillinger et al. 2012). A wider application of image analysis can significantly boost the acquisition of information in ELS at limited cost, and it is applicable to a wide variety of study systems (Fig. 1).

Figure 1. Model systems with different organisms where automatic image analysis is applicable (each row from left to right: Pedersen 2008; Benhaïm et al. 2012; Færøvig, Andersen & Hessen 2002; Mallard, Le Bourlot & Tully 2012; Correll et al. 2006; Lukas, Kucerova & Stejskal 2009). Photos kindly provided by the respective authors or permissions granted by John Wiley & Sons publisher (Daphnia and grain beetles).

Image analysis has a series of advantages for ELS. (i) It is highly efficient, providing fast, reliable and low-cost estimation of important biological parameters from a sample (e.g. abundance, morphological and behavioural traits); liberated resources (manpower, money, time) may be reallocated to improve replication and/or add treatments, yielding better scientific output. (ii) It allows the simultaneous measurement of abundance and phenotypic traits such as morphology or behaviour, which are important for understanding eco-evolutionary dynamics (Hairston et al. 2005). (iii) It enables the quantification of trait variation among individuals within a given population (Brehm-Stecher & Johnson 2004). (iv) The wealth of information gathered from images makes it possible to quantitatively assess complex behaviours such as aggregation (Schtickzelle et al. 2009; Schillinger et al. 2012). (v) The raw information on images is stored effectively, allowing re-analysis, reviewing, quality checking, or demonstration.

Given this potential, the poor adoption of the technology is rather surprising. A major obstacle may be that previous attempts to promote image analysis lacked a comprehensive explanation of how image analysis works and details of the technical implementation, and were often customized for a single specific system. To overcome this bottleneck, we present a detailed hands-on guide to implementing image analysis in one's own ELS, describing the necessary steps, pointing towards options for customization, and highlighting common pitfalls. We compare the strengths and limitations of three free software solutions for automated image analysis (ImageJ, R and Python), and provide pre-fabricated scripts ready to try out on a set of test images (see Appendices S1-S4, Supporting Information). We finish our guide with our own ELS using the ciliate Tetrahymena thermophila and some illustrative examples of how automated image analysis is used in evolutionary ecology.

Developing an image analysis workflow

Overall, an image analysis workflow comprises three major steps: image acquisition (shooting the image), image analysis (treating the image and measuring objects) and data processing (cleaning the data). The most crucial step for automatic image analysis is to create a sharp contrast between the objects of interest (foreground) and their environment (background), so they can be accurately distinguished; this process is called segmentation (Gonzalez & Woods 2002). Ideally, the foreground will contain only objects of interest; in practice, however, some misidentified elements, hereafter called artefacts, are likely to be included in the foreground.

Setting up an image analysis workflow implies (i) optimizing the parameters that influence the resulting image, to maximize its information/noise ratio; (ii) fixing them to ensure high reproducibility (e.g. between users or experimental conditions); and (iii) validating the results against reference values measured manually by an informed examiner, to quantify the error rate. We detail how this can be achieved for each of the three workflow steps.

Image acquisition

Reviewing the many hardware and software options for acquiring images in a specific system is beyond the scope of this article. Given that our focus is to explain and illustrate the use of image analysis, we only briefly state the crucial requirements of image acquisition. For further information on optimizing image acquisition, refer to dedicated book chapters on scientific photography (e.g. Haddock & Dunn 2010).

We assume that a system has been created to shoot greyscale images of objects against a background. The use of colour images is only recommended if colour conveys specific information (e.g. to distinguish objects from the background), because they require more storage space and their segmentation is less straightforward. First, to maximize the information collected from each image, the viewing field should be enlarged to cover the largest possible portion of the study area (e.g. a microscope should be used at as low a magnification as possible), while still retaining important detail of object shape or size. Secondly, three aspects of scene illumination are particularly important for image acquisition: contrast, homogeneity and intensity (i.e. pixel brightness). Maximizing the image contrast (i.e. the difference between fore- and background) is important because several segmentation methods are based on the intensity difference between fore- and background. Illumination should be homogeneous over the whole image; otherwise, similar objects are likely to be treated and/or characterized differently according to their position in the image. Illumination intensity needs to be high enough to allow a short exposure time and hence avoid blurring of fast-moving objects. Thirdly, focusing is crucial to ensure a high information/noise ratio and reproducibility. Objects out of the focal plane usually appear bigger (biasing morphological measurements), show less detail and are darker (biasing segmentation).

To use images to compare experimental conditions, i.e. to make inferences about a specific biological effect, reproducibility is crucial: the same reality must give the same image whatever the experimenter, the experimental conditions (e.g. object density), the time of year, etc. This is achieved in two ways: specifying fixed values for all settings amenable to modification, and/or including an invariant reference element in the scene, which allows image properties (e.g. object size or brightness) to be adjusted retrospectively by image processing (Mallard, Le Bourlot & Tully 2012).

Image analysis

Greyscale images are usually represented as arrays, where the height and width in pixels give the row and column dimensions of the array. Each array element is hence the intensity value of a given pixel in the image (i.e. a value between 0 and 255 for 8-bit greyscale images). The goal of image analysis is to identify objects of interest by segmenting the foreground from the background, the latter usually represented as zeroes in the array. Four widely applied segmentation techniques are thresholding, difference image, edge detection and watershed (Fig. 2), each with strengths and weaknesses depending on the constraints set by the biological system and the complexity of the acquired image (Gonzalez & Woods 2002). A combination of segmentation techniques may often yield the best foreground identification.
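
As a toy illustration of this array representation, the following minimal Python sketch (the 3 × 3 image values are made up) shows how a greyscale image maps onto a 2-D array and how segmentation reduces it to a boolean foreground mask:

```python
# A greyscale image as a 2-D array (toy values): 8-bit intensities in
# [0, 255]; rows and columns correspond to pixel height and width.
import numpy as np

img = np.array([[ 10,  12, 200],
                [ 11, 210, 215],
                [  9,  13,  12]], dtype=np.uint8)

print(img.shape)          # (rows, columns) = (height, width) in pixels
foreground = img > 100    # boolean mask: True where a bright object sits
```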

Figure 2. Illustration of the four segmentation techniques: the original image is shown in the left column, and the segmented image (foreground in white) in the right. (a) Thresholding is based on the intensity of pixels. Only pixels brighter than the threshold are counted as foreground. (b) The difference image segmentation compares a sequence of two images. Image 2 is subtracted from image 1 and only pixels with positive values are retained as foreground. (c) Edge detection is based on the abrupt change in intensity between pixels. These are then marked as foreground. A morphological operation will fill the object. (d) Watershed segmentation allows splitting of adjacent objects.

Thresholding is based on the difference in pixel intensity between fore- and background: an intensity threshold is either set manually or adjusted automatically by an algorithm, leading to the classification of brighter pixels as foreground and darker pixels as background (Fig. 2a) (Gonzalez & Woods 2002). Hence, all elements of the array below the threshold are set to 0, while the rest are set to 255. Thresholding is generally fast and works efficiently if the background is homogeneous and contrasts with the foreground. Optimizing and validating the threshold value is crucial because it has a major effect on object counts and morphology.
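
As a minimal sketch of thresholding with one of the software solutions compared below (Python with scikit-image): the file name 'sample.png' and the use of Otsu's automatic threshold, in place of a manually optimized and validated value, are illustrative assumptions.

```python
# Thresholding sketch (assumptions: greyscale file 'sample.png'; Otsu's
# algorithm stands in for a manually optimized threshold value).
from skimage import filters, io, measure

img = io.imread('sample.png', as_gray=True)   # intensities rescaled to [0, 1]
thresh = filters.threshold_otsu(img)          # automatic threshold
foreground = img > thresh                     # brighter pixels become foreground

labels = measure.label(foreground)            # connected-component labelling
for obj in measure.regionprops(labels):       # one entry per candidate object
    print(obj.label, obj.area, obj.centroid)
```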

The difference image method uses motion cues by comparing the sample image with a time-lapsed image. A difference image is created by subtracting the second image from the first, retaining only those pixels that changed intensity as foreground (Fig. 2b) (Gonzalez & Woods 2002). In the array containing the intensity difference values, elements with a value of 0 (intensity equal in both images) or a negative value (background in the first image, object in the second) are interpreted as background, whereas elements with a positive value (object in the first image that is no longer present at the same place in the second) are interpreted as foreground. This method may be useful if the background is complex or the illumination heterogeneous, but it is highly sensitive to departures from its central assumption: all objects move, while the background is perfectly constant. Bias will, for example, result from any variation in the background (e.g. background particles displaced by moving objects, or shadows created by unilateral illumination of objects: Mallard, Le Bourlot & Tully 2012), and from objects being considered background when they do not move (e.g. resting individuals) or when a different object occupies the same position in the second image just by chance, which is frequent when the density of objects is high.
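
A minimal difference-image sketch, assuming two consecutive frames saved as 'frame1.png' and 'frame2.png'; the small tolerance against sensor noise is an added assumption, not part of the method as described:

```python
# Difference-image sketch: subtract the second frame from the first and
# keep clearly positive differences (object present in frame 1 only).
from skimage import io

img1 = io.imread('frame1.png', as_gray=True)
img2 = io.imread('frame2.png', as_gray=True)

diff = img1 - img2        # > 0 where an object occupied the pixel in frame 1
foreground = diff > 0.1   # noise tolerance (assumed); > 0 matches the text exactly
```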

Segmentation by edge detection is based on discontinuity rather than continuity of the intensity values. An edge is a set of connected pixels at the boundary of an intensity transition. In the case of a white foreground on a black background, edge detection will outline the outermost layer of foreground pixels as the object edge (Fig. 2c) (Gonzalez & Woods 2002). In the array, all elements that are edges are set to 255. Thresholding slightly below 255 will then retain only those elements of the array that are edges, and a morphological operation will fill the objects. Edge detection should work when contrasting intensity differences exist between fore- and background, but it has not yet been used in any of the examined ELS.
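
A hedged edge-detection sketch using the Canny detector from scikit-image (one of several possible edge detectors; the file name and sigma value are assumptions):

```python
# Edge-detection sketch: outline intensity transitions, then fill the
# closed outlines with a morphological operation, as described above.
from scipy import ndimage
from skimage import feature, io

img = io.imread('sample.png', as_gray=True)
edges = feature.canny(img, sigma=2)            # boolean map of edge pixels
foreground = ndimage.binary_fill_holes(edges)  # fill objects enclosed by edges
```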

In watershed segmentation, the image is seen as a topographical profile, with the intensity value representing altitude. The watershed analogy is based on the idea that a virtual drop of water would flow to the local intensity minimum of the image. At points where the drop would flow to more than one minimum, a watershed line exists, which splits adjacent watersheds and, accordingly, adjacent objects. Several algorithms exist for watershed segmentation (Roerdink & Meijster 2000). This approach requires that the foreground is already defined (e.g. by one of the three previous segmentation approaches), but is valuable due to its power to split touching objects (Fig. 2d). Given that segmented images are used as input (foreground with intensity 255) instead of real grey-intensity images, the topography is replaced by a map of the distance from each foreground pixel to the nearest background pixel. The elements that represent watershed lines are set to zero, forming background lines that split the objects.
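
A sketch of this distance-map-based watershed splitting in Python; the input file, the initial thresholding and the minimum peak distance are assumed tuning choices:

```python
# Watershed sketch on a thresholded mask: the distance map replaces the
# grey-level topography, and its local maxima seed one basin per object.
import numpy as np
from scipy import ndimage
from skimage import filters, io
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

img = io.imread('sample.png', as_gray=True)
mask = img > filters.threshold_otsu(img)          # binary foreground

distance = ndimage.distance_transform_edt(mask)   # distance to background
peaks = peak_local_max(distance, min_distance=5)  # candidate object centres
markers = np.zeros(mask.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

split = watershed(-distance, markers, mask=mask)  # one label per split object
```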

The segmentation approaches described so far usually succeed in identifying most of the foreground. However, false positives may be introduced when elements are misidentified as foreground, i.e. artefacts. Improvements can sometimes be made by image filters or morphological operations, e.g. an eroding–dilating operation that should not affect large objects but will shrink small artefacts to nothing (Marçal & Caridade 2006).
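
The eroding–dilating operation corresponds to a morphological opening; a self-contained sketch on a synthetic mask shows the effect:

```python
# Opening (erosion then dilation) sketch: a one-pixel speck disappears,
# while the larger object survives the 3x3 structuring element.
import numpy as np
from scipy import ndimage

mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True     # large object: survives opening
mask[0, 7] = True         # one-pixel artefact: removed by opening

cleaned = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
```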

Data processing

An alternative approach to excluding artefacts is to use the information acquired on the foreground (e.g. size, intensity) to determine the probability that each element is a true object; a two-step cleaning procedure should be efficient in most cases. To calibrate such a procedure, one needs reliable information on the true nature of the foreground (object or artefact) to link to its measured characteristics. To do so, a set of images covering the possible variation in objects (e.g. density, occurrence of typical artefacts) is collected and the segmented foreground manually classified into objects vs. artefacts by an informed experimenter. The first cleaning step excludes artefacts that are outside the biologically feasible morphology range (e.g. too big or too thin), as determined from the observed minimum and maximum of each morphology variable (Laakso, Loytynoja & Kaitala 2003). The second cleaning step removes artefacts more similar to objects, based on their probability of being artefacts. Any statistical model that relates a binary response variable (artefact vs. object) to continuous and/or categorical predictor variables can be used (e.g. logistic regression, discriminant analysis or artificial neural networks). Data processing is done after the raw information has been extracted from the images. Given the extensive information collected, powerful data management software is needed to batch process the results of each image analysis, filter the data for quality control, merge them with descriptive information on experimental units/treatments and store them in a database.
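
A sketch of this two-step cleaning in Python; the file and column names are hypothetical, and the 40% cut-off mirrors the T. thermophila example described below:

```python
# Two-step cleaning sketch (hypothetical file and column names).
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.read_csv('training.csv')   # foreground manually classified
data = pd.read_csv('objects.csv')     # new measurements to be cleaned
features = ['area', 'perimeter', 'mean_intensity']

# Step 1: hard limits from the morphology of manually confirmed objects
cells = train[train['is_artefact'] == 0]
for f in features:
    data = data[data[f].between(cells[f].min(), cells[f].max())]

# Step 2: model-based probability of being an artefact
model = LogisticRegression().fit(train[features], train['is_artefact'])
data = data[model.predict_proba(data[features])[:, 1] < 0.4]  # 40% cut-off
```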

Image analysis software

To allow a maximum of researchers to apply the proposed image analysis, we compare here three free, cross-platform (Windows, Mac OS and Linux) and open-source solutions for performing image analysis: ImageJ (Schneider, Rasband & Eliceiri 2012), the statistical computing environment R with the EBImage package (Pau et al. 2012; R Development Core Team 2012) and Python with the scikit-image and SciPy libraries (http://scikit-image.org/). Each is capable of reading different image formats, converting file formats, and performing image processing and analysis, including the segmentation approaches mentioned above. All possess functions to measure the properties of the foreground (size, perimeter, spatial position) and to export visual representations (outlined foreground on the original image) and the quantitative results in the form of tables. All three solutions have strengths and limitations (Table 1), partly depending on the existing knowledge/skills of the researcher. To allow readers to interact with the methods and test the above-mentioned segmentation approaches, example images and commented scripts for the three solutions are provided in the Supporting Information.

Table 1. Comparing the relative strengths and limitations of the three image analysis solutions; ***, good for this criterion; **, average; *, poor. These benchmarks may vary according to existing knowledge/skills of the researcher

Criterion                                        ImageJ   R/EBImage   Python/scikit-image
Ease of implementation                           ***      **          *
User-friendliness                                ***      *           *
Versatility                                      ***      ***         ***
Speed                                            ***      *           **
Computational requirements                       ***      *           ***
Integration with data management and analysis   *        ***         ***

In terms of ease of implementation (see Supporting Information for details), ImageJ is readily installed and comprises all the functions needed to perform image analysis. To perform image analysis in R, both the R environment itself and the EBImage package must be installed. Image analysis in Python requires either installing a distribution that comprises all the required libraries, or installing the several required libraries manually.

ImageJ is more user-friendly than the R and Python solutions because it has a graphical user interface (GUI), while the latter require scripted input from a text file or the console. ImageJ also provides a powerful macro language to automate repetitive tasks, and a recorder function that translates commands performed via the GUI into macro scripts, facilitating macro development without extensive programming knowledge. All solutions are well documented online; however, because the Python implementation relies on several libraries, its documentation is slightly more scattered than that of the other solutions.

ImageJ is specifically tailored to image analysis and widely used in many areas of biology (Schneider, Rasband & Eliceiri 2012). It is therefore versatile, with many plugins and macros available that modify and extend its basic functionality. Because the source code of ImageJ is open, one may also optimize existing functions and plugins for one's own needs, provided the underlying Java programming language is mastered. Given that the R and Python image analysis solutions are embedded within versatile programming languages, the potential to extend and change existing functions also exists; however, this equally requires advanced programming knowledge. Both Python and R can perform subsequent data management/analysis within the same environment, whereas ImageJ requires additional data management software to analyse the results.

The speed with which a set of images is treated differs substantially between the three solutions: 2 min with ImageJ, 11 min with Python and 28 min with R for the same test set of 20 images on the same machine (see Supporting Information). This may have important consequences when hundreds or thousands of images must be analysed.

Finally, the minimum requirements in terms of computational power differed widely: ImageJ ran the analysis without problems on the less powerful test machine (a 4 GB RAM laptop), and Python also ran, albeit with the speed deficits mentioned above. R was unable to perform the analysis on the 4 GB RAM machine due to memory constraints, although it worked on the 12 GB RAM desktop PC.

Illustration of an image analysis workflow in Tetrahymena thermophila ELS

Our Tetrahymena thermophila ELS combines the advantages of an experimental laboratory system with a largely automated data collection workflow based on image analysis in ImageJ. T. thermophila, a 50 μm unicellular eukaryotic ciliate usually found in freshwater ponds in North America (Asai & Forney 1999), has long been used as a model system in molecular biology due to its ease of cultivation in axenic liquid medium in flasks (Asai & Forney 1999). For measurements, samples are taken from homogenized cultures and pipetted into counting chambers on disposable microscope slides; images are taken with a digital camera. The contrast between fore- and background is obtained via dark-field microscopy, such that the transparent organisms appear white on a black background.

When setting up the system, image parameters were optimized to ensure both high reproducibility of images and the best correspondence between the results of image analysis and reality. Seventy images, representative of the various experimental conditions in which the system will be used, were manually analysed and objects identified as cells vs. artefacts (34 832 cells vs. 9424 artefacts). The automatic image analysis workflow was optimized using data from a subset of these images; the remainder was used for validation and to quantify the bias in parameters obtained through image analysis. Fig. 3 summarizes the improvements made by each step of the workflow on four response variables: cell count, cell size, cell shape and number of cells per cluster.

Figure 3. Illustration of the image analysis workflow of the T. thermophila experimental laboratory system: the effect of each step is visualized on an example image portion (left column). The histograms show the distribution of counts, size and shape, and their mean value as dashed lines, on 70 validation images (34 832 cells vs. 9424 artefacts). Two summary quantities are given as insets: the deviation (observed – reference, as a percentage of the reference), and the Pearson correlation coefficient (calculated between observed and reference values over all images). In the last column, the effect of splitting cell clusters is illustrated. (a) Segmentation I (thresholding): overlay of the original greyscale image; outlined objects are considered foreground. The deviation is globally the smallest for all descriptors; however, the correlations on an image-to-image basis are globally the weakest, indicating considerable deviation from the reference values. No splitting is performed, so all touching objects (cell clusters) are counted as one cell. (b) Segmentation II (watershed segmentation): subsequent watershed segmentation separates touching objects (see right arrow); however, some artefacts remain in the foreground (left arrow). Watershed successfully splits clusters previously counted as one cell. (c) Cleaning, according to a range of biologically plausible morphology values and a subsequent quantification of the probability that each object is an artefact, successfully removes artefacts but leads to underestimated counts. However, the image-to-image correspondence between reference and observed counts is considerably improved by this step. In addition, cell size and shape deviate much less than in the previous step, with respect to both their deviation and their correlation. Note that cleaning is done by data processing, i.e. discarding data from the result file, and not by erasing objects on the image itself (image modified manually in this figure for demonstration). (d) An extra, size-based splitting finally separates clusters remaining after watershed segmentation and improves the match between reference and observed counts, while the correspondence with all reference values remains similar and tight. This gain may seem slight but is highly important when the spatial distribution of cells is studied (e.g. to quantify cooperative aggregation). Again, this step is performed on the data file and is illustrated in this figure by manually dividing the cluster in the lower right corner with a black line, for demonstration only.

Image acquisition

Illumination was optimized to provide a homogeneous background and strong contrast between cells and background; several microscope (e.g. illumination, depth of field) and camera (e.g. light sensitivity of the sensor (ISO speed) and shutter speed) parameters were manually optimized and then fixed to ensure the reproducibility of images.

Image analysis

Thresholding was selected for segmentation due to the high contrast between white cells and black background; the threshold was fixed to a carefully optimized and validated value (Fig. 3a). Watershed segmentation was used after thresholding to split overlapping/touching cells (Fig. 3b).

Data processing

Because artefacts (dust, scratches and cell debris) are common and can appear as bright as cells, a subsequent data processing step was implemented based on the characteristics of the segmented objects. A logistic regression model was calibrated using 12 attributes of objects to estimate the probability that an object is an artefact; removing objects with at least a 40% chance of being an artefact was found optimal to discard artefacts from subsequent data analysis (Fig. 3c). Finally, an extra size-based splitting is performed to divide cell clusters that remain after watershed segmentation (Fig. 3d; Chaine et al. 2010). This step was important for our studies involving analysis of the relative position of cells (point pattern analysis, see below), which is highly sensitive to the correct positioning of cells close to each other.

ELS examples using automated image analysis

The following section illustrates how image analysis is used in ELS to address ecological and evolutionary questions. Examples from the literature and our Tetrahymena thermophila microcosms are used to illustrate the versatility of the approach.

Density reveals demography and dispersal

Density is basic but versatile information gained from images. By estimating density at multiple points in time, it is possible to capture the dynamics of a given population and its modulation by environmental factors. Laakso, Loytynoja & Kaitala (2003) studied how the colour of environmental noise affects the population dynamics of T. thermophila. In a similar fashion, automatic counts were used to examine the role of resource enrichment in the population dynamics of several rotifer species (Kirk 1998). Demographic parameters such as growth rate or maximum density in a given environment can be estimated from such time series by fitting an appropriate population dynamic model (Hooper et al. 2006).

Conveniently, image analysis provides simultaneous measurements of morphology and density, allowing the study not only of density but also of biomass dynamics (Færøvig, Andersen & Hessen 2002). In our own study, we quantified differences in demographic parameters (e.g. growth rate, maximum density) between genotypes of T. thermophila by following population growth from low density over a period of 200 h. While this could be done entirely via image analysis, we combined optical density measurements performed with a spectrophotometer with image analysis at specific time points. Optical density is faster, cheaper and minimizes contamination because sample tubes remain closed for measurement, but it only provides information about biomass; image analysis provided cell size and hence the conversion of biomass into density. The growth curves and size measurements obtained were highly repeatable and precise, allowing even small differences between two genotype populations in the change of abundance and morphology over time to be detected (Fig. 4).
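
As a hedged sketch of the conversion step (assuming optical density scales linearly with total biovolume, and using made-up numbers), dividing optical density by the mean cell volume obtained from image analysis yields a quantity proportional to cell density:

```python
# Biomass-to-density conversion sketch (illustrative values only; the
# linear OD-biovolume assumption is ours, not stated in the text).
import numpy as np

optical_density = 0.42                             # spectrophotometer reading
cell_volumes = np.array([2100.0, 2400.0, 1900.0])  # µm^3, from image analysis
relative_density = optical_density / cell_volumes.mean()
```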

Figure 4. The growth of two clonal populations (three replicates each) in terms of biomass and cell density over 200 h: growth of the two genotypes is similar in terms of biomass (optical density, upper panel). However, biomass converted into cell density (middle panel) using cell size obtained from images (lower panel) reveals different growth strategies: genotype D7 invests in high cell density whereas genotype F invests in large cell size.

By combining density measurements with specific experimental designs, additional processes, such as dispersal between two populations, can be studied: cells are inoculated into a start tube connected by a narrow corridor to a target tube, and measurement of the densities in the start and target populations after some time reveals dispersal (Fjerdingstad et al. 2007; Pennekamp et al. unpublished data).

Characterization of life-history variation

The study of life-history variation is central to evolutionary ecology; however, it is notoriously tedious to follow how individuals change in size and volume, or to quantify survival and fecundity, across many samples. Image analysis is especially suited to replacing such repetitive tasks on a large number of samples (Mallard, Le Bourlot & Tully 2012). For example, automatic counts were used to estimate the fecundity and survival of rotifer species under starvation (Kirk 1997). Besides measurements at the population level, individuals can be measured by image analysis, providing detailed information on life-history variation. Tully & Ferrière (2008) followed the individual growth of springtails from different geographical origins and estimated the number and volume of their eggs in response to the food environment by automated image analysis.

Spatial information reveals cooperative aggregation

Point pattern analysis is a tool widely used by ecologists to infer underlying processes, such as aggregation or competition, from spatial positions (Wiegand & Moloney 2004). Spatial positions of objects are readily available from image analysis. Schtickzelle et al. (2009) quantified the variation in cell cooperation between genotypes using an index describing the deviation between observed and expected numbers of cells at a certain distance from a focal cell, computed with the Programita software (Wiegand & Moloney 2004). A plethora of ecological questions is open to investigation with point pattern analysis, such as tests of spatial randomness for patterns with more than one object type. For example, Schillinger et al. (2012) studied the co-localization of different bacterial species in biofilms by image analysis of fluorescence-stained species.
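
A full point pattern index is beyond a short example, but a minimal aggregation check is easy to sketch from the spatial positions an image analysis yields. The following Clark–Evans-style comparison (not the Programita index used in the cited study; the positions and arena size are stand-ins) contrasts the observed mean nearest-neighbour distance with its expectation under complete spatial randomness:

```python
# Aggregation sketch: Clark-Evans ratio R = observed / expected mean
# nearest-neighbour distance; R < 1 suggests clustering (aggregation).
import numpy as np
from scipy.spatial import cKDTree

xy = np.random.default_rng(1).uniform(0, 1000, size=(200, 2))  # stand-in positions
width = height = 1000.0                                        # arena size (assumed)

dists, _ = cKDTree(xy).query(xy, k=2)  # k=2: self (distance 0) + nearest neighbour
observed = dists[:, 1].mean()
expected = 0.5 / np.sqrt(len(xy) / (width * height))  # CSR expectation
print('Clark-Evans R =', observed / expected)
```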

Discussion

While image analysis is an established tool in microbial ecology, experimental ecologists and evolutionary biologists have yet to fully exploit this technology. To facilitate its adoption, we have given a detailed description of how to develop an automatic image analysis workflow and provided ready-to-use scripts. This should help researchers working with ELS to implement image analysis and thus benefit from improved efficiency, reliability and versatility.

Software solutions

The three software solutions are all capable of performing similar analyses, and they yielded similar results in terms of mean values and correlation of counts, object size, intensity, etc. (r > 0·99 for all variables; for details, see Supporting Information). However, they differ in ease of implementation, user-friendliness, versatility, speed and computational requirements. The best overall solution seems to be ImageJ: it is versatile, the fastest and the least computationally demanding. Moreover, the GUI and the macro recorder make it easy to use when scripting skills are lacking. However, it requires additional software to manage the measurements and to perform statistical analysis. R and Python provide the same functionality as ImageJ but allow the integration of image analysis and data management: the data obtained from image analysis are ready for further statistical analysis within the same environment. However, they were substantially slower (5–15 times) than ImageJ, which may become a bottleneck when many images need treatment within a short time window; they also require more programming skills for use and customization.

Costs, time effectiveness and accuracy

Implementing an image analysis workflow in an existing ELS where all the optical equipment is ready only requires adding a camera to shoot images and a computer to process them. The additional costs should be limited: regular consumer or semi-pro high-resolution cameras are available at low price (e.g. < 2000 EUR for the Canon EOS 5D Mark II we use in our T. thermophila ELS).

In terms of time effectiveness, the advantages of an image analysis workflow over manual counts are twofold. First, the time spent by the experimenter to acquire images for automatic counts remains constant, while the time for manual counts increases linearly with the number of objects (Lukas, Kucerova & Stejskal 2009). Acquiring an image with manual focus may take 5–20 s; treatment by image analysis may take only a couple of seconds, depending on the software used, the processing operations and the image complexity. Secondly, separating the experiment from data extraction in time allows the experimenter to allocate time to increasing sample size, treatments and/or levels, while data extraction from images may be run later, when time is available. Storing the data in the form of images additionally makes results transparent and keeps open every possibility of re-analysing the data in the future.

Systems based on image analysis usually show high correspondence between manual and automatic counts (R2 > 0·98; Færøvig, Andersen & Hessen 2002; Lukas, Kucerova & Stejskal 2009). Authors have reported deviations from the real values, but these occur in a systematic and therefore predictable fashion (e.g. perimeter estimation differed in a systematic way between the three software solutions; see Supporting Information). In our T. thermophila ELS, automatic counting underestimates abundance (Fig. 3), but the deviations reported here are disproportionate because we purposefully analysed images including extremes of artefacts and density for validation. The observed morphology descriptors were very close to the reference values after applying cleaning and size-based splitting. Beyond this high overall reliability, image analysis workflows differ in their degree of automation and complexity: while basic systems still rely on some manual cleaning and data manipulation, more advanced systems may include automatic cleaning, splitting and classification steps that improve the counts and morphology descriptors.

Population and individual level measurements

Images provide information on population and individual levels simultaneously, thus enabling the study of links between traits and population dynamics. Indeed, a recent experiment showed that models that take changes in trait distributions into account improved the prediction of population dynamics compared with models without such information (Ozgul et al. 2012). While the phenotypic changes observed in this particular study may be due to plasticity, evolutionary responses are equally likely to occur after sufficient time, highlighting the advantages of applying image analysis to ELS for understanding eco-evolutionary dynamics (Yoshida et al. 2003; Fussmann et al. 2005; Hairston et al. 2005). Given that phenotypic traits are available for many individuals, researchers have the possibility to quantify intraspecific or population variation, which is key to understanding a variety of ecological dynamics in metacommunity studies (Bolnick et al. 2003, 2011) and evolutionary changes (Grant & Grant 1993). Finally, understanding individual interactions (e.g. aggregation and competition) is crucial to predict ecological responses to global environment change (Berg et al. 2010). Image analysis provides precise information on the localization of individuals and thus allows studying spatial patterns of one or more groups and how such behaviour is modulated by the environment.

Avenues for future research

Although the image analysis workflows in ELS so far were designed to identify a single species, an exciting perspective is to expand this approach to the community level. Automatically measuring the abundances and phenotypes of multiple species simultaneously requires that species show marked phenotypic differences (e.g. in morphology or behaviour). Indeed, automatic identification of hundreds of different planktonic species has already been achieved by image analysis combined with appropriate statistical discrimination techniques (Culverhouse et al. 2006; Rodenacker et al. 2006; Gorsky et al. 2010). Therefore, automatic identification of the small number of species commonly used in ELS should be easily accomplished by image analysis (e.g. Matthiessen & Hillebrand 2006; Haddad et al. 2008).

Conclusion

Our concise description of the basic principles of image analysis, together with the scripts provided, should allow researchers working with ELS to readily experiment with image analysis in their own systems and thus overcome the technical difficulties that may have prevented the spread of the methodology so far. The known advantages of ELS are thereby substantially extended: more information is obtained and complex experimental designs are streamlined, providing valuable additional data urgently needed to understand complex ecological and evolutionary processes.

Image analysis can replace the human observer for tedious and repetitive tasks, such as counting and measuring individuals, in a more constant, objective and efficient way than people can. However, it will not replace the critical observations of an informed experimenter, as the machine records only what it is told to record. Thus, automatic image analysis should only be applied to systems whose natural history is studied and understood deeply enough to allow efficient and reliable automation.

Acknowledgements

We thank Alexis Chaine, Jean Clobert, Linda Dhondt, Michèle Huet, Kate Mitchell and Virginie Thuillier for their help in developing and/or running the T. thermophila ELS. Kate Mitchell, Camille Turlure, Alexis Chaine and Christophe Lebigre provided valuable comments to an earlier manuscript draft. F. Pennekamp is funded by Fonds Spéciaux de Recherche, Université catholique de Louvain. N. Schtickzelle is a Research Associate of the Fund for Scientific Research (F.R.S.-FNRS). Financial support to acquire scientific material needed for the T. thermophila ELS was provided by F.R.S.-FNRS and Université catholique de Louvain (ARC 10-15/031). This is publication BRC291 of Biodiversity Research Centre.

References

  • Asai, D.J. & Forney, J.D. (1999) Tetrahymena thermophila. Methods in Cell Biology, Volume 62. Academic Press, San Diego.
  • Benhaïm, D., Péan, S., Lucas, G., Blanc, N., Chatain, B. & Bégout, M.-L. (2012) Early life behavioural differences in wild caught and domesticated sea bass (Dicentrarchus labrax). Applied Animal Behaviour Science, 141, 79–90.
  • Benton, T.G., Solan, M., Travis, J.M.J. & Sait, S.M. (2007) Microcosm experiments can inform global ecological problems. Trends in Ecology & Evolution, 22, 516–521.
  • Berg, M.P., Kiers, E., Driessen, G., Van Der Heijden, M., Kooi, B.W., Kuenen, F., Liefting, M., Verhoef, H.A. & Ellers, J. (2010) Adapt or disperse: understanding species persistence in a changing world. Global Change Biology, 16, 587–598.
  • Beveridge, O.S., Petchey, O.L. & Humphries, S. (2010) Direct and indirect effects of temperature on the population dynamics and ecosystem functioning of aquatic microbial ecosystems. Journal of Animal Ecology, 79, 1324–1331.
  • Bolnick, D.I., Svanbäck, R., Fordyce, J.A., Yang, L.H., Davis, J.M., Hulsey, C.D. & Forister, M.L. (2003) The ecology of individuals: incidence and implications of individual specialization. The American Naturalist, 161, 1–28.
  • Bolnick, D.I., Amarasekare, P., Araújo, M.S., Bürger, R., Levine, J.M., Novak, M., Rudolf, V.H.W., Schreiber, S.J., Urban, M.C. & Vasseur, D.A. (2011) Why intraspecific trait variation matters in community ecology. Trends in Ecology & Evolution, 26, 183–192.
  • Bowler, D.E. & Benton, T.G. (2010) Testing the interaction between environmental variation and dispersal strategy on population dynamics using a soil mite experimental system. Oecologia, 166, 111–119.
  • Brehm-Stecher, B.F. & Johnson, E.A. (2004) Single-cell microbiology: tools, technologies, and applications. Microbiology and Molecular Biology Reviews, 68, 538–559.
  • Burger, W. & Burge, M. (2008) Digital Image Processing: An Algorithmic Introduction Using Java. Springer, New York.
  • Chaine, A.S., Schtickzelle, N., Polard, T., Huet, M. & Clobert, J. (2010) Kin-based recognition and social aggregation in a ciliate. Evolution, 64, 1290–1300.
  • Correll, N., Sempo, G., Lopez de Meneses, Y., Halloy, J., Deneubourg, J.-L. & Martinoli, A. (2006) SwisTrack: a tracking tool for multi-unit robotic and biological systems. 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 9–15 October 2006, Beijing, China, pp. 2185–2191.
  • Culverhouse, P.F., Williams, R., Benfield, M., Flood, P.R., Sell, A.F., Mazzocchi, M.G., Buttino, I. & Sieracki, M. (2006) Automatic image analysis of plankton: future perspectives. Marine Ecology Progress Series, 312, 297–309.
  • Daims, H. & Wagner, M. (2007) Quantification of uncultured microorganisms by fluorescence microscopy and digital image analysis. Applied Microbiology and Biotechnology, 75, 237–248.
  • DeLong, J.P. & Hanson, D.T. (2011) Warming alters density dependence, energetic fluxes, and population size in a model algae. Ecological Complexity, 8, 320–325.
  • Drake, J.M. & Griffen, B.D. (2009) Speed of expansion and extinction in experimental populations. Ecology Letters, 12, 772–778.
  • Færøvig, P.J., Andersen, T. & Hessen, D.O. (2002) Image analysis of Daphnia populations: non-destructive determination of demography and biomass in cultures. Freshwater Biology, 47, 1956–1962.
  • Fjerdingstad, E.J., Schtickzelle, N., Manhes, P., Gutierrez, A. & Clobert, J. (2007) Evolution of dispersal and life history strategies - Tetrahymena ciliates. BMC Evolutionary Biology, 7, 133.
  • Fraser, L.H. & Keddy, P. (1997) The role of experimental microcosms in ecological research. Trends in Ecology & Evolution, 12, 478–481.
  • Fussmann, G.F., Ellner, S.P., Hairston, N.G., Jones, L.E., Shertzer, K.W. & Yoshida, T. (2005) Ecological and evolutionary dynamics of experimental plankton communities. Population Dynamics and Laboratory Ecology (ed. R.A. Desharnais), pp. 221–243. Academic Press, London.
  • Gonzalez, R.C. & Woods, R.E. (2002) Digital Image Processing, 2nd edn. Prentice Hall.
  • Gorsky, G., Ohman, M.D., Picheral, M., Gasparini, S., Stemmann, L., Romagnan, J.B., Cawood, A., Pesant, S., García-Comas, C. & Prejger, F. (2010) Digital zooplankton image analysis using the ZooScan integrated system. Journal of Plankton Research, 32, 285–303.
  • Grant, B.R. & Grant, P.R. (1993) Evolution of Darwin's finches caused by a rare climatic event. Proceedings of the Royal Society of London, Biological Sciences, 251, 111–117.
  • Haddad, N.M., Holyoak, M., Mata, T.M., Davies, K.F., Melbourne, B.A. & Preston, K. (2008) Species' traits predict the effects of disturbance and productivity on diversity. Ecology Letters, 11, 348–356.
  • Haddock, S. & Dunn, C. (2010) Practical Computing for Biologists. Sinauer Associates, Sunderland.
  • Hairston, N.G., Ellner, S.P., Geber, M.A., Yoshida, T. & Fox, J.A. (2005) Rapid evolution and the convergence of ecological and evolutionary time. Ecology Letters, 8, 1114–1127.
  • Holyoak, M. & Lawler, S.P. (2005) The contribution of laboratory experiments on protists to understanding population and metapopulation dynamics. Population Dynamics and Laboratory Ecology (ed. R.A. Desharnais), pp. 245–266. Academic Press, London.
  • Hooper, H.L., Connon, R., Callaghan, A., Maund, S.J., Liess, M., Duquesne, S., Hutchinson, T.H., Moggs, J. & Sibly, R.M. (2006) The use of image analysis to estimate population growth rate in Daphnia magna. Journal of Applied Ecology, 43, 828–834.
  • Jessup, C.M., Kassen, R., Forde, S.E., Kerr, B., Buckling, A., Rainey, P.B. & Bohannan, B.J.M. (2004) Big questions, small worlds: microbial model systems in ecology. Trends in Ecology & Evolution, 19, 189–197.
  • Kirk, K.L. (1997) Life-history responses to variable environments: starvation and reproduction in planktonic rotifers. Ecology, 78, 434–441.
  • Kirk, K.L. (1998) Enrichment can stabilize population dynamics: autotoxins and density dependence. Ecology, 79, 2456–2462.
  • Laakso, J., Loytynoja, K. & Kaitala, V. (2003) Environmental noise and population dynamics of the ciliated protozoa Tetrahymena thermophila in aquatic microcosms. Oikos, 102, 663–671.
  • Lukas, J., Kucerova, Z. & Stejskal, V. (2009) Computer-based image analysis to quantify the number of micro-arthropods in a sample. Entomologia Experimentalis et Applicata, 132, 289–294.
  • Mallard, F., Le Bourlot, V. & Tully, T. (2012) Automatic particle analysis as sensors for life history studies in experimental microcosms. Sensors for Ecology (eds J.-F. Le Galliard, J.-M. Guarini & F. Gaill), pp. 163–184. CNRS, Paris.
  • Marçal, A. & Caridade, C. (2006) A system for automatic counting the number of collembola individuals on Petri disk images. Image Analysis and Recognition, Lecture Notes in Computer Science (eds A. Campilho & M. Kamel), pp. 814–822. Springer, Berlin/Heidelberg.
  • Matthiessen, B. & Hillebrand, H. (2006) Dispersal frequency affects local biomass production by controlling local diversity. Ecology Letters, 9, 652–662.
  • Ozgul, A., Coulson, T., Reynolds, A., Cameron, T.C. & Benton, T.G. (2012) Population responses to perturbations: the importance of trait-based analysis illustrated through a microcosm experiment. The American Naturalist, 179, 582–594.
  • Pau, G., Oles, A., Dodd, M., Sklyar, O. & Huber, W. (2012) EBImage: Image Processing Toolbox for R. Available at: http://www.bioconductor.org/packages/2.12/bioc/html/EBImage.html (accessed 20 February 2013).
  • Pedersen, J.S. (2008) C. elegans motility analysis in ImageJ - a practical approach. Available at: http://www.phage.dk/plugins/wrmtrck.html (accessed 25 October 2012).
  • R Development Core Team (2012) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
  • Rodenacker, K., Hense, B., Jütting, U. & Gais, P. (2006) Automatic analysis of aqueous specimens for phytoplankton structure recognition and population estimation. Microscopy Research and Technique, 69, 708–720.
  • Roerdink, J.B.T.M. & Meijster, A. (2000) The watershed transform: definitions, algorithms and parallelization strategies. Fundamenta Informaticae, 41, 187–228.
  • Schillinger, C., Petrich, A., Lux, R., Riep, B., Kikhney, J., Friedmann, A., Wolinsky, L.E., Göbel, U.B., Daims, H. & Moter, A. (2012) Co-localized or randomly distributed? Pair cross correlation of in vivo grown subgingival biofilm bacteria quantified by digital image analysis. PLoS ONE, 7, e37583.
  • Schneider, C.A., Rasband, W.S. & Eliceiri, K.W. (2012) NIH Image to ImageJ: 25 years of image analysis. Nature Methods, 9, 671–675.
  • Schtickzelle, N., Fjerdingstad, E., Chaine, A. & Clobert, J. (2009) Cooperative social clusters are not destroyed by dispersal in a ciliate. BMC Evolutionary Biology, 9, 251.
  • Tully, T. & Ferrière, R. (2008) Reproductive flexibility: genetic variation, genetic costs and long-term evolution in a collembola. PLoS ONE, 3, e3207.
  • Vasseur, D.A. & Fox, J.W. (2009) Phase-locking and environmental fluctuations generate synchrony in a predator–prey community. Nature, 460, 1007–1010.
  • Weeks, P.J.D. & Gaston, K.J. (1997) Image analysis, neural networks, and the taxonomic impediment to biodiversity studies. Biodiversity and Conservation, 6, 263–274.
  • Wiegand, T. & Moloney, K.A. (2004) Rings, circles, and null-models for point pattern analysis in ecology. Oikos, 104, 209–229.
  • Yoshida, T., Jones, L.E., Ellner, S.P., Fussmann, G.F. & Hairston, N.G. (2003) Rapid evolution drives ecological dynamics in a predator–prey system. Nature, 424, 303–306.

Supporting Information

  • mee312036-sup-0003-AppendixS1.docx (Word document, 1440K): Appendix S1. Document describing the installation of ImageJ, R and Python and the required image analysis libraries to run the provided image analysis scripts on a set of test images.
  • mee312036-sup-0004-ImageJ-script.ijm (text/ijm, 6K): Appendix S2. Script to perform image analysis with ImageJ.
  • mee312036-sup-0002-R-script.R (text/R, 7K): Appendix S3. Script to perform image analysis with R and the EBImage package.
  • mee312036-sup-0001-Python-script.py (text/py, 9K): Appendix S4. Script to perform image analysis with Python and the scikit-image library.
  • mee312036-sup-0005-AppendixS5.zip (Zip archive, 118576K): Appendix S5. Zip file containing a set of images to test the provided scripts as described in Appendix S1.

Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.