Experimental laboratory systems (ELS) are widely applied research tools to test theoretical predictions in ecology and evolution. Combining ELS with automated image analysis could significantly boost information acquisition due to the ease with which abundance and morphological data are collected. Despite these advantages, image analysis has not yet been widely adopted, presumably due to the difficulties of technical implementation.
The tools needed to integrate image analysis into ELS are nowadays readily available: digital camera equipment can be purchased at limited cost, and free software solutions exist that allow sophisticated image processing and analysis. Here, we give a concise description of how to integrate these pieces into a largely automated image analysis workflow. We provide researchers with the necessary background information on the principles of image analysis, explaining how to standardize image acquisition and how to validate the results to reduce bias.
Three cross-platform and open-source software solutions for image analysis are compared: ImageJ, the EBImage package in R, and Python with the SciPy and scikit-image libraries. The relative strengths and limitations of each solution are discussed. In addition, a set of test images and three scripts are provided in the Online Supplementary Material to illustrate the use of image analysis and help biologists implement image analysis in their own systems.
To demonstrate the reliability and versatility of a validated image analysis workflow, we introduce our own Tetrahymena thermophila ELS. We then provide examples from evolutionary ecology showing the advantages of image analysis for studying different ecological questions at both the population and individual levels.
Experimental laboratory systems that integrate the advantages of image analysis extend their application and versatility compared with regular ELS. Such improvements are necessary to understand complex processes such as eco-evolutionary feedbacks, community dynamics and individual behaviour in ELS.
To understand the complexity of nature, ecologists and evolutionary biologists have developed various, complementary approaches ranging from comparative analyses, field observations and experiments, to laboratory experiments and theory, each with particular strengths and limitations. Experimental laboratory systems (ELS) have long been recognized as valuable research tools (Holyoak & Lawler 2005). Well-known model organisms allow the study of both ecological and evolutionary responses (Benton et al. 2007) as well as their feedback loops (Yoshida et al. 2003). The advantages of ELS depend directly on key features such as control over the environment and the population, easy replication to increase statistical power, and a high level of repeatability (Fraser & Keddy 1997; Jessup et al. 2004). Balancing these features against logistic constraints (money, manpower and time invested to collect data) determines the efficiency of the system.
Digital image analysis is a rapidly advancing field in the computer sciences with high potential for data collection in many academic disciplines (Burger & Burge 2008) including biology (Weeks & Gaston 1997). The potential of image analysis for data collection in ELS has been shown repeatedly (Hooper et al. 2006; Lukas, Kucerova & Stejskal 2009; Mallard, Le Bourlot & Tully 2012). Yet it is still not widely applied in experimental ecology and evolution. Apart from some pioneering studies (Kirk 1997; Laakso, Loytynoja & Kaitala 2003; Hooper et al. 2006; Fjerdingstad et al. 2007; Tully & Ferrière 2008), most researchers still rely on manual organism counts and cumbersome manual measurements of phenotypes (e.g. Drake & Griffen 2009; Vasseur & Fox 2009; Beveridge, Petchey & Humphries 2010; Bowler & Benton 2010; DeLong & Hanson 2011). In microbial ecology, by contrast, image analysis is part of the toolbox used to characterize cell phenotypes and evaluate patterns in biofilms (Daims & Wagner 2007; Schillinger et al. 2012). A wider application of image analysis could significantly boost the acquisition of information in ELS at limited cost, and it is applicable to a wide variety of study systems (Fig. 1).
Image analysis has a series of advantages for ELS. (i) It is highly efficient due to its fast, reliable and low cost estimation of important biological parameters from a sample (e.g. abundance, morphological and behavioural traits). Liberated resources (manpower, money, time) may be allocated to improve replication and/or additional treatments, thus yielding better scientific output; (ii) It allows the simultaneous measurement of abundance data and phenotypic traits such as morphology or behaviour, which are important for understanding eco-evolutionary dynamics (Hairston et al. 2005); (iii) It enables the quantification of traits among individuals within a given population (Brehm-Stecher & Johnson 2004); (iv) The wealth of information gathered from images provides the possibility to quantitatively assess complex behaviours such as aggregation (Schtickzelle et al. 2009; Schillinger et al. 2012); and (v) Raw information on images is effectively stored, allowing re-analysis, reviewing and quality checking, or demonstration.
Given the potential of image analysis, the poor adoption of the technology is rather surprising. A major obstacle may have been that previous attempts to promote image analysis lacked a comprehensive explanation of how image analysis works and details of the technical implementation, and were often customized for a single specific system. To overcome this bottleneck, we present a detailed hands-on guide to implementing image analysis in an ELS, describing the necessary steps to consider, pointing towards options for customizing the system, and highlighting common pitfalls. We compare the strengths and limitations of three free software solutions allowing automated image analysis (ImageJ, R and Python), and provide pre-fabricated scripts ready to try out on a set of test images (see Appendix S1-S4, Supporting Information). We finish our guide with examples from our own ELS using the ciliate Tetrahymena thermophila and some illustrative examples of how automated image analysis is used in evolutionary ecology.
Developing an image analysis workflow
Overall, an image analysis workflow comprises three major steps: image acquisition (shooting the image), image analysis (treating the image and measuring objects) and data processing (cleaning the data). The most crucial step for automatic image analysis is to create a sharp contrast between the objects of interest (foreground) and their environment (background), so they can be accurately distinguished; this process is called segmentation (Gonzalez & Woods 2002). Ideally, the foreground will contain only objects. However, it is more likely that some misidentified elements, hereafter called artefacts, will also be included in the foreground.
Setting up an image analysis workflow implies (i) optimizing the parameters influencing the resulting image to maximize its information/noise ratio; (ii) fixing them to ensure high reproducibility (e.g. between users, experimental conditions); and (iii) validating the results against reference values as measured manually by an informed examiner, to quantify the error rate. We detail how this can be achieved for each of the three workflow steps.
Reviewing the many hardware and software options for acquiring images in a specific system is beyond the scope of this article. Given that our focus is to explain and illustrate the use of image analysis, we only briefly state the crucial requirements of image acquisition. For further information on optimizing image acquisition, refer to dedicated book chapters on scientific photography (e.g. Haddock & Dunn 2010).
We assume that a system has been created to shoot greyscale images of objects against a background. The use of colour images is only recommended if colour conveys specific information (e.g. to distinguish objects from the background), because they require more storage space and their segmentation is less straightforward. First, to maximize the information collected from each image, the viewing field should cover as large a portion of the study area as possible (e.g. a microscope should be used at as low a magnification as possible), while still retaining important detail in terms of object shape or size. Secondly, three aspects of scene illumination are particularly important for image acquisition: contrast, homogeneity and intensity (i.e. pixel brightness). Maximizing the image contrast (i.e. the difference between fore- and background) is important because several segmentation methods are based on the intensity difference between fore- and background. Illumination should be homogeneous over the whole image; otherwise, similar objects are likely to be treated and/or characterized differently according to their position in the image. Illumination intensity needs to be high enough to allow for a short exposure time and hence avoid blurring of fast-moving objects. Thirdly, focusing is crucial to ensure a high information/noise ratio and reproducibility. Objects out of the focal plane usually appear bigger (biasing morphological measurements), with less detail, and darker (biasing segmentation).
To use images to compare experimental conditions, i.e. to make inference about a specific biological effect, reproducibility is crucial: the same reality must give the same image whatever the experimenter, the experimental conditions (e.g. object density), the time of the year, etc. This is achieved by two approaches: specifying fixed values for all settings amenable to modification, and/or including an invariant reference element in the scene, which allows image properties (e.g. object size or brightness) to be adjusted retrospectively by image processing (Mallard, Le Bourlot & Tully 2012).
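As a minimal sketch of the second approach, an invariant reference element of known physical size lets pixel measurements be converted to real units retrospectively; all numbers below (a hypothetical 1 mm scale bar and object size) are illustrative assumptions:

```python
# Hypothetical invariant reference element: a scale bar of known physical length
SCALE_BAR_MM = 1.0      # known real-world length of the reference element
scale_bar_px = 250      # length of the scale bar as measured on this image

# Conversion factor derived from the reference element
mm_per_px = SCALE_BAR_MM / scale_bar_px

# Any pixel measurement on the same image can now be rescaled retrospectively
object_length_px = 55
object_length_mm = object_length_px * mm_per_px
```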
Greyscale images are usually represented as arrays, where the height and width in pixels give the row and column dimensions of the array. Each array element is hence the intensity value of a given pixel in the image (i.e. a value between 0 and 255 for greyscale images). The goal of image analysis is to identify objects of interest by segmenting the fore- from the background, the latter usually represented as zeroes in the array. Four widely applied segmentation techniques are thresholding, difference image, edge detection and watershed (Fig. 2), each with strengths and weaknesses depending on the constraints set by the biological system and the complexity of the acquired image (Gonzalez & Woods 2002). A combination of segmentation techniques may often yield the best foreground identification.
Thresholding is based on the difference in pixel intensity between fore- and background: an intensity threshold is either manually set or automatically adjusted by an algorithm, leading to the classification of brighter pixels as foreground and darker pixels as background (Fig. 2a) (Gonzalez & Woods 2002). Hence, all the elements of the array that are beneath the threshold are set to 0, while the rest are set to 255. Thresholding is generally fast and works efficiently if the background is homogeneous and contrasts with the foreground. Optimizing and validating the threshold value is crucial because it has a major effect on object count and morphology.
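On an image array, thresholding reduces to a single comparison; the synthetic image and the threshold value below are assumptions for illustration, not values from our workflow:

```python
import numpy as np

# Synthetic greyscale image: dark background (~20) with two bright 3 x 3 objects (~220)
img = np.full((10, 10), 20, dtype=np.uint8)
img[1:4, 1:4] = 220
img[6:9, 6:9] = 220

# Threshold manually set here; in practice it is optimized and validated,
# or chosen automatically by an algorithm
threshold = 128

# Pixels above the threshold become foreground (255), all others background (0)
segmented = np.where(img > threshold, 255, 0).astype(np.uint8)
```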
The difference image method uses motion cues by comparing the sample image with a time-lapsed image. A difference image is created by subtracting the second from the first image, retaining only those pixels that changed intensity as foreground (Fig. 2b) (Gonzalez & Woods 2002). In terms of the array containing the intensity difference values, elements with a 0 value (intensity value equal on both images) or a negative value (background on first image, object on second image) are interpreted as background, whereas elements with a positive value (object on first image that is not anymore present at the same place on the second image) are interpreted as foreground. This method may be useful if the background is complex or illumination heterogeneous, but is highly sensitive to departures from its central assumption: all objects move, while background is perfectly constant. Bias will for example result from any variation in background illumination (e.g. background particles displaced by moving objects or shadows created by unilateral illumination of objects: Mallard, Le Bourlot & Tully 2012), and objects considered background when they do not move (e.g. resting individuals) or when a different object occupies the same position on the second image just by chance, which is frequent when the density of objects is high.
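A minimal sketch of the difference image method, using two synthetic time-lapsed frames containing one moving bright object on a constant background (all values hypothetical):

```python
import numpy as np

# Two hypothetical frames: constant background (intensity 30) plus one bright
# 3 x 3 object (intensity 200) that moves between frames
background = np.full((20, 20), 30, dtype=np.int16)
frame1 = background.copy()
frame1[5:8, 5:8] = 200      # object at its first position
frame2 = background.copy()
frame2[12:15, 12:15] = 200  # same object at its second position

# Subtract the second frame from the first: positive differences mark pixels
# where the object WAS on the first frame (foreground); zero and negative
# differences are interpreted as background
diff = frame1 - frame2
foreground = diff > 0
```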
Segmentation by edge detection is based on discontinuity rather than continuity of the intensity values. An edge is a set of connected pixels at the boundary of an intensity transition. In the case of a white foreground on a black background, edge detection will outline the outermost layer of foreground pixels as the object edge (Fig. 2c) (Gonzalez & Woods 2002). In the array, all elements that are edges are set to 255. Thresholding slightly below 255 will then retain only the elements of the array that are edges, and a morphological operation will fill the objects. Edge detection should work when contrasted intensity differences exist between fore- and background, but it has not yet been used in any of the examined ELS.
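Edge-based segmentation can be sketched with the Sobel gradient from SciPy's ndimage module on a synthetic white square; using `binary_fill_holes` as the filling morphological operation is one possible choice, an assumption for this sketch:

```python
import numpy as np
from scipy import ndimage

# Synthetic image: white square (foreground) on black background
img = np.zeros((20, 20))
img[5:15, 5:15] = 255.0

# Gradient magnitude: strong response at intensity discontinuities (the edges),
# zero in regions of constant intensity (object interior and background)
sx = ndimage.sobel(img, axis=0)
sy = ndimage.sobel(img, axis=1)
edges = np.hypot(sx, sy) > 0

# Morphological operation to fill the outlined objects
filled = ndimage.binary_fill_holes(edges)
```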
In watershed segmentation, the image is seen as a topographical profile, the intensity value representing the altitude. The watershed analogy is based on the idea that a virtual drop of water would flow to the local intensity minimum of the image. At points where the drop could flow to more than one minimum, a watershed line exists, which splits adjacent watersheds and accordingly adjacent objects. Several algorithms exist for watershed segmentation (Roerdink & Meijster 2000). This approach requires that the foreground is already defined (e.g. by one of the three previous segmentation approaches), but is valuable due to its power to split touching objects (Fig. 2d). Given that segmented images are used as input (foreground with intensity 255) instead of real grey intensity images, the topography is replaced by a distance map giving the distance of each foreground pixel to the nearest background pixel. The elements that represent watershed lines are set to zero, forming background lines that split the objects.
The segmentation approaches described so far usually succeed in identifying most of the foreground. However, false positives, i.e. artefacts, may be introduced by misidentified foreground. Improvements can sometimes be made with image filters or operations, e.g. an eroding-dilating operation that should not affect large objects but will shrink small artefacts to nothing (Marçal & Caridade 2006).
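The eroding-dilating operation (morphological opening) can be sketched as follows on a synthetic segmented image; the 3 x 3 structuring element is an assumption of this sketch:

```python
import numpy as np
from scipy import ndimage

# Segmented foreground: one large 7 x 7 object plus a single-pixel artefact
segmented = np.zeros((15, 15), dtype=bool)
segmented[3:10, 3:10] = True  # large object
segmented[12, 12] = True      # small artefact

# Erode then dilate (morphological opening): the single-pixel artefact is
# removed by the erosion, while the large object is restored by the dilation
cleaned = ndimage.binary_opening(segmented, structure=np.ones((3, 3)))
```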
An alternative approach to exclude artefacts is to use the information acquired on the foreground (e.g. size, intensity) to determine the probability that it consists of objects. A two-step cleaning procedure should be efficient in most cases. Calibrating such a procedure requires reliable ground truth on the foreground (object or artefact) to link to its measured characteristics. To obtain it, a set of images covering the possible variation in objects (e.g. density, occurrence of typical artefacts) is collected and the segmented foreground manually classified as objects vs. artefacts by an informed experimenter. The first cleaning step excludes artefacts that fall outside the biologically feasible morphology range (e.g. too big or too thin), as determined from the observed minimum and maximum of each morphology variable (Laakso, Loytynoja & Kaitala 2003). The second cleaning step removes artefacts more similar to objects, based on their probability of being artefacts. Any statistical model relating a binary response variable (artefact vs. object) to continuous and/or categorical predictor variables can be used (e.g. logistic regression, discriminant analysis or artificial neural networks).

Data processing is done after the raw information is extracted from the images. Given the extensive information collected, powerful data management software is mandatory to batch process the results from each image analysis, filter the data for quality control, merge them with descriptive information on experimental units/treatments and store them in a database.
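The first cleaning step (excluding foreground outside the feasible morphology range) can be sketched with labelled objects and hypothetical size bounds; the bounds and object sizes below are illustrative assumptions, not validated values:

```python
import numpy as np
from scipy import ndimage

# Segmented image with three objects of very different sizes
segmented = np.zeros((20, 20), dtype=bool)
segmented[2:8, 2:8] = True      # plausible object, area 36
segmented[12, 2] = True         # too small: area 1
segmented[10:20, 10:20] = True  # too big: area 100

# Label connected components and measure the area of each object
labels, n = ndimage.label(segmented)
areas = ndimage.sum(segmented, labels, index=np.arange(1, n + 1))

# Keep only objects within the (hypothetical) biologically feasible size range,
# as would be determined from manually validated images
MIN_AREA, MAX_AREA = 10, 60
keep = (areas >= MIN_AREA) & (areas <= MAX_AREA)
cleaned = np.isin(labels, np.arange(1, n + 1)[keep])
```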
Image analysis software
To allow as many researchers as possible to apply the proposed image analysis, we compare here three free, cross-platform (Windows, Mac OS and Linux) and open-source solutions for performing image analysis: ImageJ (Schneider, Rasband & Eliceiri 2012), the statistical computing environment R with the EBImage package (Pau et al. 2012; R Development Core Team 2012) and Python with the scikit-image and SciPy libraries (http://scikit-image.org/). Each is capable of reading different image formats, converting file formats, and performing image processing and analysis, including the segmentation approaches mentioned above. All possess functions to measure the properties of the foreground (size, perimeter, spatial position) and to export visual representations (outlined foreground on the original image) and the quantitative results in the form of tables. All three solutions have strengths and limitations (Table 1), partly depending on the existing knowledge/skills of the researcher. To allow readers to interact with the methods and test the above-mentioned segmentation approaches, example images and commented scripts for the three solutions are provided in the Supporting Information.
Table 1. Comparing the relative strengths and limitations of the three image analysis solutions on criteria including ease of implementation and integration with data management and analysis; ***, good for this criterion; **, average; *, poor. These benchmarks may vary according to the existing knowledge/skills of the researcher
In terms of ease of implementation (see Supporting Information for details), ImageJ is readily installed and comprises all the functions needed to perform image analysis. To perform image analysis in R, the R environment itself and the EBImage package require installation. Image analysis in Python requires either installing a distribution comprising all the required libraries, or installing the several required libraries manually.
ImageJ is more user-friendly than the solutions in R and Python because it has a graphical user interface (GUI), while the latter require scripted input from a text file or the console. ImageJ also provides a powerful macro language to automate repetitive tasks and a recorder function that translates commands performed via the GUI into macro scripts, facilitating macro development without extensive programming knowledge. All solutions are well documented online; however, because the implementation in Python relies on several libraries, the information is slightly more scattered than for the other solutions.
ImageJ is specifically tailored to perform image analysis and widely used in many areas of biology (Schneider, Rasband & Eliceiri 2012). It is therefore versatile, with many plugins and macros available that modify and extend its basic functionality. Because the source code of ImageJ is open, one may also optimize existing functions and plugins for one's own needs, provided the underlying Java programming language is mastered. Given that the R and Python image analysis solutions are embedded within versatile programming languages, the potential to extend and change existing functions exists. However, this equally requires advanced programming knowledge. Both Python and R have functionality to perform subsequent data management/analysis within the same environment, whereas ImageJ requires additional data management software to analyse the results.
The speed with which a set of images is treated differs substantially between the three solutions: 2 min with ImageJ, 11 min with Python and 28 min with R for the same 20 images test set on the same machine (see Supporting Information). This may have important consequences when hundreds or thousands of images must be analysed.
Finally, the minimum requirements in terms of computational power differed widely: ImageJ ran the analysis without problems on the less powerful test machine (4 GB RAM laptop), and Python ran, albeit with the speed deficit mentioned above. R was not able to perform the analysis on the 4 GB RAM machine due to memory constraints, although it worked on the 12 GB RAM desktop PC.
Illustration of an image analysis workflow in Tetrahymena thermophila ELS
Our Tetrahymena thermophila ELS combines the advantages of an experimental laboratory system with a largely automated data collection workflow based on image analysis in ImageJ. T. thermophila, a 50 μm unicellular eukaryotic ciliate usually found in freshwater ponds in North America (Asai & Forney 1999), has long been used in molecular biology as a model system due to its ease of cultivation in axenic liquid medium in flasks (Asai & Forney 1999). For measurements, samples are taken from homogenized cultures and pipetted into counting chambers on disposable microscope slides; images are taken using a digital camera. The contrast between fore- and background is obtained via dark-field microscopy such that transparent organisms appear white on a black background.
When setting up the system, image parameters were optimized to ensure both high reproducibility of images and the best correspondence between the results of image analysis and reality. Seventy images, representative of the various experimental conditions in which the system will be used, were manually analysed and objects identified as cells vs. artefacts (34 832 cells vs. 9424 artefacts). The automatic image analysis workflow was optimized using data from a subset of these images; the remainder was used for validation and to quantify the bias in parameters obtained through image analysis. Fig. 3 summarizes the improvements made by each step of the workflow on four response variables: cell count, cell size, cell shape and number of cells per cluster.
A homogeneous background and a strong contrast between cells and background were obtained by manually optimizing and then fixing several microscope (e.g. illumination, depth of field) and camera (e.g. sensor light sensitivity (ISO speed) and shutter speed) parameters, ensuring the reproducibility of images.
Thresholding was selected for segmentation due to the high contrast between white cells and black background; the threshold was fixed to a carefully optimized and validated value (Fig. 3a). Watershed segmentation was used after thresholding to split overlapping/touching cells (Fig. 3b).
Because artefacts (dust, scratches and cell debris) are common and can appear as bright as cells, a subsequent data processing step was implemented based on the characteristics of the segmented objects. A logistic regression model was calibrated using 12 attributes of objects to estimate the probability of an object to be an artefact; removing objects with at least 40% chance to be an artefact was found optimal to discard artefacts from subsequent data analysis (Fig. 3c). Finally, an extra, size-based splitting is performed to split cell clusters that remain after watershed segmentation (Fig. 3d; Chaine et al. 2010). This step was important for our studies involving analysis of relative position of cells (point pattern analysis, see below), highly sensitive to the correct positioning of cells close to each other.
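A simplified sketch of such a logistic-regression cleaning step, fitted by maximum likelihood with SciPy, is shown below. The two object attributes, their distributions and the sample sizes are invented for illustration (our actual model used 12 attributes calibrated on manually classified objects):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)

# Hypothetical training set: objects manually labelled as cell (0) or artefact (1),
# each with two measured attributes (area and elongation); distributions invented
n = 500
area = np.concatenate([rng.normal(50, 5, n), rng.normal(15, 10, n)])
elongation = np.concatenate([rng.normal(1.5, 0.2, n), rng.normal(3.0, 1.0, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])        # 1 = artefact
X = np.column_stack([np.ones(2 * n), area, elongation])  # intercept + predictors

def neg_log_lik(beta):
    z = X @ beta
    # Negative log-likelihood of the logistic model, numerically stable form
    return np.sum(np.logaddexp(0.0, z) - y * z)

# Maximum-likelihood fit of the logistic regression coefficients
beta = minimize(neg_log_lik, np.zeros(3), method="BFGS").x

# Estimated probability of each object being an artefact
p_artefact = expit(X @ beta)

# Discard objects with at least a 40% chance of being an artefact
keep = p_artefact < 0.4
```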
ELS examples using automated image analysis
The following section illustrates how image analysis is used in ELS to assess ecological and evolutionary questions. Examples from the literature and our Tetrahymena thermophila microcosms are used to illustrate the versatility of the approach.
Density reveals demography and dispersal
Density is basic, but versatile information gained from images. By estimating density at multiple points in time it is possible to capture the dynamics of a given population and its modulation by environmental factors. Laakso, Loytynoja & Kaitala (2003) studied how the colour of environmental noise affects the population dynamics of T. thermophila. In a similar fashion, automatic counts were used to examine the role of resource enrichment on the population dynamics of several rotifer species (Kirk 1998). Demographic parameters such as growth rate or maximum density in a given environment can be estimated from such time series by fitting an appropriate population dynamic model (Hooper et al. 2006).
Conveniently, image analysis provides simultaneous measurements of morphology and density, allowing the study of not only density but also biomass dynamics (Færøvig, Andersen & Hessen 2002). In our own study, we quantified differences in demographic parameters (e.g. growth rate, maximum density) between genotypes of T. thermophila by following population growth from low density over a period of 200 h. While this could be done entirely via image analysis, we combined optical density measurements performed with a spectrophotometer with image analysis at specific time points. Optical density is faster, cheaper and minimizes contamination because sample tubes remain closed for measurement, but it only provides information about biomass; image analysis provided cell size and hence the conversion of biomass into density. The growth curves and size measurements obtained were highly repeatable and precise, revealing even small differences in the abundance and morphology of two genotype populations over time (Fig. 4).
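Estimating demographic parameters from such a density time series can be sketched by fitting a logistic growth model with `scipy.optimize.curve_fit`; the time series below is simulated under assumed parameter values, not real data:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, N0):
    """Logistic growth: carrying capacity K, growth rate r, initial density N0."""
    return K / (1 + (K - N0) / N0 * np.exp(-r * t))

# Hypothetical density time series, e.g. automatic counts at each sampling time,
# simulated with assumed parameters (K = 1e5, r = 0.08, N0 = 100) plus 2% noise
t = np.arange(0, 200, 10, dtype=float)
rng = np.random.default_rng(7)
density = logistic(t, K=1e5, r=0.08, N0=100) * rng.normal(1.0, 0.02, t.size)

# Fit the model; starting values are derived from the data themselves
(K, r, N0), _ = curve_fit(logistic, t, density, p0=[density.max(), 0.05, density[0]])
```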
By combining density measurements in specific experimental designs, additional processes, such as dispersal between two populations, can be studied: cells are inoculated into a start tube connected by a narrow corridor to a target tube, and measurements of the density in start and target populations after some time reveal dispersal (Fjerdingstad et al. 2007, Pennekamp et al. unpublished data).
Characterization of life-history variation
The study of life-history variation is central to evolutionary ecology; however, it is notoriously tedious to follow how individuals change in size and volume, or to quantify survival and fecundity, on many samples. Image analysis is especially suited to replace such repetitive tasks on a large number of samples (Mallard, Le Bourlot & Tully 2012). For example, automatic counts were used to estimate the fecundity and survival of rotifer species under starvation (Kirk 1997). Besides measurements at the population level, individuals can be measured by image analysis, providing detailed information on life-history variation. Tully & Ferrière (2008) followed the individual growth of springtails from different geographical origins and estimated the number and volume of their eggs in response to the food environment by automated image analysis.
Spatial information reveals cooperative aggregation
Point pattern analysis is a tool widely used by ecologists to infer underlying processes such as aggregation or competition from spatial positions (Wiegand & Moloney 2004). Spatial positions of objects are readily available from image analysis. Schtickzelle et al. (2009) quantified the variation in cell cooperation between genotypes using an index describing the deviation between observed and expected numbers of cells at a certain distance of a focal cell, computed with the programita software (Wiegand & Moloney 2004). A plethora of ecological questions is open to investigation with point pattern analysis, such as tests of spatial randomness for patterns of more than one object type. For example, Schillinger et al. (2012) studied the co-localization of different bacterial species in biofilms by image analysis of fluorescence stained species.
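A minimal point pattern sketch: a Clark-Evans style nearest-neighbour index (observed vs. expected nearest-neighbour distance under complete spatial randomness) computed from object positions such as those exported by image analysis. All point data below are simulated, and the simple index ignores edge corrections that a full analysis would apply:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)

def mean_nn_distance(points):
    """Mean distance of each point to its nearest neighbour."""
    d, _ = cKDTree(points).query(points, k=2)  # k=1 is the point itself
    return d[:, 1].mean()

def clark_evans(points, area=1.0):
    """Observed / expected nearest-neighbour distance; < 1 indicates aggregation.

    Under complete spatial randomness, E[NN distance] = 0.5 / sqrt(density).
    """
    density = len(points) / area
    return mean_nn_distance(points) / (0.5 / np.sqrt(density))

# Random positions in a unit square (complete spatial randomness)
random_pts = rng.uniform(0, 1, size=(200, 2))

# Aggregated positions: tight clusters of points around a few parent points
parents = rng.uniform(0, 1, size=(10, 2))
offsets = rng.normal(0, 0.01, size=(20, 10, 2))
clustered_pts = (parents + offsets).reshape(-1, 2)
```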
While image analysis is an established tool in microbial ecology, experimental ecologists and evolutionary biologists have yet to fully exploit this technology. To facilitate its uptake, we have given a detailed description of how to develop an automatic image analysis workflow and provide ready-to-use scripts. This should help researchers working with ELS to implement image analysis and thus benefit from improved efficiency, reliability and versatility.
The three software solutions are all capable of performing similar analyses, and they yielded similar results in terms of mean values and correlation of counts, object size, intensity, etc. (r > 0·99 for all variables; for details, see Supporting Information). However, they differ in ease of implementation, user-friendliness, versatility, speed and computational requirements. The best overall solution seems to be ImageJ: it is versatile, the fastest and the least computationally demanding. Moreover, the GUI and the macro recorder function make it easy to use when scripting skills are lacking. However, it requires additional software to manage the measurements and to perform statistical analysis. R and Python provide the same functionality as ImageJ but allow the integration of image analysis and data management: the data obtained from image analysis are ready for further statistical analysis within the same environment. However, they were substantially slower (5-15 times) than ImageJ, which may become a bottleneck when many images need treatment within a short time window; they also require more programming skills for use and customization.
Costs, time effectiveness and accuracy
Implementing an image analysis workflow in an existing ELS where all optical equipment is ready only requires adding a camera to shoot images and a computer to process them. The additional costs should be limited: regular consumer or semi-pro high-resolution cameras are available at low price (e.g. < 2000 EUR for the Canon EOS 5D Mark II we use in our T. thermophila ELS).
In terms of time effectiveness, the advantages of an image analysis workflow compared with manual counts are twofold. First, the time spent by the experimenter to acquire images for automatic counts remains constant, while manual counting time increases linearly with the number of objects (Lukas, Kucerova & Stejskal 2009). Acquiring an image with manual focus may take 5-20 s; treatment by image analysis may take only a couple of seconds, depending on the software used, the processing operations and the image complexity. Secondly, separating the experiment from data extraction in time allows the experimenter to allocate time to increase sample size and treatments and/or levels, while data extraction from images can be run later, when time is available. Storing the data in the form of images additionally makes results transparent and keeps open the possibility of re-analysing the data in the future.
Systems based on image analysis usually show high correspondence between manual and automatic counts (R2 > 0·98; Færøvig, Andersen & Hessen 2002; Lukas, Kucerova & Stejskal 2009). Authors have reported deviations from the real values, but these occur in a systematic and therefore predictable fashion (e.g. perimeter estimation differed in a systematic way between the three software solutions, see Supporting Information). In our T. thermophila ELS, automatic counting underestimates abundance (Fig. 3), but the deviations reported here overstate the typical error because we purposely analysed images including extremes of artefacts and density for validation. The observed morphology descriptors were very close to the reference values after applying cleaning and size-based splitting. Besides this high overall reliability, image analysis workflows differ in their degree of automation and complexity. While basic systems still rely on some manual cleaning and data manipulation, more advanced systems may include automatic cleaning, splitting and classification steps that improve the counts and morphology descriptors.
Population and individual level measurements
Images provide information on population and individual levels simultaneously, thus enabling the study of links between traits and population dynamics. Indeed, a recent experiment showed that models that take changes in trait distributions into account improved the prediction of population dynamics compared with models without such information (Ozgul et al. 2012). While the phenotypic changes observed in this particular study may be due to plasticity, evolutionary responses are equally likely to occur after sufficient time, highlighting the advantages of applying image analysis to ELS for understanding eco-evolutionary dynamics (Yoshida et al. 2003; Fussmann et al. 2005; Hairston et al. 2005). Given that phenotypic traits are available for many individuals, researchers have the possibility to quantify intraspecific or population variation, which is key to understanding a variety of ecological dynamics in metacommunity studies (Bolnick et al. 2003, 2011) and evolutionary changes (Grant & Grant 1993). Finally, understanding individual interactions (e.g. aggregation and competition) is crucial to predict ecological responses to global environment change (Berg et al. 2010). Image analysis provides precise information on the localization of individuals and thus allows studying spatial patterns of one or more groups and how such behaviour is modulated by the environment.
Avenues for future research
Although the image analysis workflows in ELS have so far been designed for the identification of a single species, an exciting perspective is to expand this approach to the community level. Automatically measuring the abundances and phenotypes of multiple species simultaneously requires that species show marked phenotypic differences (e.g. in morphology or behaviour). Indeed, automatic identification of hundreds of different planktonic species has already been achieved by image analysis combined with appropriate statistical discrimination techniques (Culverhouse et al. 2006; Rodenacker et al. 2006; Gorsky et al. 2010). Therefore, automatic identification of the small number of species commonly used in ELS should be easily accomplished by image analysis (e.g. Matthiessen & Hillebrand 2006; Haddad et al. 2008).
Our concise description of the basic principles of image analysis and the scripts provided should allow researchers working with ELS to readily experiment with image analysis in their own systems and thus overcome the technical difficulties that may have prevented the spread of the methodology so far. The known advantages of ELS are thus substantially extended: more information is obtained and complex experimental designs are streamlined, providing valuable additional data urgently needed to understand complex ecological and evolutionary processes.
Image analysis can replace the human observer for tedious and repetitive tasks, such as counting and measuring individuals, performing them in a more constant, objective and efficient way. However, it will not replace the critical observations of an informed experimenter, as the machine records only what it is told to. Thus, automatic image analysis should only be applied to systems where the natural history of the model is well-studied and understood deeply enough to allow efficient and reliable automation.
We thank Alexis Chaine, Jean Clobert, Linda Dhondt, Michèle Huet, Kate Mitchell and Virginie Thuillier for their help in developing and/or running the T. thermophila ELS. Kate Mitchell, Camille Turlure, Alexis Chaine and Christophe Lebigre provided valuable comments to an earlier manuscript draft. F. Pennekamp is funded by Fonds Spéciaux de Recherche, Université catholique de Louvain. N. Schtickzelle is a Research Associate of the Fund for Scientific Research (F.R.S.-FNRS). Financial support to acquire scientific material needed for the T. thermophila ELS was provided by F.R.S.-FNRS and Université catholique de Louvain (ARC 10-15/031). This is publication BRC291 of Biodiversity Research Centre.