Acquiring information about the expression of a gene in different cell populations and tissues can provide key insight into the function of the gene. A high-throughput in situ hybridization (ISH) method was recently developed for rapid and reproducible acquisition of gene expression patterns in serial tissue sections at cellular resolution. Characterizing and analysing expression patterns on thousands of sections requires efficient methods for locating cells and estimating the level of expression in each cell. Such cellular quantification is an essential step in both annotating and quantitatively comparing high-throughput ISH results. Here we describe a novel automated and efficient methodology for performing this quantification on postnatal mouse brain.
Over the past decade, the application of in situ hybridization (ISH) methods using enzyme-based detection systems to visualize hybridized probes has become increasingly common. With non-radioactive ISH, a complementary DNA or RNA probe containing nucleotides tagged with a hapten is hybridized and subsequently revealed by either a single enzymatic colour reaction or by a two-step amplification reaction involving catalytic reporter deposition followed by an enzymatic colour reaction (reviewed in Speel, 1999). If the colour reaction is catalysed by alkaline phosphatase, then a localized blue–purple deposit is formed and it can be imaged by bright-field microscopy.
Recent work has begun applying a high-throughput non-radioactive ISH method toward the establishment of a transcriptome-wide digital gene expression atlas of the mouse brain (Carson et al., 2002) and mouse embryo (Reymond et al., 2002). This publicly available online resource (http://www.genepaint.org) will eventually include the expression patterns of more than 20 000 genes and several million tissue sections (Visel et al., 2004). The usefulness of this transcriptome atlas can be significantly enhanced by annotation of the locations and intensities of cellular expression for each gene. An objective representation of spatial gene expression on a cell-by-cell basis will enable expression pattern mining using well-defined criteria, comparisons between expression patterns and enhanced visualization of expression patterns (Carson, 2004).
In principle, non-radioactive ISH is amenable to quantification (Emson, 1993; Asan & Kugler, 1995; Zreiqat et al., 1998), although serial enzymatic amplification reactions can affect the linearity of the relationship between the original mRNA level and the intensity of the colour precipitate. Methods have been developed to quantify radioactive ISH data in which sites of gene expression are determined by hybridizing radioactive probes and detecting expression signals by autoradiography (Noguchi et al., 1989; Wang & Wessendorf, 2002; Hashimoto et al., 2003; Wessendorf et al., 2004). However, such protocols are not applicable to non-radioactive ISH. Most cell-based quantification techniques are designed for immunocolorimetrically detected proteins or fluorescently labelled nucleic acid probes (Ruifrok, 1996; Benali et al., 2003). The applications of these techniques are either low throughput or limited purely to cell counting, which makes them unsuitable for rapid analysis of data collected by high-throughput ISH analysis. As a critical step in the establishment of a transcriptome atlas, we have developed an efficient, automated high-throughput analytical procedure that both determines the locations of cells in postnatal brain tissue sections and estimates the level of gene expression in each cell. The output created by this analysis is a digital false colour map representing the original expression pattern in a simplified form. This paper describes the technique and its application on postnatal day 7 (P7) mouse brain.
Materials and methods
Freshly collected P7 C57BL6 mouse brains were placed into a chamber filled with OCT cryomount medium (see http://www.genepaint.org/Image7.htm for a picture of this custom-made chamber). The brain was aligned with its medial plane orientated parallel to one pair of walls of the freezing chamber. Subsequently, the chamber containing the brain was placed on a metal block immersed in a mixture of dry ice and ethanol, and then gradually frozen within a period of 10–12 min. The brain in the resulting frozen block had a defined orientation with respect to the planes of the block surface. Blocks were stored indefinitely in a −80 °C freezer and then equilibrated for 1 day at −20 °C prior to sectioning. Blocks were sagittally sectioned at 20 µm thickness using a cryostat. Sections were then fixed in 4% paraformaldehyde, acetylated and dehydrated for further storage at −80 °C.
High-throughput in situ hybridization
A Tecan Gemini 200 solvent delivery platform was used to carry out prehybridization, hybridization, stringency washes and colorimetric detection (Carson et al., 2002). This platform accommodates up to 192 flow-through hybridization chambers, each housing four sections. Digoxigenin-tagged RNA probes were used and the hybridized complementary probe was detected by catalysed reporter deposition (CARD) using biotinylated tyramide followed by colorimetric detection of biotin with an avidin–alkaline phosphatase conjugate (Bobrow et al., 1989; Kerstens et al., 1995).
Microscopy and image collection
Without applying a counterstain, brain sections were digitally imaged in a bright-field microscope at 50× magnification for a resolution of 3.3 µm pixel−1 (Fig. 1a,b). A motorized stage translocated the sections in a meander pattern and bitmap images of adjacent fields (575 × 575 pixels) were collected. These were subsequently stitched together automatically to produce a single mosaic image representing the complete brain section. Using Adobe Photoshop (Adobe Systems Inc.), mosaic images were cropped and saved as red–green–blue (RGB) tagged image file format (TIFF) files with Lempel Ziv Welch (LZW) lossless compression.
Categories of signal strength
Cells possessing copies of the probed transcript contain dye deposits as a result of the non-radioactive ISH. The quantity of these cellular precipitates increases with the number of detected transcripts. Visible levels of gene expression strength range from cells with no detected expression to cell bodies completely filled with dye precipitate. This breadth in cellular expression is clearly visible in the example of cannabinoid receptor 1 (Cnr1) expression at 200× magnification (Fig. 1c), a magnification significantly higher than that used during high-throughput data collection. Although the amount of cellular gene expression could be quantified as a percentage of total cell area at this more powerful resolution, each cell is only represented by approximately nine pixels at the 50× magnification used for high-throughput digitization (Fig. 1b). To facilitate automated semiquantitative classification of cellular expression of the high-throughput data, we divided the range of observable expression strengths into four categories: strongly expressing cells filled with dye precipitate (+++), moderately expressing cells partially filled with precipitate (++), weakly expressing cells with scattered minute particles of deposit (+) and cells with no detectable precipitate (–) (Fig. 1c).
Gene expression signal detection involves identifying the location and signal strength category for each cell in an image. The strategy selected was to identify pixels representing precipitate first, and then to classify clusters of pixels by size. This automated cellular expression detection approach was implemented as Celldetekt, a Python script (http://www.python.org/) that uses the Python Imaging Library (http://www.pythonware.com/).
This script applied a fixed threshold method to identify pixel type (Castleman, 1996). Pixel values in these RGB images ranged from black (0,0,0) to white (255,255,255). Across the colour spectrum, three basic intensity types were observed in the images: darkly stained blue–purple dye precipitates, lightly stained cellular materials absent of precipitate and nearly white space in which cells are absent. Celldetekt identified pixel type by employing two user-provided threshold values, t1 and t2 (Fig. 1d). Across an entire image, pixel intensities were assigned as follows: between 0 and t1 as dye precipitate, between t1 and t2 as cellular areas without precipitate, and between t2 and 255 as the absence of cell bodies. For the example in Fig. 1, t1 was set at 100 in the green channel and t2 at 240 in the grey-level intensity. The green channel was used for t1 because it was the channel with the greatest contrast for selecting purple signal. Although fixed throughout a given image, the thresholds could be adjusted between sets of data to compensate for variations in the high-throughput ISH protocol that could result in differences in either precipitate or background staining.
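The two-threshold pixel classification can be sketched as follows. This is a minimal illustration, not the published Celldetekt code; the array layout and the use of the channel mean as the grey-level intensity are assumptions.

```python
import numpy as np

def classify_pixels(rgb, t1=100, t2=240):
    """Classify each pixel of an RGB image (H x W x 3, uint8) as
    2 = dye precipitate, 1 = cellular material without precipitate,
    0 = background (no cell bodies).
    t1 is applied to the green channel (the channel with the greatest
    contrast for purple signal); t2 is applied to a grey-level intensity,
    approximated here by the mean of the three channels."""
    green = rgb[..., 1].astype(int)
    grey = rgb.mean(axis=-1)
    labels = np.zeros(rgb.shape[:2], dtype=np.uint8)
    labels[grey < t2] = 1    # lightly stained cellular areas
    labels[green < t1] = 2   # dark blue-purple dye precipitate
    return labels
```

Because the thresholds are fixed per image but adjustable between data sets, t1 and t2 are ordinary parameters here rather than constants.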
Next, detecting cells was accomplished using a sliding window technique. A series of small square windows of various sizes traversed the entire image and marked the locations where the signal filled the window. The first window was chosen to be 3 × 3 pixels (Fig. 1e) to approximate the average size of a neuron cell body (∼10 µm in diameter). At locations where every pixel within the window was precipitate signal, a point beneath was marked as a cell strongly expressing the gene (Fig. 1f). After searching the entire image for strongly expressing cells, signal near the detected cells was removed by a circular 7-pixel diameter mask (Fig. 1g,h) to prevent detecting the same cells in subsequent steps (Russ, 2002).
After detecting strongly expressing cells and removing them from the image, moderately expressing cells were then identified using the same sliding window procedure, but instead with a 2 × 2-pixel window (Fig. 1i). After applying the masking step again (Fig. 1j) with a mask of the same size and shape, the remaining precipitate pixels were marked as locations containing weakly expressing cells (Fig. 1k). Following a masking of the weakly expressing cells using the same mask shape and size, the cells without any dye precipitate were located. In this final detection step, the same series of windows and masks were applied to the signal intensity range representing cells without precipitate. All cells detected in this step were labelled as not expressed (Fig. 1l). The same windows and masks were used for both expressing and non-expressing cells so that any methodological bias in counting cells would be applied equally to both types of cells.
After marking the cell locations, clusters of cells were segmented into cell-sized units by applying a 4 × 4-pixel grid on the cell-marked image (Fig. 1m) and assigning cell signal strengths to each grid square (Fig. 1n). By converting each grid square to a single pixel, the image size was reduced to a 12.5× magnification digital false colour map with each pixel representing one cell colour-coded by the expression strength of the cell.
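The grid-reduction step can be sketched as below. Taking the maximum label within each 4 × 4 square is an assumption about how competing labels in one square are resolved; the original may segment differently.

```python
import numpy as np

def reduce_to_grid(cell_labels, grid=4):
    """Collapse a cell-marked label image into the reduced false colour
    map: each grid x grid square becomes a single output pixel carrying
    the strongest cell label found inside that square."""
    h, w = cell_labels.shape
    hh, ww = h // grid, w // grid
    # Reshape so each grid square becomes one (grid, grid) block,
    # then reduce each block to its maximum label.
    blocks = cell_labels[:hh * grid, :ww * grid].reshape(hh, grid, ww, grid)
    return blocks.max(axis=(1, 3))
```

With a 4 × 4 grid, a 50× image at 3.3 µm pixel−1 reduces to the 12.5× map described above, one pixel per cell-sized unit.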
Results of the automated expression detection method were compared with calculations performed on images collected at 200× magnification for a sample set of cells. At this resolution, the percentage cell area containing dye precipitate was determined by applying a t2 threshold of 240 grey to define cell area and a t1 threshold of 100 green to define the dye precipitate area (Fig. 2a). These were the same thresholds used during the automated detection. Calculations were performed for Cnr1 expression for 120 cells from three different structures of the brain – thalamus, amygdala and cortex – and plotted as histograms (Fig. 2b). The distributions of percentages were 64 ± 9% for strongly expressing cells, 38 ± 10% for moderately expressing cells and 14 ± 10% for weakly expressing cells. The comparison demonstrates that whereas there is overlap between adjacent signal strength categories (e.g. none vs. weak, weak vs. medium, and medium vs. strong), there is essentially no overlap between separated categories (e.g. none vs. medium, and weak vs. strong). This suggests that categories automatically assigned at 50× magnification are meaningful.
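The percentage-cell-area calculation used for this validation can be sketched as follows. The function is a hypothetical helper, not the authors' code; as above, grey level is approximated by the channel mean.

```python
import numpy as np

def precipitate_fraction(rgb, t1=100, t2=240):
    """Percentage of cell area covered by dye precipitate in a
    high-magnification image of a single cell.
    Cell area: pixels with grey level below t2.
    Precipitate area: pixels with green channel below t1."""
    green = rgb[..., 1].astype(int)
    grey = rgb.mean(axis=-1)
    cell = grey < t2      # pixels belonging to the cell body
    precip = green < t1   # pixels classified as dye precipitate
    return 100.0 * precip.sum() / max(cell.sum(), 1)
```

Applied to the 200× images, this yields the per-category distributions (e.g. 64 ± 9% for strongly expressing cells) against which the 50× classifications were compared.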
The accuracy of Celldetekt was also evaluated by comparing the automated classification of 256 cells to that of an expert using a microscope at 200× magnification to classify visually the expression levels in the same cells. The confusion matrix of this comparison (Table 1) shows that the two methods matched 85% of the time, with the remaining 15% only off by one level of strength. This is consistent with the observed distributions in Fig. 2(b). These results suggest that the automatically assigned cellular expression strength categories at 3.3 µm pixel−1 resolution are accurate and suitable for comparing cell populations.
Table 1. Confusion matrix of cellular gene expression signal classification.
In theory, Celldetekt can assign gene expression strengths for up to 36 cells in a 6 × 6-pixel region in the reduced output image. This size region corresponds to a 79 × 79-µm square area. To verify both that this maximum density is sufficient for brain tissue, and that the density represented by Celldetekt is accurate, 13 different 79 × 79-µm square regions were selected across a brain tissue section showing Neuropeptide Y (Npy) gene expression as generated with high-throughput ISH (Fig. 3a). Regions were selected from the cortex, the hippocampus, the medulla, the cerebellum and outside of the brain tissue section. To count cells, a bisbenzimide stain was applied to the tissue section after high-throughput ISH (Fig. 3b). Bisbenzimide binds to chromosomal DNA and under fluorescence at a high magnification allows an accurate visual counting of cells. The dye precipitate from high-throughput ISH blocks the bisbenzimide stain, and thus the ISH-stained cells were counted using combined bright-field/fluorescence illumination (Fig. 3b). The same 13 square regions were identified in the Celldetekt-generated output (Fig. 3c), and in each region the number of cells was counted. The comparison between visual counts at high resolution and automated Celldetekt counts is plotted in Fig. 3(d). The relationship demonstrates a good correspondence, except when the density exceeds 36 cells per region. However, such high densities were only found in the internal granular layer of the cerebellum, where cell bodies were consistently tightly packed and overlapping. Most high-density regions, such as those found in the hippocampus, contained fewer than 36 cells per region. This result, when coupled with the other validations, suggests that the gene expression strength assignments generated by Celldetekt are accurate for the vast majority of the brain.
Visually, Celldetekt produced a false-colour representation of the ISH results (Fig. 4). Despite a reduction in resolution such that each image pixel represented one cell, this output image remained consistent with the basic anatomical structures of the original ISH image. This allows us to observe at a glance which regions across the tissue section contain strongly, moderately and weakly expressing cells.
The Celldetekt program analysed 2000 × 4000-pixel images at a rate of approximately one image per minute. This was benchmarked using a computer equipped with an Athlon 2600 processor with 512 megabytes of memory. The observed rate appropriately matched the speed at which images could be collected with the automated microscope.
The signal detection method described here is notable for its rapid objective characterization of cellular expression levels determined by non-radioactive ISH on postnatal mouse brain tissue sections. The output generated by this image processing technique is a semiquantitative representation of gene expression, which can be applied in conjunction with atlas-based segmentation of the tissue section (Ju et al., 2003) toward the automated classification of gene expression patterns (Carson et al., 2004). In addition, this cell-based quantification could possibly serve as the foundation for comparing ISH results under different experimental conditions (e.g. tissue from normal and treated animals). The speed of this detection protocol makes it particularly suitable for processing images produced by high-throughput ISH of the mouse brain transcriptome, as the output generated by Celldetekt could be an operand for pattern-based database queries.
Cell-based detection is a necessary component of gene expression characterization. At many locations in the mammalian brain, there are different types of neurons and other cells performing a variety of functions. In these cases, knowledge of the cellular distribution of gene expression levels is informative. Under different conditions, a small population of cells could possess the same amount of total transcript, but the transcript could be evenly distributed across all cells in one instance, while unevenly distributed in the other. The functional relevance is substantially different in the two situations.
Comparisons based on output generated by Celldetekt should only be performed on tissue sections analysed under identical ISH conditions, as any variation in analytical conditions (e.g. duration of detection reaction, probe length) will affect signal amplification and thus the quantity of precipitate deposited. Moreover, comparisons should be limited to populations of cells as opposed to single cells because the segmentation step may label one large cell body as two, or multiple small cells as one. Although higher image resolutions would probably increase the accuracy of the method, the time required to process images quadruples each time the microscope magnification doubles.
This work was supported by a training fellowship from the Keck Center for Computational and Structural Biology of the Gulf Coast Consortia (NLM Grant no. 5T15LM07093), by the Burroughs Wellcome Fund, and by the National Center for Research Resources (P41RR02250).