Keywords:

  • Alignment;
  • image registration;
  • image segmentation;
  • montage;
  • software;
  • stereology;
  • three-dimensional reconstruction;
  • tracing;
  • ultrastructure

Summary

Many microscopy studies require reconstruction from serial sections, a method of analysis that is sometimes difficult and time-consuming. When each section is cut, mounted and imaged separately, section images must be montaged and realigned to accurately analyse and visualize the three-dimensional (3D) structure. Reconstruct is a free editor designed to facilitate montaging, alignment, analysis and visualization of serial sections. The methods used by Reconstruct for organizing, transforming and displaying data enable the analysis of series with large numbers of sections and images over a large range of magnifications by making efficient use of computer memory. Alignments can correct for some types of non-linear deformations, including cracks and folds, as often encountered in serial electron microscopy. A large number of different structures can be easily traced and placed together in a single 3D scene that can be animated or saved. As a flexible editor, Reconstruct can reduce the time and resources expended for serial section studies and allows a larger tissue volume to be analysed more quickly.


Introduction

Physical or optical sectioning of a specimen into parallel planar sections is often used for high-resolution microscopy. As sectioning reduces complex three-dimensional (3D) structures to sets of 2D profiles, a method is needed for evaluating 3D shape from section profiles. The many ingenious methods developed by microscopists for understanding 3D structure have evolved with technology, from drawing methods and model building in the 1800s, to cinematographic methods in the period 1905–1975, to computer-aided methods (Ware & LoPresti, 1975; Huijsmans et al., 1986). With the transition to digital imaging, microscopists have an even greater need for computer software capable of organizing large amounts of serial section image data while also facilitating quantitative measurements and 3D structure visualization.

When sections are cut and imaged separately, each section may be exposed to independent amounts of scaling and/or non-linear deformation as a result of cutting, folding, drying, specimen tilt, temperature changes and optical distortions in the imaging system (Stevens & Trogadis, 1984). In addition, just positioning the sections in the microscope introduces rotational and translational offsets between section images. Although computer algorithms have been developed for automatic image registration (Brown, 1992; Toga & Banerjee, 1993; Van den Elsen et al., 1993; Zitová & Flusser, 2003), the success of a particular method for aligning serial sections is highly data dependent. No existing image registration method will always succeed in aligning arbitrary images with arbitrary deformations (Zitová & Flusser, 2003). One popular technique is based on the maximization of mutual information, but even this method frequently fails on reasonable images (Penney et al., 1998; Roche et al., 2000). As a consequence, alignments often rely on user-guided registration of intrinsic or imposed fiducial marks (Humm et al., 1995; Papadimitriou et al., 2003).

Once serial sections are aligned, additional algorithms are needed for 3D visualization. When the sections contain only a single structure, such as might be obtained from fluorescent confocal microscopy, various volume rendering techniques can be applied directly to the image data. However, when sections contain a dense feltwork of different structures, as in electron microscopic (EM) sections of the brain, a segmentation step is required to identify and separate the structures for 3D reconstruction. Algorithms for automatic segmentation are only reliable for relatively simple types of images and, in general, structure-orientated reconstructions rely on the expertise of trained microscopists. The most reliable method at present is computer-aided tracing of profiles on sections followed by automatic 3D surface generation and rendering with appropriate shading.

A number of software packages are available to support structure-orientated reconstruction from serial sections (Table 1). Some of these programs (e.g. 3D Doctor, Amira, Neurolucida) provide algorithms for automatic alignment of an entire stack of sections. Automatic alignment plugins are also available for the public domain image processing package ImageJ (http://rsb.info.nih.gov/ij/). Automatic alignment algorithms are typically based on cross-correlation, mutual information or related techniques, with the consequent limitations that these entail. Popular commercial packages also often include facilities for arranging multiple images within a section to form a montage (e.g. Bioquant, Neurolucida). Most packages also provide one or more methods of automatic segmentation, in addition to fundamental tools for manual tracing. Excellent 3D rendering hardware and associated graphics libraries are now standard on most computers, so 3D visualization is relatively easy once a surface of triangular patches is created from the traces. Automatic surface generation from traced profiles is available in all structure-orientated reconstruction packages, although a variety of undisclosed methods are used.

Table 1.  Some software packages that include structure-orientated analysis of serial sections.
Package | Link | Cost
3D Doctor | http://www.ablesw.com | $$
Amira | http://www.amiraviz.com | $$
Bioquant | http://www.bioquant.com | $$
Neurolucida | http://www.neurolucida.com | $$
Reconstruct | http://www.synapses.bu.edu | free

A free software environment for browsing and editing serial section data has been under development for several years, beginning in the Image Graphics Laboratory at Children's Hospital, Boston. Two programs, IGL Trace and sEM Align, embodied an easy-to-use approach (Fiala & Harris, 2002). These programs were validated in a number of serial section studies (Table 2). A new, more versatile serial section editor, Reconstruct, has recently been developed as a continuation of this approach. Reconstruct contains all the basic components for structure-orientated reconstructions in a free and open software package designed to facilitate large serial section studies efficiently. The following describes the main features of the software and the underlying methods for organizing, transforming and displaying serial section data.

Table 2.  Studies aided by IGL Trace/sEM Align software.
Reference | Journal | Subject
Alberio et al., 2004 | Parasitology Research | Differentiation of Leishmania parasite
Cerri et al., 2004 | J Anatomy | Development of teeth
Cooney et al., 2002 | J Neuroscience | Distribution of dendritic endosomes
Dhanrajan et al., 2004 | Hippocampus | Synaptic plasticity in ageing
Fiala et al., 1998 | J Neuroscience | Synaptogenesis in hippocampus
Fiala et al., 2002 | Nature Neuroscience | Dendritic spine plasticity
Fiala et al., 2003 | J Comp Neurology | Ischaemic injury to neurone dendrites
Jourdain et al., 2002 | J Neuroscience | Anoxia-induced synaptic plasticity
Kirov et al., 2004 | Neuroscience | Cold injury to neurone dendrites
Leitinger & Simmons, 2002 | J Neurobiology | Synapses in locust visual neurones
Lindemann, 2001 | Nature | Taste receptors
Nikonenko et al., 2003 | J Neuroscience | Activity-dependent synaptogenesis
Ostroff et al., 2002 | Neurone | Dendritic polyribosomes
Peychl et al., 2002 | Frontiers in Bioscience | Apoptosis
Popov et al., 2003 | Biofizika | 3D organization of hippocampal synapses
Rowland et al., 2000 | J Neuroscience | Synaptic structure of calyx of Held
Sandi et al., 2003 | Eur J Neuroscience | Synaptic plasticity in hippocampus
Segev & London, 2000 | Science | Modelling of dendritic electrophysiology
Shepherd & Harris, 1998 | J Neuroscience | Axonal varicosities in hippocampus
Shepherd et al., 2002 | Proc Nat Acad Sci | Axonal varicosities in cerebellum
Shum et al., 2003 | Br J Dermatology | Scalp biopsies
Sorra et al., 1998 | J Comp Neurology | Dendritic spine morphology
Spacek & Harris, 2004 | J Neuroscience | Endocytosis in hippocampus
Telgkamp et al., 2004 | Neurone | Synapses in cerebellar nuclei
Teng & Wilkinson, 2000 | J Neuroscience | Presynaptic endocytosis
Toni et al., 1999 | Nature | Synaptic plasticity in hippocampus
Toni et al., 2001 | J Neuroscience | Synaptic plasticity in hippocampus
Ventura & Harris, 1999 | J Neuroscience | Peri-synaptic astrocytes in hippocampus
Yankova et al., 2001 | Proc Nat Acad Sci | Estrus-related synaptogenesis
Xu-Friedman et al., 2001 | J Neuroscience | Synapses onto Purkinje neurone dendrites
Xu-Friedman & Regehr, 2003 | J Neuroscience | Cerebellar glomerular synapses

Data representation

Data in Reconstruct are organized into sections corresponding to the pieces of the specimen imaged in the microscope. Data for each section, which include digitized microscopic images, traces drawn on the images, and transformations applied to these images and traces, are stored in a file indexed by the section number as the filename extension. The actual digital image data are not stored in the section data file; rather, a reference to a corresponding digital image file (.bmp, .jpg, .tif, etc.) is stored. Information about the whole series, including user-defined options and multisection traces, is stored in a separate series data file.

Reconstruct uses eXtensible Markup Language (XML) to represent series and section data in an open data format. The two XML file formats are defined by Document Type Definition files that allow automatic validation and easy interoperability. The defined XML data structures in the section file reflect the data elements of a series: transformations, domains, images and traces. When section data are accessed by the software, these XML elements are used to fill a similarly organized data structure in computer memory.
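
As a rough illustration (not a specification of the actual DTD), the following Python sketch reads one section file with the standard library XML parser and collects its transformations, image domains and traces; the element and attribute names used here are assumptions for illustration only.

    # A minimal sketch of reading one section file, assuming hypothetical
    # element and attribute names rather than the actual Reconstruct DTD.
    import xml.etree.ElementTree as ET

    def read_section(path):
        """Return (transform coefficients, element) pairs from one section file."""
        root = ET.parse(path).getroot()
        contents = []
        for xform in root.findall('Transform'):       # each transform owns a domain or traces
            coefs = [float(c) for c in xform.get('xcoef', '0 1 0').split()]
            for child in xform:
                if child.tag == 'Image':              # image domain: file reference + pixel size
                    contents.append((coefs, ('image', child.get('src'),
                                             float(child.get('mag', '1')))))
                elif child.tag == 'Contour':          # trace: name plus a list of (x, y) points
                    pts = [tuple(map(float, p.split(',')))
                           for p in child.get('points', '').split()]
                    contents.append((coefs, ('trace', child.get('name'), pts)))
        return contents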

Accessing the data

Reconstruct is designed to run in a Windows environment, using conventional mouse and keyboard input. The software can be used on a tablet PC or with a digitizing tablet as an input device, but access to a keyboard is desirable for command accelerators, incremental movements and browsing sections. The main window is a single document window that displays the images and traces for one section of the series (Fig. 1). By using the Page Up and Page Down keys, the display can be quickly advanced to adjacent sections in the series. Large sections can be quickly panned and zoomed with the mouse.

Figure 1. The user interface for Reconstruct. The main window displays a section containing two domains and one trace (filled with yellow). Reconfigurable floating windows are used to display measurements and quickly access data elements. The tools and trace palette windows have been placed in the upper left corner, while the domain and trace list windows are at the lower right. Additional windows not shown: the list of sections, section thumbnails, the list of objects, the 3D scene, the list of z-traces and the list of interobject distances.

Domains

Reconstruct facilitates cropping, montaging, scaling and alignment of images by the user. Each image within a section is incorporated into a domain that has a defined boundary and independent location within the section. Cropping is achieved by drawing a new domain boundary. The domain boundary outlines the region of the image to be displayed in the section. Multiple image domains can be placed side-by-side within a section to make a montage, a composite picture of a large section from many smaller images (Fig. 1). Montaging is realized by independent transformations associated with each domain. Each transformation incorporates a unique amount of translation, rotation, scaling, skew and deformation.

When the image data are imported into a section, the image pixels are scaled using a magnification factor, the pixel size. The pixel size parameter specifies how large a pixel of the image is in the units defined for the series. Each image has its own magnification factor. This allows images obtained at different magnifications or scanned with different settings to be accurately incorporated into the same series.

The transformation used to place the image within the section is also used to align the image with adjacent sections. This is accomplished by adjusting the parameters of the transformation according to user input, by image cross-correlation, or by using a set of traces as alignment fiducial marks. By restricting display to polygonal domains within an image, this approach can also be used to correct a folded or cracked section. Domains can be defined on either side of the fold. By adjusting the transformation for each domain, the image can be effectively unfolded, placing each part correctly within the section.

An important feature of Reconstruct is that original image data are never altered. Cropping, montaging, scaling and alignment are all performed dynamically on the image data each time they are accessed and displayed. This allows these edits to be undone and continuously modified, and reduces the storage overhead associated with large section bitmaps, but it also requires that the display processing be carried out as efficiently as possible.

Sections

Section data are organized as linked lists of transformations in computer memory. Each transformation contains either a domain or one or more traces defined in a local coordinate system. The transformation parameters define how the local coordinate system of the image or trace will be mapped into the section.

To display a section, each domain boundary is transformed into section coordinates and the interior region mapped onto a bitmap representing the display area (Fig. 2). For those domains that have a region on the display not obscured by other domains, the associated image file and transformation are used to fill the region pixel-by-pixel. After all images are rendered in this manner, the traces are transformed into the section and drawn. Traces may be filled with colour to highlight the interior. Finally, the display bitmap is copied to the client area of the window.
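
In essence this is an inverse lookup: for each display pixel covered by a domain, the inverse of the domain's transformation identifies the image pixel to copy. The sketch below illustrates the idea with NumPy for an affine-only transformation; the interface is hypothetical, and the actual renderer must also handle occlusion by other domains, non-linear transformations and proxy images.

    # Sketch of inverse-mapping one domain onto the display bitmap. Both arrays
    # are H x W x 3 uint8 NumPy images; the affine-only transform is a
    # simplifying assumption.
    import numpy as np

    def render_domain(display, image, affine, offset):
        """Paint the image onto `display`; affine/offset map image (x, y) to display (x, y)."""
        inv = np.linalg.inv(affine)
        h, w = display.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]                        # every display pixel
        src = (np.stack([xs.ravel(), ys.ravel()], axis=1) - offset) @ inv.T
        cols, rows = np.round(src[:, 0]).astype(int), np.round(src[:, 1]).astype(int)
        inside = ((rows >= 0) & (rows < image.shape[0]) &
                  (cols >= 0) & (cols < image.shape[1]))   # pixels the domain actually covers
        display[ys.ravel()[inside], xs.ravel()[inside]] = image[rows[inside], cols[inside]]
        return display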

Figure 2. The mapping of data elements onto the section. Each trace and domain is associated with an independent transformation that determines the size and location of the element on the section. In this example each domain has a rectangular boundary that defines the area of the image to be displayed. In general, domains can be any shape that can be created by the drawing tools.

The process of creating the section display can be time-consuming for a large amount of image data. The majority of this time is consumed by the retrieval and decompression of image files from the hard drive. Once the image data are in memory the rendering of whole sections is relatively rapid, depending primarily on the number of domains and the complexity of the domain transformations. Only those images actually visible on screen are read from the drive and transformed onto the display. This helps speed up display processing and reduce memory requirements. When the whole section is visible on screen the resolution requirements are generally low, so the costly image retrieval step is further reduced by the use of lower-resolution proxy images that are more quickly accessed and displayed when the full resolution of the data is not needed. This reduces the section display time to about half a second even when a section contains more than 100 Mb of data (Table 3).

Table 3.  Performance times for section display onto a 1600 × 1200-pixel screen using a Dell Optiplex GX260 with a 2-GHz processor, 512 Mb of memory, and a standard IDE/ATA hard drive.
Domain size (Mpixels) | No. of colours | Average domain file size (Mb) | Format | No. of domains per section | Section rendering time (ms) | Average proxy file size (Mb) | Rendering time with proxies (ms)
4.7 | 16777216 | 14.2 | BMP | 9 | 1690 | 0.034 | 630
45.6 | 256 | 12.4 | JPEG | 4 | 9800 | 2.85 | 610
10.4 | 256 | 4.7 | JPEG | 1 | 940 | 0.885 | 450

When the display is advanced to a new section, the previously displayed section is retained in memory. Because display bitmaps have already been created for both the current and the previous sections, the main window display can quickly alternate between them. This allows rapid flickering between adjacent sections and the ability to detect misalignments by apparent motion (Levinthal & Ware, 1972). The current and previous sections can also be rapidly blended together as another means of evaluating alignments.

Series

Most processing in Reconstruct pertains to the two recently accessed sections, but commands are also provided to accelerate processing of multiple sections throughout the series. A list of all the sections is available in a floating window. This section list can be used to change the current section, renumber sections and delete sections. Thumbnail display of multiple sections is also possible using the floating thumbnails window. The size and magnification of thumbnail images are adjustable. Thumbnails can be displayed as an array of selectable buttons or as a set of overlaid images. Overlaid thumbnails can be animated to obtain a dynamic view of the 3D structure of the series data.

Images can be distributed as a group to sequential sections, so that a user can quickly go from a set of image files to a series filled with image data. After a section has been filled with data, the entire collage of transformed domains and traces can be exported to a new image file. This operation allows alignments and segmented images to be exported to other programs for further processing.

Data elements (sections, domains or traces) or parameter settings from another series can be copied into a series using the Series Import operation. By allowing multiple investigators to later combine their analyses, this operation facilitates collaboration and division of labour on large projects. The Series Import operation can also be used to copy, back up or rename an entire series.

Tracing

Traces are the basis for quantitative measurements, and can also be used to guide alignments and 3D surfacing of objects. Although automatic segmentation is appropriate for some kinds of data, manual tracing is still required for most microscopic images, so Reconstruct includes facilities for creating traces on sections by drawing with a mouse or pen. A form of user-guided automatic tracing is provided by the Wildfire region growing tool. With this tool the user clicks in the interior of the region to be outlined and the software automatically determines the boundary of the region based on user-defined criteria on the hue, saturation and brightness of the region.
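
The sketch below outlines the general region-growing technique in Python: starting from the clicked seed pixel, neighbouring pixels are accepted while their hue, saturation and brightness remain within user-defined limits. It is a schematic of the approach rather than the Wildfire implementation, and the function names and parameters are hypothetical.

    # Sketch of region growing from a seed pixel under hue/saturation/brightness
    # limits; a schematic of the technique, not the Wildfire code itself.
    import colorsys
    from collections import deque

    def grow_region(rgb, seed, hsv_min, hsv_max):
        """rgb: H x W x 3 image (0-255 values); returns the set of accepted (row, col) pixels."""
        h, w = len(rgb), len(rgb[0])

        def accept(r, c):
            hue, sat, val = colorsys.rgb_to_hsv(*(v / 255.0 for v in rgb[r][c]))
            return all(lo <= x <= hi for x, lo, hi in zip((hue, sat, val), hsv_min, hsv_max))

        region, frontier = set(), deque([seed])
        while frontier:
            r, c = frontier.popleft()
            if (r, c) in region or not (0 <= r < h and 0 <= c < w) or not accept(r, c):
                continue
            region.add((r, c))
            frontier.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])  # 4-connected growth
        return region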

Polyline traces are stored in the section file as a sequence of (x,y) points. The actual location of the trace on the section is determined by applying an associated transformation to the points (Fig. 2). To relate traces between sections, each trace is named. Traces with the same name identify the 3D object to which the traces belong or define other correspondences between sections. Special symbols in the name string are used to number trace names automatically. Automatic numbering of traces allows the rapid creation of unique pairs of corresponding traces on adjacent sections. Corresponding traces can then be used for aligning the sections. Automatic numbering also allows objects to be split apart into component traces. These components can later be recombined into one object by again renaming without automatic numbering.

The attributes of a trace include the name, the border and fill colours, the fill style, and an optional comment string. A default set of attributes is used for new traces. These default values can be rapidly changed with a user-defined palette. In addition to trace attributes, the palette allows rapid selection of a previously defined trace shape. The trace shape can be repeatedly applied to the section by a single click of the mouse.

Traces can be used to form sampling grids for stereological measurements. These grids can also be placed on a section with a single mouse click. Built-in stereological traces currently include point, rectangular and cycloidal grids, and grids of unbiased sampling frames. Other types of grids can be easily created by drawing the component shape and then setting the grid tool options to incorporate this shape into a grid with the desired dimensions.

In addition to section traces, Reconstruct supports the drawing of z-traces that span multiple sections. These z-traces are useful for making 3D measurements and for defining paths through the sections. The location of z-traces can be visualized in the 3D scene along with objects reconstructed from section traces.

Calibration and measurement

Reconstruct works in an arbitrary system of units. The user defines the units of measurement for each series. All input and output values are consistently in these units. For example, if the units are declared to be ‘micrometres’ then all values entered for section thickness, movements, image pixel size, etc., are in micrometres. Traces and transformations are stored in micrometres, and quantitative output, such as trace lengths, is reported in micrometres as well.

The magnification of image data can be calibrated by drawing traces on an image of a scale of known size, i.e. a calibration specimen that was imaged at the same magnification and digitized with the same settings as the data. The square pixel size is determined by dividing the total trace length in series units by the length in pixels. Images of calibration specimens can be stored anywhere within the series. By convention, a calibration image for the series is placed in an empty section at the start of the series in section 0. Section 0 is a special section that is not used to define objects or compute z-distances (see below).
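
The calibration arithmetic is a single division, illustrated below with made-up numbers and hypothetical helper names.

    # The calibration arithmetic with made-up numbers: pixel size (um/pixel) is
    # the known length of the scale divided by the traced length in pixels.
    import math

    def trace_length_pixels(points):
        """Length of a polyline trace given its (x, y) points in pixels."""
        return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

    def calibrate_pixel_size(known_length_um, points):
        """Square pixel size from a trace drawn over a scale bar of known length."""
        return known_length_um / trace_length_pixels(points)

    # e.g. a 10-um scale bar traced as a line spanning 2000 pixels -> 0.005 um/pixel
    print(calibrate_pixel_size(10.0, [(100, 500), (2100, 500)]))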

Section thickness is another critical parameter for obtaining accurate 3D measurements and reconstructions. To provide maximum flexibility, Reconstruct allows each section in a series to have a different thickness. The thickness of individual sections can be measured from minimal folds (Small, 1968) or by measuring the dimensions of sections obtained from a truncated pyramid (Papadimitriou et al., 2003). Alternatively, mean section thickness for the series can be estimated using longitudinally sectioned cylindrical objects such as mitochondria (Fiala & Harris, 2001a), in which case all sections would be given this mean thickness.

The z-distance, the distance along the axis perpendicular to the plane of sectioning, is used to compute 3D values. The z-distance for any section is computed by adding the section thicknesses of all preceding sections to the thickness of that section (Fig. 3). This allows distances between sections of different thicknesses to be accurately represented.
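
Written out, the two conventions illustrated in Fig. 3 are simple running sums over the per-section thicknesses:

    # The two conventions of Fig. 3 as running sums over per-section thicknesses.
    def z_to_top(thicknesses, n):
        """z at the top of section n (1-based), given thicknesses t1..tN."""
        return sum(thicknesses[:n])

    def z_to_middle(thicknesses, n):
        """z at the middle of section n: all preceding sections plus half of section n."""
        return sum(thicknesses[:n - 1]) + thicknesses[n - 1] / 2.0

    t = [0.05, 0.05, 0.06, 0.05]               # thicknesses in series units (e.g. micrometres)
    print(z_to_top(t, 3), z_to_middle(t, 3))   # 0.16 and 0.13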

Figure 3. Two methods are provided for the computation of z-distances. One computes the distance to the middle of the section (z = t1 + t2 + t3 + … + tn−1 + tn/2), while the other computes the distance to the top of the section (z = t1 + t2 + t3 + … + tn−1 + tn). The latter method provides an easy way to represent gaps of missing sections by giving a larger thickness to the sections at the top of the gaps, while the former gives more accurate z-distances and 3D representations when sections have different thicknesses.

Once a series has been calibrated, accurate measurements will be automatically generated whenever a list of elements is displayed. The domain list displays the length, area and midpoint of the boundary of each domain. The object list displays the surface area and volume of objects, and the number of component traces. The list of traces in the current section displays trace length and area, the trace centroid, and the minimum and maximum x- and y-values for the trace. A Series Export operation allows the trace lists for all sections to be collated in one file for further processing by spreadsheet or statistics software. Additional lists calculate and display the lengths of z-traces and the 3D distances between objects. The rows and columns of all lists can be limited by defining limit strings that use wildcard characters to match the names of the elements. All lists can be saved as text files for spreadsheet or statistics processing.
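
The exact formulas used for each reported quantity are documented in the user's manual; for orientation, the sketch below gives the standard planimetric versions (perimeter, shoelace area and centroid of a closed trace), which may differ in detail from those used by the software.

    # Standard planimetric formulas for a closed trace: perimeter, shoelace area
    # and centroid. Illustrative only; the manual documents the exact formulas.
    import math

    def trace_measurements(points):
        """points: (x, y) vertices of a closed trace, in series units."""
        n = len(points)
        length = sum(math.dist(points[i], points[(i + 1) % n]) for i in range(n))
        cross = [points[i][0] * points[(i + 1) % n][1] -
                 points[(i + 1) % n][0] * points[i][1] for i in range(n)]
        area = sum(cross) / 2.0
        cx = sum((points[i][0] + points[(i + 1) % n][0]) * cross[i] for i in range(n)) / (6.0 * area)
        cy = sum((points[i][1] + points[(i + 1) % n][1]) * cross[i] for i in range(n)) / (6.0 * area)
        return length, abs(area), (cx, cy)

    print(trace_measurements([(0, 0), (2, 0), (2, 1), (0, 1)]))   # (6.0, 2.0, (1.0, 0.5))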

Movements and alignment

Every domain and trace has a transformation associated with it to allow independent movement of data elements (Fig. 2). A single domain or trace can be moved within the section by changing its transformation. The entire section can be moved by changing all the transformations together. Each transformation maps trace points or image pixels into the section using a combination of basis functions in two dimensions (Fiala & Harris, 2001b). Essentially, each basis function represents an elementary motion such as translation, rotation, slant, scaling, deformation or bending. By combining these movement components in different proportions, a complex remapping of the underlying data is possible.
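
As a simplified concrete picture, the snippet below applies a transformation built from a weighted sum of low-order polynomial basis functions to a single point. The particular basis set shown is an assumption for illustration; the full formulation is given in Fiala & Harris (2001b).

    # Applying a transformation expressed as a weighted sum of 2D basis functions.
    # The basis set shown (1, x, y, xy, x^2, y^2) is an illustrative assumption.
    def transform_point(xcoef, ycoef, x, y):
        """Map a local (x, y) point into section coordinates."""
        basis = (1.0, x, y, x * y, x * x, y * y)
        return (sum(a * b for a, b in zip(xcoef, basis)),
                sum(a * b for a, b in zip(ycoef, basis)))

    # The identity keeps only the linear terms; non-zero higher-order weights
    # skew, deform or bend the mapping.
    identity_x, identity_y = (0, 1, 0, 0, 0, 0), (0, 0, 1, 0, 0, 0)
    print(transform_point(identity_x, identity_y, 3.0, 4.0))   # -> (3.0, 4.0)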

Movements of domains, traces and entire sections can be carried out by explicitly specifying the movement, e.g. ‘rotate 37 degrees clockwise’, or by using the keyboard to make incremental adjustments. Doing this while blending or flickering two sections allows sections to be manually aligned. In addition, Reconstruct provides two easier ways to align sections: by image correlation and by correspondence of traces (Fig. 4). All methods for movement and alignment can be applied to the problem of arranging domains into a montage as well.

Figure 4. Reconstruction of a pollen grain from 46 confocal microscopy sections. (A) Each aligned section was captured with a square border (grey) created by the computer. The profiles of the pollen grain (white region) were traced using Reconstruct's Wildfire region growing tool (black line at border of white region). (B) Misaligned sections were created by applying random rotations and translations. (C) Trying to realign the sections using the correlation method only corrects for translations. (D) The pollen grain as displayed by Reconstruct from the original aligned data. (E) The misaligned series was completely realigned by the point correspondence method. The user entered a point at each corner of the square border and Reconstruct computed the alignment of each section from these points. (F) The pollen grain displayed by Reconstruct after realignment of sections using the point correspondence method. Both types of alignments, correlation and point correspondence, could be performed rapidly in Reconstruct, requiring only 8–9 min to align 46 sections.

Aligning by correlation computes the peak of the cross-correlation function between the current and previous sections, and determines the translation required to move the current section to the peak location. Aligning by correlation only works for pure translational offsets between sections, but it can sometimes be used iteratively with keyboard rotations and scaling to achieve more general alignments.
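
A minimal sketch of the underlying computation, assuming two same-sized greyscale NumPy arrays and omitting the windowing and normalization a practical implementation would need:

    # Sketch of locating the translational offset at the cross-correlation peak
    # for two same-sized greyscale sections (unnormalized, no windowing).
    import numpy as np

    def translation_offset(current, previous):
        """Return the (dy, dx) shift that moves `current` onto `previous`."""
        a = previous - previous.mean()
        b = current - current.mean()
        corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks past the midpoint correspond to negative shifts (circular correlation).
        return tuple(int(p) - s if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))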

Alignments are more readily achieved by treating pairs of traces in adjacent sections as fiducial marks to guide registration. In this approach, Reconstruct computes the transformation that minimizes the distance between the centroids of pairs of traces in adjacent sections. Any type of intrinsic or imposed fiducial markers can be utilized to align sections including material embedded with the specimen before sectioning (Humm et al., 1995) and the geometric points of the truncated pyramid method (Papadimitriou et al., 2003).
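
When the fit is restricted to a rotation plus translation, one standard least-squares solution is the Kabsch/Procrustes construction sketched below. Reconstruct's own solver is formulated in terms of its transformation basis functions, so this is an illustration of the idea rather than the program's method.

    # Least-squares rigid fit (rotation + translation) between corresponding
    # centroids, via the Kabsch/Procrustes construction.
    import numpy as np

    def rigid_fit(src, dst):
        """src, dst: N x 2 arrays of matching centroids. Returns (R, t) with dst ~ src @ R.T + t."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
        d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against a reflection
        r = vt.T @ np.diag([1.0, d]) @ u.T
        return r, cd - r @ cs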

An alignment transformation computed from trace correspondences can be limited to rotations and translations by using a subset of the transformation basis functions. Using more basis functions allows more degrees of freedom for alignment but introduces the potential for distorting the section. To avoid distorting the sections through ambiguous alignments, the entire section can be aligned at once. A combination of blending and flickering is used to evaluate alignments and help minimize distortions. Flickering helps reveal whether the differences between sections are simply local changes in object structure vs. uniform apparent motion consistent with whole section misalignment.

An individual movement or alignment operation can be repeated with a keystroke. An adjustment made to one section can be easily applied to the rest of the sections. This allows the correction of a misalignment at one point in a series to be propagated to the rest of the series. A sequence of movements can also be recorded and applied to other sections.

Objects and 3D display

An object is defined to be the set of all traces in the series that have the same name. A list of the objects in the series is used to delete, rename and modify objects, to display object measurements, and to add or remove objects from the 3D scene. The 3D scene is an OpenGL graphics window for previewing the 3D representation of an object. When an object is added to the scene, a 3D representation is generated from the object's component traces. The 3D representation can be as simple as the traces placed at the correct z-distances, or as complex as surfacing the traces with triangular surface patches (Fig. 5).
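
In data terms, forming an object amounts to grouping traces by name and accumulating per-section quantities, as in the sketch below; the record layout and the simple volume estimate (profile area multiplied by a uniform section thickness) are illustrative assumptions rather than the program's internal representation.

    # Grouping same-named traces into objects, with a simple volume estimate
    # (profile area x uniform section thickness). The tuple layout and the
    # volume formula are illustrative assumptions.
    from collections import defaultdict

    def group_objects(traces):
        """traces: iterable of (section_number, name, area). Returns {name: [(section, area), ...]}."""
        objects = defaultdict(list)
        for section, name, area in traces:
            objects[name].append((section, area))
        return objects

    def object_volume(profiles, thickness):
        """Volume estimate from per-section profile areas and a uniform section thickness."""
        return sum(area for _, area in profiles) * thickness

    objs = group_objects([(1, 'spine01', 0.12), (2, 'spine01', 0.15), (2, 'axon07', 0.30)])
    print(object_volume(objs['spine01'], 0.05))   # 0.0135 cubic series units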

Figure 5. Examples of 3D representations of one object generated by Reconstruct. The object was defined by tracing profiles of a dendritic spine on a pyramidal neurone in the hippocampus. All images were saved from the 3D scene at the same magnification and viewing angle. (A) The traces in three dimensions. (B) The areas inside the traces. (C) Triangular faces placed at the midpoints of the traces. (D) A box centred on the object. (E) A Boissonnat surface drawn as a wireframe. (F) A Boissonnat surface without smoothing. (G) A Boissonnat surface with smooth shading. (H) A cylinder centred on the object. (I) An ellipsoid representing the axes of the scatter matrix of the trace points. (J) A double pyramid centred on the object. (K) A sphere centred on the object. Shapes such as boxes, cylinders and spheres are useful as 3D calibration objects and as 3D substitutions for objects only in single sections.

The scene window can be used to arrange multiple objects from a series in space. Objects from another series can be combined with the scene, allowing a large 3D reconstruction to be easily generated from multiple series. The scene is rendered with the fidelity of the computer's OpenGL implementation, and this generally allows the diffuse, ambient, emissive, specular and transparency properties of objects to be specified. The scene can be interactively rotated, panned, zoomed and animated with the mouse. The final scene can be saved to a bitmap or VRML file. VRML (Virtual Reality Modelling Language) output allows the 3D reconstruction to be viewed over the web or imported into other programs for high-resolution rendering.

Obtaining the software

Reconstruct is freely available at http://synapses.bu.edu/ or http://synapses.mcg.edu/. Reconstruct will run on all Microsoft Windows operating systems and may run on other platforms with an appropriate Win32 emulator/translator, such as WINE (GNU/LINUX) or Virtual PC (Mac OS X). The only required libraries are Win32 and OpenGL.

A users group has been established to help users exchange information about Reconstruct (http://groups.yahoo.com/group/reconstruct_users/). The users group serves as a forum for sharing software updates and information on how to use Reconstruct. The full functionality of Reconstruct is detailed in the online user's manual. The manual contains important documentation such as the formulas used for all measurements reported by the software. In addition, the manual includes strategies and protocols for obtaining good alignments, and for ensuring that the 3D surfacing algorithm returns reasonable results.

Conclusions

Reconstruct facilitates the study of complex anatomical arrangements within serial sections and serves as an organizing medium for storage and review of large digital data sets. Series with tens of thousands of sections and hundreds of images per section can be quickly examined at any desired magnification. The basic functionality has been proven in numerous serial section studies (Table 2). Reconstruct offers further increases in productivity by allowing these methods to be applied more quickly and with greater flexibility.

Although Reconstruct is useful in its present form, there is plenty of room for improvement. Additional image import and data export options are desirable, as are additional tracing and editing tools and techniques for alignments, segmentation and 3D surfacing. Full source code is available to facilitate continued expansion and development of the software. Of particular interest for future development will be increased automation of alignment and montaging functions.

As free software, Reconstruct makes it easier for laboratories to conduct serial section microscopy studies. Reconstruct can be disseminated along with the data to facilitate collaborative projects. In summary, the software makes computer analysis of large volumes of sectioned tissue less time-consuming, more cost-effective and more amenable to data sharing.

Acknowledgements

The development of Reconstruct was supported, in part, by Human Brain Project grants R01-MH/DA057351 and R01-EB002170 to Dr Kristen M. Harris (Medical College of Georgia), and by Mental Retardation Research Center grant P30-HD18655 to Dr Joseph Volpe (Children's Hospital, Boston), with funding from the National Institute of Mental Health, the National Institute on Drug Abuse, the National Institute of Child Health and Human Development, the National Aeronautics and Space Administration, the National Institute on Aging, the National Institute of Neurological Disorders and Stroke, and the National Institute of Biomedical Imaging and Bioengineering.

References

  • Alberio, S.O., Dias, S.S., Faria, F.P., Mortara, R.A., Barbieri, C.L. & Haapalainen, E.F. (2004) Ultrastructural and cytochemical identification of megasome in Leishmania (Leishmania) chagasi. Parasitol. Res. 92, 246–254.
  • Brown, L.G. (1992) A survey of image registration techniques. ACM Computing Surveys, 24, 325–376.
  • Cerri, P.S., De Faria, F.P., Villa, R.G. & Katchburian, E. (2004) Light microscopy and computer three-dimensional reconstruction of the blood capillaries of the enamel organ of rat molar tooth germs. J. Anat. 204, 191–195.
  • Cooney, J.R., Hurlburt, J.L., Selig, D.K., Harris, K.M. & Fiala, J.C. (2002) Endosomal compartments serve multiple hippocampal dendritic spines from a widespread rather than a local store of recycling membrane. J. Neurosci. 22, 2215–2224.
  • Dhanrajan, T.M., Lynch, M.A., Kelly, A., Popov, V.I., Rusakov, D.A. & Stewart, M.G. (2004) Expression of long-term potentiation in aged rats involves perforated synapses but dendritic spine branching results from high-frequency stimulation alone. Hippocampus, 14, 255–264.
  • Fiala, J.C., Allwardt, B. & Harris, K.M. (2002) Dendritic spines do not split during hippocampus LTP or maturation. Nat. Neurosci. 5, 297–298.
  • Fiala, J.C., Feinberg, M., Popov, V. & Harris, K.M. (1998) Synaptogenesis via dendritic filopodia in developing hippocampal area CA1. J. Neurosci. 18, 8900–8911.
  • Fiala, J.C. & Harris, K.M. (2001a) Cylindrical diameters method for calibrating section thickness in serial electron microscopy. J. Microsc. 202, 468–472.
  • Fiala, J.C. & Harris, K.M. (2001b) Extending unbiased stereology of brain ultrastructure to three-dimensional volumes. J. Am. Med. Informatics Assoc. 8, 1–16.
  • Fiala, J.C. & Harris, K.M. (2002) Computer-based alignment and reconstruction of serial sections. Microscopy Anal. USA Edition, 52, 5–7.
  • Fiala, J.C., Kirov, S.A., Feinberg, M., Petrak, L., George, P., Goddard, C.A. & Harris, K.M. (2003) Timing of neuronal and glial ultrastructure disruption during brain slice preparation and recovery in vitro. J. Comp. Neurol. 465, 90–103.
  • Huijsmans, D.P., Lamer, W.H., Los, J.A. & Strackee, J. (1986) Toward computerized morphometric facilities: a review of 58 software packages for computer-aided three-dimensional reconstruction, quantification, and picture generation from parallel serial sections. Anat. Record, 216, 449–470.
  • Humm, J.L., Macklis, R.M., Lu, X.Q., Yang, Y., Bump, K., Beresford, B. & Chin, L.M. (1995) The spatial accuracy of cellular dose estimates obtained from 3D reconstructed serial tissue autoradiographs. Phys. Med. Biol. 40, 163–180.
  • Jourdain, P., Nikonenko, I., Alberi, S. & Muller, D. (2002) Remodeling of hippocampal synaptic networks by a brief anoxia-hypoglycemia. J. Neurosci. 22, 3108–3116.
  • Kirov, S.A., Petrak, L., Fiala, J.C. & Harris, K.M. (2004) Dendritic spines disappear with chilling but proliferate excessively upon rewarming of mature hippocampus. Neuroscience, 127, 69–80.
  • Leitinger, G. & Simmons, P.J. (2002) The organization of synaptic vesicles at tonically transmitting connections of locust visual interneurons. J. Neurobiol. 50, 93–105.
  • Levinthal, C. & Ware, R. (1972) Three dimensional reconstruction from serial sections. Nature, 236, 207–210.
  • Lindemann, B. (2001) Receptors and transduction in taste. Nature, 413, 219–225.
  • Nikonenko, I., Jourdain, P. & Muller, D. (2003) Presynaptic remodeling contributes to activity-dependent synaptogenesis. J. Neurosci. 23, 8498–8505.
  • Ostroff, L.E., Fiala, J.C., Allwardt, B. & Harris, K.M. (2002) Polyribosomes redistribute from dendritic shafts into spines with enlarged synapses during LTP in developing rat hippocampal slices. Neuron, 35, 535–545.
  • Papadimitriou, C., Yapijakis, C. & Davaki, P. (2003) Use of truncated pyramid representation methodology in three-dimensional reconstruction: an example. J. Microsc. 214, 70–75.
  • Penney, G.P., Weese, J., Little, J.A., Desmedt, P., Hill, D.L.G. & Hawkes, D.J. (1998) A comparison of similarity measures for use in 2-D–3-D medical image registration. IEEE Trans. Med. Imaging, 17, 586–595.
  • Peychl, J., Husak, J., Spring, H., Cervinka, M. & Rudolf, E. (2002) 3D-computer based reconstructions of apoptotic nuclei. Frontiers Biosci. 7, f89.
  • Popov, V.I., Medvedev, N.I., Rogachevskii, V.V., Ignat’ev, D.A., Stewart, M.G. & Fesenko, E.E. (2003) Three-dimensional organization of synapses and astroglia in the hippocampus of rats and ground squirrels: new structural and functional paradigms of the synapse function. Biofizika, 48, 289–308.
  • Roche, A., Malandain, G. & Ayache, N. (2000) Unifying maximum likelihood approaches in medical image registration. Int. J. Imaging Sys. Technol. 11, 71–80.
  • Rowland, K.C., Irby, N.K. & Spirou, G.A. (2000) Specialized synapse-associated structures within the calyx of Held. J. Neurosci. 20, 9135–9144.
  • Sandi, C., Davies, H.A., Cordero, M.I., Rodriguez, J.J., Popov, V.I. & Stewart, M.G. (2003) Rapid reversal of stress induced loss of synapses in CA3 of rat hippocampus following water maze training. Eur. J. Neurosci. 17, 2447–2456.
  • Segev, I. & London, M. (2000) Untangling dendrites with quantitative models. Science, 290, 744–750.
  • Shepherd, G.M.G. & Harris, K.M. (1998) Three-dimensional structure and composition of CA3→CA1 axons in rat hippocampal slices: implications for presynaptic connectivity and compartmentalization. J. Neurosci. 18, 8300–8310.
  • Shepherd, G.M.G., Raastad, M. & Andersen, P. (2002) General and variable features of varicosity spacing along unmyelinated axons in the hippocampus and cerebellum. Proc. Natl Acad. Sci. USA, 99, 6340–6345.
  • Shum, D.T., Lui, H., Martinka, M., Bernaldo, O. & Shapiro, J. (2003) Computerized morphometry and three-dimensional image reconstruction in the evaluation of scalp biopsy from patients with non-cicatricial alopecias. Br. J. Dermatol. 148, 272–278.
  • Small, J.V. (1968) Measurement of section thickness. Abstracts Fourth European Regional Conference on Electron Microscopy, Rome, 1, 609–610.
  • Sorra, K.E., Fiala, J.C. & Harris, K.M. (1998) Critical assessment of the involvement of perforations, spinules, and spine branching in hippocampal synapse formation. J. Comp. Neurol. 398, 225–240.
  • Spacek, J. & Harris, K.M. (2004) Trans-endocytosis via spinules in adult rat hippocampus. J. Neurosci. 24, 4233–4241.
  • Stevens, J.K. & Trogadis, J. (1984) Computer-assisted reconstruction from serial electron micrographs: a tool for the systematic study of neuronal form and function. Advan. Cell. Neurobiol. 5, 341–369.
  • Telgkamp, P., Padgett, D.E., Ledoux, V.A., Woolley, C.S. & Raman, I.M. (2004) Maintenance of high-frequency transmission at Purkinje to cerebellar nuclear synapses by spillover from boutons with multiple release sites. Neuron, 41, 113–126.
  • Teng, H. & Wilkinson, R.S. (2000) Clathrin-mediated endocytosis near active zones in snake motor boutons. J. Neurosci. 20, 7986–7993.
  • Toga, A.W. & Banerjee, P.K. (1993) Registration revisited. J. Neurosci. Methods, 48, 1–13.
  • Toni, N., Buchs, P.-A., Nikonenko, I., Bron, C.R. & Muller, D. (1999) LTP promotes formation of multiple spine synapses between a single axon terminal and a dendrite. Nature, 402, 421–425.
  • Toni, N., Buchs, P.-A., Nikonenko, I., Povilaitite, P., Parisi, L. & Muller, D. (2001) Remodeling of synaptic membranes after induction of long-term potentiation. J. Neurosci. 21, 6245–6251.
  • Van den Elsen, P.A., Pol, E.-J.D. & Viergever, M.A. (1993) Medical image matching – A review with classification. IEEE Eng. Med. Biol. 12, 26–29.
  • Ventura, R. & Harris, K.M. (1999) Three-dimensional relationships between hippocampal synapses and astrocytes. J. Neurosci. 19, 6897–6906.
  • Ware, R.W. & LoPresti, V. (1975) Three-dimensional reconstruction from serial sections. Int. Rev. Cytol. 40, 325–440.
  • Xu-Friedman, M.A., Harris, K.M. & Regehr, W.G. (2001) Three-dimensional comparison of ultrastructural characteristics at depressing and facilitating synapses onto cerebellar Purkinje cells. J. Neurosci. 21, 6666–6672.
  • Xu-Friedman, M.A. & Regehr, W.G. (2003) Ultrastructural contributions to desensitization at cerebellar mossy fiber to granule cell synapses. J. Neurosci. 23, 2182–2192.
  • Yankova, M., Hart, S.A. & Woolley, C.S. (2001) Estrogen increases synaptic connectivity between single presynaptic inputs and multiple postsynaptic CA1 pyramidal cells: a serial electron-microscopic study. Proc. Natl Acad. Sci. USA, 98, 3525–3530.
  • Zitová, B. & Flusser, J. (2003) Image registration methods: a survey. Image Vision Computing, 21, 977–1000.