ABSTRACT

Over the last 50 years modern cell biology has been driven by the development of powerful imaging techniques. In particular, new developments in light microscopy that provide the potential to image the dynamics of biological events have had significant impact. Optical sectioning techniques allow three-dimensional information to be obtained from living specimens noninvasively. When used with multimodal fluorescence microscopy, advanced optical sectioning techniques provide multidimensional image data that can reveal information not only about the changing cytoarchitecture of a cell but also about its physiology. These additional dimensions of information, although providing powerful tools, also pose significant visualization challenges to the investigator. Particularly in the current postgenomic era, there is a greater need than ever for the development of effective tools for image visualization and management. In this review we discuss the visualization challenges presented by multidimensional imaging and describe three open-source software programs being developed to help address these challenges: ImageJ, the Open Microscopy Environment, and VisBio.


Abbreviations: MPFE, multiphoton fluorescence excitation; NIH, National Institutes of Health; OME, Open Microscopy Environment; SHG, second harmonic generation.

INTRODUCTION

Much of our knowledge of the biological world has been obtained through the study of images. Research scientists use images to understand the workings of a cell, the development of an organism or the pathological state of a tissue. Selected images originating from research laboratories are edited, stylized and annotated for use in educational texts. Early biologists used simple lenses or compound microscopes to observe the natural world and record what they saw as drawings. Modern microscopes capture images electronically and store them as digital data files. Two-dimensional images are observed on a computer monitor by an appropriate viewing program. Recent developments in microscopy have allowed extra dimensions of data to be extracted and recorded from a specimen over and above the two dimensions of a simple image.

Techniques for optical sectioning have been developed that can provide three-dimensional data from living specimens. Nomarski imaging is a relatively simple technique that uses gradients of refractive index to produce contrast (1). It is the most benign of the optical sectioning techniques but has little specificity.

Fluorescence microscopy, on the other hand, through the use of fluorescent proteins or other selective probes, can be used to reveal the distribution of a selected few molecular species out of the 100,000 components that typically make up a living organism. However, this technique can give rise to problems of phototoxicity. We and others have demonstrated that the newly developed optical sectioning technique of multiphoton fluorescence excitation (MPFE) imaging gives low levels of phototoxicity and can obtain images considerably deeper into a specimen than the other commonly used fluorescence optical sectioning method, confocal imaging (2,3). Indeed, we have shown that by using MPFE imaging it is possible to make continuous multifocal-plane time-lapse recordings of a vertebrate embryo for 24 h without compromising viability or developmental potency (4). Optical sectioning techniques have proven to be powerful tools in biology, including in studies of embryogenesis (5–8), developmental biology (9–12) and neuroscience (13–15).

Microscopy techniques capable of extracting even higher dimensions of data are currently being developed. Every individual element of an image (pixel) can have a color, represented as a multichannel spectrum. In the case of fluorescence microscopy, every spectral channel can also have an array of time elements representing a histogram of the excited state lifetime of the fluorescence signal at that wavelength (16), leading to a total of six dimensions. The extra dimensions of spectra and lifetime can provide valuable information about the identities and relative abundance of combinations of fluorescent probes being detected and also about the physiological state of the cells being observed (17–20). The visualization of image data of greater than two dimensions is a challenge that must be met by programs that provide more sophisticated capabilities than just simple image viewing (21).
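To make concrete what six dimensions imply for data handling, the sketch below computes the storage footprint of a hypothetical recording and the offset of a single element in a flat, row-major pixel buffer. The extents, axis names and `flat_index` helper are illustrative assumptions, not any instrument's actual format.

```python
# A sketch (not any real file format) of addressing a six-dimensional
# data set -- x, y, z, time, spectral channel, lifetime bin -- stored as
# a flat buffer in row-major order.

DIMS = ("x", "y", "z", "t", "channel", "lifetime")

def flat_index(shape, coord):
    """Convert a 6D coordinate to an offset into a flat pixel buffer."""
    idx = 0
    for size, c in zip(shape, coord):
        if not 0 <= c < size:
            raise IndexError("coordinate out of range")
        idx = idx * size + c
    return idx

shape = (64, 64, 10, 5, 17, 32)   # hypothetical extents for each dimension
total = 1
for s in shape:
    total *= s

print(total)                                   # intensity values to store
print(flat_index(shape, (0, 0, 0, 0, 0, 1)))   # lifetime bins are contiguous
```

Even this modest 64 x 64 frame size yields over a hundred million values, which is why the dimensionality question is inseparable from the storage question discussed below.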

VISUALIZING DYNAMIC PROCESSES

Living organisms are, by their very nature, dynamic structures. Therefore, the study of many biological processes requires the analysis of three-dimensional structures changing over time. Individual cells undergo complex rearrangements of their cytoarchitecture as they progress through the cell cycle in the course of cell division or become terminally differentiated. Groups of cells undergo coordinated shape changes and migrations during the development of shape and form in an embryo. Understanding these processes is a fundamental goal of the cell or developmental biologist. The eye and brain are particularly adept at detecting movement in an image if the movement occurs reasonably rapidly. However, if the changes occur slowly, it is very difficult to discern their nature. One cannot see how grass grows, for example. But if a time-lapse movie is made so that the movements are sped up to a rate that is optimal for perception, then the process of growth can be readily observed.

Imposed movements can help us to visualize static structures. For example, a microscopist typically twiddles the focus of a microscope when observing a specimen instead of observing static planes of focus; movements that are detected as the focus is adjusted give an impression of how the specimen's shape changes in the axial direction. In this way, the perception of movement is used to convey three-dimensional information. Human visual perception has evolved with special emphasis on making sense of a colorful three-dimensional reality in which we move and which moves around us. With the development of new imaging and labeling technologies, light microscopy has now evolved so that visual data collected from a single living specimen can go beyond the two-dimensional domain to engage the viewer's full capacity for seeing in space and through time. A challenge arising from these advances is to display these data so that the investigator can sense and actively explore the recording's full spatial and temporal content, to better understand visually what cannot be seen directly through the microscope's eyepiece.

Traditionally, biologists have deduced biological structure by cutting a specimen into slices and observing these slices under a microscope. Three-dimensional structure is inferred from reconstructing images of sections. Such methods obviously preclude studies of the dynamics of living specimens, such as embryonic development. Noninvasive optical-sectioning microscopy provides the potential to observe the dynamics of three-dimensional structures in living tissue. However, optical sectioning by itself is generally not an adequate tool for determining structure, let alone structural dynamics. As we have discussed above, the use of animation provides another powerful tool for visualizing structure and dynamics. Optical sectioning techniques can be used to produce multifocal-plane time-lapse (4D) movies that can provide animation in both the time- and the Z-axes. There have been successful attempts by several groups, including our own, to develop software tools for acquiring and doing basic analyses on these 4D data sets (5,22–25). Unfortunately, there has been no successful attempt yet to develop a framework for the storage, manipulation and dissemination of 4D data. With the burgeoning sequence data emerging from several animal genomes, there is a pressing need to match this newly acquired genotype knowledge with the corresponding phenotype information. This need to correlate genotype with phenotype and to understand how organisms develop is establishing 4D developmental data as an important emerging data type. Much as has happened in the genetics community with sequence data, tools need to be developed by biologists for the visualization and analysis of developmental dynamics.

THE CHALLENGES OF MULTIDIMENSIONAL IMAGING

Multidimensional data sets of the form described above document dynamic changes within the full volume of a specimen over time, often simultaneously monitoring several different parameters. Examples include: (1) optically sectioned image recordings of the distinct signals produced by a reporter gene product or an organelle-specific chemical probe during the development of an embryo (26); (2) MRI, PET or SPECT tomography of a live patient's tissues responding to a treatment (27); and (3) a reconstruction of serially sectioned fixed specimens representing sequential stages in the changing ultrastructure of a particular cell type (28).

The availability of a wide range of instrumentation for biological image data acquisition (primarily microscopic in nature) has led to an unfortunate plethora of image file formats. Commercial manufacturers of laser-scanning confocal microscopes generally use their own proprietary file formats, which include information in the file headers describing the instrument parameters used during data collection and the dimensionality of the data (1D for line scans, 2D for simple images, 3D for volume data and 4D for time-series volume data).

The plethora of imaging formats is but one problem confronting a laboratory that is involved in the acquisition and analysis of electronically captured images. Other problems include the following:

  • Effective archiving of large quantities of image data is difficult. CDs (and recently DVDs) are frequently used. However, CDs are off-line media that can lead to delays in data retrieval because the disk with its associated documentation has to be found, mounted and searched.
  • Locating all images with a particular combination of attributes can be tedious. For instance, a researcher might want to examine all confocal images taken of a particular cell type in a particular strain of animal. Such a simple request is very difficult to address in a typical laboratory that has acquired even modest amounts of individually archived image data.
  • Data stored in a rigid, proprietary file format is often readable only with the instrument on which it was collected.
  • Categorizing and indexing of archived data are often arbitrary, leading to lost data. Often, each member of a laboratory evolves his or her own cataloging scheme for archived image data. This lack of organization can lead to problems when other laboratory members need to access the data, particularly when the person who generated the data eventually leaves the laboratory.
  • Original data files are often overwritten and discarded as the data are analyzed, resulting in data loss. To take a simple but all too common example, suppose a student captures a confocal image and translates it into another more convenient proprietary format for manipulation by a separate viewer application, such as Photoshop. In so doing, all the captured instrument parameters are lost. Subsequent image processing (e.g. contrast adjustment, smoothing, resizing or archiving as a JPEG compressed file) is likely to remove information that was in the image. If the archived image eventually gets selected to become incorporated into a publication and is further massaged to optimize its appearance, more degradation may occur, resulting in an often-unsatisfactory final image. However, the original image along with needed information relating to magnification and other collection parameters may be no longer available.
  • The relative inaccessibility of research images (typically stored in boxes of CDs with inconsistent or nonexistent catalog information) makes it very difficult for educators or students outside of the laboratory to access the data. This situation is unfortunate, because it can lead to a failure to communicate significant research findings to a wide and general audience.

SUITABILITY OF EXISTING INFRASTRUCTURES FROM OTHER COMMUNITIES

The need for an infrastructure to store and visualize image data from heterogeneous sources is not unique to the biological microscopy community. Those in the photography, digital media and publishing communities have been tackling these issues of compatibility and readability for some time. As in microscopy, these other communities need some universal way to store their images and associated metadata. Although there is definitely not a single standard within these communities, several solutions have been developed, such as the PDF in the publishing community, and the International Press Telecommunications Council metadata standard for digital photos (http://www.iptc.org). Rather than have the microscopy community develop yet another format for defining its metadata, it would seem ideal if one of these existing schemas could be used and adapted. This approach would not only promise great economy of code but also aid in the general compatibility and readability of these files between fields. Hence, the microscopy community did look carefully at such solutions from other fields. Each has features that meet some of the needs of the imaging community. These features include the ability to link metadata to the pixel data, strategies for keeping the primary original data inviolate and protected (e.g. watermarks), and image viewers that can read this metadata and add to it. However, there are several areas where these solutions fall short for the microscopy community. Although both the modern microscope and the digital-media camera produce images with a basic set of attributes in common (e.g. pixel size, resolution, brightness, contrast, etc.), the modern microscope presents many more challenges and variability in its pixel data and metadata than do the cameras used in the consumer and publishing fields. Issues of space, time, spectra and lifetime dimensionality go beyond the scope of other solutions.
The modern microscope can also produce data of vast size: in some labs a single data set can easily reach a gigabyte, and labs in the fluorescence-screening community can collect single data sets of 20 gigabytes or more. Solutions such as PDF were not designed for this size or dimensionality. Not only is PDF ill-equipped for these elements, it is also inherently disadvantageous as a standard file format because it was designed not as an image processing or manipulation format but as an exchange or output format.

Other solutions, particularly extensible markup language (XML)–based ones such as the "Technical Metadata for Digital Still Images" standard currently under development by the Library of Congress, are more promising due to their flexibility. XML allows a more flexible description of the metadata, and the relationship between the metadata and the pixel data can vary. With such an XML-based approach, the pixels can be stored in different formats: the metadata and pixels can be stored in one file (for example, in a TIFF with a header containing the XML), or a companion-file approach can be used in which a corresponding XML file exists for each image or set of images. The main problem with these XML solutions is that, although they are adaptable to some extent, they are not flexible enough. The modern microscope has image characteristics and attributes that are unique to its data, necessitating the development of tailored software for the microscopy community. Although the community has had to develop software like the three packages reviewed below, all these packages borrow features from other communities, including the publishing world and other research fields.
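The companion-file idea is easy to sketch: a small XML document carries the acquisition parameters and points at the binary pixel file. The element and attribute names below are purely illustrative, not taken from the Library of Congress schema or any real microscopy format.

```python
# Sketch of the "companion file" approach: metadata for an image lives in
# a small XML document alongside the binary pixel file. Element names are
# illustrative, not any real schema.
import xml.etree.ElementTree as ET

meta = ET.Element("ImageMetadata", {"pixelFile": "embryo_t01.raw"})
ET.SubElement(meta, "Dimensions", {"x": "512", "y": "512", "z": "30", "t": "1"})
ET.SubElement(meta, "Objective", {"magnification": "40", "na": "1.2"})

xml_text = ET.tostring(meta, encoding="unicode")

# Any XML-aware tool -- a browser, a database loader, another lab's
# viewer -- can now recover the acquisition parameters:
parsed = ET.fromstring(xml_text)
print(parsed.get("pixelFile"))
print(parsed.find("Dimensions").get("z"))
```

Because the metadata is plain character data, it survives even if the reader cannot decode the pixel file itself, which is precisely the readability property the text describes.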

THE OPEN MICROSCOPY ENVIRONMENT

To help address the needs identified above for an imaging infrastructure, a consortium of academic laboratories and commercial companies has defined some common standards that can be used in the development of software for archiving and analysis of biological image data. The resulting Open Microscopy Environment (OME, http://www.openmicroscopy.org/) has established a basis for the development of an object-oriented database system for the quantitative analysis of biological images. The OME (29) has several features that make it an attractive environment in which to develop software that can address many of the problems outlined above:

  • Image data are considered to consist of two separate entities, photon data (signal intensity values at each pixel) and metadata (all other descriptive data pertaining to the image). Metadata includes user-defined searchable tags and the filename of the photon data.
  • Metadata are coded in XML, a universal character-based format that can be parsed by many commercial programs including web browsers. XML is also used to define the structure of the metadata.
  • Metadata are stored in a relational database that can be queried using any of the predefined tags.
  • All OME software is open source.

The conceptual separation of photon data and metadata has several advantages for image data. Multidimensional image data sets are often very large (a multifocal plane, time-lapse recording can produce gigabytes of binary photon data). However, metadata typically consists of descriptions of the experimental parameters used, the number and extent of each dimension of data, the photon data file name and a set of searchable keyword tags. These data are naturally in character format and can easily be incorporated into a database to facilitate searching and retrieval. The OME system has a built-in image viewer that can view 4D data sets within the browser. For large data set visualization or analysis, OME has a connection framework that allows it to be interfaced with client-side analysis tools such as MATLAB and VisBio (discussed below).
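The payoff of keeping metadata in a relational database is that searches like the one described earlier ("all confocal images of a particular cell type") become simple queries. The toy schema below is a minimal illustration of tag-based querying, using Python's built-in SQLite module; it is not OME's actual table design.

```python
# Sketch of character-format metadata in a relational database, queryable
# by tag. The schema and tag names are illustrative, not OME's tables.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, pixel_file TEXT)")
db.execute("CREATE TABLE tags (image_id INTEGER, tag TEXT, value TEXT)")

db.execute("INSERT INTO images VALUES (1, 'neuron_01.raw')")
db.executemany("INSERT INTO tags VALUES (?, ?, ?)",
               [(1, "modality", "confocal"), (1, "cell_type", "Purkinje")])
db.execute("INSERT INTO images VALUES (2, 'embryo_04.raw')")
db.executemany("INSERT INTO tags VALUES (?, ?, ?)",
               [(2, "modality", "MPFE"), (2, "cell_type", "blastomere")])

# "All confocal images of a particular cell type" becomes a simple join:
rows = db.execute("""
    SELECT i.pixel_file FROM images i
    JOIN tags a ON a.image_id = i.id
                AND a.tag = 'modality' AND a.value = 'confocal'
    JOIN tags b ON b.image_id = i.id
                AND b.tag = 'cell_type' AND b.value = 'Purkinje'
""").fetchall()
print(rows)   # [('neuron_01.raw',)]
```

Note that only the small character-format records are searched; the gigabytes of binary photon data stay untouched on disk until a matching file name is retrieved.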

ImageJ

ImageJ is a public domain image analysis program written by Wayne Rasband of the National Institutes of Health (NIH). ImageJ is the cross-platform successor to NIH Image and, much like NIH Image, it has a vast multidisciplinary user community. What is most significant about the ImageJ effort is that many of these users are also developers, adapting ImageJ for their specialized analysis purposes and making the resulting code freely available (30). This developer support is the hallmark feature of ImageJ; a plug-in facility has been developed that allows nonprofessional programmers to develop their own analysis routines in code that can be run in ImageJ. In addition, ImageJ features a macro language that allows simple scripts to be written that can harness and execute any ImageJ function (including those in plug-ins). ImageJ excels at the 2D analysis for which it was originally designed. However, ImageJ is also capable of analyzing 3D and 4D data thanks to plug-ins such as the hypervolume browser, which allows one to explore 3D and 4D data sets, and VolumeJ, a plug-in for volume rendering.

VisBio

VisBio is a computer application that we are developing for the interactive graphical display and quantitative analysis of biological image data of arbitrary dimensionality (31). Through the VisBio interface, users are able to import microscope data in any file format and interactively explore and measure the data within 4D recordings of specimens. VisBio goes beyond the capabilities of current commercial and public domain software because it is being specially tailored to the demands of handling and animating massive data sets fluidly. Furthermore, VisBio enables the interactive representation of recordings in which each spatiotemporal pixel element contains multiple dimensions, e.g. emission intensity, color spectrum and fluorescence excited state lifetime. The program will thereby satisfy demands generated both by current instruments and by new imaging systems under development. Over the last couple of years, we have been developing a pilot implementation of VisBio in response to the need for advanced analysis tools to process data generated by novel imaging instrumentation being developed by our group and others. This implementation is available from the VisBio website as the v2.31 stable release.

Meanwhile, to guarantee the flexibility necessary for effective analysis of more exotic multidimensional data types such as spectra and lifetime, we have begun work on VisBio's third major revision, v3.00. We have remodeled the architecture of VisBio to be even more powerful and flexible. Core data and display code have been refactored, we have begun work on a VisBio–OME interface and support for several additional microscopy formats has been added. Because this new feature set offers numerous advantages over VisBio v2.31, we are releasing a series of beta versions as new functionality is implemented and existing features are ported from v2.31.

Built on existing software. VisBio has been built with the VisAD scientific visualization toolkit (http://www.ssec.wisc.edu/~billh/visad.html) and includes ImageJ; it therefore inherits a multitude of features from both tools. Work done to improve or enhance ImageJ or VisAD automatically benefits VisBio. More importantly, as additional or improved functionality in VisBio has been needed, we have implemented it in the core VisAD package, benefiting not only the biological community, but also all users of VisAD in general.

File formats. Because very few image analysis packages support every format, we have made it a priority to support every major microscopy imaging format, including multipage TIFF, Bio-Rad PIC, Zeiss LSM, Zeiss ZVI, Metamorph STK, Olympus Fluoview, Openlab LIFF and QuickTime movie, with planned support for several others (Leica, Nikon, IPLab). Support for these formats has been wrapped into the VisAD toolkit itself, so that all VisAD-based applications can take advantage of our work.

Fully multidimensional. VisBio can import image data of any dimensionality. It provides sophisticated subsampling features, and enables visualization by mapping dimensional axes to the “image stack” and “animation” parameters. Flexible export features allow data sets to be saved to a group of files in Bio-Rad PIC, multipage TIFF or QuickTime movie format.
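The axis-mapping idea can be sketched abstractly: the same 4D recording can be browsed as Z-stacks animated over time, or as time series animated over Z, simply by swapping which dimensional axis plays which role. The `get_plane` helper and the dictionary representation below are hypothetical, chosen only to make the role swap explicit; VisBio's internals differ.

```python
# Sketch of mapping dimensional axes to the "image stack" and "animation"
# parameters. The same data set is browsed two ways by swapping axis roles.
def get_plane(data, axes, stack_axis, anim_axis, stack_pos, anim_pos):
    """data maps coordinate tuples to 2D planes; axes names the
    non-spatial dimensions in the order used by those tuples."""
    coord = [0] * len(axes)
    coord[axes.index(stack_axis)] = stack_pos
    coord[axes.index(anim_axis)] = anim_pos
    return data[tuple(coord)]

# Hypothetical recording: 3 focal planes x 2 time points of tiny "images".
data = {(z, t): [[f"plane z={z} t={t}"]] for z in range(3) for t in range(2)}

print(get_plane(data, ["z", "t"], "z", "t", 2, 1))  # Z mapped to the stack
print(get_plane(data, ["z", "t"], "t", "z", 1, 2))  # roles swapped
```

Both calls reach the same stored plane; only the user's navigation metaphor changes, which is the essence of the feature.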

Generalized data engine. VisBio can work with any number of data objects simultaneously, imported from file groups on disk or computed from other data objects using built-in “data transforms.” A “subsampling” data transform enables visualization of a subset of a data object. Also implemented are a preliminary multispectral mapping algorithm that allows interactive weighting of each spectral channel (Fig. 2), a transform that computes maximum intensity projections across a given dimensional axis, one for creating image overlays in 2D and one for performing arbitrary slicing of an image stack in 3D. We have designed the data transform functionality to handle any algorithmic visualization need, including fluorescence lifetime curve fitting, spectral analysis routines, denoising and other image manipulation routines.
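Of the transforms listed, the maximum intensity projection is the simplest to illustrate: every output pixel takes the brightest value found along the collapsed axis. The sketch below uses plain nested lists and a tiny synthetic stack; it shows the arithmetic only, not VisBio's actual transform machinery.

```python
# Sketch of one "data transform": a maximum intensity projection collapses
# an image stack along one axis, here the Z-axis of a tiny synthetic volume.
def max_projection(stack):
    """stack is a list of 2D planes (lists of rows); returns one 2D image
    whose every pixel is the maximum over all planes at that position."""
    rows, cols = len(stack[0]), len(stack[0][0])
    return [[max(plane[r][c] for plane in stack) for c in range(cols)]
            for r in range(rows)]

stack = [
    [[1, 0], [0, 3]],   # z = 0
    [[2, 5], [0, 1]],   # z = 1
    [[0, 4], [9, 2]],   # z = 2
]
print(max_projection(stack))  # [[2, 5], [9, 3]]
```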


Figure 2. Seventeen-channel spectral image of a methyl green-stained section of uterus imaged with MP excitation. VisBio displays image chrominance as a weighted function of wavelength values in each channel. The left-hand image is colored according to VisBio's “best guess” mapping, with the first 1/3 of the channels equally weighted toward red, the second 1/3 toward green and the last 1/3 toward blue. The right-hand image illustrates the discrimination of nuclei by overweighting the contribution of channel 1 (660 nm) and negatively weighting channel 5 (600 nm) with respect to the red color component. This scheme reveals nuclei as red and surrounding tissue as turquoise. (Uterus section was prepared by Al Kutchera, Midwest Microtech.)
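The per-channel weighting described for Figure 2 amounts to a weighted sum over spectral channels for each output color component, with negative weights permitted and the result clamped to the displayable range. The function, channel count and weights below are illustrative assumptions, not VisBio's actual mapping code.

```python
# Sketch of weighted spectral mapping: one output color component is a
# weighted sum over the spectral channels, clamped to [0, 1]. Negative
# weights let one channel subtract from a color, as in Figure 2.
def spectral_to_component(channel_values, weights):
    v = sum(c * w for c, w in zip(channel_values, weights))
    return max(0.0, min(1.0, v))

# Hypothetical 5-channel pixel; overweight channel 0 and negatively
# weight channel 4 toward red, echoing the nuclear-discrimination example.
pixel = [0.8, 0.2, 0.1, 0.3, 0.6]
red_weights = [2.0, 0.0, 0.0, 0.0, -1.0]
print(spectral_to_component(pixel, red_weights))
```

Interactively adjusting `red_weights` (and the analogous green and blue weight vectors) is what lets the user pull one stain's signature out of many overlapping spectra.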

Modular display logic. VisBio's display logic allows the creation of any number of displays. Each data object can be added to any number of displays, and each display can contain any number of data objects. Thus it is possible to visualize two or more data sets in 3D, side by side, or two image stacks simultaneously within the same display window. The dimensional axes mapped to Z-axis and animation can be quickly switched on the fly. The user can query pixel values at any position within the display. Projection options for zooming, rotation, panning and aspect ratio control are available.

Variable resolution. VisBio makes use of multiresolution functionality, displaying data in lower resolution to improve animation speed, but “burning in” data at full resolution when more detail is needed for closer inspection. This approach not only improves rendering and animation speed, but also cuts down on memory use, because only the currently displayed time step must be maintained in memory at full resolution at any given time.
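A minimal sketch of the low-resolution proxy: downsample each plane by averaging pixel blocks, animate with the small version, and "burn in" the full-resolution original only for the plane under inspection. The 2x2 block averaging below is one simple choice of reduction; VisBio's actual scheme is not specified here.

```python
# Sketch of variable resolution: a cheap low-resolution proxy for fluid
# animation, computed by averaging 2x2 pixel blocks of the full image.
def downsample_2x(image):
    return [[(image[r][c] + image[r][c + 1] +
              image[r + 1][c] + image[r + 1][c + 1]) / 4.0
             for c in range(0, len(image[0]), 2)]
            for r in range(0, len(image), 2)]

full = [[0, 0, 8, 8],
        [0, 0, 8, 8],
        [4, 4, 2, 2],
        [4, 4, 2, 2]]
print(downsample_2x(full))  # [[0.0, 8.0], [4.0, 2.0]]
```

The proxy holds a quarter of the pixels per reduction step, so dozens of time steps can stay in memory while only the currently inspected plane is kept at full resolution.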

Flexible color mapping. VisBio provides precise control over how colors are mapped. A data set may have more than one color channel associated with each pixel. VisBio allows for complete control over each channel's color table. It also provides shortcuts to compute a color composite from all channels; to map channels to the red, green and blue color components of an RGB color space; or map them to the hue, saturation and value components of an HSV color space.
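The RGB-versus-HSV shortcut is easy to demonstrate with Python's standard `colorsys` module: the same three channel intensities produce very different screen colors depending on which color space they are read in. The channel values are arbitrary examples.

```python
# Sketch of the color-mapping shortcuts: three channel intensities sent
# either straight to RGB components, or interpreted as hue/saturation/value.
import colorsys

channels = (0.0, 1.0, 1.0)           # one pixel's three channel intensities

as_rgb = channels                     # channel i -> color component i
as_hsv = colorsys.hsv_to_rgb(*channels)

print(as_rgb)   # (0.0, 1.0, 1.0): cyan when read as RGB
print(as_hsv)   # (1.0, 0.0, 0.0): hue 0 = red when read as HSV
```

Per-channel color tables generalize this further, letting each channel carry its own lookup from intensity to display color.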

Arbitrary slicing (Fig. 3). Planes can be placed at any orientation in the 3D image stack, “slicing” it at any angle. Data along these arbitrary slices are interpolated and displayed at a user-defined resolution. Arbitrary slice computation is fast enough that animating these slices at reasonable speeds is practical.


Figure 3. VisBio performing arbitrary slicing of a C. elegans embryo undergoing cell fusion, imaged by multiphoton microscopy. The resulting slice is shown superimposed in the 3D display, as well as in the 2D display. (Data set provided by Dr. William Mohler of the University of Connecticut, Farmington, CT.)
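The geometry behind arbitrary slicing can be sketched as sampling the volume on a plane defined by an origin and two in-plane direction vectors. For brevity the sketch uses nearest-neighbor lookup; as the text notes, a real implementation interpolates the data along the slice. The helper and the tiny test volume are illustrative only.

```python
# Sketch of arbitrary slicing: sample a 3D stack on a plane given by an
# origin and two direction vectors u, v (each a (dx, dy, dz) step), using
# nearest-neighbor lookup; real implementations interpolate instead.
def slice_volume(vol, origin, u, v, nu, nv):
    """vol[z][y][x]; returns an nu x nv image sampled on the plane."""
    out = []
    for i in range(nu):
        row = []
        for j in range(nv):
            x = round(origin[0] + i * u[0] + j * v[0])
            y = round(origin[1] + i * u[1] + j * v[1])
            z = round(origin[2] + i * u[2] + j * v[2])
            row.append(vol[z][y][x])
        out.append(row)
    return out

# A 2x2x2 volume whose voxel value encodes its (z, y, x) position.
vol = [[[100 * z + 10 * y + x for x in range(2)] for y in range(2)]
       for z in range(2)]

# A diagonal plane: each step in x also steps one plane deeper in z.
print(slice_volume(vol, (0, 0, 0), (1, 0, 1), (0, 1, 0), 2, 2))
```

Because each output pixel is an independent lookup, the sampling is cheap enough to recompute as the user drags the plane, which is what makes animated slicing practical.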

Volume rendering (Fig. 4). Support for rendering each image stack as a semitransparent volume is implemented. Control over the level of transparency is provided, so that the user can locate the optimal setting that suppresses noise while preserving the important aspects of the data.


Figure 4. Three-dimensional reconstruction of a confocal data set of mouse vasculature. (Data set provided by Dr. Rich Halberg, Mr. Lance Rodenkirch and Dr. William Dove of the University of Wisconsin-Madison.)

Measurement tools. A set of tools for performing measurements on the data is available. The distance between any two points in an image stack can be computed, and important events in the data can be flagged with markers. Measurements can also be set to be standard across each slice of every time step. Lastly, if more complex analysis is desired, the measurements can be saved to a text file formatted for easy importing into Excel or another spreadsheet application.
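The arithmetic behind a point-to-point measurement is worth spelling out, because in an image stack the physical spacing between focal planes usually differs from the in-plane pixel size, so each axis must be scaled before taking the Euclidean distance. The spacing values below are illustrative, not taken from any instrument.

```python
# Sketch of a distance measurement between two points in an image stack.
# Each axis is scaled by its physical spacing (e.g. microns per pixel in
# x/y, microns per focal plane in z) before the Euclidean distance.
import math

def distance(p, q, spacing):
    """p, q are (x, y, z) in pixel/plane units; spacing gives physical
    units per step along each axis (illustrative values below)."""
    return math.sqrt(sum(((a - b) * s) ** 2
                         for a, b, s in zip(p, q, spacing)))

print(distance((0, 0, 0), (3, 4, 0), (1.0, 1.0, 5.0)))  # in-plane: 5.0
print(distance((0, 0, 0), (0, 0, 2), (1.0, 1.0, 5.0)))  # across planes: 10.0
```

Ignoring the anisotropic Z spacing is a classic source of error in 3D measurements, which is why such scaling belongs in the measurement tool rather than being left to the user.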

OME import interface (Fig. 5). We have added the ability to upload a data set from VisBio to an OME image database across the network. This functionality provides OME with an interactive, client-side, multidimensional data import procedure that works with any of the growing number of VisBio-supported microscopy file formats. We have also created a preliminary adaptation of this interface as a plug-in for ImageJ, so that image stacks within ImageJ can be imported into the OME system as well.


Figure 5. ImageJ and VisBio interacting with OME. The browser displays multiple data sets that have been uploaded to our OME database using VisBio and ImageJ.

SOFTWARE INTEROPERABILITY

As mentioned above, VisBio is capable of uploading data sets to an OME image database. This mechanism is the first step toward VisBio/OME interoperability. Many labs have gigabytes of data rolling in from their acquisition systems on a weekly basis, and the OME system provides a metadata-rich facility for storing and organizing these data. By enabling software tools to work together and communicate, a truly open analysis environment is fostered. Along these lines, we are also developing an ImageJ plug-in to communicate with OME, both for uploading data and for browsing and downloading data sets in the system, and will soon bring these features to VisBio as well.

CONCLUSION

We are currently in a time of rapid development in imaging instrumentation and techniques. These new instruments and methods are being deployed in innovative ways to further our understanding of key biological processes. With these advances, however, comes the significant challenge of analyzing the data they produce. We can now generate data sets beyond what the human eye is accustomed to, or even designed, to see. To meet this challenge, software tools and algorithms must be developed at a pace that matches the rapidly advancing instrumentation. Because of their magnitude, these needs cannot be met by the commercial or academic communities alone, but rather by a partnership of the two. Tools such as ImageJ, VisBio and the Open Microscopy Environment are a promising start because they are already interdisciplinary and widely used. However, much remains to be done; we are still far from a unified analysis infrastructure.

REFERENCES
