Keywords:

  • automated microscopy;
  • computational microscopy;
  • feedback regulation;
  • image processing;
  • live imaging

Abstract


Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable, since the exact orchestration of parameters and timings for these multiple components is critical to acquire proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high-throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system.


Introduction


Computational control of microscopes and computer-based image analysis are just as important as optics in many cases of modern microscopy (Eliceiri et al. 2012). In fact, we cannot acquire proper images in confocal microscopy or time-lapse imaging without computers. As represented by the popularity of multi-dimensional time-lapse imaging (Kato & Hayashi 2008), which can capture the dynamic status of cells and organs, the technology for computational control of multi-parametric microscope adjustment, including regulation of focus, light conditions, and stage position, has become widespread (Edelstein et al. 2010). Such technological availability in both hardware and software enables the design of feedback systems for microscopy (Conrad et al. 2011); this is valuable for monitoring dynamic phenomena of living objects (Lee & Howell 2006).

Monitoring a moving object, such as a migrating Caenorhabditis elegans, is an example of effective use of feedback regulation of a microscope, since the object can rapidly move out of the microscopic field during observation without an automatic tracking system (Ben Arous et al. 2010; Kuhara et al. 2011; Leifer et al. 2011; Piggott et al. 2011; Kocabas et al. 2012). Such an automatic tracking system usually consists of a sensor, a regulating device, and regulating software. In the case of tracking systems for migrating C. elegans, the sensor and the regulating device correspond to a camera and a motorized stage, respectively. The motorized stage is regulated according to the camera input in order to keep the target at the center of the microscopic field. Although a camera is a useful input device that can capture plenty of information, the acquired image data needs processing to extract the essential information. In the example of migrating micro-organisms, acquired time-lapse images need to be processed to extract direction and speed information for stage regulation, as in the sketch below.
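
A minimal sketch of such an extraction step, assuming grayscale frames from the camera and using OpenCV; the threshold value and frame interval are placeholders to be tuned per system.

import cv2
import numpy as np

def centroid(gray, thresh=60):
    """Centroid of the thresholded worm body in pixel coordinates."""
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(binary)
    if m["m00"] == 0:
        return None                     # no object found in this frame
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def direction_and_speed(prev_frame, next_frame, dt):
    """Movement direction (unit vector) and speed (pixels/s) between two frames."""
    p, q = centroid(prev_frame), centroid(next_frame)
    if p is None or q is None:
        return None, 0.0
    v = (q - p) / dt
    speed = float(np.linalg.norm(v))
    return (v / speed if speed > 0 else v), speed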

This article aims to introduce how to develop a feedback microscopy system to capture dynamic phenomena of living organisms. It is written for biologists as well as engineering researchers who are interested in biological research or are involved in a biological collaboration. We begin with a description of the basics of a computational microscope system and several image processing methods for feedback regulation. We then review the basics of feedback regulation systems, and discuss high-speed systems, which are important for regulating high-magnification microscopy.

Since biological systems essentially contain dynamic properties, feedback regulation systems for microscopes expand our ability to observe these essential aspects of biological phenomena.

Computational microscopy


Computational control of microscopes has become popular as microscope systems have increased in complexity, with many components needing to be orchestrated to carry out the methods of modern microscopy. Because feedback regulation for microscopes is built on computational microscope control, we first describe the components of computational microscope systems. Understanding the basic mechanisms of computational microscope regulation is indispensable for working with these systems and for extending them to new experimental designs.

Popular computationally controllable components of a microscope system include cameras, motorized stages, filter wheels and shutters, but other devices are often controllable as well. Commercial packages for computationally controllable microscopy systems are offered by most microscope vendors, but additional design and customization are often required when you set up feedback regulation. Figure 1 shows a simple diagram of controllable components and parameters of a typical epi-fluorescence microscope. These components are controlled by software to function in combination with exact parameterization and timing. The main role of the software has been as an interface for human operators to orchestrate and monitor the connected devices; however, additional programming scripts can enable automation or feedback regulation in place of human operators (cf. Fig. 2). In the case of long-term (hours to days) time-lapse observation or millisecond-scale timing control, automated regulation is indispensable. Here, we look through each component of a typical computational microscope system designed to perform automated observation.


Figure 1. Common controllable hardware for live fluorescence microscopy. Coordination of these devices with exact timing and parameters is necessary for feedback control. It is helpful to draw these kinds of schematic views for designing and refining your own microscope system.


Figure 2. Schematic view of the role of software. (a) Software manages tedious regulating commands for all connected devices with proper timing and parameters according to an operator's (or a program's) demand. (b) It also monitors the status of the connected devices. An operator or an additional program receives the coordinated information from connected devices via the software. Repetition of these control and detection steps by additional programs enables feedback regulation and thus can be used for the observation of dynamic phenomena.


Camera and sensing devices

Camera

The most crucial sensing device for microscope regulation is, of course, a camera.

In modern microscopy, camera technology is constantly advancing, with new features and better specifications released every year; camera performance therefore greatly affects both the available methods and the quality of results. As usual for such advancing technologies, although the performance of individual components is important, compatibility of the components with the rest of the system is crucial. Availability of a device driver for the controlling software is critical for working with the other components of the computational microscope system. Some device drivers may regulate only a restricted subset of the camera's functions, or be compatible only with specific software. Thus, selecting the proper camera is a critical aspect of designing an automated microscope system.

Other sensing devices

Cameras are not the only input devices; other kinds of sensors, such as photomultipliers, can provide input that is useful for the regulation of a microscope (Faumont et al. 2011). Although such sensing devices are not used in all microscope control systems, it is valuable to consider sensing devices that could provide supporting input in addition to the camera for more specialized methods.

Output device

Motorized microscope

Many commercial motorized microscopes are available from major microscope vendors. They provide pre-designed systems that are often state of the art, with highly compatible components for specific tasks such as super-resolution microscopy (Schermelleh et al. 2010); however, it is usually difficult for users to customize them. For more basic systems, some packages are composed of a combination of third-party instruments and software. These combination systems are a common option for automated microscopy in the laboratory. Additional devices include controllers for shutters, motorized wheels for selecting objective lenses or filters, and motorized stages. To expand a computational control system into feedback regulation, a combination of components from several manufacturers is usually necessary to allow flexibility, although the components in package systems provided by major vendors can often be used in combination systems.

Motorized stage

Motorized stages play a valuable role in computational microscopy since they expand several aspects of imaging ability, for example, tracking migrating organisms, monitoring multiple samples, and performing long-term time-lapse imaging with an autofocus system. Exploiting the ability of motorized stages provides many of the advantages of a computational microscope system. The control of motorized stages, however, tends to be a challenging task compared with the other devices in computational microscopy. Without a sensory system, the iterative action of these stages tends to accumulate positioning error over time. This problem becomes worse when a stage moves at high speed. Speed control also relates to the ability to track moving objects, and therefore a trade-off may exist between accuracy and speed. For precise movement control, users should always pay attention to proper calibration and to the accuracy of movement in each iterative action. Estimating the movement accuracy of a single action is essential, especially for performing iterative movements. It is also very important to be aware of the range of motion of the stage, to avoid injuring the operator or damaging nearby objects such as objective lenses.

Regulation of light source

Shutter controller

The shutter device is particularly important for live fluorescence imaging, since minimizing photo-bleaching and photo-toxicity is critical for delicate specimens. High-speed and robust ON/OFF state changes are required in many cases, and adding a shutter is considered good practice when extending a microscope system. Control problems rarely occur because of the shutter's simplicity and compatibility.

Light sources

Thanks to the development of LEDs (light-emitting diodes), progress in light sources is one of the prominent advances in modern microscope devices. A significant difference between new light sources such as LEDs and conventional light sources such as mercury or xenon lamps is the speed of switching ON/OFF: microsecond-order state changes are possible with LEDs. A selection of wavelengths is also available from the light source itself. Moreover, implementing an LED controller is flexible and low cost (Teikari et al. 2012). On the other hand, the luminous power of LEDs may not be sufficient in many cases, so further improvement is required for practical use. When controlling incandescent light sources such as mercury lamps, additional controlling devices such as shutters and filter wheels are needed, because the lamp itself is not recognized by the controlling system. Light sources like LEDs could make shutters and filter wheels unnecessary; instead, the system would communicate with the light source itself.

Software

In a computational microscope system, orchestration of the connected devices is conducted by software. As shown in Figure 2, the software takes care of the coordination of each device in response to commands from users or programs. For example, “take a snap image” includes commands for proper parameter settings for a camera, such as exposure time, gain, and binning. Moreover, other devices, like the shutter, must be coordinated with the camera and also require their own parameters and exact timing. All of these tedious commands need to be executed by the software; therefore, a crucial function of microscope software is the capability to communicate with the devices connected to the microscope. Thus, device drivers, or adaptors, are important components of computational microscope software, and availability of a proper device driver is necessary to use the device of interest.

Selecting communication protocols for connected devices is also a key issue when setting up a system with various devices. Common communication protocols such as serial communication or TTL (transistor-transistor logic) are reliable options for communicating with both commercial and non-commercial devices, since most commercial packages are compatible with these protocols and their implementation in homemade programs is simple; a sketch is given below. In other words, the availability of these simple communication protocols is a valuable point when selecting a device from the software point of view.
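
For example, a stage or shutter controller that accepts serial commands can be driven with a few lines of Python using the pyserial package; the port name and the "MOVX" command syntax below are placeholders that must be taken from the device's manual.

import serial

with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0) as port:
    port.write(b"MOVX 100\r\n")   # hypothetical command: move the X axis by 100 steps
    reply = port.readline()       # many controllers acknowledge each command
    print(reply.decode(errors="replace"))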

There are several commercial software packages for microscope regulation provided by both major microscope vendors and third parties. These packages provide the most straightforward solutions when you operate your microscope in a way that the vendor expects. However, customized setups inevitably involve various kinds of devices and complex or unique protocols for microscopy. Toolkit environments such as LabVIEW (National Instruments) or MATLAB (MathWorks) provide useful components for developing computational microscope systems and have therefore been used for many applications (Eliceiri et al. 2012). The apparent advantages of these toolkit environments are compatibility with many devices and a wealth of functional components, which tremendously reduce development costs. Although common programming languages like C or Java have advantages when a system needs high speed or compatibility with a specific device, the development period may be much longer. In the case of such handmade projects, a popular package for computer vision called OpenCV (http://opencv.org/) greatly helps to manage image processing.

The open source software Micro-Manager is becoming a common option for developing computational microscope systems (Edelstein et al. 2010). Micro-Manager works with the popular open source image processing software ImageJ (Schneider et al. 2012), and provides a flexible environment for microscope control with plenty of device drivers. Combining Micro-Manager with MATLAB or other programs extends its ability for advanced implementation of automated experiments, as in the sketch below.
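
As a concrete illustration, the following is a minimal acquisition sketch using pymmcore, a Python binding to Micro-Manager's core; the adapter search path, configuration file name, and channel preset are assumptions that depend on the local installation.

import pymmcore

core = pymmcore.CMMCore()
core.setDeviceAdapterSearchPaths(["/opt/micro-manager"])   # assumed install path
core.loadSystemConfiguration("MMConfig_demo.cfg")          # assumed config file

core.setConfig("Channel", "GFP")   # assumed channel preset defined in the config
core.setExposure(50.0)             # exposure time in milliseconds
core.snapImage()                   # the core coordinates shutter and camera timing
img = core.getImage()              # the acquired frame as a numpy array

A script like this, placed inside a loop together with image-processing code, is the skeleton of the automated regulation discussed in the following sections.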

Building a computational microscope system

When designing a computational microscope system, it is important to plan the system as a whole from the beginning of the project. In fact, gathering components without keeping the entire system in mind can lead to an uncoordinated system for which suitable devices cannot be chosen despite their vast availability. Setting a clear and practical goal helps to narrow the choice of necessary devices and software from the available options.

Although compatibility between the connected devices and software is necessary for orchestrating the computational microscope system, as described in the previous sections, this might not be enough to satisfy the purpose of the system. Even with complete compatibility between all devices and the regulating software, interference among the connected devices can occur when the microscope system runs as a whole. In particular, the reaction speed of each device is an important point to consider when designing a feedback microscope system. Differences between the reaction speeds of a set of devices that work together in a feedback task result in a bottleneck, with the system operating at the speed of the slowest device. The choice of computer used to run the software can also cause such a problem: running processes become slow when regulating multiple tasks approaches the limits of available RAM (random access memory) and processing power. Combining multiple systems or computers can avoid these problems, although the system increases in complexity.

Data storage and image analysis are practically important components for conducting a series of experiments, because a computational microscope system often generates numerous images to analyze. These components can be separated from data acquisition and device regulation, which keeps the computational microscope system simple. However, in the case of feedback regulation such as a tracking system, real-time image analysis is required. The following section discusses real-time image processing for microscope regulation in order to construct a feedback system.

In most cases, there are several ways to design a computational microscope system for a given experiment; however, the optimal system changes all the time because the development of devices and software is constantly advancing. Including flexibility when designing the system helps avoid constructing a system that quickly becomes outdated or difficult to upgrade.

Image processing for microscope regulation


The purpose of the image processing components in feedback regulation of a microscope system is to determine the parameter values for regulating the connected devices. Thus, the image processing part is a core component of feedback regulation of a microscope system. However, it is challenging to develop a practical image-processing program for feedback regulation, because an image-processing component developed for a particular microscopic image is often not suitable for other images. Although there are some common features of bio-imaging data that are often used in microscope regulation and image analysis, the variety of image qualities and experimental conditions makes it difficult to apply common algorithms and parameters to extract the features of interest (Peng 2008). The variation in implementation is also considerable when developing image-processing parts: the feature required for feedback regulation can be computed by several algorithms, and there are many ways to code an image-processing algorithm. A systematic understanding of image processing is helpful for constructing efficient image processing components for feedback regulation; here we categorize image processing for microscope regulation into two parts, preprocessing and feature extraction.

Note that an important difference between stand-alone image analysis and image processing for feedback regulation is the limit on processing time. Image processing for feedback regulation must be completed within the feedback loop time. Therefore, this time limitation should be kept in mind when selecting algorithms.

Preprocessing

Most microscopic images require preprocessing before the extraction of image features, since raw images usually contain undesirable and/or faint information. The purpose of preprocessing is to bring out the essential information in the acquired image without undesired distortion. Note that the order of processing can affect the results when several processes are used. Less preprocessing is better, because complex preprocessing can make an application too specific and cause it to lose generality.

Filtering

Filtering processes convert the input signal according to a temporal or spatial neighborhood of each sample; that is, spatial or temporal context is used to modify the input signal. For example, a median filter takes the median of a small neighborhood of the input and produces a smoothed output, which reduces shot noise while keeping the edges of objects in the input image. Although the abuse of such filtering processes can easily lead to inappropriate distortion, a proper choice of filtering processes can reduce noise and highlight the required features in the image without distorting the essence of the original information.

The choice of filtering process is usually determined empirically, since the variety of image inputs and microscope systems makes it difficult to select a proper filter analytically. One should also consider processing time when selecting the filtering process, since all tasks, including image processing and device regulation, must be completed within one feedback loop; filtering can sometimes consume an excessive amount of processing time, as the sketch below illustrates.
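
A minimal sketch, assuming OpenCV and a synthetic stand-in for a camera frame, that applies a median filter and checks its cost against a feedback-loop time budget:

import time

import cv2
import numpy as np

img = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in for a camera frame

t0 = time.perf_counter()
smoothed = cv2.medianBlur(img, 3)   # 3 x 3 median: suppresses shot noise, preserves edges
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"median filter took {elapsed_ms:.2f} ms")  # must fit within the loop time budget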

Binarization

Binarization, or making binary images, is widely used in image analysis for identifying morphological properties and for detecting a position of interest in an image. The most critical step in binarization is the selection of an appropriate threshold. Although there are many methods to determine the threshold parameters (Sezgin & Sankur 2004), the optimal method for a given image cannot easily be determined, because it depends on the interests of the researcher. Usually, a histogram of the entire image intensity is used to determine the proper threshold.

In the case of feedback regulation for a microscope, the choice of threshold parameters is not only important for image processing but is also constrained by the selection of proper parameters for lighting and camera regulation. Therefore, the threshold parameters should be verified with several images at various microscope settings in order to optimize them for robust feedback regulation, for example as sketched below.
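
A small sketch of such a verification, assuming OpenCV and placeholder file names for test images taken at different settings; a large spread of the automatically chosen thresholds suggests fragile settings.

import cv2

for path in ["dim.png", "normal.png", "bright.png"]:   # placeholder test images
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    t, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    print(f"{path}: Otsu threshold = {t}")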

Feature extraction

Feature extraction from preprocessed or raw images generates values that determine the parameters for regulating the connected devices. These values can be used both directly and indirectly. In direct use, extracted values are put into devices almost as they are; for example, putting positional coordinates into a motorized stage and moving the microscopic field to a target object. Indirect uses compare the extracted values against criteria in order to make decisions during microscope regulation, for example, finding a cell of the proper size in an acquired image and deciding whether to continue taking detailed images.

Representative values

Since an image is a high-dimensional dataset with many pixels, reducing the dimensionality is a standard approach to extracting useful information. A statistically representative value that indicates a specific aspect of an image is therefore the simplest means of determining parameters for the connected devices or of categorizing the image. The average and the variance of the pixel intensities are often used as indices of the image: for example, the variance of the pixel intensities can be used as an index of focus (Yeo et al. 1993), as in the sketch below. A histogram of the intensities in an image is also useful as a profile of the image; Otsu's method uses the histogram of image intensities to determine the threshold for binarization (Otsu 1979). These values can also be extracted within a region of interest; for example, the average intensity of the pixels in a thresholded cell region can be used as an index of each object, and local image entropy has been used for detecting nuclear regions (Hamahashi et al. 2005).
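
A minimal sketch of the variance-based focus index, assuming z_stack is a list of grayscale frames acquired at different Z positions:

import numpy as np

def focus_index(img):
    return float(np.var(img))   # sharper images have higher intensity variance

best_z = max(range(len(z_stack)), key=lambda i: focus_index(z_stack[i]))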

Segmentation, edge detection, skeletonization

Segmentation of the acquired image to identify the region of interest is a basic step of feature extraction during image processing. Properly thresholded binary images are often used to separate target objects, such as a cell or a nucleus, from the background. After the identification of such a target region, the edge of the target object is a generally useful image analysis property. The length of the edge is a good morphological index for the target object, particularly in combination with the area that the object occupies. Calculating the curvature of the detected edge can provide another type of information about the shape of the target object; a short sketch follows.
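
A sketch of these measurements with OpenCV, assuming binary is a thresholded 8-bit image containing the target as its largest object:

import cv2
import numpy as np

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
target = max(contours, key=cv2.contourArea)         # largest object = target region
area = cv2.contourArea(target)
edge_length = cv2.arcLength(target, closed=True)
circularity = 4 * np.pi * area / edge_length ** 2   # 1.0 for a perfect circle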

The skeletal structure of a binary object, called the skeleton, is also a generally useful image analysis property. Although a simple skeletonization algorithm often produces undesired small branches, depending on the structure of the input binary image, additional processing can prune these branches and generate a more accurate result (see the sketch below). The resolution and shape of the binary images can affect the definition of both the edge and the skeleton.
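
A minimal sketch using scikit-image, assuming binary is the thresholded worm image; branch pruning itself is omitted here:

from skimage.morphology import skeletonize

skeleton = skeletonize(binary > 0)        # boolean midline of the object
midline_length_px = int(skeleton.sum())   # rough body-length estimate in pixels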

Feedback regulation by image processing


Combining computational control of a microscope with real-time image processing enables the construction of a feedback system, which interactively acquires microscopic images following dynamic biological phenomena. Features extracted from the acquired image determine new parameters for regulating the microscope, and the next image is taken with the new parameters. The approach can be applied both to relatively fast regulation, such as automatic tracking systems for moving objects, and to slow regulation, such as screening in a multi-array microscope system.

In the case of fast regulation, a system tends to be fragile because of its dynamics. To prevent such vulnerability, we discuss how to deal with these extracted features for feedback regulation based on control theory.

Feedback regulation

Control theory is a mathematical framework to model, design and analyze the behavior of dynamical systems. Dynamical systems are systems described by differential equations, such as the equations of motion for mechanical systems and the Kirchhoff equations for electrical circuits. A system is usually described by system inputs and system outputs, and the objective of “control” is to regulate the system outputs to desired values. Sensory feedback is used to implement regulation that is robust against external disturbances and parameter fluctuations. The sensors detect the system outputs and send the signals to the controller. The controller compares the measured outputs with the reference signals and computes the system inputs. Figure 3 shows a block diagram of the feedback control system.


Figure 3. Block diagram of feedback control. The objective of a feedback system is to regulate the system output to a desired value. The system output value is measured by a sensor and the measured output is compared with a reference. The reference is actually the desired value of the system output measured by the sensor. The difference between the reference and the measured output is sent to the controller, which adjusts the system input.


Mathematical model of the system

First we assume that the system is linear, single input and single output. Then the system model is given by

  dx(t)/dt = a x(t) + b u(t)    (1)

where t is the time, x(t) is the system output, u(t) is the system input, a and b are constant parameters that define the property, i.e., time constant, stability, oscillation etc., of the system. The sensor is modeled by

  y(t) = c x(t)    (2)

where y(t) is the measured output and c is a constant expressing the sensor characteristics. The single-input single-output system (1) is called a first-order lag system or a leaky integrator. The condition for the system (1) to be stable is a < 0, which defines the time constant τ = 1/|a|.

A typical feedback controller is given by

  u(t) = k (r(t) − y(t))    (3)

where r(t) is the reference, r(t) − y(t) is the measured error, and k is a constant called the “feedback gain.” The purpose of the feedback control is to let the system output converge to the desired value. In order to do so, the output measured by the sensor is compared with the reference. Substituting Eqns (2) and (3) into (1) yields

  dx(t)/dt = (a − bck) x(t) + bk r(t)    (4)

The stability, robustness and disturbance rejection properties are then discussed using these equations. The feedback control (3) with a large feedback gain k is effective in making the system stable or improving the time constant, especially when the system model is exact and the system contains no delay.

Discrete time systems

Continuous time systems are modeled by differential equations and are thus easy to analyze. Practically, however, all elements of the system are digital devices, which means that all components have their own clocks. Because these components work in parallel, the component with the longest processing time limits the total working rate, and the sum of the processing times of all components defines the total delay. Multi-rate feedback control has been discussed (Kranc 1957), but for simplicity we assume that all elements are synchronized with a single processing time T. Then the system can be modeled by a set of difference equations

  x[n+1] = f x[n] + g u[n]    (5)
  y[n] = h x[n]    (6)
  u[n] = k (r[n] − y[n])    (7)

where n is the time index; x, u, y, r are the system state, input, output and reference, respectively; and signals with brackets “[ ]” are expressed in discrete time. The parameters f, g, h are constants expressing the given system, and k is the design parameter (feedback gain). With t = nT, the differential Equation (1) and the difference Equation (5) have the same behavior at the sampling instants (Kailath 1980).

We give a simple example in which the system is stable, and show the equivalence of the continuous and discrete representations. For example, when a = −4, b = 1, c = 4 and r(t) = 1 (t > 0), we have the response shown in Figure 4a, which is a first-order lag with time constant 0.25. The equivalent discrete time system can be derived by the continuous-to-discrete transformation f = e^{aT}, g = (b/a)(e^{aT} − 1) and h = c. The step response of the discrete time system (5) is shown in Figure 4b; one can see that the two systems are equivalent at the sampling instants. The sketch below verifies this equivalence numerically.
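
A short numerical check of this equivalence, assuming a zero-order hold over each sampling period T; the step input u = 1 drives both representations.

import numpy as np

a, b, c, T = -4.0, 1.0, 4.0, 0.05
f = np.exp(a * T)                     # continuous-to-discrete transformation
g = (b / a) * (np.exp(a * T) - 1.0)   # zero-order-hold equivalent input gain
h = c

x = 0.0
for n in range(11):
    t = n * T
    y_cont = 1.0 - np.exp(a * t)      # closed-form continuous step response
    y_disc = h * x
    print(f"t = {t:.2f}  continuous = {y_cont:.4f}  discrete = {y_disc:.4f}")
    x = f * x + g * 1.0               # advance one sampling step with u = 1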


Figure 4. Response to a step change of the reference. (a) When the system is in continuous time, the response is exponential. (b) A discrete time equivalent can be obtained; the values of the response coincide with the continuous version at the sampling instants. (c) When feedback control is introduced, we can design an ideal step response: the system output coincides with the reference input after just one sampling time delay. (d) However, if the feedback system contains a one-step time delay, the system output oscillates strongly. (e) For a larger (two-step) delay, the system becomes unstable. (f) A smaller feedback gain (k = 0.5/gh) gives a stable response.


Dead-beat controller

One may think that the measured error r[n] – y[n] should be compensated in one step. This can be achieved by

  u[n] = (r[n] − f y[n]) / (gh)    (8)

The resulting dynamics can easily be checked by substituting (8) into (5) and (6), yielding

  x[n+1] = r[n] / h    (9)
  y[n+1] = r[n]    (10)

which clearly shows that y becomes equal to r within one step. This feedback law is called a “dead-beat” controller. Figure 4c shows the step response. This property is ideal when all the parameters are known exactly and there is no external noise or feedback delay. Usually, however, our model contains errors, and the system contains delays for various reasons, including image acquisition, image processing and the PC interface. The situation then becomes different because of these delays.

Suppose that our system includes a time delay of one step. Then the control law is written as

  u[n] = (r[n−1] − f y[n−1]) / (gh)    (11)

and the step response is given in Figure 4d. The system is stable but oscillatory. For a larger delay, the system becomes unstable (the case of a two-step delay is given in Fig. 4e). A smaller feedback gain gives a stable response (the case of k = 0.5/gh is given in Fig. 4f). The sketch below reproduces these cases numerically.
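
A minimal simulation of these cases, under the same assumptions as the previous sketch and with the dead-beat law (8) scaled by a gain factor; the delay argument shifts the measurement used by the controller.

import numpy as np

a, b, c, T = -4.0, 1.0, 4.0, 0.05
f, g, h = np.exp(a * T), (b / a) * (np.exp(a * T) - 1.0), c

def simulate(delay, gain_scale=1.0, steps=15):
    """Feedback u[n] = gain_scale * (r - f * y[n - delay]) / (g h) with r = 1."""
    x, r = 0.0, 1.0
    y_hist = [0.0] * (delay + 1)      # pre-history before the step
    outputs = []
    for n in range(steps):
        y = h * x
        y_hist.append(y)
        y_meas = y_hist[-1 - delay]   # the controller sees a delayed measurement
        u = gain_scale * (r - f * y_meas) / (g * h)
        outputs.append(round(y, 3))
        x = f * x + g * u
    return outputs

print(simulate(delay=0))                  # settles in one step (cf. Fig. 4c)
print(simulate(delay=1))                  # stable but oscillatory (cf. Fig. 4d)
print(simulate(delay=2))                  # unstable (cf. Fig. 4e)
print(simulate(delay=1, gain_scale=0.5))  # smaller gain: damped, stable (cf. Fig. 4f)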

Design of controller

As the previous example shows, a dead-beat controller easily becomes unstable when the system includes delay. These phenomena (delay plus high feedback gain leading to oscillation) are frequently observed in daily life, for example in lasers, heart-beats, and microphone-speaker howling. From the stabilization point of view, an easy solution is to decrease the gain or insert damping elements. With low gain, however, the tracking performance of the feedback system (the time required to follow a reference change) is degraded: the measured error is not compensated in one sampling time, and several loops are required for the error to converge to zero. If hardware improvement is an option, the system can be re-designed to yield a smaller sampling delay T. Real-time operating systems (Tanenbaum 2008) are also very important for keeping the worst-case delay small.

Feedback regulation of discrete time systems

Suppose that the motorized stage is used to keep a moving target at the center of the image. Let the state be the relative position between the target and the center of the observed image in the microscope. The objective of the feedback regulation is to converge this relative position to zero. The camera captures the target image I = [I11, I12, …, Imn], where m and n are the numbers of camera pixels in the vertical and horizontal directions and Iij is the brightness of the [i, j]-th pixel. This image contains sufficient information to control the motorized stage (“Result” in Fig. 5). After some manipulation of the target image, the target position displacement in the image c = [cx, cy] is calculated; this is the information needed to control the stage (“State” in Fig. 5). The controller then transforms the displacement into the motor displacement or the motor velocity u (“Control input” in Fig. 5). This transformation is called the control law. The input command drives the motors of the stage system (“Action” in Fig. 5), which results in stage movement and changes the microscope image (“Result” in Fig. 5), where the target motion is considered a disturbance. The outer reference signal r = [0, 0] represents the desired displacement in the image (keeping the target at the image center). A sketch of this pipeline follows.
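
A minimal sketch of this visual feedback pipeline, assuming OpenCV, a grayscale frame from the camera, and a hypothetical stage object whose set_velocity call stands in for the motor driver interface:

import cv2
import numpy as np

def displacement_from_center(image, thresh=50):
    """Compute c = [cx, cy], the target displacement from the image center."""
    _, binary = cv2.threshold(image, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(binary)
    if m["m00"] == 0:
        return np.zeros(2)                      # no target: command zero velocity
    center = np.array([image.shape[1] / 2.0, image.shape[0] / 2.0])
    target = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    return target - center

K = 0.8                                   # feedback gain; tune per system
r = np.zeros(2)                           # reference: keep the target centered
c = displacement_from_center(frame)       # "State" in Fig. 5
u = K * (r - c)                           # control law: proportional velocity command
stage.set_velocity(u[0], u[1])            # hypothetical motor driver call ("Action")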


Figure 5. Block diagram of visual feedback control. “Robot” here means the motorized stage to be controlled. The relationship between the objective lens and the target is measured by the camera. The microscope image is processed by the computer/software and the target position is calculated. The displacement of the target from the center of the microscope view is considered the measured error. The displacement is transformed into the motor drive voltage, which is applied to the motor. The motor is usually in velocity control mode, so the motor velocity is changed. The motor drives the XYZ stage.


Increasing the image acquisition/processing speed

The camera measures the displacement of the observed target. If the camera acquisition speed (frames per second: fps) is high, the target motion between successive frames becomes small, which considerably relaxes the difficulty of the image processing. Moreover, because the differences between successive frames are small, an image processing algorithm converges much more quickly, which decreases the computational time required for image processing. When we need real-time high-speed feedback, both image acquisition time and image processing time are very important factors for realizing high-performance tracking.

Commercially available cameras for feedback control

Recent development of digital computer interfaces (USB, FireWire, Thunderbolt, GigE) gives us access to high-speed cameras. The image acquisition speed and maximum resolution are limited by the image capturing device (CMOS, CCD), while the digital interface speed (bytes transferred per second) limits the image transfer. When binning and ROI settings are used, the number of pixels transferred can be reduced. Table 1 shows several examples of image acquisition rate as a function of digital interface and resolution, and a rough bandwidth estimate is sketched after the table. Development of image processing algorithms and parallel processing technology, including the use of GPUs (Zang & Hashimoto 2010), is also important for improving the feedback rate.

Table 1. Data transfer rate and frames per second (fps) for commercially available digital interface cameras. VGA: 640 × 480 pixels, SXGA: 1280 × 1024, Full HD: 1920 × 1080, QSXGA: 2560 × 2048. These data are from the specification sheets of typical commercially available cameras.

Interface       Data transfer rate (Gbps)   VGA (fps)   SXGA (fps)   Full HD (fps)   QSXGA (fps)
USB 2.0         0.48                        60          23           12              6
FireWire 800    0.8                         120         70           26              11
GigE            1                           205         120          105             56
CameraLink      2.2                         1600        500          300             180
USB 3.0         5                           500         150          60              60
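
As a rough cross-check of these figures, the interface bandwidth alone gives an upper bound on the frame rate; real cameras come in below this ceiling because of sensor readout limits and protocol overhead. A minimal sketch:

def max_fps(bandwidth_gbps, width, height, bits_per_pixel=8):
    """Theoretical fps ceiling from interface bandwidth alone."""
    bits_per_frame = width * height * bits_per_pixel
    return bandwidth_gbps * 1e9 / bits_per_frame

print(max_fps(0.48, 640, 480))   # USB 2.0 at VGA: ~195 fps ceiling vs. 60 fps in Table 1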

High-speed tracking system for C. elegans


This section summarizes the high-speed system we have developed, exemplified by a system that tracks the motion of C. elegans (Maru et al. 2010). The system is composed of a digital camera that transfers images to the main memory of the connected PC (Grasshopper GRAS-03K2M-C by Point Grey Research, IEEE 1394b, 200 fps at VGA resolution; many camera types with different digital interfaces are available, and we selected one with the highest frame rate at a reasonable price), a three-axis DC servo motor stage (HV-STU02-1, commercially available from the HawkVision company; several stage configurations are available, the stage specifications are customizable for different types of microscopes, and videos of tracking C. elegans are on the company's web page), and a PC with a real-time Linux operating system (also commercially available from HawkVision, customizable). The stage is adjustable for several types and makers of optical microscope. DC servomotors are used because the stepping motors used in popular XY stages are not suitable for the smooth, floating stage motion necessary to improve fluorescence observation quality. The PCI interface boards for the motor drivers (PCI-3329 DA, PCI-6205 counter and PCI-2798 DIO, commercially available from the Interface Corporation; several types are available) are installed in the PC, and real-time Linux driver software packages for these boards are also available from the Interface Corporation. The sampling frequency of the hardware is 1 kHz (a standard for industrial motors; a real-time OS is necessary to keep the 1 ms sampling time, and usual operating systems such as Windows and MacOS cannot respond at this frequency). The camera images are transferred to the PC every 5 ms, and the image data are interpolated to generate the motor commands (Obara et al. 2011).

Cameras and image processing algorithms for tracking C. elegans

A photograph of our C. elegans tracking system is shown in Figure 6a. The bright field CCD camera (Point Grey, GRAS-03K2M-C) is the sensor that measures the target motion, and the EMCCD camera (Hamamatsu Photonics K.K., ImageEM 9100-13) is used for fluorescence observation. The bright field CCD camera works in the infrared band, which usually does not interfere with fluorescence observation. In other words, the system can track the motion of the target while fully exploiting fluorescence observation. Because infrared or relatively long-wavelength transmission light is used for motion detection, tracking based on bright field images can be achieved without any fluorescent markers, and it is robust against brightness changes of the fluorescence or wavelength shifts of the Ca2+ sensors. These bright field images can also be used to adjust the Z depth.


Figure 6. Tracking microscope system. (a) A photograph of the system. The stage moves in XYZ, and an EM-CCD is added to observe the fluorescence signal. (b) Conceptual diagram of the cell tracking. The microscope image is captured by a high-speed camera and processed by the PC. The XY stage velocity is controlled to keep the target in the center of the microscope view.


Several image processing algorithms are implemented on the PC using OpenCV. Binarization-based tracking can be used to track the whole body of C. elegans, which robustly keeps the target in the field of view (see the 'Binarization' part of the Image processing for microscope regulation section). A pattern matching tracker has also been developed to track a specified part, for example the pharynx of C. elegans. The algorithm used for the pattern matching is based on an image brightness map (Baker & Matthews 2004; Benhimane & Malis 2004), and a high-speed implementation on a GPU (Zang & Hashimoto 2010) was developed to adapt to the shape changes of the C. elegans. This mode is useful for observing Ca2+ activity in neurons around the nerve ring (Maru et al. 2011). The tracking algorithm was developed by the Tohoku University team.
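
As a simpler stand-in for the brightness-map alignment used here, the idea of pattern tracking can be sketched with OpenCV's normalized cross-correlation; frame and template are assumed to be grayscale arrays, and the acceptance threshold is a placeholder.

import cv2

res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)    # best-matching location (x, y)
if max_val > 0.6:                              # empirical acceptance threshold
    x, y = max_loc
    h, w = template.shape
    template = frame[y:y + h, x:x + w].copy()  # refresh template to follow shape change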

Observation example

Several examples of tracking results are given in this section. First, using our system, we succeeded in simultaneously recording the behavioral changes of C. elegans and its neural activity, using a fluorescent calcium probe called cameleon (Nagai et al. 2004). The behavior of a freely moving C. elegans is recorded in the bright field images, and the fluorescence images of the ASER neuron are obtained accordingly. The middle column of Figure 7 shows the fluorescence images of CFP and YFP in ASER. With the support of our tracking system, ASER can be kept inside the visual field of the microscope. However, the motion of ASER within the obtained images still disturbs the analysis of Ca2+ concentration. Therefore, off-line motion estimation is applied to stabilize the neuron of interest. White rectangles represent the ROI for off-line motion estimation. The right column shows the stabilized fluorescent region obtained by applying the inverse motion transformation.


Figure 7. Simultaneous time-lapse imaging of bright field and fluorescence images of ASER neuron of a freely moving C. elegans. YFP (yellow fluorescent protein) and CFP (cyan fluorescent protein) channels of fluorescence images are simultaneously acquired.


Optogenetic stimuli device

Optogenetics is an important technique for the functional analysis of neurons in a network. We want to deliver light stimuli to multiple optogenetically modified neurons in different colors and at different timings. A projector (BenQ MP515) is mounted in our system as a programmable light source for optogenetics (cf. Fig. 8a). The projector casts a pattern of RGB-controlled dots over multiple pixels whose locations are controlled by the image processing. Experimental results of light stimulation of C. elegans are shown in Figure 8b. An area where the neurons of interest are located is chosen as the ROI, represented as a white box. With the support of our automatic tracking microscope, the ROI is kept in the center of the visual field. Two projection patterns are prepared: one with all pixels black, meaning no stimulus, and the other with a white dot in the center of the black background, for a point light stimulus. By switching these two patterns, a point light stimulus is turned on from 33 ms to 66 ms on the ROI of the C. elegans (Fei & Hashimoto 2012).


Figure 8. (a) Optogenetic stimuli device. The target motion is tracked using the high-speed camera, and the image is also used to control the pattern of the projector light. (b) Time-lapse images of light-stimulated Caenorhabditis elegans.


Conclusion


As microscopic methods develop, more and more computational techniques are being incorporated into modern microscopy. The use of computers is not only for automating tedious procedures, but also for combining each step of microscope operation, including analysis, thereby expanding our ability to monitor dynamic biological phenomena. The framework of feedback regulation by image processing is one such expansion of modern microscopy. The combination of these computational microscopy approaches and image analyses accelerates quantitative data acquisition, and thus sheds light on the mechanisms underlying the dynamic properties of biological phenomena.

Acknowledgments


This work was supported by JSPS KAKENHI Grant Numbers 21246040, 21115502, 23135501, 23650080, 23115507 and 24700302. We acknowledge members of our laboratories, in particular A. Giles for useful comments and proofreading, and X. Fei for revision of the figures and comments.

References

  • Ben Arous, J., Tanizawa, Y., Rabinowitch, I., Chatenay, D. & Schafer, W. R. 2010. Automated imaging of neuronal activity in freely behaving Caenorhabditis elegans. J. Neurosci. Methods 187, 229–234.
  • Baker, S. & Matthews, I. 2004. Lucas-Kanade 20 years on: a unifying framework. Int. J. Comput. Vision 56, 221–255.
  • Benhimane, S. & Malis, E. 2004. Real-time image-based tracking of planes using efficient second-order minimization. IEEE/RSJ Int. Conf. Intelligent Robots and Systems 1, 943–948.
  • Conrad, C., Wünsche, A., Tan, T. H., Bulkescher, J., Sieckmann, F., Verissimo, F., Edelstein, A., Walter, T., Liebel, U., Pepperkok, R. & Ellenberg, J. 2011. Micropilot: automation of fluorescence microscopy-based imaging for systems biology. Nat. Methods 8, 246–249.
  • Edelstein, A., Amodaj, N., Hoover, K., Vale, R. & Stuurman, N. 2010. Computer control of microscopes using μManager. Curr. Protoc. Mol. Biol. 92, 14.20.1–14.20.17.
  • Eliceiri, K. W., Berthold, M. R., Goldberg, I. G., Ibáñez, L., Manjunath, B. S., Martone, M. E., Murphy, R. F., Peng, H., Plant, A. L., Roysam, B., Stuurman, N., Swedlow, J. R., Tomancak, P. & Carpenter, A. E. 2012. Biological imaging software tools. Nat. Methods 9, 697–710.
  • Faumont, S., Rondeau, G., Thiele, T. R., Lawton, K. J., McCormick, K. E., Sottile, M., Griesbeck, O., Heckscher, E. S., Roberts, W. M., Doe, C. Q. & Lockery, S. R. 2011. An image-free opto-mechanical system for creating virtual environments and imaging neuronal activity in freely moving Caenorhabditis elegans. PLoS ONE 6, e24666.
  • Fei, X. & Hashimoto, K. 2012. Exploration of brain function through behavior, neural activity observation, and optogenetic manipulation. Int. Symp. Optomechatronic Tech. 1–7.
  • Hamahashi, S., Onami, S. & Kitano, H. 2005. Detection of nuclei in 4D Nomarski DIC microscope images of early Caenorhabditis elegans embryos using local image entropy and object tracking. BMC Bioinformatics 6, 125.
  • Kailath, T. 1980. Linear Systems. New Jersey, USA: Prentice Hall.
  • Kato, K. & Hayashi, S. 2008. Practical guide of live imaging for developmental biologists. Dev. Growth Differ. 50, 381–390.
  • Kocabas, A., Shen, C. H., Guo, Z. V. & Ramanathan, S. 2012. Controlling interneuron activity in Caenorhabditis elegans to evoke chemotactic behaviour. Nature 490, 273–277.
  • Kranc, G. 1957. Input-output analysis of multirate feedback systems. Trans. Automat. Contr. 3, 21–28.
  • Kuhara, A., Ohnishi, N., Shimowada, T. & Mori, I. 2011. Neural coding in a single sensory neuron controlling opposite seeking behaviours in Caenorhabditis elegans. Nat. Commun. 2, 355.
  • Lee, S. & Howell, B. J. 2006. High-content screening: emerging hardware and software technologies. Methods Enzymol. 414, 468–483.
  • Leifer, A. M., Fang-Yen, C., Gershow, M., Alkema, M. J. & Samuel, A. D. 2011. Optogenetic manipulation of neural activity in freely moving Caenorhabditis elegans. Nat. Methods 8, 147–152.
  • Maru, M., Igarashi, Y., Arai, S. & Hashimoto, K. 2010. Fluorescent microscope system to track a particular region of C. elegans. IEEE/SICE Int. Symp. Syst. Integration 347–352.
  • Maru, M., Chen, M. & Hashimoto, K. 2011. Visual servo microscope for locking on single neuron of a worm. IEEE Int. Conf. Robot. Biomimetics 2844–2849.
  • Sezgin, M. & Sankur, B. 2004. Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 13, 146–165.
  • Nagai, T., Yamada, S., Tominaga, T., Ichikawa, M. & Miyawaki, A. 2004. Expanded dynamic range of fluorescent indicators for Ca2+ by circularly permuted yellow fluorescent proteins. Proc. Natl Acad. Sci. USA 101, 10554–10559.
  • Obara, T., Igarashi, Y. & Hashimoto, K. 2011. Fast and adaptive auto-focusing algorithm for microscopic cell observation. IEEE/RSJ Int. Conf. Intelligent Robots and Systems 7–12.
  • Otsu, N. 1979. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9, 62–66.
  • Peng, H. 2008. Bioimage informatics: a new area of engineering biology. Bioinformatics 24, 1827–1836.
  • Piggott, B. J., Liu, J., Feng, Z., Wescott, S. A. & Xu, X. Z. 2011. The neural circuits and synaptic mechanisms underlying motor initiation in C. elegans. Cell 147, 922–933.
  • Schermelleh, L., Heintzmann, R. & Leonhardt, H. 2010. A guide to super-resolution fluorescence microscopy. J. Cell Biol. 190, 165–175.
  • Schneider, C. A., Rasband, W. S. & Eliceiri, K. W. 2012. NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 9, 671–675.
  • Tanenbaum, A. S. 2008. Modern Operating Systems. New Jersey, USA: Pearson-Prentice Hall.
  • Teikari, P., Najjar, R. P., Malkki, H., Knoblauch, K., Dumortier, D., Gronfier, C. & Cooper, H. M. 2012. An inexpensive Arduino-based LED stimulator system for vision research. J. Neurosci. Methods 211, 227–236.
  • Yeo, T. T. E., Ong, S. H., Jayasooriah & Sinniah, R. 1993. Autofocusing for tissue microscopy. Image Vis. Comput. 11, 629–639.
  • Zang, C. & Hashimoto, K. 2010. Using GPU to improve system performance in visual servo. IEEE/RSJ Int. Conf. Intelligent Robots and Systems 3937–3942.