Flow cytometer electronics



Flow cytometers, like all things electronic, have undergone rapid evolution since the introduction of the first commercial instruments 30 years ago. Cytometers then had so little automation and sophistication that the operator had to be expert in many different scientific disciplines to be able to coax any data at all out of the instrument. But trends in electronic components toward higher speed, smaller size, and lower power consumption have allowed the instruments to become faster, more sensitive, and actually quasi-intelligent. Automation of complex processes that once required considerable operator expertise, such as running a high-speed sort or correctly compensating a multicolor experiment, has expanded the number of flow cytometry applications and the number of flow cytometry users; there is no reason to expect that trend to stop. This review discusses the electronic components of a current “generic” flow cytometer from a functional point of view with the hope that a better understanding of what is going on “inside the box” will lead to a better understanding of how operator-selectable instrument settings can influence the quality of the data produced. © 2004 Wiley-Liss, Inc.

The successful practice of flow cytometry used to require a more than passing acquaintance with a host of disparate technologies. Members of this technology "global village" speak completely separate languages, including optics, fluid mechanics, biology, chemistry, biochemistry, electronics, and mathematical statistics. This heterogeneity made flow cytometry a field where there were a million ways things could go wrong and only one way they could go right. It is no wonder that intra- and interlaboratory quality control efforts took so long to achieve the current level of proficiency. Current "bench-top" flow cytometers can generate quality data without considerable operator expertise as long as "everything is OK." When anything is not OK, it can still take a renaissance person to figure out whether the problem is the laser, the fluidics, the alignment, the optical filters, the detector gains or voltages, the gates, or the sample.

Manufacturers of flow cytometers have dealt with this technologic complexity over the years by providing increasingly automated instruments. If you look inside an old flow cytometer, what you see is mostly fluidics: valves, regulators, sometimes exotic hand-blown glass vessels, lots of knobs, dials, and tubing, miles and miles of it. What you see inside one of today's cytometers is mostly electronics. This review describes the function of the main electronic components of a modern flow cytometer, what they do, and why they are there.

No attempt is made to describe any one specific instrument; rather, the functions all flow cytometers must perform to produce quality data comprise the focus of this review. For clarity, the electronics are divided into three sections: the detector electronics, the measuring circuits, and the computational electronics. The functional interconnections of these electronic circuits are depicted in Figure 1.

Figure 1.

Functional interconnections of electronic circuits for one of the detectors in a “generic” flow cytometer. Solid arrows represent signal pathways; dashed arrows indicate signal processing control logic connections.


The Detector Electronics
We use the term detector electronics to describe the circuits that perform "signal conditioning" functions, such as photon-to-photoelectron conversion, current-to-voltage conversion, baseline restoration, amplification, DC restoration, and discrimination. The difficulty in measuring especially dim fluorescence is that the measurements must be made in a very noisy environment, with lots of optical noise and lots of electronic noise. When a small signal is superimposed on a high noise background, the variability of the background seriously limits small-signal measurement precision; in essence, the detection limit for small signals is determined by the uncertainty of the background. The detector electronics therefore really do background conditioning: their job is to ensure that our measurements achieve the highest possible signal-to-noise ratio. Simply put, we want to measure only photons arising from the interaction of the laser beam and the sample, and no others. The "signal conditioning" functions performed by these electronics attempt to reduce the noise, thus improving the signal-to-noise ratio.

The Detectors

Because light at a variety of wavelengths is what is measured in most flow cytometers, photodiodes (PDs) and photomultiplier tubes (PMTs) are the detectors of choice because of their sensitivity. The photons coming from the sample are first converted by the detectors into photoelectrons, whose currents are then converted into voltages. Whereas PDs are actually more efficient at converting light into photoelectrons than are the photocathodes of PMTs, the PMT has the greater sensitivity because of the many (often 9 or 10) internal gain stages it contains. PMTs used in current flow cytometers can easily have a low-noise gain of tens of thousands to a million or more (1). The gain of a PMT is the lowest-noise gain available in the instrument, because the flow of electrons inside the PMT occurs in a vacuum. Electron flow in amplifiers occurs in solids, either metals or semiconductors, where the heat-driven vibration of atoms and molecules introduces randomness into the electron flow. We call this randomness noise.
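As a rough sketch of why the PMT's stacked gain stages matter, the overall gain of an idealized tube is simply the per-dynode gain raised to the number of stages. The stage counts and per-dynode gains below are assumed, illustrative figures, not values from any particular tube's datasheet:

```python
# Illustrative only: overall PMT gain as the product of per-dynode gains.
# A 9- or 10-stage tube with modest gain per dynode reaches the
# "tens of thousands to a million or more" range cited in the text.

def pmt_gain(stages: int, gain_per_dynode: float) -> float:
    """Overall current gain of an idealized PMT."""
    return gain_per_dynode ** stages

# Example figures (assumed, for illustration):
print(f"{pmt_gain(9, 3.5):,.0f}")   # 9 stages at ~3.5x each
print(f"{pmt_gain(10, 4.0):,.0f}")  # 10 stages at ~4.0x each
```

Because the gain compounds multiplicatively, a small change in the voltage applied across the dynode chain produces a large change in overall gain, which is why adjusting the PMT voltage is such a powerful instrument control.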

Typical PDs do not have gain and therefore are used only to detect bright light signals, such as forward and side light scatter at the illuminating laser wavelength. One type of PD, the avalanche photodiode (APD), does have a modest internal gain that provides a sensitivity intermediate between the PD and the PMT. But for best performance, the APD requires an external cooling system and a high-voltage supply that, together with the cost of the APD itself, approach the cost of a PMT.

For the really dim light presented by most immunofluorescence samples, the PMT is still the most sensitive detector for light at visible wavelengths.

The Trans-impedance Amplifier

Both PD and PMT detectors produce a current or flow of electrons as an output in response to an input of light photons. Because the field of electronics evolved measuring and comparing voltages, rather than currents, we must convert the detector's output to a voltage to conveniently measure it. The conversion of a current into a voltage is accomplished by allowing the current to flow through a resistor. Ohm's law predicts what happens next: the voltage produced is the product of the current times the resistance, and because the resistance is a constant, the output voltage is directly proportional to the input current. The electronic circuit that does this transformation, called the trans-impedance amplifier, can be hidden inside the PMT module itself. It converts the current of photoelectrons into a voltage and provides linear amplification of that voltage.
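The Ohm's-law relation at the heart of the trans-impedance amplifier can be sketched in a few lines; the feedback resistance value here is an assumed, illustrative figure:

```python
# Sketch of the current-to-voltage conversion performed by a
# trans-impedance amplifier: V_out = I_in * R_feedback (Ohm's law).
# The feedback resistance is assumed for illustration.

def transimpedance(current_amps: float, feedback_ohms: float) -> float:
    """Output voltage for a given photocurrent (ideal amplifier)."""
    return current_amps * feedback_ohms

R_F = 10_000.0  # 10 kOhm feedback resistor (assumed value)

# Doubling the photocurrent doubles the voltage: the conversion is linear.
v1 = transimpedance(1e-6, R_F)  # 1 uA of photocurrent
v2 = transimpedance(2e-6, R_F)  # 2 uA of photocurrent
print(v1, v2)
```

Because the resistance is constant, the output voltage tracks the input current linearly, which is what makes the later intensity measurements quantitative.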

The Baseline Restorer

PDs and PMTs are so sensitive that if you monitor the output of either, having first masked off the window so that no light can reach the sensing area inside, you will still detect a fairly steady train of low-level pulses, called the dark current, caused by "thermionic emission." These pulses come from the detection of electrons ejected from the PMT photocathode by thermal energy at ambient temperature. If the detector is cooled, the rate of these pulses diminishes.

Besides the dark current, there can be enough "stray" light reaching the detectors to influence the position of the negative population on the histogram. This light can come from Raman light scatter; from the glow of the incandescent filament and bore of the laser; from the fluorescence of components exposed to laser light, including the optical filters; and even from fluorescence of the sheath fluid. These "noise" sources have a disproportionate impact on low-level signals, effectively producing non-linearity by increasing the size of the pulses to be measured; therefore, we must subtract them from the signal path leading to the measuring circuits. The baseline restoration circuit does this. It acts as a high-pass filter, removing the DC component of the stray light and AC components of 60 Hz or less.
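One simple way to picture baseline restoration is as a slow tracker that follows the background level and is continuously subtracted from the signal, so that fast pulses ride on a near-zero baseline. The smoothing constant and sample values below are illustrative, not an actual circuit design:

```python
# Minimal sketch of baseline restoration: estimate the slowly varying
# background (DC offset plus low-frequency drift) with a long running
# average and subtract it, leaving the fast signal pulses intact.
# The smoothing constant alpha is assumed for illustration.

def restore_baseline(samples, alpha=0.01):
    """Subtract an exponentially weighted running baseline estimate."""
    baseline = samples[0]
    out = []
    for s in samples:
        baseline += alpha * (s - baseline)  # slow tracker follows the DC level
        out.append(s - baseline)            # fast pulses ride on ~zero baseline
    return out

# A constant 0.5 V stray-light offset is driven to zero; a brief pulse survives.
signal = [0.5] * 200 + [3.0, 3.0, 3.0] + [0.5] * 200
restored = restore_baseline(signal)
```

Because the tracker responds slowly, it removes the DC and low-frequency content (the high-pass behavior described above) while barely touching microsecond-scale pulses.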

The Discriminator, or Trigger

To accurately measure the photons coming from the sample, we must tell the instrument when the sample is present, so that the measurement can begin. We identify the presence of the sample by setting a “threshold level” on one of the parameters to be measured, which we call the discriminator or trigger parameter.

If the application is measuring immunofluorescence, the discriminator parameter is usually forward light scatter instead of fluorescence because all cells have some degree of light scatter and we need to be able to measure the fluorescence of all the cells, not just the bright ones.

Conversely, if the application is measuring cellular DNA content, the DNA fluorescence parameter is usually chosen to be the discriminator because the DNA dye fluorescence of non-nucleated cells is usually not of interest.

A threshold level of the discriminator parameter is chosen according to the application such that, once the output of the discriminator parameter detector exceeds the threshold level, the accumulation and storage of the signal is begun. The accumulation ends when the detector output falls below the threshold level of the discriminator parameter. Setting the threshold of the discriminator is probably the most important aspect of running a flow cytometer from the operator's point of view because the instrument can only capture data from events having a discriminator parameter value above the threshold level. Particles creating discriminator parameter levels below the threshold level are still passing through the laser beam but are not generating stored data.
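The threshold logic just described can be sketched as a simple scan for above-threshold intervals; the sample values and threshold below are illustrative:

```python
# Sketch of discriminator logic: acquisition starts when the trigger
# parameter crosses above threshold and stops when it falls below.
# Returns (start, end) index pairs for each detected event.

def find_events(trigger_samples, threshold):
    events, start = [], None
    for i, s in enumerate(trigger_samples):
        if start is None and s > threshold:
            start = i                       # acquisition window opens
        elif start is not None and s <= threshold:
            events.append((start, i))       # acquisition window closes
            start = None
    if start is not None:                   # pulse still open at end of data
        events.append((start, len(trigger_samples)))
    return events

scatter = [0.1, 0.1, 0.9, 1.8, 0.9, 0.1, 0.1, 1.2, 0.1]
print(find_events(scatter, threshold=0.5))  # → [(2, 5), (7, 8)]
```

Raising the threshold in this sketch makes small events disappear entirely, which is exactly why threshold setting is so consequential: sub-threshold particles still pass through the beam but generate no stored data.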

The Amplifier

An additional circuit present at this point in the front-end electronics may be a programmable gain amplifier. The amplifier allows smaller signals to be moved up in the eventual histogram to a point at which useful measurements can be made. However, amplifiers have an inherent noise level that can cause a reduction in the signal-to-noise ratio at higher gain settings. For this reason, it is usually best to increase the PMT voltage by using the gain of the PMT itself to amplify weak signals rather than using amplifier gain. As mentioned earlier, the gain of the PMT is the least noisy gain available in the instrument. Once the detector, trans-impedance amplifier, and baseline restoration circuits have done their jobs, the discriminator allows us to capture a voltage pulse that is a faithful representation of the light photons emitted by a cell or particle as it traverses the laser beam. Two aspects of this voltage pulse are important: the pulse height, otherwise known as the peak value, and the pulse area, also called the integral, because it represents the total area under the curve of the pulse.

Pulse Characterization: Peak and Integral Signals

Pulse height measurements are usually adequate to quantitate the light scatter or fluorescence intensity of small particles as long as the particle diameter is clearly less than the vertical dimension of the focused laser beam (2, 3). However, when the particle diameter is similar to or larger than the beam height, the particle cannot be fully illuminated in the beam at any one time; rather, the photons given off come first from the entering edge of the particle, from the middle, and then from the trailing edge as the particle passes through the beam. For this reason, the instantaneous numbers of photoelectrons collected must all be added up across the entire pulse width. This process, called integration, allows accurate quantitation of the scatter and fluorescence intensity of larger particles.
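In discrete samples, the two pulse measures reduce to a maximum and a sum. The pulse shapes below are illustrative; note that the two pulses have the same peak but different integrals, which is exactly why the integral is the appropriate measure for particles larger than the beam height:

```python
# Peak (height) vs. integral (area) of a sampled pulse. For particles
# larger than the beam height, only the integral sums the light emitted
# across the whole transit, so it is the correct intensity measure.

def peak(pulse):
    return max(pulse)

def integral(pulse):
    return sum(pulse)  # area under the curve, unit sample spacing assumed

# Two pulses with equal height but different widths (illustrative values):
narrow = [0.0, 0.5, 1.0, 0.5, 0.0]
wide   = [0.0, 0.5, 1.0, 1.0, 1.0, 0.5, 0.0]
print(peak(narrow), peak(wide))          # same peak: 1.0 1.0
print(integral(narrow), integral(wide))  # different areas: 2.0 4.0
```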

The DC Restorer

When running the sample at high data rates, amplifiers can contribute a bias, or DC offset, to the amplifier output signal, which, if not corrected, would lead to non-linearity of the measurement. The DC restorer centers electronic noise on analog ground and eliminates any offsets.

The Integrator

In early flow cytometers, integration was accomplished by using amplifiers whose bandwidth was limited by electronic low-pass filters. As a particle passed through the beam, the output of the amplifier rose so slowly that what came out was really the integral of the original pulse rather than its peak value. Market demands for ever-faster throughput forced an evolution from this "passive" integration to "active" integration, in which fast switches, responsive to the trigger or discriminator, control the charging of a capacitor. The total charge on the capacitor at the end of the "discriminator-satisfied" interval should be exactly proportional to the integral, the area under the entire pulse.

Coincidence Detection

One common problem in flow cytometry is the presence of doublets, or even higher-order aggregates, in the sample. In static cytometry, such as light microscopy, the eye has no problem differentiating one red and one green cell lying next to each other on a slide. However, in flow cytometry, with thousands of cells passing through the laser beam each second, a green cell closely followed by a red cell could be counted as a single dual-positive, red-and-green cell. To minimize this possibility, the operator selects the peak, or height, version of the discriminator parameter, because the height signal has the fastest rise and fall times, allowing the signal to drop below the threshold level in the gap between two closely spaced cells. In some instruments the discriminator-satisfied interval is translated into a "width" measurement that can be used to detect coincidences. Some flow cytometers abort the data when the likelihood is high that more than one cell is in the beam at the same time, because the data are not interpretable.
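A width-based coincidence check can be sketched as a simple comparison against the expected single-cell pulse width; the cutoff factor here is an assumed, illustrative value:

```python
# Doublet-detection sketch: a width measurement (the duration of the
# discriminator-satisfied interval) flags events whose pulses are
# abnormally long, suggesting two closely spaced cells in the beam.
# The tolerance factor is assumed for illustration.

def is_probable_doublet(width_us: float, singlet_width_us: float,
                        tolerance: float = 1.5) -> bool:
    """Flag pulses much wider than a typical single-cell pulse."""
    return width_us > tolerance * singlet_width_us

typical = 3.0  # us; single-cell pulse width from the text's later example
print(is_probable_doublet(3.1, typical))  # normal single cell
print(is_probable_doublet(5.5, typical))  # likely two cells in the beam
```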


The Measuring Electronics
The measuring electronics translate analog voltages into numbers, a process called digitization. Different manufacturers of flow cytometers have accomplished this in very different ways, but the end result is the same: the sample is ultimately represented as a list of digital values (a "list mode" file) that can be analyzed by a computer. Digitizing data has advantages because everything we do with data can be done faster in a digital format. Archiving and retrieval, transformations such as conversion from linear to log scales, fluorescence compensation, gating, and sort decision making are all precisely and quickly accomplished in the digital domain. If any or all of the above need some kind of "tweaking," this can be done by simply loading a modified version of the signal processing control logic software.

The implementation varies with each manufacturer and has been influenced by the types of analog-to-digital converter (ADC) chips available. A great deal of engineering time has been spent on the question of just where in the signal processing pathway digitization should occur, and more is mentioned about this in the following section discussing the ADC.

Peak Sample-and-Hold Circuits

The first step in the measuring process is to align in time all the values to be digitized. Some of these values may be peaks, others may be widths, and some may be integrals; this presents a timing problem for the subsequent measuring process. All the information in a peak signal is contained in the height of the pulse; we could ignore everything after the highest point and still obtain perfectly valid measurements. The width and integral, however, do not reach their final values until the end of the pulse, because both depend on the entire duration of the pulse.

To be able to digitize all the parameters being collected—peak, width, or integral—we must store them until they have reached their highest values. Peak sample-and-hold or “stretched pulse” circuits do this. They store the values of each parameter being collected until all the values have been digitized. Modern instruments have several “layers” of sample-and-hold circuits allowing pulses to be “pipelined” so that the pulses from newly arriving cells will not be lost while previous pulses are being measured.


The Analog-to-Digital Converter (ADC)
It was shown in 1975 that discrete digital samples could be combined by using a universal interpolation function to exactly “re-create” a continuous analog signal, if the sampling frequency was high enough (4). The ADC's job, digitization, is to convert analog voltages, usually between 0 and several volts, into discrete digital values, strings of 1s and 0s that correspond to integers, numbers without decimal points. If the sampling frequency and sampling resolution are high enough, the potential of digital pulse processing to provide fluorescence localization information becomes obvious.

Much of the discussion in the past 2 years on the Purdue University Cytometry e-mail discussion list concerned the merits of directly digitizing pulses immediately after the amplifier, instead of using integrators and peak sample-and-hold circuits and then digitizing the stored values. The difference between these two designs is that with direct pulse digitization you have many digitized values per parameter per event, whereas with integrators and stretched-pulse circuits you have one digitized value per parameter per event.

Either approach has advantages and disadvantages depending mainly on the performance of the ADC used. The performances of ADCs are usually described in two terms, speed in megahertz and resolution in bits.

ADC speed is important because, in direct pulse digitization, if we are to capture the height or peak value of a typical pulse with no more than 0.1% error, we need to sample that pulse about 120 times. The need for such a large number of samples comes in part from the fact that the pulses arrive at random and are not synchronized in any way to the ADC clock.

If our cytometer conditions produce 3-μs pulse widths and the ADC is performing 40 million conversions a second, we will just meet our accuracy requirement with 120 measurements per pulse. The relation between pulse sampling frequency and the percentage error in correctly capturing the height of a “typical” peak signal is modeled with MathCad 7 (Math Soft Inc., Cambridge, MA) in Figures 2, 3, and 4.
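The same experiment can be sketched in Python instead of MathCad: sample a Gaussian-shaped pulse of unit height at different rates and record the worst-case shortfall of the largest sample below the true peak over all sampling phases (since pulses arrive unsynchronized to the ADC clock). The pulse shape and width-to-sigma ratio are modeling assumptions; the trend illustrates why on the order of 120 samples per pulse are needed:

```python
# Worst-case peak-capture error vs. samples per pulse, for an assumed
# Gaussian pulse of unit height whose baseline width spans ~6 sigma.

import math

def peak_error(samples_per_pulse: int, width: float = 1.0) -> float:
    """Worst-case fractional peak error over all sampling phases."""
    sigma = width / 6.0                  # assumed: pulse base spans ~6 sigma
    dt = width / samples_per_pulse       # sampling interval
    worst = 0.0
    for phase_step in range(50):         # scan sampling phase offsets
        phase = dt * phase_step / 50.0
        best = max(                      # largest sample seen for this phase
            math.exp(-0.5 * ((k * dt + phase - width / 2) / sigma) ** 2)
            for k in range(samples_per_pulse + 1)
        )
        worst = max(worst, 1.0 - best)
    return worst

for n in (10, 30, 120):
    print(n, f"{peak_error(n):.4%}")
```

With this assumed pulse shape, 120 samples across the pulse keep the worst-case peak error well under 0.1%, consistent with the figures that follow, while sparser sampling quickly loses a few percent of the true height.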

Figure 2.

MathCad simulation of a “typical” peak signal. The x axis is time, with each box representing 10 μs. The y axis is pulse height normalized to 1. The baseline pulse width is about 30 μs, and the half-height width is 12 μs. [Color figure can be viewed in the online issue, which is available at www.interscience.wiley.com.]

Figure 3.

MathCad simulation of sampling the pulse at 4 MHz. The x axis is time in sample number, and the y axis is pulse height normalized to 1. Baseline to baseline, the pulse is sampled about 120 times. Diamonds correspond to ADC samples. [Color figure can be viewed in the online issue, which is available at www.interscience.wiley.com.]

Figure 4.

Zoomed version of Figure 3; MathCad simulation of the peak portion of the pulse. The x axis is time in sample numbers, and the y axis is the zoomed pulse height, nearing the maximum, 1. Peak error due to sampling frequency alone (not including sample-and-hold errors, integrator and ADC errors, noise, etc.) is less than 0.1% (one part in 1,000) for the 3-μs pulse sampled at 40 MHz. [Color figure can be viewed in the online issue, which is available at www.interscience.wiley.com.]

Table 1 shows how error decreases as the number of samples increases. ADC bit resolution is important because of the wide dynamic range of flow cytometry signals. A brightly stained cell such as a cytokeratin-positive tumor cell can easily be 10,000 times as bright as a negative cell; seeing the positive and negative populations on the same histogram would require at least a four-decade log display. To have a one-to-one correspondence between ADC output channels and display channels on a four-decade histogram, you would need a 20-bit resolution ADC. The problem is that, currently, 40-MHz, 20-bit ADCs are not available at any price.

Table 1. Expected Percentage Error in Correctly Capturing the Height of a Peak Signal as a Function of Pulse Sampling Frequency*
Peak detection error (%) | No. of samples

*This represents a "best case" scenario, where the peak position coincides with a sampling or falls symmetrically between two samples. Error increases above that shown in the table if either of these two conditions is not met.


The alternatives, using slower or lower-resolution ADCs, will sacrifice accuracy or leave gaps in the data, especially in the lowest decade (5). One solution to this problem is to use more than one ADC, effectively connecting them in parallel. In this design “read” commands are fed directly to the first ADC and delayed by one half the inter-sample period to the second ADC. Each ADC outputs to a separate memory, and at the end of the pulse the memories are read and the stored values are interleaved. This approach could be used in future instruments if DC offsets, gains, and bit boundaries are adequately matched between the ADCs.
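The interleaving step itself is simple; a sketch, with the two ADCs assumed to be ideal and perfectly matched in offset, gain, and bit boundaries:

```python
# Sketch of two-ADC interleaving: the second converter is clocked half
# a sample period behind the first, each writes to its own memory, and
# the two streams are merged on readout, doubling the effective
# sampling rate. Perfect matching between the ADCs is assumed.

def interleave(adc_a, adc_b):
    """Merge two equally long, half-period-offset sample streams."""
    merged = []
    for a, b in zip(adc_a, adc_b):
        merged.extend((a, b))   # A sampled first, B half a period later
    return merged

adc_a = [10, 30, 50]  # samples taken at t = 0, 2, 4 (arbitrary units)
adc_b = [20, 40, 60]  # samples taken at t = 1, 3, 5
print(interleave(adc_a, adc_b))  # → [10, 20, 30, 40, 50, 60]
```

In practice, any mismatch in DC offset or gain between the two converters shows up as a spurious tone at half the combined sample rate, which is why the matching caveat in the text matters.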

However, until faster and higher-resolution ADCs become available, the utilization of integrators and stretched pulse circuits has two compelling advantages, accuracy and speed.

Analog electronics can be designed to be extremely accurate; in fact, Shapiro has described the integrator used in one current flow cytometer as being accurate to one part in 10,000 (3).

In addition to accuracy, analog circuits can be extremely fast because they do not have to wait for interrupts or other cycle-dependent aspects of digital circuitry. The deciding factor in this “where to digitize” question should be the quality of the data produced.

Synchronizing Circuitry

If there is magic to be found anywhere in a flow cytometer, this is where it lives. Consider the enormity of the data stream: every second, thousands of cells are spending a few microseconds passing through the focused laser beam, and the resulting scattered laser light and any fluorescence are being measured and ascribed to the correct cell. No confusion, no mistakes, and the cells just keep on coming.

Current flow cytometers actually achieve a quasi-intelligence through the implementation of numerous state machines, firmware logic devices that perform entire processes. The actions of the front-end components and the measuring electronics are coordinated by synchronizing circuitry signals repeating at a rate about 1,000 times higher than the event rate. The synchronizing circuitry is where, through the use of fast clocks, shift registers, and FIFOs ("first in, first out" buffers), the instrument becomes prescient, i.e., able to decide what to do about an event before that event has "happened." Shift registers and FIFOs are digital devices that behave like analog delay lines: they preserve important pulse characteristics, such as shape and height, but delay the pulses in time. These delays allow determinations and decisions to be made about a pulse before it is presented to the ADC for digitization. This allows a greater dynamic range of measurements, because small pulses can be routed to additional amplification while large pulses are passed directly to the integrator or peak sample-and-hold circuits unchanged.

State machines continually check whether the discriminator, or trigger, is satisfied and, if so, turn on the integrators and peak sample-and-hold circuits to begin accumulating; now you could say that the "acquisition window is open." Clock signals are counted until the discriminator is no longer satisfied, generating the width signals. When the discriminator is no longer satisfied, the integrators and peak sample-and-hold circuits are disconnected; the "acquisition window is closed." State machines dynamically lengthen or shorten this acquisition window, thus ensuring capture of the entire pulse while minimizing acquisition of noise. Next, the ADC is prompted to read the voltages held in the sample-and-hold circuits and convert them to numbers, at which point the event has "happened." After the ADC is done, the data are handed off to the computational electronics, and the state machines switch the integrators and sample-and-hold circuits to 0 potential so that the next pulse can be accumulated without any contribution from the previous one. All these separate actions are repeated, thousands of times a second, by the embedded and distributed state machines running the cytometer signal processing software, synchronized by a nanosecond clock (6).
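A toy version of this acquisition state machine, operating on a stream of already-digitized samples rather than analog voltages, might look like the following (threshold and sample values are illustrative):

```python
# Toy acquisition state machine in the spirit of the text: IDLE until
# the discriminator is satisfied, ACQUIRE (integrate and track the peak)
# while it stays satisfied, then report the event and reset to zero.
# A real implementation would also close a window left open at the end
# of the data; the streams here always return below threshold.

def run_acquisition(samples, threshold):
    state, area, height, events = "IDLE", 0.0, 0.0, []
    for s in samples:
        if state == "IDLE" and s > threshold:
            state = "ACQUIRE"               # acquisition window opens
        if state == "ACQUIRE":
            if s > threshold:
                area += s                   # integrator accumulates
                height = max(height, s)     # peak sample-and-hold
            else:                           # acquisition window closes
                events.append({"peak": height, "integral": area})
                state, area, height = "IDLE", 0.0, 0.0  # reset to 0 potential
    return events

stream = [0.0, 0.2, 1.0, 2.0, 1.0, 0.2, 0.0, 0.9, 0.0]
print(run_acquisition(stream, threshold=0.5))
```

The reset to zero after each event mirrors the hardware step in which the integrators and sample-and-hold circuits are discharged so that the next pulse carries no contribution from the previous one.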


The Computational Electronics
The computational electronics, usually a personal computer near the cytometer, runs the data acquisition software, which performs functions related to data analysis and display such as instrument settings or “protocol” storage and retrieval, color compensation, gating, sorting, “ratioing,” and linear-to-log conversions.

Fluorescence Compensation

Color compensation has undergone its own evolution since the early days of flow cytometry. Twenty years ago, in the 1980s, compensation for fluorescent dye "spillover" from one color into adjacent fluorescence channels was accomplished in the front-end electronics by analog subtraction. Analog subtraction for a two-color experiment required two circuits; each circuit subtracted a user-adjustable amount of one color's signal from the other color's signal (7). This worked adequately for two colors but quickly became unmanageable as the number of fluorescent dyes used simultaneously expanded. A three-color experiment required 6 circuits, four colors required 12, and five colors required 20! Besides requiring the operator to twiddle 20 knobs, each circuit added electronic noise. Most modern flow cytometers therefore collect uncompensated data; the operator programs the computational electronics to perform compensation using the linear algebraic matrix inversion method introduced by Bagwell and Adams (8). For a five-color experiment, five equations in five unknowns must be solved to obtain correct compensation values, an easy task in matrix algebra for a modern computer.
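For two colors, the matrix inversion can be written out explicitly. The spillover fractions below are assumed, illustrative values; real compensation software solves the general n × n system the same way:

```python
# Minimal sketch of compensation by matrix inversion for two colors.
# The spillover matrix maps true dye amounts to measured detector
# signals; inverting it recovers the true amounts.
#   measured1 = true1 + s21 * true2     (dye 2 spilling into detector 1)
#   measured2 = s12 * true1 + true2     (dye 1 spilling into detector 2)

def compensate_2color(measured, spill_12, spill_21):
    """Solve [[1, s21], [s12, 1]] @ true = measured for `true`."""
    m1, m2 = measured
    det = 1.0 - spill_12 * spill_21
    true1 = (m1 - spill_21 * m2) / det
    true2 = (m2 - spill_12 * m1) / det
    return true1, true2

# True amounts (100, 50) with assumed spillovers of 15% and 5% yield
# measured signals (102.5, 65.0); inversion recovers the true values.
print(compensate_2color((102.5, 65.0), spill_12=0.15, spill_21=0.05))
```

Because this is subtraction performed on stored numbers rather than by analog circuits, adding more colors adds rows to a matrix instead of knobs and noise to the front end.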

An additional complication in modern instruments comes from adding more lasers, not collinear with the first beam but at separate sample interrogation stations. When tandem fluorescent dyes are run in a single-beam instrument, the (usually blue) laser excites the "donor" fluorochrome (usually phycoerythrin), and the acceptor dye (usually a cyanine such as Cy5 or Cy7) "accepts" the energy and radiates it at a longer wavelength. When these dyes are run on an instrument having a non-collinear red laser, the cyanine portion of the tandem dye will also absorb the red laser light and emit at the same wavelength as if it had been pumped by phycoerythrin from the blue laser. This necessitates what is called cross-beam compensation.

For the same five-color experiment, we would have to have 10 equations in 10 unknowns to solve the matrix, but a modern computer can still handle it.

Log Transformation

Linear-to-log transformation of the data has also evolved over the years; this function was originally performed by "log amplifiers," but, as the number of decades expanded, the faithfulness of the transform worsened (9). Well-designed four-decade log amps often had as much as 20% error in the lowest decade, a similar error in the highest decade, or sometimes both. At present, if log data are required, the computational electronics simply "looks up" the log value of the linear data in a log look-up table, eliminating transform error provided the resolution of the ADC is adequate.
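A minimal look-up-table sketch, assuming a 10-bit ADC mapped onto a 4-decade, 256-channel log display (illustrative sizes): the table is built once, and each event then costs only a single indexed read. With only 10 bits, the handful of ADC codes in the lowest decade cannot fill its 64 display channels, which is the "gaps in the lowest decade" problem mentioned above.

```python
# Linear-to-log conversion by look-up table: precompute the display
# channel for every possible ADC output code, then convert each event
# with one indexed read. Sizes are assumed for illustration.

import math

ADC_BITS, DISPLAY_CHANNELS, DECADES = 10, 256, 4
ADC_MAX = 2 ** ADC_BITS - 1

def build_log_lut():
    lut = [0] * (ADC_MAX + 1)
    for code in range(1, ADC_MAX + 1):
        # map codes [1, ADC_MAX] onto DECADES decades of log scale
        frac = math.log10(code) / DECADES
        lut[code] = min(DISPLAY_CHANNELS - 1, int(frac * DISPLAY_CHANNELS))
    return lut

lut = build_log_lut()
print(lut[1], lut[10], lut[100], lut[1000])
```

Codes 1, 10, and 100 land one decade (64 channels) apart, but only ten distinct codes feed the entire first decade of the display, so a higher-resolution ADC is needed to populate the low end smoothly.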

Errors in Flow Cytometer Electronics

Errors in flow cytometer electronics combine in quadrature: you square each error value, add up all the squared errors, and then take the square root of the sum to get the total error. There are errors in amplifiers, especially log amplifiers (see above); there are errors in sample-and-hold circuits (the so-called droop of held values), errors from integrator and ADC non-linearity, and the ever-present noise. The errors of all the individual circuits combine to produce the final "system" error.
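The quadrature (root-sum-square) combination can be sketched directly; the individual error values below are illustrative, not measurements of any real circuit:

```python
# Root-sum-square (RSS) combination of independent error sources:
# square each, sum, take the square root.

import math

def system_error(*errors):
    """Combine independent error contributions in quadrature."""
    return math.sqrt(sum(e * e for e in errors))

# e.g. amplifier 0.1%, sample-and-hold droop 0.05%, ADC 0.05%, noise 0.1%
# (assumed figures for illustration)
total = system_error(0.001, 0.0005, 0.0005, 0.001)
print(f"{total:.4%}")
```

Note that the RSS total is smaller than the plain sum of the contributions, but it is dominated by the largest terms, which is why any single sloppy circuit can spoil an otherwise one-part-in-10,000 design.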

It might seem arbitrarily “picky” to try to achieve errors of one part in 1,000, or even one part in 10,000, for individual circuits, but that is what is required if you want to do fluorescence quantitation, or if you want the mean of the G2/M population to be twice the mean of the G0/G1 cells in cellular DNA content measurements.


Summary
Most current flow cytometers use a combination of analog and digital circuitry to measure the photons given off by particles passing through the laser beam. Because these photons are accompanied by so many others not related to the sample, considerable “filtering” of optical noise is required if the measurements are to have a high signal-to-noise ratio. Additional filtering is necessary to eliminate the contribution electronic noise would otherwise make to the measurement. The use of delays allows surrounding the signal with an adjustable-width “acquisition window,” thereby avoiding the acquisition of much of the noise. After digitization, color compensation can be accomplished by linear algebraic matrix inversion and a look-up table can be used for linear-to-logarithmic conversion.

As the operation of these instruments becomes more user-friendly and increasingly automated, in the absence of rigorous controls and calibrators, it becomes easier to generate bad data and not know it.

To minimize this possibility, instrument designers must collaborate with their counterpart biologists, chemists, and medical specialists to develop “systems”-based flow assays where there is some kind of reality check built into the measurement. As challenging as that might seem, the future of the field of flow cytometry depends on it.