A review of advanced measurement techniques in gas–liquid–solid systems is presented. Most GLS systems are operated at relatively high volume fractions. This makes the systems opaque and less suited for investigations based on laser techniques. Several measuring techniques have been developed to deal with non-transparent systems. Amongst these are Electrical Capacitance Tomography, X-ray CT scanning and Magnetic Resonance Imaging. This contribution discusses some of the recent developments and identifies the directions research is taking.
Many chemical reactors are multiphase systems, in which contacting of gas, liquid, and solids is crucial to proper operation. The bubble and slurry column, as well as gas and liquid fluidized beds, are key units for chemical conversion. They are used extensively throughout chemical engineering and biotechnology. Despite their common use, our knowledge of the flow and transfer inside these reactors is still limited. This is especially true when the volume fraction of the phases is not small. Then, research can usually not be based on the well-known and powerful laser-based techniques common in single-phase flow research.
Various techniques are used for measurement of the flow and transfer in multiphase reactors. These can be grouped into intrusive versus non-intrusive, into local versus global, or into time-resolved versus time-averaged. Alternatively, they can be classified according to the working principle: pressure, light, heat, conduction, radiation, electro-magnetic fields or nuclear resonance, to name the most frequently encountered. There is not a single technique that gives the answer. All have their advantages and disadvantages. For instance, point probes, like optical fibers or resistivity probes, measure in a very localised volume. Therefore, these probes can be treated as if they perform measurements at a point. This makes interpretation of the data relatively straightforward. Moreover, the level of sophistication required in processing the raw data to the desired parameter is usually rather low. The disadvantages are obvious: (i) they are intrusive and therefore the data obtained may be “corrupted” by disturbances introduced by the probe itself; (ii) they only provide a time series at a single point, whereas in many cases spatial information is also desired. On the other hand, tomographic techniques provide spatial information, allowing a much more complete picture of what is going on inside the multiphase flow. However, these techniques require sophisticated signal analysis and reconstruction techniques. This in itself may introduce uncertainties and hampers real-time application.
Various researchers have been active in developing measurement techniques for multiphase reactors. Work on optical glass fiber probes started as early as the late eighties and early nineties (Cartellier, 1990, 1992; Groen et al., 1995). These probes are used to get information on the gas fraction distribution and bubble size and velocity. New developments are still being reported today (Julia et al., 2005; Saito et al., 2008). Similarly, Laser Doppler Anemometry and Particle Image Velocimetry have been used in the research of multiphase reactors. Various groups reported experiments using these techniques (see e.g., Deshpande and Joshi, 1997; Mudde et al., 1998; Delnoij et al., 1999; Mudde and Van Den Akker, 1999; Kulkarni et al., 2001; Mudde and Saito, 2001; Vial et al., 2003).
Electro-magnetic properties are exploited in Electrical Impedance Tomography, like ECT (based on capacitance) and ERT (based on resistivity), see for example, Xie et al. (1995). These techniques offer fast electronics at reasonable costs. This is in contrast with nuclear densitometry, with which in most cases time-averaged information about the phase distribution is obtained (Kumar et al., 1995; Chaouki et al., 1997; Shollenberger et al., 1997). Nuclear radiation is also used for tracking a particle in the multiphase flow (Devanathan et al., 1990; Yang et al., 1993; Fangary et al., 2000). This allows studying the motion of the continuous or dispersed phase in dense systems, where laser-based techniques fail.
New developments are found in the area of Nuclear Magnetic Resonance Imaging (Gladden et al., 2007) and wire mesh sensing (Prasser et al., 2001). In this paper I present an overview of recent developments. The working principle of the measuring techniques will be highlighted and some new directions will be discussed. The paper is organised according to the working principle of the various techniques: (i) light, (ii) electro-magnetic, (iii) radiation, (iv) nuclear magnetic resonance.
GLASS FIBER PROBES
Optical glass fiber probes are tiny instruments that detect whether gas or liquid is present at a particular point in the multiphase system. They are made of a glass fiber of which one end is used to send light into the fiber. The other end is submerged in the multiphase flow. Depending on the ratio of the refractive indices of the glass of the fiber and the surrounding liquid or gas, light is refracted into the multiphase mixture or reflected back into the fiber. The tip of the fiber has a diameter of around 100 µm or even less. This allows interpretation of the signals as coming from a point.
The reflected light is sent via a Y-splitter to a light-sensitive diode. The advantage of the probe is its fast response in combination with a very high signal-to-noise ratio. A typical signature of the passage of a bubble is given in Figure 1.
From the signal the time-averaged gas fraction is obtained according to: α = (1/T) Σi Δtg,i, with Δtg,i the duration of the i-th gas passage at the tip and T the total measuring time.
The spatial distribution of the gas phase can be obtained in great detail as is shown in Figure 2 (taken from Groen, 2004).
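The estimate above is simple to sketch in code: threshold the raw probe signal into a binary gas/liquid indicator and average it over the measuring time. The synthetic trace and threshold level below are illustrative, not taken from a real probe:

```python
import numpy as np

def gas_fraction(signal, threshold):
    """Time-averaged gas fraction: the summed gas residence time divided
    by the total measuring time, i.e. the mean of the binary indicator."""
    return (signal > threshold).mean()

# synthetic probe trace: 10 000 samples, tip in gas for the first 3000
signal = np.zeros(10_000)
signal[:3000] = 1.0
alpha = gas_fraction(signal, threshold=0.5)   # -> 0.3
```

Because the optical signal has a very high signal-to-noise ratio, a single fixed threshold usually suffices in practice.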
Apart from the spatial gas fraction distribution, the use of optical fibers also allows study of the structure of bubble swarms in the reactors. Groen et al. (1996) used a number of probes located at the same radial position, but shifted vertically, see Figure 3.
The time series obtained were analysed using cross-correlation techniques. This allowed study of the dominant time scales of the swarms present in the bubbly flow. They modelled the cross-correlation function ccf(τ) as: ccf(τ) = A(1 − |Δz − vsτ|/L)exp(−τ/T) for |Δz − vsτ| < L, and zero otherwise,
in which τ is the time lag in the cross-correlation, T equals the lifetime of a swarm, vs the swarm velocity, Δz the vertical distance between the two optical probes and L the vertical extent of the swarm. A is the amplitude, which is a measure of the strength of the cross-correlation between the two signals.
A typical result is given in Figure 4. It shows a well-defined peak that gives the average time shift of the swarms, τpeak. This is directly related to the mean swarm velocity: vs = Δz/τpeak.
From the size and velocity, an axial dispersion coefficient could be estimated: Dax ≈ vsL, with the idea that axial dispersion is a consequence of the motion of swarms. The agreement between the data reported in literature about axial dispersion in bubble columns and the estimate found from the optical probe is quite reasonable. This shows that not only the local gas fraction but also other “statistical parameters” can be studied with optical glass fibers.
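The cross-correlation procedure can be illustrated with synthetic probe signals: a "lower" trace with swarm-scale structures and a delayed, slightly noisy "upper" trace. The probe spacing, sample time and imposed velocity are assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, dz = 1e-3, 0.05                  # sample time [s], probe separation [m]
v_true = 0.25                        # imposed swarm velocity [m/s]
lag = int(round(dz / v_true / dt))   # transit time in samples (= 200)

# synthetic "lower probe" signal with swarm-scale structures
base = rng.standard_normal(5000)
lower = np.convolve(base, np.ones(50) / 50, mode="same")
# the "upper probe" sees the same structures, delayed, plus a little noise
upper = np.roll(lower, lag) + 0.05 * rng.standard_normal(5000)

# the peak of the cross-correlation gives the mean transit time
ccf = np.correlate(upper - upper.mean(), lower - lower.mean(), mode="full")
tau_peak = (np.argmax(ccf) - (len(lower) - 1)) * dt
v_swarm = dz / tau_peak              # estimated swarm velocity
```

With real probe data the peak is broader and the exponential decay of the ccf carries the swarm-lifetime information discussed above.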
Four Point Probe
Multiple point probes are used to assess the bubble velocity and size. In principle, a two point probe can be used (one probe tip slightly longer than the other) but the accuracy of velocity estimates of such a probe is not high. A four point probe performs better. The layout of such a probe is sketched in Figure 5.
From the time of flight from the central to the other points the velocity is found: vbub = Δs/τf, with Δs the distance from the central tip to the outer tips and τf the time of flight. From the velocity combined with the residence time T for the central probe, the dimension parallel to the velocity is obtained: D = vbubT. A simple algorithm used by Mudde and Saito (2001) requires the three times of flight from the central tip to each of the others to be equal within a certain percentage. This gives control over the alignment of the bubble with the probe axis. Moreover, for ellipsoidal bubbles it ensures that the short axis of the bubble is measured. More advanced algorithms (Xue et al., 2008a,b; Mudde et al., 2009) have been used to find the velocity vector and both the short and long axes of ellipsoidal bubbles. Figure 6 shows an example of velocity data obtained by Xue et al. (2008a) in a 16 cm diameter bubble column at a superficial gas velocity of 60 cm/s. It shows the power of advanced signal processing, allowing velocity estimates for highly turbulent conditions. Note that this technique biases the velocity data away from vbub = 0, as all time-of-flight techniques do.
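A minimal sketch of such an acceptance algorithm, with hypothetical tip spacing, tolerance and times of flight, could look as follows (the tolerance value is an assumption, not the one used by Mudde and Saito):

```python
import numpy as np

def four_point_velocity(t0, t_outer, ds, tol=0.15):
    """Time-of-flight evaluation for a four-point probe (sketch).

    t0      : residence time of the bubble on the central tip [s]
    t_outer : times of flight from the central tip to the three outer tips [s]
    ds      : vertical distance between central and outer tips [m]
    tol     : maximum relative spread of the three times of flight,
              used to reject misaligned bubbles (assumed value)
    """
    t_outer = np.asarray(t_outer, dtype=float)
    t_mean = t_outer.mean()
    if np.max(np.abs(t_outer - t_mean)) / t_mean > tol:
        return None                  # bubble not aligned with the probe axis
    v_bub = ds / t_mean              # bubble velocity
    d_vert = v_bub * t0              # dimension parallel to the velocity
    return v_bub, d_vert

# aligned bubble: three nearly equal times of flight
res = four_point_velocity(t0=4e-3, t_outer=[2.0e-3, 2.1e-3, 1.9e-3], ds=2e-3)
```

Rejected events are simply discarded; this is where the bias away from vbub = 0 enters, since very slow or sideways-moving bubbles rarely pass the test.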
The four point probe has a dimension of about 1 mm, due to the size of a single glass fiber including its cladding. Thus small bubbles or droplets can not be measured. This can be circumvented by the probes introduced by Saito et al. (2008). They used a femtosecond laser to machine the probe tips to dimensions of about 10 µm. They were also able to make a groove in their tips, which allowed estimating the velocity of droplets or bubbles of only a few hundred micrometer. Figure 7 shows a photo of droplets and the very thin probe (taken from Saito et al., 2008).
Laser Doppler Anemometry (LDA) has been applied regularly to bubbly flows since the nineties. The advantage of LDA is its non-intrusive nature. Moreover, it can, in principle, collect data at a rate of 1000 Hz. However, it has certain disadvantages as well. It is not a straightforward experimental technique and requires careful analysis and interpretation of the data, including the raw data. More importantly, it is limited to relatively low gas or particle fractions and/or measurement positions close to the wall of the reactor. Obviously, the system under investigation needs to be transparent. The principle of LDA is based on interference of coherent light. The latter is easily obtained from a single laser beam that is split into two beams by a so-called beam splitter. Using lenses, the two coherent light beams are focused onto a small volume, the measuring volume. A small particle (typically on the order of 5–10 µm) will scatter some of the light of both beams. If this scattered light is detected by a light-sensitive detector, interference of the light from the two beams at the sensor results in a blinking signal. An easy way to understand this is to adopt the fringe model. In this model it is assumed that the laser beams form, due to interference, a kind of bar-code pattern in the measuring volume, that is, a collection of light and dark bands. A particle crossing this measuring volume will scatter a fluctuating signal. Its frequency is directly related to the velocity of the particle perpendicular to the fringes. Consequently, no calibration is needed. Moreover, LDA is non-intrusive and thus does not disturb the local flow. Figure 8 shows the principle. The measuring volume can be made very small (length < 1 mm, width < 0.1 mm). Thus, the technique basically provides a time series of the velocity in a point.
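The fringe model translates directly into the working formulas: the fringe spacing follows from the wavelength and the beam crossing angle, and the burst frequency times the fringe spacing gives the velocity. The wavelength and angle below are example values:

```python
import numpy as np

# Fringe model: two coherent beams of wavelength lam crossing at full
# angle theta produce fringes with spacing d_f = lam / (2 sin(theta/2)).
# A particle crossing the fringes at velocity v scatters light at
# frequency f = v / d_f, so v = f * d_f follows without calibration.
lam = 514.5e-9                  # Ar-ion laser wavelength [m] (example)
theta = np.deg2rad(10.0)        # beam crossing angle (assumed)

d_f = lam / (2 * np.sin(theta / 2))   # fringe spacing [m], ~3 µm here
f_doppler = 0.5 / d_f                 # burst frequency for v = 0.5 m/s
v = f_doppler * d_f                   # recover the velocity from f and d_f
```

This is why LDA needs no calibration: only the laser wavelength and the beam geometry enter the conversion from frequency to velocity.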
The bubbles or particles present in the multiphase flow may block the laser beams, thereby preventing the two beams from forming the measuring volume. Thus, the probability of measuring the velocity of the liquid phase decreases when moving further into the multiphase flow. Similarly, the smaller the particles, for a given volume fraction, the less likely it is to be able to penetrate with both beams into the multiphase flow. It has been shown (Ohba et al., 1976; Mudde et al., 1997) that the probability of being able to measure decreases exponentially with the distance, x, into the multiphase flow: P(x) = exp(−Cαx/db), with α the gas fraction, db the bubble diameter and C ≈ 2.4 (Mudde et al., 1998).
From the time series generated, the time-averaged velocity, and in case of a two-component LDA system also the shear and normal stresses, can be obtained. Figure 9 shows the time-averaged axial liquid velocity measured by LDA in a 15 cm diameter column at a gas fraction of 25% (Mudde et al., 1997). Obviously, the data rate drops significantly when approaching the center of the column. The right hand side of the figure shows a time trace at a particular point close to the wall. Evidently, at a gas fraction of 25% the bubble column is in the heterogeneous regime and the flow is turbulent. This is clear from the time series: large fluctuations occur and both negative and positive velocities are observed, indicating the presence of large coherent structures. Using the time series, spectra can be calculated, and short-time frequency analysis (Mudde et al., 1998) and wavelet analysis (Mudde and Van Den Akker, 1999; Kulkarni et al., 2001) can be performed. The computation of spectra is not straightforward as the data obtained with LDA are not equidistant in time (Harteveld et al., 2005). Joshi and coworkers (Kulkarni et al., 2004) managed to obtain the bubble size from LDA measurements.
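One common workaround for the non-equidistant sampling, sketched below on synthetic data, is sample-and-hold resampling onto a uniform grid followed by an ordinary FFT (note that this acts as a low-pass filter; slotting techniques are more accurate):

```python
import numpy as np

rng = np.random.default_rng(0)

# non-equidistant "LDA" samples of a 5 Hz velocity oscillation
t = np.sort(rng.uniform(0.0, 10.0, 2000))       # random arrival times [s]
u = np.sin(2 * np.pi * 5.0 * t)                 # sampled velocity [m/s]

# sample-and-hold: resample onto a uniform grid, holding the last value
fs = 100.0                                      # resampling rate [Hz]
t_uni = np.arange(0.0, 10.0, 1.0 / fs)
idx = np.searchsorted(t, t_uni, side="right") - 1
u_uni = u[np.clip(idx, 0, len(u) - 1)]

# power spectrum of the resampled, mean-subtracted series
spec = np.abs(np.fft.rfft(u_uni - u_uni.mean()))**2
freqs = np.fft.rfftfreq(len(u_uni), 1.0 / fs)
f_peak = freqs[np.argmax(spec)]                 # dominant frequency [Hz]
```

For real turbulence spectra the filtering bias of sample-and-hold matters at high frequencies, which is precisely the issue addressed by Harteveld et al. (2005).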
WIRE MESH SENSOR
A relatively recent development in measuring the spatial distribution of the phases in a multiphase flow is the wire mesh sensor (Prasser et al., 1998). The working principle is resistivity. Two sets of parallel wires (8 or 16 per plane) form two planes, separated by a small distance (on the order of 5 mm). The wires in the different planes are at 90° with each other, see Figure 10. By measuring the resistivity between two wires, each in a different plane, the phase distribution in the sensor is probed. As the two wires are perpendicular, the sensitivity is located, roughly speaking, in a small cubical volume between the wires. The electronics rapidly scan all possible pairs. This generates tomographic information about the phase distribution in the measuring plane. The frame rate is easily 5000 Hz, making this a fast technique. Prasser et al. (2001) used this device to measure bubble sizes. Figure 11 shows a comparison between high speed video images and the reconstruction of a bubble using the wire mesh. This technique has not been applied widely in GLS reactors. It has the potential to measure bubble sizes and surface area, even in high gas fraction or slurry systems.
Descamp et al. (2007) used a wire mesh sensor to measure in a dispersed oil/water/air multiphase flow. They compared their data concerning the distribution of the volume fractions of air and oil with data obtained from an optical probe. The agreement is good. The wire mesh has the advantage of providing a much finer distribution in the cross-section of the flow in a short time. Moreover, it provides tomograms of a plane and thus reveals spatial information and the possible occurrence of structures. This is much harder to obtain from optical probes and would require several probes measuring at the same time.
ELECTRICAL CAPACITANCE TOMOGRAPHY
Electrical Capacitance Tomography (ECT) is a non-intrusive tomographic technique. It relies on the difference in electrical permittivity of different substances. An ECT system is usually built from 8, 12, or 16 capacitance plates that are mounted flush with the wall of the reactor, see Figure 12 (Yang et al., 1995; Dyakowski et al., 1997).
By rapidly scanning over all possible electrode pairs, information is gathered for reconstruction of the interior phase distribution. The governing equations to be solved are: ∇·(ε(x, y)∇ϕ) = 0 and Cij = Qij/Vij = −(1/Vij)∮A ε(x, y)∇ϕ·dA,
with ε(x, y) the permittivity distribution in the measuring plane, ϕ the electric potential field, Cij the measured capacitance between sensors i and j, Vij the applied voltage over the sensor pair ij and A the surface area of the electrode. The permittivity distribution is directly related to the volume fraction distribution of the phases in the measuring plane. Note that the height of the sensors can not be made very small, as the resulting capacitances will then be too small to be detected. Typical sizes are in the centimeter range. Algorithms like Linear Back Projection (LBP) are popular here, although they are known to blur the reconstructions. On the other hand, LBP is a fast algorithm and can be used for on-line monitoring. Compared to other non-intrusive tomographic techniques, like gamma-densitometry, ECT is fast and relatively cheap. As a disadvantage, it relies on so-called soft fields. This means that the spatial sensitivity is relatively poor. Its sensitivity is localised close to the sensors, thus accurate results in the interior of the vessel are difficult to get.
The frequently used linear back projection (LBP) algorithm is based on the sensitivity matrix, S, which is the response of a particular electrode pair to a small change of the otherwise uniform permittivity distribution in a particular location. In mathematical terms: C = SG, with G the image vector. LBP solves this (usually ill-posed) problem by approximating STS by the identity matrix: G ≈ STC. It is fast, but of relatively low quality. An improvement is obtained by using an iterative method (ILBP). Instead of directly solving for the permittivity distribution, the mismatch between the measured capacitances and the estimate is back-projected via the sensitivity matrix to find an updated permittivity distribution until the error is below some criterion. In formula: Gk+1 = Gk + βST(C − SGk),
with k numbering the iterations and β the relaxation factor controlling the strength of the update.
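The LBP and ILBP steps can be demonstrated on a toy problem. The 2 × 2 pixel grid and the six-row sensitivity matrix below are invented for illustration and are far smaller than a real 8- or 12-electrode system:

```python
import numpy as np

# Toy 2x2 pixel grid with six "electrode pair" measurements. Each row of
# the (hypothetical) sensitivity matrix S marks which pixels a pair sees.
S = 0.5 * np.array([
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

g_true = np.array([1.0, 0.0, 0.0, 0.0])    # one high-permittivity pixel
c = S @ g_true                             # simulated capacitance vector

# LBP: back-project once, approximating S^T S by the identity
g_lbp = S.T @ c

# ILBP: repeatedly back-project the capacitance mismatch C - S G
beta, g = 1.0, g_lbp.copy()
for _ in range(200):
    g = g + beta * S.T @ (c - S @ g)
    g = np.clip(g, 0.0, 1.0)               # keep fractions physical
```

In this well-conditioned toy case ILBP recovers the object exactly, while the one-shot LBP image is visibly blurred (all pixels receive some signal); real ECT problems are far more ill-posed, which is why the blur noted above persists.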
A recent development is the use of neural networks and multi-criterion optimization for tomographic reconstruction (Warsito and Fan, 2001). Furthermore, Fan and coworkers extended their ECT system to a 3D version by placing a second plane of measuring electrodes around the vessel or tube (see Figure 13).
During the measurements both the in-plane pairs and the cross-plane pairs are measured to obtain as much 3D information as possible. Reconstruction takes place on volume elements (voxels) rather than pixels. Initially 20 × 20 × 20 voxels were used; in more recent work this number has been increased. Both the neural network and the multi-criterion optimizations are employed in the reconstructions. In conventional 2D ECT, the image obtained is the projection of the solids distribution on a cross-section by assuming no axial variation. This is a disputable assumption, since ECT electrodes are typically some centimeters high. Moreover, due to the soft field, the field lines are not necessarily confined to the slice defined by the electrodes. In ECVT, the real-time 3D whole-volume solids distribution is reconstructed in the region enclosed by the geometrically 3D capacitance sensor. In Warsito and Fan (2005), 3D reconstructions are presented from simulated data as well as from real applications, namely a gas–liquid flow. The reconstruction technique is based on neural networks. Figure 14 shows results obtained by Warsito et al. (2005) for a moving bubble plume in a bubble column. The individual bubbles are too small to be imaged, but the motion of the plume is followed accurately. In Marashdeh et al. (2008) the state of the art of ECT is described.
X-ray and γ Densitometry
In contrast to ECT, which relies on soft fields, measurement of the phase distribution based on X-rays or γ radiation is based on hard fields. Consequently, the spatial resolution of these techniques is less of a problem. However, detecting X-rays and γ photons is more difficult and time resolution is much harder to achieve.
Detection of the phase distribution based on X-rays and γ radiation relies on the absorption/scattering properties of the various materials. For a monochromatic beam of high energy photons with initial intensity I0, the Lambert–Beer law describes the transmission through a material of constant density and given thickness: I(x) = I0 exp(−µ(E)x), with µ(E) the attenuation coefficient, x the thickness and I(x) the intensity after passage. For a beam along a path of varying density, that is, varying attenuation, the measured intensity is the integral effect of the local attenuation with the local attenuation coefficient: I = I0 exp(−∫ µ(E, s)ds).
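A short numerical sketch of the Lambert–Beer law along a single beam, with illustrative (not measured) attenuation values, also shows how a measured intensity can be inverted to a gas chord length when I0 and µ are known:

```python
import numpy as np

# Lambert-Beer along a single beam through segments of different
# attenuation; all numbers below are illustrative, not measured values.
I0 = 1.0e5                        # incident intensity (photon count rate)
mu_water, mu_gas = 8.5, 0.0       # attenuation coefficients [1/m] (example)

# beam path: 0.10 m water, a 0.03 m gas bubble, 0.10 m water
segments = [(mu_water, 0.10), (mu_gas, 0.03), (mu_water, 0.10)]
I = I0 * np.exp(-sum(mu * x for mu, x in segments))

# inverting the law gives the gas path length along the beam
path_len = 0.23                   # total path length [m]
chord = path_len - (-np.log(I / I0)) / mu_water
```

In a real measurement the photon-counting noise discussed below limits how accurately I, and hence the chord, can be determined in a short time.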
Due to their high energy, the photons travel in straight lines. Only via an interaction with the intervening material will they deviate from this line, upon which they are no longer measured. By measuring over a large number of independent lines, sufficient information can be collected for a tomographic reconstruction. Compared to electrical tomography, the nuclear techniques are slow. This is a consequence of safety issues and the random nature with which the photons are generated when employing nuclear sources such as 137Cs. The latter creates inherent noise in the measured beam intensity, which drops off as the inverse of the square root of the number of photons counted. Consequently, the measuring time cannot be made short without the use of excessively strong sources. On the other hand, a high number of beams can easily be used. Thus, the number of independent measurements per tomogram can be made large. This is what is done in medical applications, where the source-detector system is rotated around the object of interest. Therefore, high spatial resolution images can be obtained. However, rotating the measuring system is also a slow process. Therefore, usually the frame rate is on the order of 1 frame/s, clearly insufficient for any dynamic study in GLS-reactors. Time-averaged information can be obtained with good accuracy (see e.g., Kumar et al., 1995; Mudde et al., 1999). Figure 15 shows the time-averaged gas fraction distribution in an 18 in. diameter bubble column filled with Drake-oil and operated at a superficial gas velocity of 10 cm/s (Duduković, 2000).
These authors used about 4000 lines for the reconstruction and obtained a high spatial resolution. However, the experiment lasted on the order of 1 h, thus only time-averaged information could be extracted.
As an alternative, rather than rotating the source-detector system, several sources can be used simultaneously. Mudde et al. built such a tomographic scanner using 3 X-ray sources with 2 sets of 30 detectors per source (see e.g., Mudde et al., 2008). The two sets form two parallel planes, separated by 2 cm. That allows estimation of velocities and thus of the true length of objects that pass through the scanner plane. In Mudde (2010) experiments are reported in which their scanner is used to make tomograms of a bubbling fluidized bed. The time resolution is about 100 frames/s. The fluidized bed has a diameter of 23 cm and the resolution of the reconstructions is about 4 mm. This makes the CT scanner competitive with ECT: its frame rate approaches the order of magnitude standardly used with ECT and its spatial resolution is better.
Several reconstruction algorithms are available, like the Algebraic Reconstruction Technique (ART) family or the Expectation-Maximisation method (EM). All methods have in common that they try to reconstruct the images on a pixel array in an iterative manner. This slows down the reconstructions, as several hundreds of iterations are required, and prevents usage in real-time applications.
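As an illustration of the iterative character of these methods, a Kaczmarz-type ART sweep on a toy 3 × 3 phantom with six axis-parallel rays could look as follows (real scanners use far more rays and pixels):

```python
import numpy as np

def art(W, p, n_iter=50, relax=0.5):
    """Kaczmarz-type Algebraic Reconstruction Technique (sketch).

    W : (n_rays, n_pixels) path lengths of each ray in each pixel
    p : measured projections, p_i = sum_j W_ij * mu_j
    """
    mu = np.zeros(W.shape[1])
    for _ in range(n_iter):
        for i in range(W.shape[0]):      # sweep over the rays
            w = W[i]
            # project the current estimate onto the hyperplane of ray i
            mu += relax * (p[i] - w @ mu) / (w @ w) * w
    return mu

# toy 3x3 phantom probed by 6 axis-parallel rays (rows and columns)
mu_true = np.array([0, 0, 0, 0, 2.0, 0, 0, 0, 0])   # one absorbing pixel
W = np.zeros((6, 9))
for r in range(3):
    W[r, 3 * r:3 * r + 3] = 1.0          # horizontal rays
    W[3 + r, r::3] = 1.0                 # vertical rays
p = W @ mu_true
mu_rec = art(W, p, n_iter=200, relax=1.0)
```

With only six rays for nine pixels the problem is underdetermined, so the reconstruction is a smeared version of the phantom; this is exactly why many independent beams, and hence long measuring times, are needed for sharp images.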
Hampel and coworkers (Bieberle et al., 2007; Hampel et al., 2007) built a very fast CT scanner. They generate X-ray photons by scanning an electron beam at high speed over a tungsten bar. The detector array forms an arc, with the object in between arc and tungsten bar. This scanner can reach 10 000 frames/s, which is quite comparable with the fastest ECT scanners. The above shows the potential of nuclear CT scanners. As an example, Figure 16 shows the imaging of small bubbles in a 60 mm bubble column (Fischer et al., 2008) and bubbles in a 23 cm diameter fluidized bed (Mudde, 2010).
The above section focuses on determination of the phase distribution. The velocity field of the continuous phase is not found this way. To that end, two different particle tracking techniques that exploit nuclear radiation are in use: CARPT and PEPT.
Duduković and coworkers developed Computer Automated Radioactive Particle Tracking (CARPT) (Devanathan et al., 1990). In this technique a radioactive particle (made neutrally buoyant) is put in the multiphase flow. Several detectors, placed strategically around the GLS-reactor, measure the intensity of the radiation. As this intensity falls off (roughly) with the square of the distance between particle and detector, the position of the particle can be calculated from the measured intensities. By sampling this radiation at a sufficiently high frequency, the trajectory of the particle can be found and thus its velocity can be estimated. Some calibration is required, as the liquid attenuates the radiation, deteriorating the square-law dependence. Figure 17 shows schematically the set-up used by Duduković and some results for the velocity profile in a bubble column (taken from Devanathan et al., 1990).
Provided the response time of the particle is small enough, in principle all fluctuations in the liquid flow field can be picked up. The technique obviously has the advantage that the GLS-system does not have to be transparent. So, it can be used at high gas and/or solids concentration. By tracking the particle, Lagrangian information is found, which is in some cases even an advantage, see for example, Guha et al. (2007). The average liquid flow field can be found from ensemble averaging the velocity vector in voxels that span the vessel. Moreover, the stresses (both shear and normal) can also be found from ensemble averaging.
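The position reconstruction behind CARPT can be sketched with an idealised inverse-square model and a brute-force least-squares search; the detector layout and source strength below are invented, and attenuation by the liquid is neglected:

```python
import numpy as np

# Four detectors around the column; the count rate of each falls off as
# the inverse square of the distance to the tracer (idealised model; in
# practice a calibration map corrects for attenuation by the liquid).
detectors = np.array([[0.2, 0.0, 0.0], [-0.2, 0.0, 0.0],
                      [0.0, 0.2, 0.0], [0.0, 0.0, 0.2]])   # positions [m]
A = 1.0                                                    # source strength

def count_rates(pos):
    """Inverse-square intensity at every detector for tracer position pos."""
    d2 = np.sum((detectors - pos)**2, axis=-1)
    return A / d2

p_true = np.array([0.05, -0.03, 0.08])       # tracer position to recover
meas = count_rates(p_true)

# brute-force least squares over a grid of candidate positions
grid = np.linspace(-0.1, 0.1, 41)
gx, gy, gz = np.meshgrid(grid, grid, grid, indexing="ij")
cand = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
d2 = np.sum((cand[:, None, :] - detectors[None, :, :])**2, axis=2)
err = np.sum((A / d2 - meas)**2, axis=1)
p_est = cand[np.argmin(err)]
```

Real implementations replace the grid search with calibrated look-up tables or nonlinear least squares, but the inversion principle is the same.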
As an alternative, a radioactive particle can be used that in its decay produces a positron. This positron does not travel far: upon encountering an electron, the two annihilate and two back-to-back traveling photons, each with an energy of 511 keV, are produced. By detecting these two photons simultaneously with two position-sensitive detectors, it is known that the positron-emitting particle must be somewhere on the line connecting the two detected photons. By measuring the positions of these photon pairs during a time interval, many lines are obtained: the particle trajectory follows from the moving intersection of these lines. This technique is called Positron Emission Particle Tracking (PEPT). Figure 18 illustrates the idea.
This technique is used by the group of Seville and Parker (see Fishwick et al., 2005a,b).
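The core of the PEPT reconstruction, finding the point closest (in least-squares sense) to all detected annihilation lines, can be sketched as follows on synthetic lines with a little detection noise:

```python
import numpy as np

rng = np.random.default_rng(7)
p_true = np.array([0.01, -0.02, 0.05])      # tracer position [m]

# synthetic annihilation lines: random unit directions through the tracer,
# with a little detection noise on the recorded line points
dirs = rng.standard_normal((50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
points = p_true + 0.3 * dirs + 1e-3 * rng.standard_normal((50, 3))

# least-squares point closest to all lines:
#   sum_i (I - d_i d_i^T)(p - a_i) = 0
M = np.zeros((3, 3))
b = np.zeros(3)
for a, d in zip(points, dirs):
    P = np.eye(3) - np.outer(d, d)          # projector orthogonal to line i
    M += P
    b += P @ a
p_est = np.linalg.solve(M, b)
```

In practice corrupted events (scattered photons) are discarded iteratively before this fit, since a single wrong line can pull the estimate far off.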
NUCLEAR MAGNETIC RESONANCE
A recent development is the use of Magnetic Resonance Imaging (MRI) in multiphase reactors. In the group of Gladden (Gladden et al., 2007) this technique has been further developed and applied to GLS systems.
Nuclear Magnetic Resonance is based on the magnetic moment of a nuclear particle, like a proton. This magnetic moment is associated with the particle's quantum-mechanical spin. Analogous to a spinning top (with its rotation axis not aligned with gravity) that starts precessing under the action of gravity, a nuclear spin starts precessing at the so-called Larmor frequency when the magnetic moment is not aligned with an externally applied magnetic field, see Figure 19.
In an NMR experiment a constant, homogeneous magnetic field is applied. All nuclei with a magnetic moment will try to align their moment with this field. Subsequently, a magnetic field at 90° with the static field is pulsed. This kicks the nuclear magnets out of alignment and causes them to precess at the Larmor frequency. This can be detected, as the precessing nuclei radiate energy at the Larmor frequency. After the pulse, the spinning nuclei decay towards the aligned situation. From this decay, the number of spins at a particular spot in space can be determined; in other words, it shows the density of the material. As different phases have different proton densities (e.g., because they have different hydrogen content), measuring with NMR allows assessing the local phase distribution. By scanning over a plane or a volume, sufficient information for tomographic reconstruction can be collected.
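The precession frequency itself follows directly from the field strength: for protons, f = γ̄B with γ̄ ≈ 42.577 MHz/T. The field value below is just an example:

```python
# Larmor precession: f = gamma_bar * B, with gamma_bar the gyromagnetic
# ratio divided by 2*pi. For 1H, gamma_bar is about 42.577 MHz per tesla.
gamma_bar = 42.577e6      # Hz per tesla, protons (1H)
B = 4.7                   # static field strength [T] (example value)
f_larmor = gamma_bar * B  # about 200 MHz for this field
```

This sets the radio frequency at which the 90° pulse must be applied and at which the decaying signal is detected.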
MRI has as a disadvantage its cost: the static magnetic field needs to be very strong. When using permanent magnets this limits the usage to relatively small diameters. In principle, superconducting magnets can be made stronger and for larger scales; however, the costs are considerable. MRI is nowadays routinely used in hospitals to non-intrusively scan (without the use of ionising X-rays), for example, the human brain. In GLS applications, it has the advantage of high spatial resolution and the possibility of also finding, during the same experiment, the velocity field of the various phases. This makes the technique unique. In Gladden et al. (2007) 2D and 3D images of the flow in a trickle bed are presented. Figure 20 is taken from this article and shows the spatial distribution of the gas, liquid and particles in a 2D slice out of a 45 mm × 45 mm trickle bed. The spatial resolution is 175 µm × 175 µm. The data are collected at such a rate that 3D images are generated at 3.6 frames/s.
In Holland et al. (2009) a quantitative comparison between ECVT (3D ECT) and MRI is made by applying both techniques to the same fluidized bed (50 mm diameter, particle size 58 µm). The ECVT acquired 3D images at a temporal resolution of 12.5 ms and a spatial resolution of 2.5 mm × 2.5 mm × 4.5 mm. The MRI system did the same at 26 ms and 1.9 mm × 1.9 mm × 3.8 mm resolution. Although there are some discrepancies in the results, the overall outcome is very positive. It shows that the new experimental techniques that have become available can provide valuable quantitative information about the flow inside gas–liquid–solid reactors.
With the continuing increase in computer power and the further development of fast algorithms, numerical simulations of multiphase reactors are providing more and more details. However, we are still far away from closure-free simulations that do not need experimental validation. Moreover, the theoretical framework used still needs guidance for improvement from experiments. Therefore, it is very important that the experimental techniques we have at our disposal provide the required information. That means, if possible, non-intrusive, fast, local and preferably 2D or 3D information. Most likely, no single technique can do the job and we will have to rely on several techniques. In this paper, I have highlighted a number of techniques, ranging from simple and cheap, like optical fiber probes, to complicated and expensive, like X-ray CT and MRI. For all these techniques, recent advances allow much more detailed information about the hydrodynamics of GLS reactors. Especially tomographic techniques that do not rely on light are making rapid progress.
On the Electrical Capacitance Tomography side, 3D systems are being developed that combine the intrinsic speed of this technique with enhanced spatial resolution. This is needed, as spatially averaged information alone is insufficient to make progress in our theoretical understanding and modelling. Multiphase flows are inherently “multi-scale” problems, calling for both temporal and spatial resolution. A development that has not yet been applied widely to GLS systems is the wire mesh technique. Although intrusive, it has the advantage of combining speed with good spatial resolution and relatively straightforward interpretation of the data. Nuclear techniques have been slow. However, recent developments show that this is not an intrinsic feature: it is possible to generate data at a speed that allows several hundreds of frames per second in 2D tomographic applications. Furthermore, it has been shown by Hampel and coworkers that 10 000 frames/s can be reached. If this enormous speed is combined with the spatial resolution that in principle can be obtained, nuclear densitometry also qualifies as one of the new techniques we need in GLS research. Particle tracking techniques using radiation (e.g., CARPT and PEPT) have been around for quite some time. They allow us to investigate the motion of particles inside the reactors, giving unique information about the hydrodynamics. Finally, Magnetic Resonance Imaging, which is routinely used in medical applications, has reached a speed such that it can be used in GLS research. Although expensive and complicated, this technique holds promise for our experimental research as it can assess different phases at the same time and provide both the 2D or 3D distribution and the velocity field.
In conclusion, experimental multiphase research has progressed tremendously during the last decade. New developments promise to narrow the gap between numerical simulations and experiments. This will give us a much better understanding of multiphase reactors, better quantitative prediction of their performance and a better handle on scale-up and scale-down issues.