Fast Lipid And Water Levels by Extraction with Spatial Smoothing (FLAWLESS): Three-dimensional volume fat/water separation at 7 Tesla


Abstract

Purpose:

To quickly and robustly separate fat/water components of 7T MR images in the presence of field inhomogeneity for the study of metabolic disorders in small animals.

Materials and Methods:

Starting with a Markov random field (MRF) based formulation for the 3-point Dixon separation problem, we incorporated new implementation strategies, including stability tracking, multiresolution image pyramid, and improved initial value generation. We term the new method FLAWLESS (Fast Lipid And Water Levels by Extraction with Spatial Smoothing).

Results:

Compared with non-MRF techniques, FLAWLESS decreased the fat–water swapping mistakes in all of the three-dimensional (3D) animal volumes that we tested. FLAWLESS converged in approximately 1/60th of the computation time of other MRF approaches. The initial value generation of FLAWLESS further improved robustness to field inhomogeneity in 3D volume data.

Conclusion:

We have developed a novel 3-point Dixon technique found to be useful for high field small animal imaging. It is being used to assess lipid depots and metabolic disorders as a function of genes, diet, age, and therapy. J. Magn. Reson. Imaging 2011;33:1464–1473. © 2011 Wiley-Liss, Inc.

QUANTIFICATION OF HEPATIC fat accumulation has become important for the study of diseases such as HIV and type II diabetes, which have recently emerged as chronic disorders. The duration of these diseases has made monitoring of hepatic chemical composition an important indicator of hepatic health, because hepatic steatosis has been linked to the development of cirrhosis and hepatocellular carcinoma (1). Biopsy, the current gold standard for diagnosing nonalcoholic fatty liver disease (NAFLD), is invasive and prone to sampling errors, so an imaging-based method that can quantify the extent of disease at the organ level is desirable. Although many imaging modalities can be used, to some extent, to monitor the progression of NAFLD, MRI is the one with the most desirable qualities. However, the current cost of MRI-based monitoring prohibits the modality from being widely used (1). Therefore, cheaper, more reliable MRI methods, partially achieved through increased computational efficiency, are needed to allow the many advantages of this modality to be realized.

Small animal models are desirable because genetic and environmental control provides for easier study of the causes of, and solutions to, disease. However, because research and clinical practice share the practical concerns of cost, speed, accuracy, and robustness, development of novel MR methods is needed to enable longitudinal metabolic studies in animals, as well as in humans. The purpose of this report is to present our new fat–water separation technique, based on the 3-point Dixon framework, which includes several implementation features that greatly decrease processing times while maintaining, and even enhancing, the robustness of previously published methods.

The 3-point Dixon methods decompose fat and water while accounting for main field inhomogeneity. They model the signal as a function of echo time, as shown in Eq. [1] below (2–5):

$$S(t) = \left(\rho_{\mathrm{water}} + \rho_{\mathrm{fat}}\, e^{i 2\pi \delta_f t}\right) e^{i 2\pi \psi t} \qquad [1]$$

where t is the effective echo time, ρwater is the density of water protons, ρfat is the density of fat protons, δf is the known fat–water frequency shift, and ψ is the unknown local field inhomogeneity shift in Hz. At least three echoes are required to fit this signal equation at every pixel.
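For concreteness, a minimal numpy sketch of this signal model is given below; the function name and argument conventions are ours, not taken from the original implementation.

```python
import numpy as np

def dixon_signal(t, rho_water, rho_fat, delta_f, psi):
    """Three-point Dixon signal model of Eq. [1] at effective echo time(s) t.

    t        : effective echo time(s) in seconds (scalar or array)
    rho_*    : complex proton densities of water and fat
    delta_f  : known fat-water frequency shift in Hz
    psi      : local field inhomogeneity shift in Hz
    """
    t = np.asarray(t, dtype=float)
    return (rho_water + rho_fat * np.exp(2j * np.pi * delta_f * t)) * \
           np.exp(2j * np.pi * psi * t)
```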

A chemical ambiguity arises (6): whether working with magnitude-based or phase-based reconstructions, local swaps of the estimates for fat and water proton densities occur (6, 7). In magnitude reconstructions, the ambiguity arises from the inability to know whether the fat signal is being subtracted from the water signal, or vice versa. In phase reconstructions, the ambiguity arises from the inability to know, due to phase wrapping, whether the phase difference is between water and fat, or vice versa. This ambiguity can be seen mathematically in the signal model, which uses imaginary exponentials, i.e., periodic functions, to describe these phase differences. Numerous methods have been proposed to resolve the ambiguity in phase reconstructions, mostly by using local spatial information (3, 4, 7), but this sort of disambiguation is difficult in magnitude reconstructions. We chose two phase-based reconstruction methods for comparison: one that uses spatial information (Hernando's method) (3), and one that does not (IDEAL) (8).

We are pushing MRI to higher field strengths (7 Tesla [T] and higher) for small animal imaging applications. This is desirable due to the improved spatial resolution, tissue contrast, and overall image quality, all of which increase the diagnostic usefulness of the modality (9). However, at higher field strengths, the range of the field inhomogeneity, ψ, is increased, which can be viewed as an increase in the feasible range of shifts of the data cost function (Fig. 1). Because fat–water frequency shift also scales proportionally to main magnet field strength (4), the chemical ambiguity of 3-point Dixon methods becomes even more difficult to avoid. We have found that existing methods either cannot completely accommodate this increased field inhomogeneity range, or require such a prolonged processing time as to prohibit their routine use. In this study, we propose a method, termed Fast Lipid And Water Levels by Extraction with Spatial Smoothing (FLAWLESS) to robustly and efficiently solve the 3-point Dixon formulation at 7T.

Figure 1.

Cost functions. The total cost function used in the calculation of the field map value is the sum of the data cost and smoothness cost. The data cost is calculated from the signal equation by guessing values for the parameters. The smoothness cost is calculated from the difference between neighboring pixels (a 5 × 5 neighborhood was used in this implementation). Notice that the data cost is periodic, with two minima in each period that are ambiguous with respect to each other. Adding the smoothness term to this cost removes the ambiguity, allowing us to pick a global minimum from the total cost function.

ALGORITHMS

This section describes our approach for efficiently mitigating the effect of the inherent ambiguity of Dixon methods. A summary of the algorithm can be found in Table 1. A plot of the cost of choosing a certain value of ψ to fit the signal at a particular pixel is shown in Figure 1. The cost function is exactly periodic, with period 1/ΔTE, where ΔTE is the echo-time spacing between the three source images, provided the echoes are equally spaced (as they are here, to maximize theoretical decomposition quality (8)). Each period contains two local minima that are ambiguous with respect to one another (4).
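As a concrete example, the echo spacing reported in the Experimental Methods section below is ΔTE ≈ 0.317 ms, so the data cost repeats with a period of 1/ΔTE ≈ 3.2 kHz: candidate field map values that differ by roughly 3.2 kHz fit the measured signal equally well.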

Table 1. Pseudocode of FLAWLESS
1) Obtain constant initial guess for field map estimate through either extrapolation initialization method or coarser resolution estimate
2) Set up discrete range of possible field map values
3) Compute data cost function at every pixel in image (skipping those not included in image mask)
4) For every pixel in the image (skipping pixels that are stable, and those not included in image mask)
 • Apply smoothness cost at each pixel, calculate new field map guess for that pixel, based on total cost function
 • Iterate over entire image until total change between iterations falls below stopping criterion

Our Iterated Conditional Modes (ICM) method, which builds on the work done by Hernando et al (3), is designed to resolve this ambiguity in choosing the field map inhomogeneity value, ψ. Instead of simply using spatial information to generate an initial guess, as is the case with most region growing methods, FLAWLESS uses the Markov Random Field (MRF) framework to include the spatial information as part of the objective function. The formulation is:

$$\hat{\psi} = \underset{\psi}{\arg\min}\,\left[\, C_{\mathrm{data}}(\psi) + \mu\, C_{\mathrm{smooth}}(\psi) \,\right] \qquad [2]$$

where μ is a parameter that determines the relative weight of the data and smoothness costs. Example plots of all three components are given in Figure 1, where the total cost function is seen to have a global minimum. Substituting the data and smoothness cost terms into Eq. [2], ψ can be obtained from:

$$\hat{\psi}_j = \underset{\psi_j}{\arg\min}\,\left[\, \left\lVert S_{\mathrm{signal}} - S_{\mathrm{guess}}(\psi_j) \right\rVert^2 + \mu \sum_{k \in N} \frac{(\psi_j - \psi_k)^2}{w_{jk}} \,\right] \qquad [3]$$

where Sguess is the signal calculated from equation 1 given the current guess for ψ, Ssignal is the acquired data, N is the set of pixels in the 5 × 5 neighborhood around the current pixel of interest, and w is the Euclidean distance between the current pixel of interest and the pixel in the neighborhood. The value of μ can be calculated as follows:
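To make Eq. [3] concrete, the sketch below performs a single ICM update at one pixel, combining the precomputed data cost with the neighborhood smoothness term. Treating w as an inverse weight on the squared differences (so that nearer neighbors count more) is our reading of the formulation, and the function signature is ours.

```python
import numpy as np

def icm_pixel_update(psi, data_cost_pix, psi_grid, y, x, mu, radius=2):
    """One ICM update at pixel (y, x): minimize Eq. [3] over the candidate grid.

    psi           : current field map estimate in Hz, shape (ny, nx)
    data_cost_pix : precomputed data cost at this pixel, shape (n_psi,)
    psi_grid      : discretized candidate field map values in Hz, shape (n_psi,)
    mu            : data/smoothness weighting for this pixel
    radius=2 gives the 5 x 5 neighborhood used in this implementation.
    """
    ny, nx = psi.shape
    smooth = np.zeros_like(psi_grid)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            yy, xx = y + dy, x + dx
            if 0 <= yy < ny and 0 <= xx < nx:   # in practice, unmasked neighbors would also be skipped
                w = np.hypot(dy, dx)            # Euclidean distance to the neighbor
                smooth += (psi_grid - psi[yy, xx]) ** 2 / w
    total = data_cost_pix + mu * smooth         # total cost of Fig. 1
    return psi_grid[np.argmin(total)]           # new field map guess for this pixel
```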

[Equation 4: expression for μ in terms of the variance of the solution, σsolution]

However, because σsolution is not known before the solution is computed, we used the following approximation for μ, which is computed for every pixel, to ensure that the data cost and smoothness cost have similar magnitudes:

[Equation 5: per-pixel approximation of μ that balances the magnitudes of the data and smoothness costs]

The data cost term described above is similar to the objective function used in variations of the IDEAL algorithm (8, 10). The minimum of the total cost function roughly corresponds to the data cost minimum that is closest in value to the surrounding values. When this is done iteratively over the entire image, using the ICM method (11), an overall solution can be found that is free of fat/water swaps.
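Because the data cost resembles an IDEAL-type least-squares objective, one way to precompute it over the discretized field map grid (step 3 of Table 1) is sketched below. The VARPRO-style residual, array shapes, and names are our assumptions rather than details taken from the paper.

```python
import numpy as np

def precompute_data_cost(signal, t, psi_grid, delta_f, mask):
    """Data cost at every masked pixel for every candidate field map value.

    signal   : acquired echoes, shape (n_echoes, ny, nx), complex
    t        : effective echo times in seconds, shape (n_echoes,)
    psi_grid : discretized candidate field map values in Hz, shape (n_psi,)
    mask     : boolean tissue mask, shape (ny, nx)
    Returns an array of shape (n_psi, ny, nx); unmasked pixels are left at +inf.
    """
    t = np.asarray(t, dtype=float)
    cost = np.full((len(psi_grid),) + mask.shape, np.inf)
    # Chemical-species basis (water, fat) before the field term is applied.
    B = np.stack([np.ones_like(t), np.exp(2j * np.pi * delta_f * t)], axis=1)
    s = signal[:, mask]                                  # (n_echoes, n_masked)
    for k, psi in enumerate(psi_grid):
        A = np.exp(2j * np.pi * psi * t)[:, None] * B    # field term applied to the basis
        rho, *_ = np.linalg.lstsq(A, s, rcond=None)      # best water/fat fit for this psi
        residual = s - A @ rho
        cost[k, mask] = np.sum(np.abs(residual) ** 2, axis=0)
    return cost
```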

Starting with this MRF-based formulation, we added several new features to attain a high level of robustness and speed, as described in the next sections.

Iterated Conditional Modes

We used an Iterated Conditional Modes-based algorithm to solve the MRF-based formulation of the 3-point Dixon problem. The goal of the algorithm is to use the Markov Random Field (MRF) framework to generate smooth field maps (11).

The ICM process for each image starts with a constant initial guess for every pixel in the field map. Rather than setting the smoothing parameter, μ, in advance, we update it for every pixel in every iteration when using the strategy in Eq. [5]. If using the strategy in Eq. [4], and if an approximate solution is available (e.g., in the multiresolution image pyramid, described below), we compute μ for each estimation process, with the range clipped to prevent extreme values. This is advantageous because μ has fairly large effects on the convergence of the ICM algorithm: if it is too small, the estimation can converge to an incorrect solution, whereas if it is too large, the estimation process slows down, and/or creates an overly smooth field map estimate.

Once the field map is calculated, the fat and water decompositions, ρfat and ρwater, can be calculated using a Moore-Penrose pseudoinverse, as has been done previously (3, 8). The speed of our algorithm was enhanced through the combined use of three image processing strategies: multiresolution image pyramid, stability tracking, and image masking. The algorithm was made more robust for volume data through the use of our slice-to-slice extrapolation initialization method. A detailed description of each of these strategies follows.
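A minimal sketch of this final decomposition step, for a single pixel, is given below; the Moore-Penrose pseudoinverse is taken with numpy, and the names and conventions are ours.

```python
import numpy as np

def decompose_fat_water(signal_pix, t, psi, delta_f):
    """Recover complex water and fat densities at one pixel, given the fitted
    field map value psi (Hz), by a pseudoinverse (least-squares) fit.

    signal_pix : complex echo samples at this pixel, shape (n_echoes,)
    t          : effective echo times in seconds, shape (n_echoes,)
    """
    t = np.asarray(t, dtype=float)
    field = np.exp(2j * np.pi * psi * t)
    A = np.stack([field, field * np.exp(2j * np.pi * delta_f * t)], axis=1)
    rho_water, rho_fat = np.linalg.pinv(A) @ signal_pix
    return rho_water, rho_fat
```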

Multiresolution Image Pyramid

We used multiresolution image pyramids (MRIPs) (12, 13) to improve computational efficiency and robustness to noise of our estimation process. In MRIP, low resolution solutions are used as initial guesses for higher resolution images until the estimation is performed at the highest resolution (Fig. 2). We used a three layer MRIP, where each layer was downsampled by a factor of 4 in each of the x and y directions from the previous layer. Thus, the lowest resolution layer contained 1/256 the number of pixels of the original resolution. Downsampling was done by decimation in image space, where every fourth pixel of the original resolution image was retained in the downsampled image, with no filtering. Upsampling of the (real-valued) field map solutions was achieved by bicubic interpolation from the nearest four-by-four neighborhood.
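The pyramid operations amount to plain decimation on the way down and interpolation of the real-valued field map on the way up. A minimal sketch is shown below; we use scipy.ndimage.zoom with cubic interpolation as a stand-in for the bicubic upsampling described above, and the helper names are ours.

```python
import numpy as np
from scipy.ndimage import zoom

def build_pyramid(image, n_levels=3, factor=4):
    """Downsample by decimation (no filtering): keep every `factor`-th pixel."""
    levels = [image]
    for _ in range(n_levels - 1):
        levels.append(levels[-1][::factor, ::factor])
    return levels                    # levels[0] is full resolution, levels[-1] is coarsest

def upsample_field_map(psi_coarse, target_shape):
    """Interpolate a coarse, real-valued field map up to the next resolution."""
    zy = target_shape[0] / psi_coarse.shape[0]
    zx = target_shape[1] / psi_coarse.shape[1]
    return zoom(psi_coarse, (zy, zx), order=3)   # order=3: cubic interpolation
```

The estimation would then run on the coarsest level first, and each solved field map would be upsampled and used as the initial guess for the next finer level.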

Figure 2.

Multiresolution image pyramid. The multiresolution image pyramid scheme successively downsamples the source image several times. The estimation is then performed on the lowest resolution image, the solution to which is upsampled and used as an initial guess for the next resolution. This is repeated until a solution is found to the original, full resolution image. In this study, we use three total layers for the image pyramid (low resolution, intermediate resolution, and full resolution). Each layer was downsampled to a quarter the size of the previous layer in each dimension.

Stability Tracking

Our “stability tracking” scheme skips pixels that have not changed in the previous few iterations. That is, we determine whether the field map estimate of the current pixel, or any of its 24 neighboring pixels, changed in the course of the preceding iteration. If a change has occurred, we recalculate the smoothness cost and pick a new estimate for the field map. If no change has occurred, we skip the pixel. In Figure 3, we show maps of stable pixels through several early iterations.

Figure 3.

Stability map. The stability tracking scheme allows the algorithm to skip pixels that are not changing. Pixels that are stable between two iterations are labeled with a “1” (white), and are subsequently skipped. Pixels that do change, along with all of their neighboring pixels (we used a 5 × 5 neighborhood in our implementation), are labeled with “0” (black), and are re-calculated in the next iteration. The maps above show that in a typical example, the large majority of pixels stop changing by iteration number 7, even though complete convergence requires many more iterations (approximately 100, depending on the stopping criterion).

The field map value does not need to be re-estimated until a future iteration where one of the neighborhood pixels does change. Because we precompute and store the data costs, they do not change during the estimation process. If none of the neighboring estimates change, the smoothness cost also remains constant between iterations. Therefore, the total cost will not change from the previous iteration, and the field map estimate does not need to be updated.
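The bookkeeping behind this scheme can be sketched as follows, assuming the field map estimates from the previous and current iterations are kept; the 5 × 5 dilation mirrors the neighborhood used by the smoothness cost.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def pixels_to_update(psi_new, psi_old, mask, neighborhood=5):
    """Return a boolean map of pixels that must be revisited in the next iteration."""
    # Exact comparison is safe here: estimates come from a discrete candidate grid.
    changed = (psi_new != psi_old) & mask
    # A change at one pixel invalidates the smoothness cost of its whole neighborhood.
    struct = np.ones((neighborhood, neighborhood), dtype=bool)
    return binary_dilation(changed, structure=struct) & mask
```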

Image Masking

We masked out air pixels (Fig. 4) to allow skipping of pixels that contain only noise. This reduces the number of pixels needing computation in each iteration. We prioritized inclusion of all pixels with data over exclusion of air pixels by conservatively including all tissue regions plus intervening air regions. Our masking procedure was tested by overlaying several slices from several three-dimensional (3D) data sets, and was shown to correctly include animal data voxels while excluding almost all of the air voxels.

Figure 4.

Masking. Image masking was done to speed up the calculation by skipping voxels that contained only air (i.e., noise). The mask was generated by morphologically dilating the raw image acquired with the shortest echo asymmetry. This dilated image was then thresholded at half the value calculated by Otsu's method. After the holes in the thresholded image were filled in, the mask was morphologically eroded to produce an almost exact fit over the input image, as can be seen in the overlay image above. Similar masking methods have been used in the past (16). The masking method was verified with several 3D volume data sets.
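A sketch of this masking procedure, using scipy and scikit-image, is given below; the 5 × 5 structuring element is our assumption, since the exact morphological parameters are not stated.

```python
import numpy as np
from scipy.ndimage import grey_dilation, binary_fill_holes, binary_erosion
from skimage.filters import threshold_otsu

def make_tissue_mask(mag_shortest_echo, size=5):
    """Mask of animal tissue from the magnitude image with the shortest echo asymmetry."""
    dilated = grey_dilation(mag_shortest_echo, size=(size, size))  # bridge small gaps in tissue
    mask = dilated > 0.5 * threshold_otsu(dilated)                 # half the Otsu threshold
    mask = binary_fill_holes(mask)                                 # include interior low-signal regions
    footprint = np.ones((size, size), dtype=bool)
    return binary_erosion(mask, structure=footprint)               # undo the growth from dilation
```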

Extrapolation Initialization for 3D Data

Due to multiple local minima of the total cost function, the ICM process is still somewhat sensitive to initial guess, and a zero uniform guess only worked on some images. We tried to leverage smoothness of the field map across slices by using a 3D smoothness cost, but that led to unacceptable computation times. We also tried to use the median of a neighboring slice as an initial guess, but this yielded only a minor improvement over a zero initial guess.

We used an extrapolation initialization method to generate proper initial guesses when processing a 3D volume data set. First, the starting slice was chosen manually by searching for a slice close to the center of the data set, with minimal artifacts and minimal air spaces inside the body. In our experience, very little training is required to pick a good starting slice. Once the first slice was solved, its median field map value was used to initialize a neighboring slice. After solving the second slice, a line was fit through the median field map values to initialize the remaining slices in the image volume. A new line was fit after each slice was solved.
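In code, the extrapolation initialization reduces to a running linear fit through the per-slice median field map values; the sketch below uses our own data structure and names, and in practice the median would be taken over masked tissue pixels only.

```python
import numpy as np

def extrapolated_initial_guess(solved_slices, target_index):
    """Constant initial field map guess (Hz) for the next slice to be solved.

    solved_slices : dict mapping slice index -> solved 2D field map (Hz)
    target_index  : index of the slice to initialize
    """
    indices = np.array(sorted(solved_slices))
    medians = np.array([np.median(solved_slices[i]) for i in indices])
    if len(indices) == 1:                            # only the starting slice is solved so far
        return float(medians[0])
    slope, intercept = np.polyfit(indices, medians, 1)   # refit after every solved slice
    return float(slope * target_index + intercept)
```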

EXPERIMENTAL METHODS

All source images were acquired on a Bruker BioSpec 7T/30 cm system using an asymmetric RARE protocol with a RARE factor of 4. Three effective echo times were used, corresponding to fat–water phase differences of π/6, 5π/6, and 3π/2. At 7T, the effective echo times, defined here as the asymmetry time between the Hahn echo and the gradient echo (6), were 7.9288e-05, 3.9644e-04, and 7.1360e-04 s, with a TR of 1 s and an actual TE of 9.1 ms. The slice thickness was 1.3 mm, with an in-plane resolution of 0.2 mm × 0.2 mm in a 512 × 256 matrix. All of the analysis was performed in MATLAB (The MathWorks, Inc., Natick, MA) on a 3.2 GHz computer with 8 GB of RAM (iBUYPOWER, Inc., Los Angeles, CA).

The parameters used in the algorithm were chosen empirically to optimize, first, proper convergence and, second, the efficiency of convergence.

Comparison of Results to Existing Algorithms

We compared the results of FLAWLESS with those of the method of Hernando et al, which has been shown to be more robust to field inhomogeneity than other previous methods (3). For simplicity, we treat this method as the gold standard, although in reality it is impossible in some instances to know which of the two methods is actually correct, because most differences occur in border regions. Our extrapolation initialization method was used, out of necessity, to generate initial guesses for both methods and prevent large fat/water swaps.

The lipid fraction, L, was calculated for both methods at every pixel (14):

$$L = \frac{\left|\rho_{\mathrm{fat}}\right|}{\left|\rho_{\mathrm{fat}}\right| + \left|\rho_{\mathrm{water}}\right|} \qquad [6]$$

The two lipid fraction maps were then subtracted from one another, and the resulting errors were plotted in histograms.
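A sketch of this comparison is shown below; forming the fraction from the magnitudes of the complex density estimates, and plotting with matplotlib, are our choices.

```python
import numpy as np
import matplotlib.pyplot as plt

def lipid_fraction(rho_fat, rho_water):
    """Pixel-wise lipid fraction (Eq. [6]) from the density estimates."""
    return np.abs(rho_fat) / (np.abs(rho_fat) + np.abs(rho_water))

def plot_lipid_fraction_difference(rho_fat_a, rho_water_a, rho_fat_b, rho_water_b, mask):
    """Histogram of the lipid fraction difference between two methods, masked pixels only."""
    diff = lipid_fraction(rho_fat_a, rho_water_a) - lipid_fraction(rho_fat_b, rho_water_b)
    vals = diff[mask]
    plt.hist(vals[np.isfinite(vals)], bins=100)   # drop empty (0/0) pixels inside the mask
    plt.xlabel("lipid fraction difference")
    plt.ylabel("pixel count")
    plt.show()
    return diff
```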

Several 3D volume mouse data sets, with both high and low body fat content, were used in the comparison. A single volume was chosen for convenience of analysis, but similar results were found with all of the data sets. All of the data sets were processed with the extrapolation initialization method, which will be discussed in further detail, to improve proper convergence.

RESULTS

Figure 5 compares three methods: FLAWLESS, Hernando's method, and IDEAL. Although FLAWLESS requires slightly more computation time than IDEAL, it provides a more reliable decomposition. Even when using extrapolation initialization, every one of our test images resulted in visually apparent, large swapped areas with IDEAL, but not FLAWLESS. A comparison of processing times required for a 3D animal test data set can be found in Figure 6. For this 23-slice high body fat mouse data set, the Hernando algorithm required a total of approximately 45 h of processing time, or almost 2 h per slice. Combinations of the acceleration strategies yielded a further savings in computational time, indicating that these strategies are synergistic. By combining all three of the acceleration strategies, we were able to correctly process the 3D test volume in approximately 2700 s (approximately 45 min), or a mean of 2 min per slice. However, as can be seen from Figure 5, the results of FLAWLESS are visually indistinguishable from Hernando's method. Similar trends were observed for all of the data sets we processed.

Figure 5.

Comparison of methods using lipid fraction maps. Typical lipid fraction maps are shown for three methods. With the same source data, FLAWLESS produced a lipid fraction map that is not visually differentiable from that produced by the Hernando algorithm in approximately 1/30 of the time. Quantitatively, only 1.7% of data pixels had more than a 1% lipid fraction difference. The method presented here does require slightly longer computation time than an IDEAL-like method, but produces a much more reliable result.

Figure 6.

Speed comparison on a logarithmic scale. A single 3D volume was used to compare mean convergence times per slice. The Hernando algorithm required a mean of 7048 s per slice to terminate. The pyramid strategy required 6459 s. The masking strategy required 1231 s. With stability tracking, convergence required 228 s. Combining stability tracking and masking reduced processing time to 155 s, and combining all three methods (i.e., FLAWLESS) required 117 s. Overall, this represents a reduction in computation time from approximately 2 days to approximately 45 min for a 3D whole-mouse data set.

Comparison of Results

In Figure 7a, a histogram of the error of FLAWLESS relative to Hernando's method is shown. Some minor differences were found between the two methods, but most pixels were not significantly different. The noise-to-signal ratio of the source images was approximately 0.08, suggesting that noise, rather than true disagreement between the algorithms, accounts for much of the minor variation in lipid fraction seen here.

Figure 7.

Verification of solutions. A single slice, from an animal data set, was chosen such that both Hernando's method and our method contained no visually identifiable mistakes. Then, the two lipid fractions for the two methods were subtracted from one another. A map of the errors, along with a histogram, is plotted (a), excluding the noise-only pixels. Some minor variation exists between the two results (mean of absolute difference was 0.01, standard deviation of the difference was 0.05, 48% of pixels had exactly zero difference, 99% of pixels had less than a 2.5% lipid fraction difference). To further verify the accuracy of FLAWLESS, the same map and histogram were made for an oil–water phantom (b). The lipid fractions calculated by the two algorithms are almost indistinguishable from one another (mean of absolute difference was 0.002, standard deviation of the difference was 0.007, 79% of pixels had exactly zero difference, 99% of pixels had less than a 3% ratio difference).

To further validate our algorithm, the same analysis was performed on an oil–water phantom data set (Fig. 7b). The variation between the two algorithms is even smaller here. When averaged over a region of interest in the adipose tissue, the two estimates differed by a mean of 0.00096, and each method showed a standard deviation of approximately 0.023 over the same adipose tissue. Approximately 1% of pixels in the animal data set yielded significantly different results; however, in these situations it was difficult to know which of the methods actually yielded the correct answer. Several images were analyzed at this level of detail, and one is presented here as a representative result; many image volumes were processed and visually confirmed. All image volumes were processed with our extrapolation initialization method, without which neither method yielded correct results. The conclusion from this analysis is that FLAWLESS and Hernando's algorithm give largely the same results, with some minor differences.

Choice of Parameters

We determined that the range of possible field map frequencies should be two full periods of the cost function (see Fig. 1), centered at the initial guess. That is: fmax = InitialGuess + 1/ΔTE, and fmin = InitialGuess − 1/ΔTE. A total of 300 discretization points was sufficient to limit discretization error and provide reasonable convergence times.

We found a stopping criterion for the ICM process of 0.008 Hz per masked pixel to be reasonable. For example, if the slice contained 1000 data pixels, the stopping criterion was 8 Hz, meaning that the ICM process terminated once the total change between two iterations fell below 8 Hz. The exact per-pixel threshold does not greatly change the results; smaller values lead to longer computational times, but give solutions that have been allowed to converge further.
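These two parameter choices can be expressed as the following small helpers (the names are ours).

```python
import numpy as np

def make_psi_grid(initial_guess, delta_te, n_points=300):
    """Discretized field map candidates: two full periods of the data cost,
    centered on the initial guess (from InitialGuess - 1/dTE to InitialGuess + 1/dTE)."""
    return np.linspace(initial_guess - 1.0 / delta_te,
                       initial_guess + 1.0 / delta_te, n_points)

def should_stop(total_change_hz, n_masked_pixels, tol_per_pixel=0.008):
    """ICM stopping rule: stop once the summed field map change in an iteration
    falls below tol_per_pixel times the number of masked pixels."""
    return total_change_hz < tol_per_pixel * n_masked_pixels
```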

Improvement Due to Multiresolution Image Pyramid

Plots in Figure 8 show that when the MRIP is used, the first few iterations at full resolution change approximately an order of magnitude less than the first few iterations when MRIP is not used. Furthermore, many more iterations are needed to reach the termination criterion when the MRIP technique is not used. This is because, as illustrated by Figure 9, performing the estimation on low resolution images allows the full resolution estimation to start with a partially converged initial guess. Because the low resolution estimates take very small amounts of time, this results in an overall reduction in processing time.

Figure 8.

Convergence of ICM. This plot shows the total change in the field map estimate as a function of ICM iteration. The total change was calculated by first subtracting, pixel-by-pixel, the estimate of the previous iteration from the estimate of the current iteration. The absolute value of these differences was then summed over all of the pixels. The solid line shows the total change per iteration when the MRIP strategy was not used, whereas the dashed line shows the total change per iteration, at full resolution, when the MRIP strategy was used. The total change in the first few iterations is an order of magnitude higher when the MRIP strategy is not used. As a result, many more iterations are necessary to reach convergence. Additionally, because the change per iteration is less with MRIP, the combination with Stability Tracking creates a synergistic advantage.

Figure 9.

Why MRIP speeds estimation. The multiresolution image pyramid technique provides the full resolution estimation with a partially converged starting point, rather than a constant initial guess. For example, the hole in the field map that corresponds to the kidney (white arrow, top row) requires many iterations of ICM to fill (filling can only occur one neighborhood at a time). However, when MRIP is used, this hole (white arrow, bottom row) is already filled in when the ICM process starts on the full resolution image. Because the low resolution estimations are much faster (1 s at the lowest resolution and 7 s at the middle resolution) than the full resolution estimation, the MRIP yields approximately a 50% total savings in computational time in this particular case.

Extrapolation Initialization

A zero initial guess did not lead to a correct decomposition for many of the slices in each volume data set. However, a linear extrapolation initialization scheme (Fig. 10) was able to resolve all complete-swap and partial-swap problems, except some small partial swaps around the edges of the animal; neither ICM alone nor extrapolation initialization alone achieved this. A linear pattern was found in all of our data sets, so higher order polynomials were not needed for our data, although the algorithm can easily be expanded to use them.

Figure 10.

Extrapolation initialization. We found that ICM was sensitive to the initial guess. When a zero initial guess was used, one of two types of problems was commonly found: complete swaps of fat and water (b), where the estimates were internally consistent but appeared in the opposite parameter maps, and partial swaps, where parts of the parameter maps were correctly estimated and parts were not (c). a: The median field map values are plotted against slice number. The dashed line shows the line of best fit through these median values, which was used to generate initial guesses. First, a starting slice was chosen and solved (with a uniform zero initialization), and the median value of that slice was used as a constant initial guess for a neighboring second slice. Once the second slice had been solved, a line of best fit could be found for the median field map values of the two slices. The constant initial guess for the third slice was then found by linear extrapolation to that slice. This process was repeated for every slice, with a new line fit each time a slice was solved and its median could be added to the set. Using this method, both the complete (d) and partial (e) swaps were avoided in several 3D data sets.

DISCUSSION

Imaging methods are desirable for the study of metabolic derangements such as hepatic steatosis, both in clinical and research settings, because they are noninvasive and permit evaluation of entire organs. Dixon-type imaging is very sensitive to fatty infiltration of the liver, but is currently too expensive and analytically complex for routine use (1). We have shown that using FLAWLESS, we can quickly and accurately decompose fat and water in three dimensional data sets in the presence of large field inhomogeneity using image masking, stability tracking, and multiresolution image pyramids. This will contribute to the utility of Dixon methods, especially at high field strengths, where we can take advantage of better image quality (9).

Our method outperformed other 3-point Dixon solutions (Fig. 5). Methods that do not use strong measures of spatial smoothness, such as IDEAL (8), converge to their solutions faster than FLAWLESS, but yield incorrect solutions containing large swapping errors, even when using our extrapolation initialization method. The large field inhomogeneity range over each image causes different portions of the image to converge to different local minima of the cost function; as a result, any constant initial guess would lead to swapping errors. Our MRF-based approach can handle large field inhomogeneities, as long as the field is smoothly varying, while not requiring detection of a correctly decomposed seed pixel, as is needed in region growing algorithms. Other MRF methods, such as Hernando's VARPRO-ICM algorithm (3), yield good results that are very similar to the solution calculated by FLAWLESS, but require an order of magnitude more time to converge. Of course, source images with poor acquisition quality, such as ones corrupted by motion artifacts, are difficult to process with any decomposition algorithm. FLAWLESS can be programmed in a compiled language, such as C, for additional speed, but the patterns found in this manuscript would remain the same. Furthermore, the technique can use graphics card computing, as has been done with other methods (15), to achieve clinically practical processing times.

We found that at 7T, chemical decomposition required long processing times with previous methods to account for the greater field inhomogeneity at this higher field strength. FLAWLESS allowed processing of the 3-point Dixon MRF formulation in approximately 1/60 of the time for a 3D mouse volume, which meant a decrease in the mean processing time for a single 512 × 256 slice from approximately 2 h to approximately 2 min. This, we believe, will push the method toward practical applicability in processing rodent data. It is important to note, however, that although this work was performed using rodent data, the same principles can be applied to human data, especially when processing human data from scanners of similar field strengths.

We found the ICM procedure to be fairly sensitive to μ, the weighting parameter between the data and smoothness costs. Because only about an order of magnitude of flexibility is available in picking this parameter, we recompute it for each slice or pixel. The algorithm was very insensitive to changes in δf.

MRIP, when applied to the field map inhomogeneity problem, gives FLAWLESS several advantages over previous ways of eliciting field map smoothness. First, because we are using an MRF framework, we see no dependence on the choice of a starting location within each slice, in contrast to methods that make use of region growing. Second, downsampling automatically creates field map smoothness between larger regions of the image without forcing this smoothness to appear as a blur in the final resolution image. Third, using the MRF framework in addition to MRIP actually enforces some degree of local smoothness, rather than hoping that downsampling alone will achieve it. Finally, MRIP also has an added benefit in terms of processing time. While stability tracking was very useful for decreasing processing time, it does not have any of these additional advantages. Therefore, combining our three implementation features ensures the highest quality decomposition in the least amount of time.

In light of the recent publication of Reeder et al (14) on the importance of spectral modeling for accurate quantification, our algorithm can be expanded to multiple metabolites by simply adding terms that correspond to each metabolite to Eq. [1], and by acquiring an extra echo for each of these metabolites. The quantitative fat and water concentrations (along with their ratio) presented in this study are affected by this spectral modeling error, but because this can be easily fixed, and because this was not a quantitative biological study, the full spectral model was not considered here. T2* was not considered because the use of asymmetric spin echoes minimizes its effect.

Finally, the techniques presented in this study are somewhat applicable to other algorithms for fat/water separation. Masking is a very general technique that can be applied to any algorithm that works on a pixel-by-pixel basis in image space. MRIP can be used with similar image domain algorithms, and has been used in combination with the IDEAL algorithm (4). Stability tracking is useful in any algorithm that iterates over an entire image; although most currently available algorithms of this kind are MRF-based, like FLAWLESS, stability tracking should be viewed as an option in the development of future algorithms. Finally, extrapolation initialization, in our experience, can be used to improve the decompositions from many different image domain algorithms, but only in concert with the MRF formulation were we able to handle the field inhomogeneities seen at 7T.

Extrapolation Initialization Procedure

We found that the extrapolation initialization method successfully eliminated complete swaps of fat and water, and greatly reduced the number of partial swaps when processing our 3D 7T data. This is an efficient way of including three dimensional smoothness information in the estimation process for volume data. Extending the MRF neighborhood into three dimensions greatly increases the computational cost of the estimation process, whereas simply adjusting the initial guess, combined with the use of the MRIP, was sufficient to largely prevent the two types of swapping errors discussed above.

The success of the extrapolation initialization procedure depends on the proper convergence of the manually chosen first slice. In the data sets we have tested, correct convergence of the first slice enables convergence of the rest of the image volume. However, the choice of starting slice remains an ambiguity in the process. We choose a slice close to the center of the image volume with no motion artifact, a mix of fat and water, and as little air as possible. For such initial slices, we get swapping errors approximately a quarter of the time; in that event, we simply restart the process on a different slice. We explored, with limited success, re-estimation of the first two slices after the entire 3D volume had been processed. This allows the entire volume to be used for generating new initial guesses for these slices, but in our experience, although an incorrect starting slice often does not propagate its error to other slices, we cannot fix the starting slice using this simple approach.

Our implementation allows higher-order polynomials to be used for extrapolation, which might be desirable on another scanner with different shimming. However, we did not find higher order polynomial fits to be necessary in our data, which were acquired on the same scanner at multiple times on multiple days; higher order terms had very small coefficients compared with the zeroth and first order terms. Alternatively, we could restrict the linear extrapolation to a few surrounding slices, which would allow us to approximate linearity over a small portion of the animal if a nonlinear fit were required by the data.

Acknowledgements

S.N. was supported in part by the National Institute of Diabetes and Digestive and Kidney Diseases, and in part by an NIH grant to the Case MSTP from the National Institute of General Medical Sciences. G.-Q.Z. was supported by the NIBIB. This work was supported in part by an Ohio Biomedical Research and Technology Transfer award, “The Biomedical Structure, Functional and Molecular Imaging Enterprise.” This investigation was conducted in a facility constructed with support from a Research Facilities Improvement Program Grant from the National Center for Research Resources, National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of General Medical Sciences, NIBIB, the National Center for Research Resources, or the National Institutes of Health.
