TensorFit: A torch‐based tool for ultrafast metabolite fitting of large MRSI data sets

To introduce a tool (TensorFit) for ultrafast and robust metabolite fitting of MRSI data based on Torch's auto‐differentiation and optimization framework.

One promising approach is using a 3D echo-planar spectroscopic imaging (EPSI) sequence, [13][14][15] which improves spatial resolution and reduces acquisition time by simultaneously acquiring multiple spectra over larger volumes. However, an increase in the number of acquired spectra results in longer processing times. Whole-brain acquisitions can yield up to 131 072 spectra for a 64 × 64 × 32 3D-MRSI data set, as is the case for ECCENTRIC, 16 resulting in metabolite fitting times of up to hours on dedicated servers.
There are several implementations available for fitting MRS data, such as LCModel, 17 TARQUIN, 18 QUEST, 19 and TDFDFit, 20 each differing in fitting domain, minimization algorithm, and spectral model used. Existing software implementations of these methods lack GPU support, and some even lack CPU parallelization. Although this may not be a major issue for sequences involving only a small number of spectra, such as single-voxel spectroscopy or 2D MRSI, it becomes a serious bottleneck when analyzing data from high-resolution MRS methods like EPSI. This is particularly problematic in a clinical setting where timely results are essential. Therefore, new time-efficient fitting tools are needed to accommodate the requirements of large data-acquisition methods.
A prior study explored the use of GPUs for rapid metabolite fitting. 21 However, this implementation was limited to metabolites with Gaussian lineshapes and lacked the incorporation of prior knowledge and the use of simulated metabolite bases. Deep learning (DL) methods have also been proposed for metabolite quantification. [23][24][25][26] These DL methods, although fast, have the drawback of being biased by the data set used for training. 27 Furthermore, DL methods have difficulty generalizing across quantification models, sequences, and hardware specifications (e.g., field strength, scanner manufacturer, TE, TR).
Despite all these challenges, the principles and efficient tools developed for error minimization in the field of DL can be applied to efficient curve fitting. Here, we propose using auto-differentiation tools from a DL framework (Torch 28 ), not to train a neural network that generalizes across multiple spectra, but to perform highly efficient fitting on every individual spectrum. Although similar concepts have recently been applied to curve fitting in other fields, [29][30][31] to the best of our knowledge, this approach has not previously been applied to metabolite fitting in MRSI.
In this work, we develop an ultrafast fitting tool based on a linear combination of basis metabolites. We assess its precision and time efficiency compared with other linear combination methods, such as QUEST and TDFDFit, on simulated and in vivo data. This work lays the foundation for faster spectral fitting implementations that will boost the use of large MRSI recordings in a clinical setting.

Fitting model
The general model representing the time response of the spectroscopy signal s(t) can be written as

$$s(t) = \rho(t) + b(t) + e(t), \tag{1}$$

where $\rho(t)$ is a linear combination of each metabolite response in the time domain (TD); $e(t)$ represents Gaussian noise over the signal; and $b(t)$ corresponds to the macromolecular baseline. The metabolite response $\rho(t)$ is defined as a linear combination of Voigt lineshapes, 20 such as

$$\rho(t) = \sum_{m} A_m B_m(t)\, e^{-t/T_{2,m}}\, e^{-t^2/T_{G,m}^2}\, e^{i(2\pi \omega_m t + \varphi_m)}, \tag{2}$$

where $A_m$, $T_{2,m}$, $T_{G,m}$, $\omega_m$, and $\varphi_m$ represent the area, Lorentzian damping, Gaussian damping, frequency shift, and zero-order phase for each metabolite m, respectively. The value of $B_m(t)$ is the amplitude-normalized quantum-mechanically simulated time response, including relaxation and J-modulation effects. The simulation corresponds to the acquisition sequence used and to the particular metabolite characteristics, and the time t is a discrete vector with the N sampling times. The Cramér-Rao Lower Bound (CRLB) defines the lowest theoretical error in an unbiased parameter estimation. It has been shown that minimizing the number of free parameters in the model reduces estimation errors. 32,33 This can be accomplished by incorporating prior knowledge, which means linking the parameter values to be estimated as follows 20,34,35 :

$$\omega_m = \omega_c + \Delta\omega_m,\qquad T_{2,m}^{-1} = T_{2,c}^{-1} + \Delta T_{2,m}^{-1},\qquad \varphi_m = \varphi_c + \Delta\varphi_m, \tag{3}$$

where $T_{2,c}^{-1}$, $T_{G,c}^{-1}$, $\omega_c$, and $\varphi_c$ are common free parameters, as $B_m(t)$ handles individual differences in $T_2$, $T_G$, $\omega$, and $\varphi$.
On the other hand, $\Delta\omega_m$, $\Delta T_{2,m}^{-1}$, and $\Delta\varphi_m$ are values that define the prior knowledge and remain constant during the model fitting process. These parameters are necessary for fine-tuning the simulation basis set. Applying the relations defined in Eq. (3) to Eq. (2), we obtain

$$\rho(t) = e^{-t/T_{2,c}}\, e^{-t^2/T_{G,c}^2}\, e^{i(2\pi\omega_c t + \varphi_c)} \sum_{m} A_m B_m(t)\, e^{-t\,\Delta T_{2,m}^{-1}}\, e^{i(2\pi\Delta\omega_m t + \Delta\varphi_m)}. \tag{4}$$

The incorporation of prior knowledge not only reduces the minimal errors in the estimated parameters but also increases the tool's robustness and computational speed. In this work, prior knowledge is defined using the spectrIm-QRMS 36 spectra modeling functionality, which provides a graphical user interface for setting the model and the relationships between metabolites (available at https://spectrim.diskstation.me/spectrImWeb). Using the same graphical interface, the user defines the initial amplitude for each metabolite. The fitting model and initial values were defined from an average spectrum of a clinical data set of 256 spectra obtained with the same sequence type and acquisition protocol. This approach guarantees a good estimate of the starting parameters. These starting values were used in all compared fitting methods. Additional details of both prior-knowledge models applied are presented in Tables S1 and S2.
The fitting in TensorFit can be performed in the frequency domain (FD) as well as in the time domain (TD) by computing the fast Fourier transform (FFT) of s(t), as follows:

$$S(\nu) = \mathrm{FFT}\{s(t)\}. \tag{5}$$

Nonlinear least-squares fitting is the expert consensus 37 approach for metabolite fitting. It involves searching for the parameters that minimize the cost function L, as follows:

$$\hat{\theta} = \underset{\theta}{\mathrm{argmin}}\; L\left(S_\theta(\nu), y(\nu)\right), \tag{6}$$

where the vector $\theta$ contains the free parameters of the model, and $S_\theta(\nu)$ and $y(\nu)$ are the modeled and measured spectrum, respectively. State-of-the-art fitting software uses different optimization options for this problem. FitAid, 34 QUEST, and LCModel use the Levenberg-Marquardt 38 algorithm as their base optimization method. ProFit 35 and TARQUIN, on the other hand, use VARPRO 39 as their base fitting algorithm. Finally, TDFDFit 20 applies conjugate gradient descent 40 to fit the spectra. The implementation presented in this work is compared against TDFDFit and QUEST. We chose QUEST because of its performance in the ISMRM fitting challenge 2016, 41 and TDFDFit because it was available, developed in-house, and a commonly used algorithm.
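The relation between the TD model of Eq. (4) and the FD cost of Eqs. (5) and (6) can be sketched as follows. This is an illustrative single-Lorentzian example (the dwell time and line parameters are arbitrary assumptions, not values from the study):

```python
import torch

# Illustrative sketch (not the TensorFit source): evaluate a TD model,
# transform it to FD via Eq. (5), and form the least-squares cost of Eq. (6).
N = 1024                                            # number of sampling points
t = torch.arange(N, dtype=torch.float64) * 1e-3     # hypothetical 1-ms dwell time

def model_td(area, t2, omega, phase):
    """Single Lorentzian line in the time domain."""
    return area * torch.exp(-t / t2) * torch.exp(1j * (2 * torch.pi * omega * t + phase))

s_t = model_td(torch.tensor(1.0), torch.tensor(0.1),
               torch.tensor(40.0), torch.tensor(0.2))
S_f = torch.fft.fft(s_t)                            # Eq. (5): FD model spectrum

# Least-squares cost against a measured spectrum y (here synthetic: y = S_f)
y = S_f.clone()
loss = (S_f - y).abs().pow(2).mean()
```

In practice y would be the measured spectrum, and an optimizer would search the parameters of `model_td` that minimize `loss`, which is what TensorFit automates via backpropagation.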

TensorFit implementation
TensorFit was implemented using the Torch v1.13.1 framework in Python v3.10.9. Equations (4) and (5) were implemented as a Torch computational graph, as shown in Figure 1, where the free parameters ($A_m$, $T_{2,c}^{-1}$, $T_{G,c}^{-1}$, $\omega_c$, and $\varphi_c$ in Eq. (4)) are trainable tensors defined using "torch.nn.parameter.Parameter." These tensors are forward-propagated through the computational graph to compute the model response. The error between the model response and the target spectrum is backpropagated through the network to update the parameters. After this iteration, the updated parameters of the computational graph will be closer to those that minimize the error. After several iterations, the error between the response and the target spectrum reaches a minimum. We chose Torch because this framework allows us to:

• Define complex formulas as an nn.Module and take full advantage of automatic differentiation;
• Study a wide range of minimization algorithms (optimizers), including first-order and second-order methods and several strategies to avoid ending up in local minima;
• Redefine or modify the computational graph during the fitting process;
• Use efficient and optimized parallel GPU operations, making it suitable for high-dimensionality problems; and
• Use complex numbers in all the required operations, optimizers, and backpropagation algorithms.
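As an illustration of this idea, a minimal sketch of trainable parameters being fitted by backpropagation could look like the following. The single-metabolite basis, target signal, and parameter names here are hypothetical stand-ins, not the actual TensorFit code:

```python
import torch

# Minimal sketch of the TensorFit principle: free model parameters are
# trainable tensors, forward-propagated through a computational graph and
# updated by backpropagating the error (basis and target are synthetic).
torch.manual_seed(0)
t = torch.linspace(0, 1, 512)
basis = torch.exp(1j * 2 * torch.pi * 30.0 * t)   # simulated basis B(t)
target = 2.0 * basis * torch.exp(-t / 0.25)       # "measured" signal

class LineModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # trainable tensors, defined via torch.nn.parameter.Parameter
        self.area = torch.nn.parameter.Parameter(torch.tensor(1.0))
        self.inv_t2 = torch.nn.parameter.Parameter(torch.tensor(1.0))

    def forward(self):
        return self.area * basis * torch.exp(-t * self.inv_t2)

model = LineModel()
opt = torch.optim.Rprop(model.parameters(), lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = (model() - target).abs().pow(2).mean()
    loss.backward()    # backpropagate the error through the graph
    opt.step()         # update the free parameters
```

After a few hundred iterations, `area` and `inv_t2` approach the ground-truth values (2.0 and 4.0 in this synthetic example). Fitting a batch of spectra works identically, with an extra leading tensor dimension.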
To evaluate the fit quality between the (measured) spectrum to be fitted $y(k, x)$ and the modeled spectrum $S(k, x)$, with k representing the index of the spectrum in the batch, we used a loss function defined as

$$L(S', y') = \frac{1}{N_k N_x} \sum_{k=1}^{N_k} \sum_{x=1}^{N_x} \left| S'(k, x) - y'(k, x) \right|^2, \tag{7}$$

where $S'(k, x)$ and $y'(k, x)$ are the versions of $S(k, x)$ and $y(k, x)$ with the spectral amplitude offset removed. The values $N_x$ and $N_k$ are the number of points per signal and the number of spectra, respectively. Here, x represents either the time t in TD or the frequency $\nu$ in FD; both variables are limited to user-selected ranges to focus the fitting on a particular spectral region. In the underlying study, minimization was performed in FD, and the full spectral range was used.
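A batched version of this mean-squared loss can be sketched as follows (the function name, shapes, and range convention are illustrative assumptions):

```python
import torch

# Sketch of the batched loss described above, for tensors of shape
# [N_k spectra, N_x points]; S and y stand for the modeled and measured
# spectra after offset removal.
def batched_loss(S, y, x_range=None):
    if x_range is not None:                 # restrict fitting to a spectral region
        S = S[:, x_range[0]:x_range[1]]
        y = y[:, x_range[0]:x_range[1]]
    # mean squared deviation over both points (N_x) and spectra (N_k)
    return (S - y).abs().pow(2).mean()

S = torch.ones(4, 8, dtype=torch.complex64)
y = torch.zeros(4, 8, dtype=torch.complex64)
value = batched_loss(S, y)                  # 1.0 for this toy input
```

Because the mean runs over every spectrum in the batch, a single backward pass yields gradients for all spectra simultaneously, which is what makes the batched GPU fitting efficient.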

F I G U R E 1
TensorFit implementation diagram. The tensor computational graph calculates the spectral response in the frequency domain. The mean squared loss is calculated between the modeled and the target (simulated or measured) spectra. When convergence is not satisfied, the parameters are updated, and the process is repeated until convergence is reached. FFT, fast Fourier transform.
Spectral offset removal was performed by spectrum-wise subtraction of the average of the first 200 points in FD.
The fitting consists of two phases. In Phase 1, we iteratively fit a truncated model, in which the simulated basis set $B_m(t)$ is truncated to contain only the first quarter of the points. This way, the number of complex exponentials and matrix multiplications is drastically reduced, improving computation speed. During this phase, iteration stops when the loss function $L(S', y')$ improves by less than ΔL(%) = 0.01% over the last N iterations. In Phase 2, we iteratively fit the full model (i.e., without truncation). The termination criterion for this final phase is the same as before. In this work, the value of N is 20. Both ΔL(%) = 0.01% and N = 20 were a good compromise between time and accuracy on the simulated data set.
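The convergence criterion shared by both phases can be sketched as follows; the `converged` helper and the stand-in loss sequence are hypothetical illustrations, not the TensorFit source:

```python
# Sketch of the stopping rule described above: iteration ends when the loss
# improves by less than delta_pct percent over the last n_last iterations.
def converged(history, n_last=20, delta_pct=0.01):
    """True when the relative loss improvement over the last n_last
    iterations falls below delta_pct percent."""
    if len(history) <= n_last:
        return False
    old, new = history[-n_last - 1], history[-1]
    return (old - new) / old * 100.0 < delta_pct

# Demonstration with a synthetic loss curve approaching an asymptote
history = []
i = 0
while not converged(history):
    i += 1
    loss = 0.5 + 0.5 * 0.9 ** i    # stand-in for one optimizer iteration
    history.append(loss)
```

In the two-phase scheme, this check would first run on the truncated-basis fit and then again on the full-model fit, each phase stopping independently.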
Once convergence is reached, the CRLB is computed for $\theta = \theta_{\min}$ as

$$\mathrm{CRLB}(\theta) = \sqrt{\operatorname{diag}\!\left(\sigma^2 \left(\operatorname{Re}\{D^H D\}\right)^{-1}\right)}, \tag{8}$$

where $D = \partial S(\nu)/\partial \theta$ is the derivative of the model for each frequency with respect to each free parameter, and $\sigma^2$ is the variance of the Gaussian noise. The derivative is extracted directly from the computational graph after the last iteration. During minimization, hard constraints were implemented as part of the computational graph using the function "torch.clamp." This is equivalent to assigning $\theta = \min(\max(\theta_{\mathrm{input}}, l_b), u_b)$, where $l_b$ and $u_b$ are the lower and upper bounds for a given parameter $\theta$. The corresponding gradient is set to zero when $\theta_{\mathrm{input}} < l_b$ or $\theta_{\mathrm{input}} > u_b$. This allows the free parameters to move between the limits but never beyond them.
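The gradient behavior of `torch.clamp` described above can be verified in a few lines (the bounds and parameter values are arbitrary examples):

```python
import torch

# Hard constraints via torch.clamp: the clamped value stays inside [lb, ub],
# and the gradient is zero whenever the input lies outside the bounds.
lb, ub = -1.0, 1.0

theta = torch.tensor(2.5, requires_grad=True)    # outside the upper bound
out = torch.clamp(theta, lb, ub)
out.backward()
# out is clipped to ub = 1.0, and the gradient through the clamp is 0

theta2 = torch.tensor(0.3, requires_grad=True)   # inside the bounds
torch.clamp(theta2, lb, ub).backward()
# here the clamp is the identity, so the gradient is 1
```

This is why a clamped parameter can move freely between its limits during optimization but receives no update pushing it further outside.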
We used different optimizers provided by the Torch library for loss minimization. In this work, we evaluated the performance of Rprop, 42 Adam, 43 and stochastic gradient descent (GD) as first-order methods, and AdaHessian 44 and limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) 45 as second-order methods.

Simulated data set
We simulated a spectral data set to evaluate the accuracy and time efficiency of each studied tool, as this is not feasible in clinical cases due to the absence of a ground truth. The model consists of nine metabolites: choline (Cho), glutamate, lactate, glutamine (Gln), creatine (Cr), aspartate, the N-acetylaspartate ²CH₃ chemical group (NAA), myo-inositol, and the N-acetylaspartate ³CH₂ chemical group.
Each basis was simulated using NMRScopeB 46 for a 3T scanner with a semiLASER 2D-MRSI sequence, TE = 135 ms, TR = 1500 ms, and 1024 points. The Gaussian width ($T_{G,c}^{-1}$) was set to zero to assume purely Lorentzian lines and maintain comparability with QUEST, which does not fit Gaussian lineshapes.
To ensure that our simulated data set covered the range of spectral shapes and noise levels typically observed in clinical data, we randomly sampled from uniform distributions within the value ranges presented in Table 1. The area range was proportional to the corresponding starting value, whereas the frequency-shift range was relative to its initial value in the model. More details on both models used in this work (3 T and 7 T) can be found in Tables S1 and S2, respectively. Additionally, we varied the SNR of each simulated spectrum to assess the performance of the analysis techniques across a range of SNR between 5 and 50, levels commonly encountered in in vivo spectroscopy. The SNR was calculated as the maximum spectral amplitude divided by the SD of the noise in the spectral baseline, estimated from the first 200 points in FD, as follows:

$$\mathrm{SNR} = \frac{\max_\nu |y(\nu)|}{\mathrm{SD}\left(y(\nu_{1:200})\right)}.$$

We calculated the difference in frequency shift between the initial model spectrum and the measured or simulated spectrum to improve the initial seed values and increase the probability of finding an acceptable minimum. This was achieved by identifying the $\omega_c$ value that maximizes the cross-correlation between the two signals. This step increases the method's robustness and lowers the number of outliers by providing a better seed for the initial frequency shift.
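The frequency-shift seeding step can be sketched as follows, using a synthetic pair of Gaussian peaks (the peak positions, widths, and spectrum length are illustrative, not taken from the study):

```python
import numpy as np

# Sketch of the initial frequency-shift seeding: the lag that maximizes the
# cross-correlation between the model and measured spectra gives the
# starting value for the common shift.
n = 512
freq_axis = np.arange(n)
model = np.exp(-0.5 * ((freq_axis - 200) / 4.0) ** 2)     # model peak at bin 200
measured = np.exp(-0.5 * ((freq_axis - 215) / 4.0) ** 2)  # measured peak at bin 215

xcorr = np.correlate(measured, model, mode="full")
shift = np.argmax(xcorr) - (n - 1)                        # lag of maximum correlation
```

Here `shift` recovers the 15-bin displacement between the two peaks; converting bins to Hz with the spectral width would yield the $\omega_c$ seed.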

In vivo data set
For testing on in vivo data, we used a data set obtained from scans conducted on a 7T scanner (Terra; Siemens Healthineers, Erlangen, Germany) using a spectral-editing sequence named SLOW-editing 15 with TE = 68 ms, TR = 1500 ms, and FOV = 280 × 100 × 70 mm. One data set typically consists of a matrix with 65 × 42 × 10 = 27 300 spectra, from which 20 202 were discarded for being outside the brain region or in voxels near the skull with too strong lipid contamination. On the remaining 7098 voxels, B1−/B1+ correction was performed using water reference data (i.e., $S_{\mathrm{corr}} = S/\mathrm{WaterArea}$). For a detailed description of the model used to fit this data set, see Table S2.
As with the simulated data, the maximum cross-correlation was determined to correct the frequency shift between the model and the measured spectrum. Before fitting, the spectral amplitude offset was also corrected in FD, determined as the mean over the downfield part of the spectra (precisely, the leftmost 200 points).

Performance analysis

Performance of optimizers
To determine the best-performing Torch optimizer for the complexity of our problem, we performed metabolite fitting on identical simulated data for Rprop, Adam, GD, AdaHessian, and LBFGS. We used a subset of the simulated data set containing 2048 spectra. For each optimizer, we first searched for the optimal learning rate lr, which resulted in lr = 10 for GD and lr = 1 for the other four optimizers.
Considering the optimal lr of each optimizer, we computed the percentual error as a function of the number of iterations as

$$E(\%) = \frac{100}{4} \sum_{m} \frac{|A_m - A'_m|}{A'_m},$$

where $A'_m$ is the ground truth for each of the four most abundant metabolites (Cr, Cho, NAA, and Gln). The number of iterations ranged from 0 to 50.
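This error metric can be computed in a few lines; the function name and the example area values are illustrative assumptions:

```python
import numpy as np

# Sketch of the percentual-error metric assumed above: mean absolute
# percentage deviation of the fitted areas from the ground truth, averaged
# over the evaluated metabolites (e.g., Cr, Cho, NAA, Gln).
def percentual_error(fitted, truth):
    fitted = np.asarray(fitted, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return 100.0 * np.mean(np.abs(fitted - truth) / truth)

err = percentual_error([1.1, 2.0, 3.3, 0.9], [1.0, 2.0, 3.0, 1.0])  # 7.5 (%)
```

Tracking this quantity after every optimizer iteration yields the convergence curves of Figure 2.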

TensorFit validation
We evaluated the accuracy and time efficiency of TensorFit in comparison with established fitting methods, namely TDFDFit and QUEST. First, we analyzed the percentual error for a simulated data set containing 32 768 spectra, with SNR varying between 5 and 50. The spectra were fitted using the same 3T model used for the simulation. The percentual error was averaged over all spectra in the data set. When compared with QUEST, the function used for fitting in TensorFit is Eq. (4) in FD, but removing the free parameter $\omega_c$ and considering

T A B L E 1
Parameter ranges from which values were randomly sampled. The area range is defined relative to the initial area of each metabolite in the model (for more details, see Table S1).

Parameter                    Minimum   Maximum
Area, A_m (a.u.)             0.1×      2×
Frequency shift, ω_c (Hz)    −20       20
Lorentzian width (Hz)        1.5       6
Phase, φ_c (°)               −45       45

that each metabolite has its own $\omega_m$, which varies within a predefined $\Delta\omega_m$. This adjustment is necessary due to the specific model definition within QUEST. In addition, QUEST does not allow Gaussian decay. Therefore, the free parameter $T_{G,c}^{-1}$ is removed from the fitting model. This use case is included in the TensorFit implementation, but for clarity, we refer to it as TensorFit + MetShift. The Gaussian decay was also removed from the TDFDFit model. In this case, both methods had the same constraints: ±20 Hz for each metabolite frequency shift and ±180° for the global zero-order phase.
The execution time was measured as the time necessary to fit a given number of spectra, excluding the loading time of each algorithm. Fitting with QUEST was performed within a jMRUI 47 pipeline. Because of the memory limitation of the 32-bit JVM (Java Virtual Machine), the maximum number of spectra we could fit in one batch using jMRUI was 4096. When using TDFDFit, a parallelized call was implemented in Python. For TensorFit, the fitting was performed directly on a batch of the required number of spectra. The total time required for each fitting was determined for data sets of varying sizes, between 32 and 32 768 spectra for TensorFit and TDFDFit and between 32 and 4096 for QUEST and TensorFit + MetShift. All presented CPU results were obtained on an AMD Ryzen 7 2700X (8-core, 3.7 GHz, 32 GB RAM), whereas the GPU performance was measured on a Nvidia GeForce GTX 1060 (6 GB, 1280 CUDA cores).

In vivo application
For the in vivo spectral-editing case, the comparison was performed only between TensorFit and TDFDFit, given that QUEST can only handle Lorentzian lineshapes. The whole brain was fitted and directly compared between both methods, with metabolite maps generated using SpectrIm-QRMS. For comparison, metabolite maps for gamma-aminobutyric acid (GABA), glutamate + glutamine, and NAA are shown for both methods. The CRLB was computed and averaged over all voxels shown. To numerically evaluate the performance of both methods, we calculated the ratio between the TensorFit and TDFDFit losses. This is represented as a histogram showing the distribution across the 7098 spectra.

Optimizer evaluation
To determine the optimal Torch optimizer, we fitted the 3T prior-knowledge model with five different optimizers on the same 2048 simulated spectra. Figure 2A-D shows the percentual error for the metabolites NAA, Cr, Cho, and Gln, respectively, for each optimizer as a function of the number of iterations. Similar behavior is observed for the four metabolites. Rprop converges in fewer iterations than AdaHessian, Adam, and GD, while needing more iterations than LBFGS. Nevertheless, LBFGS evaluates the loss function several times in each iteration, causing its effective convergence time to be higher than that of Rprop. The label in Figure 2A shows each optimizer's total computation time to reach iteration 50. Notably, although LBFGS converges in fewer iterations, its convergence time is more than 10 times longer than that of Rprop. Given the faster convergence speed, Rprop was adopted for all subsequent analyses with TensorFit.

Precision and time efficiency
To evaluate the precision of TensorFit, we fitted the 3T model and evaluated the percentual error against the ground truth used for the corresponding simulation.
Figure 3 shows the percentual error averaged over all spectra for TensorFit and TDFDFit in blue and light red, respectively. The solid red bar corresponds to the error for TDFDFit after removing outliers, defined as those cases in which the common frequency shift ($\omega_c$) differs from the ground truth by more than 20 Hz, represented as "Filtered TDFDFit." Those cases represent 4% of the data set. The percentual error for TensorFit + MetShift and QUEST is shown in yellow and green, respectively. As there is no common frequency shift to consider, outliers were identified as those cases in which the most abundant metabolites (NAA, Cho, and Cr) had errors exceeding four SDs from the ground truth, represented as "Filtered QUEST." This accounted for nearly 5% of the data set. In the case of TensorFit, no outliers were found. Additionally, we present the average CRLB computed using the ground-truth parameters. An extra comparison between TensorFit and TDFDFit, including Gaussian decay, is shown in Figure S1.
To evaluate whether the method can be used for processing large data sets, we fitted the simulated data and measured the execution time. Figure 4 shows the execution time for each method as a function of the number of spectra. Figure 4A presents the execution time when comparing TensorFit with TDFDFit. The execution time increases linearly for TDFDFit, whereas TensorFit on the CPU follows a similar rate with a speed-up of 11×. When running on the GPU, the execution time of TensorFit remains dominated by Torch's GPU context initialization, starting to rise only when reaching a data-set size of 2048, with a maximum speed-up of 165× for 32k spectra.

F I G U R E 3
Precision analysis for all fitting tools used. The percentual error (%) with respect to the ground truth is displayed for each of the nine metabolites considered in the 3T prior-knowledge model. For TDFDFit and QUEST, the mean error without outliers is shown with solid bars (i.e., "Filtered TDFDFit" and "Filtered QUEST"). The black dotted line represents the averaged Cramér-Rao Lower Bound (CRLB) obtained using the ground truth. Metabolite labels without chemical-group information comprise all chemical groups. Asp, aspartate; Cho, choline; Cr, creatine; Gln, glutamine; Glu, glutamate; Lac, lactate; Myo, myo-inositol; NAA, N-acetylaspartate.
Figure 4B illustrates the execution time for QUEST and TensorFit + MetShift. In this case, we observe the same behavior as before, but with a smaller speed-up: 7 times faster on the CPU and 115 times faster on the GPU compared with QUEST.

In vivo performance
After validating the method with simulated spectra, we tested its performance on in vivo spectra.

F I G U R E 6
Histogram of the ratio between the fit-quality numbers of both methods, computed as the loss of TensorFit divided by the loss of TDFDFit. The vertical line represents the median value.
Figure 5A-C shows the metabolite maps for three metabolites of interest: gamma-aminobutyric acid, glutamate + glutamine, and NAA. The columns represent the two methods, TDFDFit and TensorFit. Figure 5D shows the CRLB%, illustrating the performance of both tools.
As no ground truth is available for the in vivo data set, we computed the error using the value of the loss function used for both TDFDFit and TensorFit fitting. Figure 6 depicts a histogram of the distribution of voxels over the ratio between the TensorFit and TDFDFit losses. A ratio smaller than one indicates a better fit with TensorFit. The median value, represented by a vertical line, is 0.98, with most (92.4%) of the cases exhibiting a ratio smaller than one. For this data set of 7098 spectra in total, the execution time for TDFDFit was 2294 s (38.2 min), whereas TensorFit, running on a GPU, completed the task in 13.5 s, yielding a speed-up factor of 169×. When using TensorFit on the CPU, the time extends to 2:40 min, corresponding to a speed-up of 14×.
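The summary statistics of such a loss-ratio comparison can be computed as follows. The loss arrays here are random stand-ins, not the study's 7098 measured values:

```python
import numpy as np

# Sketch of the fit-quality comparison: a per-voxel ratio < 1 means the
# first method fit that voxel better (synthetic losses for illustration).
rng = np.random.default_rng(0)
loss_tdfdfit = rng.uniform(1.0, 2.0, size=7098)
loss_tensorfit = loss_tdfdfit * rng.uniform(0.9, 1.05, size=7098)  # mostly better

ratio = loss_tensorfit / loss_tdfdfit
median = np.median(ratio)            # value marked by the vertical line
frac_better = np.mean(ratio < 1.0)   # fraction of voxels fit better
```

A histogram of `ratio` (e.g., via `np.histogram`) then reproduces the kind of distribution shown in Figure 6.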

DISCUSSION
In this work, we proposed the off-label use of a DL framework for rapid metabolite fitting of MRS data. Specifically, we used the automatic differentiation capabilities of Torch as a highly parallelized error-minimization algorithm. This approach allowed us to use GPU processing or even obtain a strong acceleration when working solely on the CPU. Focusing on error minimization using traditional DL techniques, the process involved defining an appropriate optimizer for error minimization, implementing the TensorFit tool, and comparing its performance with well-established fitting methods. Many frameworks other than Torch allow the automatic differentiation of arbitrary computational graphs. TensorFlow, 48 for example, offers comparable optimization on the GPU and supports complex-valued backpropagation. However, it lacks the flexibility to modify the computational graph during runtime and extends the network initialization time significantly. Other highly optimized frameworks, like JAX, 49 support automatic differentiation but restrict GPU use to Linux and MacOS platforms. In contrast, Torch runs on all commonly used operating systems.
First, we investigated the different optimizers available in the Torch library for error minimization. From the comparison shown in Figure 2, we found that Rprop is the fastest optimizer. Consequently, we adopted Rprop as the optimizer for our TensorFit implementation. Nevertheless, further research should establish whether Rprop maintains its speed advantage across different models and data sets.
We tested TensorFit on simulated data and compared its performance with conventional fitting methods. In terms of fitting accuracy, TensorFit showed a lower percentual error for each fitted metabolite (Figure 3). On average, across all metabolites, the TensorFit error with respect to the ground truth was slightly lower than that of both TDFDFit and QUEST. This can be attributed to the influence of outliers. In a small number of cases, TDFDFit determines a wrong initial value for the frequency shift, leading to convergence at suboptimal local minima. In contrast, TensorFit, starting from the same values, converges correctly. In addition, we attribute the higher precision of TensorFit with respect to QUEST to the latter's high sensitivity to an initial frequency-shift mismatch. For all tested fitting algorithms, the percentual error is, on average, smaller than the corresponding CRLB for all components. This behavior is explained by the fact that the CRLB represents the minimum variance of an unbiased estimator; this means that most estimations should fall within the range defined by the CRLB. Therefore, the average percentage error should be of the same order of magnitude. Further analysis of the distribution of both metrics is provided in Supporting Material 4.
Regarding time efficiency, when implemented on the CPU, TensorFit outperformed TDFDFit and QUEST by factors of 11× and 7×, respectively. We attribute the speed-up with respect to TDFDFit to its use of explicit numerical differentiation, which involves several model evaluations per iteration. Compared with QUEST, the difference in CPU performance stems from QUEST's lack of parallelization, whereas TensorFit uses all the computational optimizations of Torch. In addition, as expected, the GPU implementation of TensorFit is up to 165 times faster than TDFDFit and 115 times faster than QUEST. This is due to the high number of processing cores and Torch's strong optimization of matrix multiplications.
The performance of TensorFit on the in vivo data set yields better results in terms of mean squared error compared with TDFDFit, strongly suggesting more accurate fitting. We found TensorFit to perform better in terms of fit quality in 92.4% of the cases (Figure 6). Although the spatial distribution of metabolites remains the same, there is a slight difference in amplitude, and there are clear outliers, primarily in the case of NAA, where lipid contamination is significant (Figure 5). A comparison with QUEST was not feasible, as it does not allow the use of Gaussian lineshapes in the model as defined by Eq. (4).
Using Torch for spectral fitting offers several advantages. The main one is the speed-up when executing on a GPU. Additionally, Torch allows high flexibility in defining and customizing fitting models. Nevertheless, it should be noted that some inherent disadvantages of TDFDFit and QUEST, such as high susceptibility to artifacts, lipid contamination, and problems with overlapping peaks, are also present in TensorFit. This limitation arises from the fact that all three depend on the underlying fitting model. Additionally, the current implementation of TensorFit does not account for the (macromolecular) baseline (i.e., it only performs correctly for long TEs). This will be included in the next release.
In summary, we have introduced a novel tool for metabolite fitting that outperforms existing methods in accuracy and time efficiency. The improvement in time efficiency directly addresses a critical obstacle in the clinical application of MRSI, namely the long data-processing times. To illustrate its impact, TensorFit can process a whole-brain data set containing 7098 spectra in under 14 s, whereas TDFDFit requires 38 min to analyze the same data set.

CONCLUSIONS
We implemented an ultrafast tool for metabolite fitting of large MRS data sets using Torch's computational-graph auto-differentiation capabilities. We demonstrated the high performance and accuracy of the method with a proof-of-principle analysis on simulated and in vivo data sets. We found superior performance in time efficiency and accuracy, obtaining speed-ups of 165× and 115× when compared with TDFDFit and QUEST, respectively. We believe this work will substantially enhance the use of EPSI sequences and other high-resolution MRS acquisition methods, such as ECCENTRIC, by reducing the required metabolite fitting time.
F I G U R E 4
Comparison of fitting-tool time efficiency. The execution time is presented as a function of the number of spectra. (A) TensorFit (CPU and GPU versions) and TDFDFit. (B) TensorFit + MetShift and QUEST.

F I G U R E 5
Performance comparison of TensorFit and TDFDFit on in vivo data. Metabolite maps for gamma-aminobutyric acid (GABA; A), glutamate + glutamine (Glx; B), and N-acetylaspartate (NAA; C). (D) The averaged Cramér-Rao Lower Bound (CRLB) for the three metabolites in the shown slice. The error bars represent the SD. The observed occipital/temporal hotspots are pulse sequence-related and independent of the fitting tool.