Time‐domain principal component reconstruction (tPCR): A more efficient and stable iterative reconstruction framework for non‐Cartesian functional MRI

To improve the reconstruction efficiency (i.e., computational load) and stability of iterative reconstruction for non‐Cartesian fMRI when using high undersampling rates and/or in the presence of strong off‐resonance effects.


| INTRODUCTION
FMRI detects brain activity by measuring BOLD signals. 1 Even though BOLD signals change slowly and smoothly, so that standard 2D EPI sampling with a temporal resolution of 2-3 s is generally sufficient, 2 rapid sampling is still highly beneficial for the following purposes: (1) detection of high-frequency BOLD oscillations (up to nearly 1 Hz) reflecting potentially relevant brain activity 3,4 ; (2) better characterization of the hemodynamic response function (HRF), which differs subtly across subjects and cortical areas 5-7 ; (3) separation of physiological noise, such as breathing and cardiac pulsation [8][9][10][11][12] ; and (4) greater statistical power and sensitivity. [13][14][15][16] Thus, rapid fMRI techniques are of great practical significance and can markedly improve fMRI analyses. 17,18

Reducing the acquisition duration of individual images usually involves combinations of parallel imaging, [19][20][21] k-space undersampling 22 and/or optimized non-Cartesian sampling patterns. [23][24][25][26][27] With these complex sampling strategies, image reconstruction becomes challenging. In particular, the reconstruction quality is dominated by the rate of k-space undersampling, which may result in ill conditioning of the inverse problem. Moreover, if the readout duration is long, off-resonance effects become an important interference factor that also contributes to ill conditioning and can only be partially corrected. [28][29][30]

In fMRI, the images are usually reconstructed time-point-by-time-point. Especially with ultra-fast imaging techniques necessitating highly undersampled acquisitions and iterative reconstruction algorithms, high computational requirements may become an issue and some reconstruction errors are inevitable. Time-point-by-time-point reconstruction also fails to take advantage of the high redundancy between successive fMRI images, which could be used to alleviate the computational load of the reconstruction.
Moreover, any reconstruction error may negate the potential gains in sensitivity from increasing the temporal resolution of fMRI, as even small errors relative to the MR signal may still be significant relative to temporal BOLD signal changes, leading to spurious fluctuations and reduced temporal signal-to-noise-ratio (tSNR).
In contrast, separation reconstruction is a spatiotemporally integrated reconstruction strategy, which decomposes the k-t-space data into background and dynamic components.
The background could be defined as the temporal average of k-t-space data, [31][32][33] or a low-rank (rather than static) background. [34][35][36] The dynamic components are then the difference from the background. If the dynamic components are sparse, they could potentially be reconstructed better. The separation method has been shown to perform well in applications such as dynamic contrast-enhanced MRI (DCE-MRI), where the boundary between background and sparse component is clear. However, in fMRI, it may be more difficult to separate the background and sparse components. For example, in resting-state fMRI, the signals of interest are actually background fluctuations that might not necessarily be sparse.
The partial separation model is another spatiotemporally integrated reconstruction strategy. [37][38][39][40][41][42][43] It assumes that the spatiotemporal signals can be well represented by a low-rank model based on principal component analysis (PCA). This could involve different sampling patterns at each time frame, followed by interpolation of the missing k-t-space data using low-rank constraints. However, in fMRI studies, especially those focusing on high-frequency fluctuations, it is not clear to what extent high-rank information contributes to brain activity signals. Moreover, the time-varying sampling patterns may introduce time-varying noise, for example due to variable off-resonance behaviors, which is difficult to correct. This paper proposes a generalized spatiotemporally integrated framework to improve the iterative reconstruction of ultra-fast fMRI, particularly in the presence of strong off-resonance effects. 44 It is based on non-time-varying sampling strategies and does not assume sparsity in the temporal dimension. This study focuses on the magnetic resonance encephalography (MREG) technique, which combines parallel sampling, a high undersampling rate, and an optimized 3D non-Cartesian trajectory to realize full-brain data acquisition in a single excitation. [23][24][25] The high temporal resolution leads to a large number of images to reconstruct (typically in the thousands). Combined with the need for 3D iterative reconstruction requiring dozens of central processing unit (CPU)-minutes per image, the computational load to reconstruct an entire fMRI dataset can easily become impractical. The dynamic image reconstruction framework is described in detail and evaluated in simulations as well as experimentally using breath-hold experiments. The applicability to other iterative algorithms is then discussed.
The measured data can be modeled as

s = Fρ + ε,    (1)

where s is the acquired k-space data; ρ is the unknown image; F is the forward operator encoding coil sensitivities, k-space trajectory, and off-resonance effects; and ε is noise. We consider the case where ρ cannot be reconstructed by inverting F directly because F is too large and does not have a tractable analytical inverse. In this case, reconstruction is commonly performed by iterative algorithms. 26 The forward operator may be ill conditioned due to undersampling or off-resonance effects, whereas regularization can be used to improve the conditioning by constraining certain image properties.
The image reconstruction is defined as searching for the minimum of a cost function c(ρ̂) incorporating a data fidelity term and a regularization term such as an L2-norm penalty:

c(ρ̂) = ‖Fρ̂ − s‖₂² + λ‖Rρ̂‖₂²,    (2)

where ‖·‖₂² indicates the squared L2-norm, R is some image transform, and λ is a regularization parameter. As long as the forward operator F and the transform R are linear operators, the cost function can be minimized by an algorithm such as linear conjugate gradient (CG). It admits a closed-form solution ρ̂ = (FᵀF + λRᵀR)⁻¹Fᵀs = f_l2(s), where the inverse operator f_l2 is data independent. In order to exploit sparse image properties, there is also great interest in using regularization based on the L1-norm:

c(ρ̂) = ‖Fρ̂ − s‖₂² + λ‖Rρ̂‖₁,    (3)

where ‖·‖₁ indicates the L1-norm. Note that minimizing this cost function requires a nonlinear iterative algorithm and does not admit a closed-form solution. Writing ρ̂ = f_l1(s), the inverse operator f_l1 is data dependent.
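The L2-regularized problem and its closed-form solution can be illustrated with a minimal numerical sketch (illustrative only: a small random matrix stands in for the true encoding operator F, and R is taken as the identity):

```python
import numpy as np

# Minimal sketch of L2-regularized linear CG reconstruction.
# F is a random stand-in for the encoding operator (coils,
# trajectory, off-resonance); R = identity for simplicity.
rng = np.random.default_rng(0)
m, n = 80, 60
F = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
s = F @ x_true

lam = 0.05  # regularization parameter lambda

# Normal equations of c(x) = ||F x - s||_2^2 + lam * ||x||_2^2:
# (F^T F + lam I) x = F^T s
A = F.T @ F + lam * np.eye(n)
b = F.T @ s

def cg(A, b, n_iter=500, tol=1e-12):
    """Plain linear CG for a symmetric positive-definite system."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    b_norm = np.linalg.norm(b)
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * b_norm:  # relative residual
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x_hat = cg(A, b)
# CG matches the closed-form solution f_l2(s) = (F^T F + lam I)^{-1} F^T s
x_closed = np.linalg.solve(A, b)
```

Here the data-independent inverse operator is made explicit: `np.linalg.solve(A, b)` applies the same matrix to any right-hand side, whereas the L1 case would require rerunning a nonlinear iteration for each new dataset.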
It should also be noted that, unlike with L2-regularization where both terms scale quadratically with the signal magnitude, the data fidelity and L1-regularization terms no longer have the same scaling, so the parameter λ depends on the image scaling. For a dataset with a different magnitude than a given reference dataset, the regularization parameter has to be scaled relative to the reference in order to obtain the same degree of regularization:

λ = λ_ref · ‖s‖₂ / ‖s_ref‖₂.    (4)

| Time-domain principal component reconstruction (tPCR)
For fMRI, an additional time dimension is introduced. The conventional time-point-by-time-point reconstruction is referred to as sequential reconstruction (SR). This paper proposes a principal component-based reconstruction framework, time-domain principal component reconstruction (tPCR), which transforms the reconstruction domain from spatiotemporal space to principal component space.
Before reconstruction, a PCA or singular value decomposition (SVD) is performed along the time dimension:

s(k, t) = Σ_l σ_l u_l(k) v_l(t),    (5)

where u_l(k) and v_l(t) are the left and right singular vectors representing the spatial and temporal parts of the principal components, and σ_l are the singular values, arranged in decreasing order, which act as the weighting of each component. Combining u_l(k) and σ_l into U_l(k) = σ_l u_l(k) constructs a weighted spatial part. Each U_l(k) is then reconstructed from k-space to image space component-by-component, yielding P_l(r). Finally, the components are recombined with the temporal part via the outer product to image-t-space:

ρ(r, t) = Σ_l P_l(r) v_l(t).    (6)
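The decomposition and recombination steps (steps 1 and 3) can be sketched as follows (illustrative array sizes; an identity mapping stands in for the per-component iterative reconstruction of step 2):

```python
import numpy as np

# Sketch of the tPCR decomposition/recombination (steps 1 and 3).
rng = np.random.default_rng(1)
n_k, n_t = 500, 40                      # hypothetical k-samples x time points
S = rng.standard_normal((n_k, n_t))     # k-t-space data s(k, t)

# Step 1: SVD along time, s(k,t) = sum_l sigma_l u_l(k) v_l(t)
U, sigma, Vh = np.linalg.svd(S, full_matrices=False)
U_weighted = U * sigma                  # U_l(k) = sigma_l * u_l(k)

# Step 2 (placeholder): reconstruct each weighted spatial component
# U_l(k) from k-space to image space, yielding P_l(r). The identity
# stands in for the per-component iterative reconstruction here.
P = U_weighted.copy()

# Step 3: recombine with the temporal basis to image-t-space
S_rec = P @ Vh
```

With the identity in step 2, the recombination reproduces the input data exactly, which verifies that decomposition and recombination are lossless; in the real pipeline only step 2 changes the content.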
The flowcharts of SR and tPCR are compared in Figure 1.

| Redistributed computational power
In contrast to SR, where the signal at each time point has the same weight, the weights of the principal components decrease with σ_l, which allows redistributing the number of CG iterations among the components according to their weights; components with low weights can be reconstructed using fewer iterations since they contribute less to the total signal, and vice versa. Specifically, if the first principal component, corresponding to the largest singular value σ₁, is reconstructed within a given tolerance (relative reconstruction error) tol, then the lth component needs to be reconstructed only within a tolerance tol × σ₁/σ_l in order to contribute the same reconstruction error to the recombined signal. In CG, the required number of iterations grows asymptotically with the logarithm of the target tolerance 45 :

n ≈ ln(2/tol) / ln(1/q), with q = (√κ − 1)/(√κ + 1),    (7)

where κ is the condition number of the forward operator. Therefore, the number of iterations n_l to reconstruct the lth principal component is determined by:

n_l = n₁ − ln(σ₁/σ_l) / ln(1/q),    (8)

where n₁ is the number of iterations used to reconstruct the first principal component. The condition number κ is not known a priori but can be estimated from the achieved tolerance at each iteration using Equation (7). Although the number of images to reconstruct (either full images or PCA spatial basis images) is the same in SR and tPCR, the different distribution of the number of reconstruction iterations in tPCR may allow for a better or faster reconstruction overall.
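The redistribution rule of Equation (8) can be sketched as follows (a toy singular-value spectrum and an assumed condition number; the CG contraction-factor model of Equation (7) is used):

```python
import numpy as np

def iterations_per_component(sigma, n1, kappa):
    """Per-component iteration counts n_l = n1 - ln(sigma_1/sigma_l)/ln(1/q),
    where q = (sqrt(kappa)-1)/(sqrt(kappa)+1) is the CG contraction factor."""
    q = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    n = n1 - np.log(sigma[0] / sigma) / np.log(1 / q)
    return np.maximum(np.round(n).astype(int), 1)

sigma = np.array([100.0, 30.0, 10.0, 3.0, 1.0])   # toy singular values
n = iterations_per_component(sigma, n1=100, kappa=50.0)
print(n)  # iteration counts decrease with the component weight
```

Components with smaller singular values are allotted fewer iterations, because their (larger) tolerance target is reached after fewer CG steps.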
The SOS k-space trajectory can be described by four parameters 23 : the radial undersampling rate in the x-y plane, which varies linearly from the center (R1) to the edges (R2) of k-space, and the undersampling rate in the z-direction (spacing between the spirals), which also varies linearly from the center (R3) to the edges (R4) of k-space. The total undersampling rate thus varies from R1 × R3 at the k-space center to R2 × R4 at the periphery. The 3D SOS trajectory used here had [R1, R2, R3, R4] = [3, 6, 1.6, 3.9], which corresponds to an average undersampling rate (integrated over k-space) of 12.3. 23

| Breath-hold data acquisition and statistical analysis
A breath-hold experiment was chosen because it produces broad activation, including regions with strong off-resonance effects. Three volunteers performed the experiment. The volunteers gave informed consent, and the experimental protocol was approved by the ethics committee of the University Medical Center Freiburg. The experiment was designed as a paced breath-hold paradigm 46 (duration 4 blocks × 37 s = 148 s). Each block began with two 6 s paced breathing periods (3 s breathing in and 3 s breathing out), to ensure that the subject was at end-expiration when starting the breath hold. A 15 s hold period and a 10 s free-breathing period followed. The same protocol was used for both the 2D single-slice EPI sequence and the 3D MREG sequence. Data from the first 12 s of the fMRI time-series were discarded in order to reach signal equilibrium. The reconstructed images were preprocessed (realigned and smoothed with an 8 mm FWHM kernel) with SPM12 (http://www.fil.ion.ucl.ac.uk/spm) and then analyzed with FMRISTAT (http://www.math.mcgill.ca/keith/fmristat/). The regressor was constructed according to the method in Ref. 47. The SPM canonical HRF and an additional temporal derivative regressor were included in the model, leading to F-statistic maps indicating areas in which the combination of both regressors accounted for a significant amount of signal variance. A 10th-order autoregressive noise model (AR(10)) was used to correct for autocorrelations. 48
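For illustration, the block timing and a predicted BOLD regressor might be sketched as follows (a generic double-gamma HRF with standard SPM-like parameters is assumed here, and a 0.1 s sampling grid; the study itself constructs the regressor following its Ref. 47):

```python
import numpy as np
from math import gamma as gamma_fn

# Sketch: breath-hold boxcar (4 blocks x 37 s) convolved with a
# canonical double-gamma HRF. Parameters are standard assumptions,
# not taken from the paper.
dt = 0.1                          # s, sampling grid (assumed)
t = np.arange(0, 148, dt)

box = np.zeros_like(t)
for b in range(4):
    onset = b * 37 + 12           # 2 x 6 s paced breathing, then hold
    box[(t >= onset) & (t < onset + 15)] = 1.0   # 15 s breath hold

def gamma_pdf(x, shape, scale=1.0):
    x = np.maximum(x, 0.0)
    return x ** (shape - 1) * np.exp(-x / scale) / (gamma_fn(shape) * scale ** shape)

th = np.arange(0, 32, dt)
hrf = gamma_pdf(th, 6) - gamma_pdf(th, 16) / 6    # peak ~5 s, undershoot ~15 s
hrf /= hrf.sum()                                   # unit-sum discrete kernel

regressor = np.convolve(box, hrf)[: t.size]       # predicted BOLD response
```

Since the HRF kernel is causal, the regressor is zero before the first breath-hold onset and rises with each hold period.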

| Simulation study
Simulation experiments were performed with single-slice EPI images and 2D variable-density spirals, which are defined using only the parameters R1 and R2. The simulated k-t-space datasets were generated by transforming the ground truth images (1360 normalized single-slice EPI images) with a forward operator based on a 2D spiral trajectory ([R1, R2] = [4, 8]) as well as sensitivity maps and a B0 field map calculated from dual-phase gradient-echo images. To reduce the computational cost of the various simulations, only the 10 coil sensitivity maps from the coil elements closest to the single slice were included in the forward operator. To ensure that the simulated off-resonance effects were as severe as in real 3D MREG data (see below), the 2D readout duration was fixed to 76 ms for the various trajectories by lengthening the trajectory dwell time. The reconstruction was the same as for 3D MREG except for different regularization parameters (L2-norm: 0.01, L1-norm: 6 × 10⁻⁵). The source Matlab code and example datasets are shared at https://github.com/feiwang1206/tPCR (commit ccdc4bd314256805423906f7d1bdc81489e60840).
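The structure of the simulation pipeline (ground-truth time-series pushed through a coil-weighted, undersampled forward operator) can be caricatured with random stand-in operators (all sizes and operators below are hypothetical; the actual study uses measured sensitivity and field maps and a spiral trajectory):

```python
import numpy as np

# Caricature of simulated k-t data generation: s_c = E (C_c * x)
rng = np.random.default_rng(2)
n_vox, n_coil, n_samp, n_t = 64, 10, 40, 20   # hypothetical sizes
x = rng.standard_normal((n_vox, n_t))          # ground-truth time series

C = rng.standard_normal((n_coil, n_vox))       # coil sensitivities (stand-in)
E = rng.standard_normal((n_samp, n_vox))       # undersampled encoding (stand-in,
                                               # n_samp < n_vox => undersampling)

# Apply the forward operator per coil and per time point
s = np.stack([E @ (C[c][:, None] * x) for c in range(n_coil)])
print(s.shape)  # (n_coil, n_samp, n_t)
```

The point of the sketch is the data layout: each time point yields one undersampled multi-coil measurement, and the whole k-t block is what tPCR decomposes along the time axis.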

| 3D MREG reconstruction settings
The MREG reconstruction settings for SR and tPCR were identical. The L2-norm linear CG reconstruction and the L1-norm nonlinear CG reconstruction with a total variation (TV) penalty were applied as described in Hugger et al. 49 The regularization parameters were determined empirically for 3D MREG (L2-norm: 0.05, L1-norm: 5 × 10⁻⁶). Note that for tPCR with L1-norm regularization, the regularization parameter of each component was scaled according to Equation (4), using the temporally averaged k-space as reference. To correct for off-resonance effects, a time-segmented method was applied in the forward operator, 29 with the number of segments set to 10. The computations were performed on an Intel(R) Xeon(R) CPU X7560 @ 2.27 GHz, with usage restricted to a single CPU core. The only difference between the two reconstructions was the number of iterations. For SR, the reconstructions of all time points were identical: a maximum of 100 iterations, with an additional tolerance of 5 × 10⁻⁴. For tPCR, the progressively smaller numbers of iterations for each component were calculated according to Equation (8). Here, n₁ was set so that the total number of iterations used to reconstruct all components equaled that of the SR method. Additionally, to investigate the effects of accelerating the reconstruction, reconstructions were also evaluated when reducing n₁ so that the average number of iterations per component was reduced in steps of 10, with a minimum of 5.

F I G U R E 2 Distribution of the number of iterations per component according to Equation (8). Distributions of the number of iterations are also plotted for reduced values of n₁ so that the mean number of iterations was decreased in steps of 10 (and hence reducing the computational load), leading to an overall shift of the curve. As described previously, the minimum number of iterations was set to 5.
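The budget-matched schedule described above (total tPCR iterations equal to the SR budget, with a floor of 5 iterations per component) can be sketched as follows (toy singular-value spectrum and assumed condition number; n₁ is found by a simple search):

```python
import numpy as np

def iteration_schedule(sigma, kappa, budget, n_min=5):
    """Find n1 such that the total number of per-component iterations,
    clamped below at n_min, matches (at least) the SR iteration budget.
    Uses n_l = n1 - ln(sigma_1/sigma_l)/ln(1/q) from the CG model."""
    q = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    drop = np.log(sigma[0] / sigma) / np.log(1 / q)
    for n1 in range(n_min, 100000):
        n = np.maximum(np.round(n1 - drop).astype(int), n_min)
        if n.sum() >= budget:
            return n
    raise ValueError("budget not reachable")

sigma = 100.0 * 0.9 ** np.arange(20)     # toy singular-value spectrum
n = iteration_schedule(sigma, kappa=50.0, budget=100 * 20)
```

Lowering the budget (e.g. to match a mean of 90, 80, … iterations) simply shifts the whole curve down until the floor of 5 is reached for the weakest components.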
Three representative time points from SR and three representative components from tPCR (both the temporal part and the weighted spatial part) with equal mean number of iterations are presented in supporting information Figure S1.

| Reconstruction error
Three types of error were considered to evaluate the performance of SR and tPCR: the total error, the dynamic error, and the activation error. The total error and dynamic error are the absolute values of the difference between the reconstructed images and the ground truth, computed before and after removing the temporal average from the time-series, respectively. The activation error was defined as the difference between the F-statistic maps obtained from the ground truth data and those obtained from the reconstructed data. Error percentages are expressed relative to the total signal. For comparability with the activation error, the total error and dynamic error were calculated on the preprocessed data. Figure 3 shows the three types of errors as a function of the mean number of iterations across the three subjects. Overall, the reconstruction errors in tPCR decrease or remain almost constant as the number of iterations increases, and they are always lower than in SR when using the same mean number of iterations, that is, the same computational load. Conversely, tPCR achieves the same dynamic error as SR using about 4 times less computational power in L2-norm reconstructions. In L1-norm reconstructions, the errors of tPCR are lower than those of SR even when using the minimum number of iterations.
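The total- and dynamic-error definitions can be sketched as follows (the activation error would require a full GLM analysis and is omitted; the last array axis is assumed to be time):

```python
import numpy as np

# Sketch of the total and dynamic error metrics (percent of total signal).
def total_error(recon, truth):
    """Absolute difference to ground truth, relative to total signal."""
    return 100 * np.abs(recon - truth).sum() / np.abs(truth).sum()

def dynamic_error(recon, truth):
    """Same, after removing the temporal average (axis -1 = time)."""
    r = recon - recon.mean(axis=-1, keepdims=True)
    t = truth - truth.mean(axis=-1, keepdims=True)
    return 100 * np.abs(r - t).sum() / np.abs(truth).sum()

# A constant offset affects the total error but not the dynamic error:
truth = np.ones((4, 10))
recon = truth + 0.1
```

This tiny example makes the distinction concrete: a static reconstruction bias inflates the total error while leaving the BOLD-relevant dynamic error untouched.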
When using the same mean number of iterations during the reconstruction, errors are largest in frontal brain regions, which are the ones most affected by off-resonance effects (as shown in Figure 4A, using subject 1 as a representative example). The time-series in these regions (Figure 4B) illustrate the signal deviations from the ground truth, which are larger in SR than in tPCR, especially when using L1-norm regularization. With tPCR, the deviations increase when using a lower number of iterations. In contrast, the time-series in regions far away from off-resonance effects (Figure 4C) are reconstructed accurately by both methods.
We also present the difference between tPCR and SR (tPCR − SR) in the SD of the time-series at each voxel, as well as in the F-statistic maps, for subject 1 in Figure 5. Overall, the activation maps show similar behavior to the SD maps. The largest differences are seen in the frontal brain areas. With L2-norm regularization, SR results in lower time-series SDs, consistent with the suppression of BOLD fluctuations, and thus weaker activations than tPCR. With L1-norm regularization, SR results in higher time-series SDs, consistent with the appearance of spurious fluctuations.

| Comparison under different experimental conditions
1. Background intensity

Figure 6 shows the relationship between error and background intensity. For comparison, all error percentages are expressed relative to the total signal at scaling factor 1, and the number of iterations is identical to that used at scaling factor 1. The total error, regardless of reconstruction method and regularization type, is almost proportional to the background intensity. The dynamic error, however, remains constant except for SR with L1-norm, where it is approximately proportional to the background intensity.

2. Regularization parameters
The errors obtained under various regularization parameters are presented in Figure 7 as a function of the number of iterations. With L2-norm, the total errors and dynamic errors were always lower with tPCR than with SR when using the same mean number of iterations, but the difference became very small when using a larger number of iterations. With L1-norm, the total errors of tPCR were also lower than those of SR except when using a larger number of iterations, but the dynamic errors with SR did not converge to the lower values obtained with tPCR, even with a large number of iterations. Overall, the convergence speed was faster at large regularization parameters than at small ones.

3. Undersampling rate and off-resonance intensity
The errors under various undersampling rates and off-resonance intensities are presented in Figure 8. Overall, the total errors and dynamic errors increase with undersampling rate and off-resonance intensity. Differences in the errors between the two reconstruction methods, as well as differences in the rate of increase of errors as a function of undersampling rate and field scaling factor, were investigated by 2-way repeated-measures analysis of variance on the effects of reconstruction method and undersampling rate/field scaling factor. For total errors, the errors increased significantly faster in SR than in tPCR in all cases. For dynamic errors, the rate of increase was also significantly higher in SR than in tPCR when using L1-norm regularization, but not when using L2-norm regularization. The P value in each subgroup of each figure shows the statistical difference between SR and tPCR at each level of undersampling rate or field scaling factor. For L2-norm, both total errors and dynamic errors were significantly lower in tPCR than in SR in all cases. For L1-norm, total errors were significantly different only at the largest undersampling rate, but the dynamic errors showed clear differences. Only for the largest field scaling factor was the difference marginally nonsignificant (P = .063), which could be attributed to the low number of subjects and the large variability of the error values in SR.

F I G U R E 3 Percentage of total error, dynamic error, and activation error of sequential reconstruction (SR) and time-domain principal component reconstruction (tPCR) (the latter as a function of the mean number of iterations) with respect to the ground truth in three subjects. With SR, the total errors are lower with L1-norm regularization, but the dynamic errors and activation errors are lower with L2-norm regularization. With tPCR, all types of errors are lower for both L1-norm and L2-norm regularization

| 3D MREG study
For the experiments using real data, in the absence of a ground truth, we only compare the differences in the SD maps and the F-statistic maps between SR and tPCR (tPCR − SR) for the three subjects in Figure 9. Overall, the activation maps show similar behavior to the SD maps, and the largest differences are seen around the nasal cavity. With L2-norm regularization, SR results in lower time-series SDs and weaker activations. With L1-norm regularization, the higher instability of the nonlinear reconstruction results in more widespread differences, but the largest differences are again seen in frontal brain areas. All reconstructed slices for all reconstruction settings are shown in supporting information Figure S3.
The computational costs of SR and tPCR (under various mean numbers of iterations) are listed in Table 1. The time spent on decomposition and combination (steps 1 and 3) is negligible relative to the reconstruction itself (step 2). When using a low mean number of iterations, tPCR is about 6 times faster than SR.

| DISCUSSION
The presented tPCR reconstruction framework provides a new methodology for iterative CG-SENSE reconstruction of fMRI data. Normally, images at each time point are reconstructed independently. The proposed tPCR method instead performs the reconstruction in the decomposed principal component space, with the computational cost redistributed according to the weight of each component. It is shown that tPCR yields lower dynamic errors, and thus lower activation errors, than SR across different regularization parameters, undersampling rates, and off-resonance intensities at the same computational cost, especially when using L1-norm nonlinear reconstruction. Conversely, for a given error level, tPCR offers the opportunity to considerably reduce the computational cost compared to SR. Although the current work was applied to the CG-SENSE reconstruction of MREG data, it would be applicable to any other iterative reconstruction of dynamic MR data.

F I G U R E 5 The difference of SD maps (first row) and F-statistic activation maps (second row) of subject 1 between sequential reconstruction (SR) and time-domain principal component reconstruction (tPCR) when using the same mean number of iterations (80 for L2-norm, 99 for L1-norm)

F I G U R E 6 Plots of total errors and dynamic errors as a function of background intensity

| Reconstruction efficiency
One benefit of tPCR is the improved reconstruction efficiency relative to SR, which holds for both L2-norm and L1-norm reconstructions. Reconstruction efficiency here refers to the average rate of convergence of the iterative algorithm over the whole fMRI dataset. In highly ill-conditioned cases, such as MREG here, reconstruction of an entire dataset may require hundreds of CPU-hours (see Table 1).
With the conventional SR framework, the rate of convergence is essentially the same for each image, as fMRI images are highly similar to each other. In tPCR, however, high-rank components can be reconstructed much faster, because they contribute less to the total signal than low-rank components. Even though the number of principal components equals the number of time points, the mean number of iterations in tPCR is lower than in SR while attaining a similar level of dynamic error, as demonstrated in Figure 3. tPCR requires additional computational overhead for the decomposition and combination steps, but this is negligible relative to the reconstruction itself, so the total computational cost is significantly reduced. Instead of reducing computational costs, it would also be possible to increase the average number of iterations in tPCR up to the same computational load as in SR; this leads to less reconstruction error in tPCR than in SR, yielding a more accurate fMRI analysis. Note that with a sufficient number of iterations, both reconstruction methods converge to the same solution (FᵀF + λRᵀR)⁻¹Fᵀs when using L2-norm regularization, as shown in Figure 7; however, tPCR converges faster. It thus appears that the source of reconstruction errors in SR is an insufficient number of iterations to reach full convergence. The number of iterations had been chosen according to a previous study 49 ; however, that work investigated fMRI activations in visual and motor cortices, where susceptibility artifacts do not play a major role. In areas such as the frontal lobe, off-resonance effects lead to a loss of conditioning (see section 5.3) and would thus require a much larger number of iterations to reach convergence. Even though off-resonance effects are essentially a static image feature, the lack of convergence will lead to reconstruction errors that show up as spurious time-series fluctuations.
This is less of an issue in tPCR, which converges in a low average number of iterations anyway.
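The dependence of CG convergence on conditioning can be demonstrated numerically (synthetic symmetric positive-definite systems with controlled condition numbers; a stand-in for, not the actual, MREG operator):

```python
import numpy as np

# Demo: CG needs more iterations as conditioning worsens, which is why
# off-resonance-degraded regions require many more iterations in SR.
def cg_iters_to_tol(A, b, tol=1e-6, max_iter=10000):
    """Number of CG iterations to reach a relative residual <= tol."""
    x = np.zeros_like(b); r = b.copy(); p = r.copy(); rs = r @ r
    b_norm = np.sqrt(b @ b)
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p; r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * b_norm:
            return k
        p = r + (rs_new / rs) * p; rs = rs_new
    return max_iter

rng = np.random.default_rng(3)
dim = 100
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))  # random orthogonal basis
b = rng.standard_normal(dim)

counts = []
for kappa in (10, 100, 1000):
    d = np.linspace(1, kappa, dim)      # eigenvalues spanning [1, kappa]
    A = Q @ np.diag(d) @ Q.T            # SPD matrix with condition number kappa
    counts.append(cg_iters_to_tol(A, b))
print(counts)  # iteration counts grow with the condition number
```

This mirrors the argument above: with a fixed iteration budget per image, the worst-conditioned (off-resonance-affected) regions are exactly the ones that fail to converge in SR.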

| Reconstruction stability
Even stronger differences between tPCR and SR were observed when using L1-norm nonlinear reconstruction. The L1-norm was proposed for its tendency to prefer sparse solutions. 22 This is also demonstrated by the smaller total errors with L1-norm than with L2-norm regularization, as shown in Figures 3 and 4, and had also been previously reported for MREG reconstruction. 49 However, this is no longer true when looking at the dynamic errors, which are more relevant for fMRI analyses. The nonlinear CG algorithm used in L1-norm reconstruction could introduce increasing variance due to the larger number of iterations and potentially several restarts. This may lead to inherent instability when using a large number of iterations in ill-conditioned cases, leading to diverging solutions. 50 This reconstruction instability may increase the dynamic error. Moreover, unlike linear reconstructions, the reconstruction operator is data dependent (see section 2.1) and will thus yield different solutions for the different decompositions used in SR and tPCR. Indeed, simulation experiments (Figure 6) demonstrated that the dynamic error in SR is approximately proportional to the magnitude of the background signal, even though this signal is merely an offset of the fixed fMRI temporal variation signal. In fMRI, the background is usually much stronger than the dynamic part; thus the dynamic error is much higher with SR.

F I G U R E 8 Relationship of total errors and dynamic errors with average undersampling rate/field scaling factor for sequential reconstruction (SR) and time-domain principal component reconstruction (tPCR) using L1-norm and L2-norm regularization. Each group includes the values of three subjects. The P value on top of each figure refers to the effect of the undersampling rate/field scaling factor on the rate of error increase, whereas the P value within each group corresponds to the statistical difference between SR and tPCR at each level of the undersampling rate and field scaling factor. Following each P value, ***, **, *, and n.s. correspond to P < .001, P < .01, P < .05, and P ≥ .05, respectively
With tPCR, the background has far less influence on the dynamic error, because the background (mainly in the first component) is reconstructed separately from the dynamic part (mainly in high-rank components). Therefore, dynamic errors are smaller in tPCR than in SR, even when using a large number of iterations (Figure 7). It should also be noted that the regularization parameters were simply scaled according to the magnitude of the spatial basis functions (Equation (4)) and were thus not specifically optimized for tPCR. It is possible, for example, that these spatial components have different sparsities than full images and would thus benefit from nonlinear L1-norm reconstruction. Future studies will investigate the possibility of optimizing the regularization of the spatial basis components, which could improve the performance of tPCR even further.

| Robustness to regularization, undersampling, and off-resonance effects
Regularization, undersampling, and off-resonance effects all affect the conditioning, and hence the convergence speed, of iterative algorithms. A high regularization parameter improves the conditioning, whereas undersampling directly increases the degree of ill conditioning. According to the theory of local k-space, 51 an inhomogeneous external magnetic field degrades the conditioning by locally distorting the k-space trajectory. In Figures 7 and 8, tPCR shows faster convergence across regularization parameters and more modest increases in dynamic errors with undersampling and off-resonance effects than SR. Thus, tPCR is more robust to regularization, undersampling, and off-resonance effects.
Importantly, the simulation results could be replicated in real MREG data. Despite the lack of ground truth, differences in the SD of the time-series, and the resulting effects on activation maps, were largely consistent with the results of the simulation experiments ( Figures 5 and 9).

| Other decomposition methods
Besides the PCA used in tPCR, other decomposition methods could also be used, such as simple background separation or Fourier decomposition. As long as the decomposition separates the background from the dynamic components, it can reduce the dynamic error when using L1-norm regularization. However, tPCR should provide the highest reconstruction efficiency, because PCA is the most compact representation of the data, and some of the principal components may also be sparser and thus contribute less error.
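That PCA is the most compact representation can be illustrated by comparing the energy captured by the leading principal components with that captured by the best temporal Fourier modes for low-rank data (this ordering is guaranteed by the Eckart-Young theorem; sizes below are toy values):

```python
import numpy as np

# Demo: for correlated (low-rank) fMRI-like data, the leading PCA
# components capture at least as much energy as the best temporal
# Fourier modes of the same count.
rng = np.random.default_rng(4)
n_vox, n_t, rank = 200, 100, 5
X = rng.standard_normal((n_vox, rank)) @ rng.standard_normal((rank, n_t))
X += 0.01 * rng.standard_normal((n_vox, n_t))   # small noise

sv = np.linalg.svd(X, compute_uv=False)
pca_energy = (sv[:rank] ** 2).sum() / (sv ** 2).sum()

Ft = np.fft.fft(X, axis=1)                      # Fourier decomposition in time
fe = (np.abs(Ft) ** 2).sum(axis=0)              # energy per temporal frequency
fourier_energy = np.sort(fe)[::-1][:rank].sum() / fe.sum()

print(pca_energy, fourier_energy)  # PCA packs at least as much energy
```

Since the truncated SVD is the optimal low-rank approximation, any fixed temporal basis (Fourier included) captures no more energy in the same number of components, which is the sense in which PCA is the "densest" decomposition.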

| Other applications
It should be noted that tPCR is not itself a reconstruction algorithm but rather a framework to improve the overall efficiency and stability of a given dynamic imaging reconstruction algorithm. Consequently, it has the potential to be used in other classes of iterative reconstructions than CG-SENSE. This includes, for example, GRAPPA-based iterative reconstructions such as SPIRiT and ESPIRiT, in which an iterative algorithm is used to find the optimal k-space or image at each time point independently. 52,53 Another example is conventional GRAPPA or simultaneous multislice (SMS) imaging performed with integrated sampling (i.e., sampling the auto-calibration lines within each time point), in which the kernel functions are iteratively estimated at each time point; here, tPCR could allow a more accurate estimation of the dynamic kernels. Finally, tPCR could also be applied to nonlinear reconstructions that try to recover dynamic sensitivity maps, T2* maps, or field maps jointly with the images themselves. 28,54

| CONCLUSIONS
This proof-of-concept study compared the standard time-point-by-time-point SR framework with a PCA-based tPCR framework for the iterative reconstruction of fast fMRI data. tPCR outperforms SR in two respects: (1) improved reconstruction efficiency and (2) improved stability of nonlinear reconstructions. These two improvements are robust to regularization, undersampling, and off-resonance effects and in turn offer the opportunity to significantly reduce the computational cost and/or improve the reconstruction quality.

ACKNOWLEDGMENTS
This work was supported by the DFG Koselleck grant He 1875/28-1, the cluster of excellence EXC-1086 BrainLinks-BrainTools from the DFG, and the China Scholarship Council (CSC).

SUPPORTING INFORMATION
Additional Supporting Information may be found online in the Supporting Information section.
FIGURE S1 (A) Three reconstructed images using sequential reconstruction (SR) with L2-norm regularization (left) and L1-norm regularization (right). (B) Three reconstructed spatial components (note the different amplitude scales) with L2-norm regularization (left) and L1-norm regularization (right) and corresponding temporal components (real part, only the first 37 s are shown) using time-domain principal component reconstruction (tPCR). With SR, the images at different time points show similar image magnitudes and relatively low image quality, especially for L2-norm. With tPCR, the components exhibit progressively reduced image magnitudes. Component 1 captures most of the image information with almost constant temporal variation, whereas component 5 only captures some local image power but with strong temporal oscillations, and component 1360 has a small magnitude with noise-like spatial and temporal components. In addition, component 1 exhibits a better image quality than images reconstructed by SR due to the much higher number of iterations (see Figure 2)