Orthogonal projections for image quality analyses applied to MRI

Measuring digital image quality is of importance in numerous applied research fields, such as image acquisition, image processing or image reconstruction. Subjective evaluation, i.e. assessment by human beings, is often too time-consuming, expensive and inconsistent in complicated tasks such as medical imaging. To better understand the relations between objective image quality assessment measures, we compare a selection of them in an experimental setup. It is well known that L2 error estimation, i.e. computing the mean squared error (MSE) between two images, does not correspond well to the perceptual evaluation by the human visual system. Here, we illustrate on the example of MRI data the interaction of the MSE with the signal-to-noise ratio (SNR), the variance and subjective evaluation. We use random coverings of the space of orthogonal projections for coil combination to enable a comprehensive correlation analysis.


Introduction
MRI signal acquisition with multiple coils in a phased array is nowadays commonplace. We study the resulting images obtained by linear compression of image space data followed by coil combination with the root sum of squares (rSOS). Such a final coil combination step is necessary for phased-array data that has been processed with parallel imaging methods in k-space, such as GRAPPA [5]. We use the final combined images for analyses of image quality measures.
The basis of our analysis is random orthogonal projections yielding different linear coil compressions. The given image data consists of $d$ channels that shall be combined into one final magnitude image. To do so, we work with the Grassmannian $G_{k,d}$, the space of $k$-dimensional linear subspaces of $\mathbb{R}^d$. It can be identified with the set of orthogonal projections of rank $k$ and is endowed with a normalized Riemannian measure $\mu_{k,d}$. In [6] the expectation of the covering radius for $n$ random orthogonal projections $\{p_j\}_{j=1}^n$, independent and identically distributed according to $\mu_{k,d}$, is computed; it suggests that such coverings are sufficiently widespread over the whole space. They can be efficiently computed by the QR decomposition $M = QR$ of a matrix $M$ with independent standard normally distributed entries [3, Theorem 2.2.2].
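The QR construction above can be sketched numerically as follows. This is a minimal NumPy illustration, not the authors' implementation; the function name `random_projection` is ours.

```python
import numpy as np

def random_projection(d, k, seed=None):
    """Sample a rank-k orthogonal projection on R^d whose range is an
    invariantly (uniformly) distributed k-dimensional subspace, via the
    QR decomposition of a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((d, k))   # i.i.d. standard normal entries
    Q, _ = np.linalg.qr(M)            # k orthonormal columns spanning range(M)
    return Q @ Q.T                    # d x d symmetric idempotent matrix, rank k

P = random_projection(32, 4, seed=0)
assert np.allclose(P, P.T)            # symmetric
assert np.allclose(P @ P, P)          # idempotent
assert np.isclose(np.trace(P), 4)     # trace of a projection = its rank
```

Since only the column span of $Q$ matters for the projection $QQ^T$, no sign correction of the QR factors is needed here.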
Let $x = \{x_i\}_{i=1}^m \subset \mathbb{R}^d$ be an image space data set with $m$ voxels measured by $d$ coils. The final image volume is given by the voxel-wise magnitudes $\|px_i\|_2$ with $p$ in $G_{k,d}$, where $d$ is the fixed number of channels and $k < d$ varies. This corresponds to projecting the $d$ coil channels to different dimensions $k$ and subsequently computing the root sum of squares (rSOS). Note that we can also study the rSOS without compression in this context, since $\mathrm{rSOS}(x_i) = \|x_i\|_2$ and, for $p$ distributed according to $\mu_{k,d}$, it holds that $\mathbb{E}\,\|px_i\|_2 = \frac{\Gamma((k+1)/2)\,\Gamma(d/2)}{\Gamma(k/2)\,\Gamma((d+1)/2)}\,\|x_i\|_2$, where $\Gamma$ denotes the Gamma function.
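The combination step can be sketched as follows, assuming the data is stored as an $m \times d$ array; `rsos` and `combine` are our illustrative names, not from the original work.

```python
import numpy as np

def rsos(x):
    """Root sum of squares over the channel axis: voxel i -> ||x_i||_2."""
    return np.linalg.norm(x, axis=-1)

def combine(x, p):
    """Project each voxel's d-channel vector with a projection p in G_{k,d},
    then take the rSOS: voxel i -> ||p x_i||_2."""
    return np.linalg.norm(x @ p.T, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 32))    # toy data: m = 100 voxels, d = 32 coils
# Without compression (p = identity) the combination reduces to plain rSOS.
assert np.allclose(combine(x, np.eye(32)), rsos(x))
```

Since $p$ is an orthogonal projection, $\|px_i\|_2 \le \|x_i\|_2$ holds voxel-wise, i.e. compression can only shrink the combined magnitudes.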

Methods and Experiments
We show here results from our experiments with a simulated T1-weighted data set generated with BrainWeb [4]. We added four varying amounts of Gaussian noise in each coil, assuming no correlation (cf. [1]). Each resulting image volume consists of phased-array data with 32 channels. For more details and additional results from an in-vivo data set, see [2]. For a fixed projection $p \in G_{k,32}$, we compute the MSE between the ground truth $y = \{y_i\}_{i=1}^m$ and the combined final magnitude voxels $\|px\|_2 := \{\|px_i\|_2\}_{i=1}^m$, as well as the variance and the mean SNR of $\|px\|_2$. Following the typical notation in MRI, we use $\mathrm{SNR}(\|px\|_2) := \frac{1}{\sigma}\,\overline{\|px\|_2}$, where $\sigma$ corresponds to the estimated standard deviation of the noise and $\overline{\|px\|_2}$ to the mean voxel magnitude, see e.g. [1]. In the experiment we choose $n = 500$ random projections for each dimension $k$, serving as a covering of $G_{k,32}$.

In Figure 1, scatter plots and visualizations show how the chosen quality measures interact. The symbol + corresponds to $\|px\|_2$ with a projection $p \in G_{k,32}$ and the colors to the different dimensions $k$; a separate symbol marks the rSOS, i.e. $\|x\|_2$, and • the compression with the orthogonal projection provided by PCA. (Right) Cross-sectional images corresponding to the scatter plots. The left column shows the first channel of the simulated noisy 32-channel coil array before the coil combination step. The second column shows the final image obtained by rSOS including compression with the random projection $p \in G_{28,32}$ that yielded the minimal MSE. The third column shows the image provided by rSOS without compression. The last two columns show the final images that use PCA in $G_{1,32}$ and $G_{4,32}$ for compression, yielding the highest mean SNR. We can directly see that the lowest MSE does not correspond to the highest SNR, and the visual evaluation confirms that the SNR describes the image quality more accurately.
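The two quality measures can be sketched on hypothetical toy data as follows; the data, sizes and coil model below are our own stand-ins, not the BrainWeb experiment.

```python
import numpy as np

def mse(y, z):
    """Mean squared error between ground truth y and combined image z."""
    return np.mean((y - z) ** 2)

def mean_snr(z, sigma):
    """Mean SNR in the sense used above: mean voxel magnitude divided by
    the noise standard deviation sigma."""
    return np.mean(z) / sigma

# Toy stand-in for the pipeline: m voxels, d coils, additive Gaussian noise.
rng = np.random.default_rng(1)
m, d, sigma = 1000, 32, 0.05
truth = rng.random(m)                              # hypothetical ground truth
coils = truth[:, None] / np.sqrt(d)                # idealized equal sensitivities
noisy = coils + sigma * rng.standard_normal((m, d))
combined = np.linalg.norm(noisy, axis=-1)          # rSOS without compression
print(f"MSE: {mse(truth, combined):.4g}, mean SNR: {mean_snr(combined, sigma):.4g}")
```

On such magnitude data the rSOS is biased upward by the noise floor, which is exactly the interplay between MSE and SNR examined in the experiments.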

Discussion
We can see that in all experiments rSOS itself, as well as rSOS including PCA compression, yields a low MSE, but is always outperformed by some random projections. Nevertheless, regarding SNR and visualization, the images reconstructed with random compression cannot compete with PCA compression or plain rSOS. This illustrates the contradictory behavior of measuring the reconstruction performance with MSE versus SNR.
Moreover, we observe in the scatter plots that the MSE correlates strongly with the variance at the lower noise levels. Since variance can be interpreted as contrast in images with high SNR, this shows that high contrast here relates directly to a good reconstruction in the MSE sense. However, the higher the noise level, the more the measured variance is driven by the noise itself, so it can no longer be interpreted directly as contrast. Because of this varying behavior, the variance itself does not serve as a useful measure of reconstruction accuracy in this experimental setup.
Measuring MSE and SNR thus leads to contradictory notions of optimality, and the visual evaluation corresponds more closely to the SNR. The highest SNR values were achieved by using PCA for compression, which outperformed rSOS without compression in all settings. Therefore, these experiments also suggest incorporating PCA for a beneficial denoising effect.
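A PCA compression of the kind compared here can be sketched via the SVD of the channel data; this is a minimal version under our own conventions (in particular, whether the channel data is mean-centered before the SVD is our assumption).

```python
import numpy as np

def pca_projection(x, k):
    """Orthogonal projection in G_{k,d} onto the span of the k leading
    principal components of the (m, d) channel data x."""
    xc = x - x.mean(axis=0)                    # center over voxels (assumption)
    # Right singular vectors of the centered data are the principal axes in R^d.
    _, _, Vt = np.linalg.svd(xc, full_matrices=False)
    V = Vt[:k].T                               # d x k orthonormal basis
    return V @ V.T                             # rank-k projection matrix

rng = np.random.default_rng(2)
x = rng.standard_normal((200, 32))
P = pca_projection(x, 4)
assert np.allclose(P @ P, P) and np.isclose(np.trace(P), 4)
```

The resulting matrix can be plugged into the same rSOS combination as the random projections, which makes the MSE/SNR comparison across compression schemes direct.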