Keywords:

  • Beamforming;
  • Covariance thresholding;
  • MEG neuroimaging;
  • source localization and reconstruction;
  • varying coefficient models

Summary

Reconstructing neural activities using non-invasive sensor arrays outside the brain is an ill-posed inverse problem since the observed sensor measurements could result from an infinite number of possible neuronal sources. The sensor covariance-based beamformer mapping represents a popular and simple solution to the above problem. In this article, we propose a family of beamformers by using covariance thresholding. A general theory is developed on how their spatial and temporal dimensions determine their performance. Conditions are provided for the convergence rate of the associated beamformer estimation. The implications of the theory are illustrated by simulations and a real data analysis.

1. Introduction

Magnetoencephalography (MEG) is a technique for mapping brain activity by measuring the magnetic fields produced by electrical currents in the brain, using arrays of superconducting quantum interference device (SQUID) sensors (Hamalainen et al., 1993). Applications of MEG include basic research into perceptual and cognitive brain processes, localizing regions affected by pathology, and determining the biological functions of various parts of the brain. In this article, we propose a novel method for analyzing MEG data and apply it to identify face-perception regions in a human brain.

While MEG offers a direct measurement of neural activity with very high temporal resolution, its spatial resolution is relatively low. Improving this resolution via source reconstruction lies at the heart of the entire MEG-based brain-mapping enterprise. Reconstructing neural activities from measurements taken outside the brain is an ill-posed inverse problem, since the observed magnetic field could result from an infinite number of possible neuronal sources. To be concrete, let inline image be the measurement recorded by MEG sensor i at time inline image, and inline image be the measurements from all n sensors at time inline image, where the time points inline image inline image, the number of time instants inline image is determined by the time window b and the sampling rate inline image per second, and the number of sensors n is of the order of hundreds. Sarvas (1987) showed that the contribution of an individual source to inline image can be calculated numerically using a forward model based on Maxwell's equations and that the contributions of multiple sources sum linearly. Accordingly, inline image can be written as

  • display math(1)

where inline image is the source space (i.e., the space inside the brain), inline image is the source magnitude at location r with unknown orientation inline image, and inline image is a linear function of the orientation inline image, with inline image being an inline image matrix (called the lead field matrix) at location r. The columns inline image, and inline image in inline image are the noiseless outputs of the n sensors when a unit-magnitude source at location r is directed along the x, y, and z axes, respectively. The lead field matrix is known in the sense that it can be calculated by solving a set of Maxwell's equations (Sarvas, 1987). If we knew the source locations and orientations, then inline image would be known and model (1) would reduce to a functional regression coefficient model, which has been extensively studied in the literature (e.g., Chapter 16, Ramsay and Silverman, 2005). Unfortunately, as both the locations and orientations are unknown, it cannot be treated directly as a standard functional regression coefficient model.

To meet the challenge, we discretize the continuous source space by a sieve inline image, which is distributed throughout the brain. We can assume that the true sources are approximately located on the sieve if the sieve is sufficiently dense (i.e., p is sufficiently large). Let inline image inline image inline image be the magnitude vector of the candidate sources at inline image and inline image the source time series at inline image, where the superscript T indicates the matrix transpose. Let inline image and inline image. Then model (1) can be discretized as follows:

  • display math(2)

where inline image, inline image is the noise vector of the n sensors at time inline image. Letting inline image, we define the theoretical and empirical powers of inline image at inline image respectively as

  • display math(3)

We expect that most sources are null in the sense that their theoretical powers are close to zero. Given the sensor data inline image, we want to estimate inline image, and further to infer the locations of latent non-null sources together with the corresponding orientations and source time-courses.
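The discretized model (2) and the empirical power in (3) can be sketched numerically as follows. This is a minimal illustration only: the lead field matrix below is random noise, whereas in practice it is computed from the Maxwell-equation forward model, the sizes are hypothetical, and the empirical power is read here as the sample variance of a time series.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, J = 100, 500, 250        # sensors, sieve size, time instants (hypothetical)

# Stand-in lead field: each grid point contributes an n x 3 block.
L = rng.standard_normal((n, 3 * p))

# Sparse source magnitudes: a single active grid point with fixed
# orientation (1, 0, 0) oscillating at 10 Hz; all other sources are null.
t = np.arange(J) / J
s = np.zeros((3 * p, J))
s[3 * 7:3 * 7 + 3, :] = np.outer([1.0, 0.0, 0.0], np.sin(2 * np.pi * 10 * t))

# Model (2): sensor measurements are the linear mix plus sensor noise.
B = L @ s + 0.1 * rng.standard_normal((n, J))

def empirical_power(x):
    """Empirical power of a time series, read as in (3) as its sample variance."""
    return np.mean((x - x.mean()) ** 2)
```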

To obtain a good approximation to model (1), the size of the sieve p should be substantially larger than the number of sensors n. However, when p is considerably larger than n, the estimation problem becomes highly ill-defined, as there is a diverging number of candidate models in the MEG model space. This has led to surging interest in developing new methods and theories to cope with this situation (e.g., Friston et al., 2008; Henson et al., 2010; Sekihara and Nagarajan, 2010). Certain regularizations must be imposed on the above model to tackle these adverse effects. One frequently used regularization is the so-called sparsity condition, under which the observed magnetic field is assumed to depend on a much smaller number of latent sources than the number of available sensors n. Under this condition, the problem reduces to that of finding the true sources among a very large number of candidates. The central challenge for modern statistics is how to overcome the effects of diverging spectra of sources as well as noise accumulation in an infinite-dimensional source space.

The existing methods in the literature fall roughly into two categories, global approaches and local approaches, with Bayesian and beamforming-based methods as the respective special cases (e.g., Friston et al., 2008; Sekihara and Nagarajan, 2010). In the global approach, we directly fit the candidate model to the data, and the sieve size is required to be known in advance. In contrast, the local approach involves a list of local models, each of which is tailored to a particular candidate region, so the sieve size can be arbitrary. The global approach often needs to specify parametric models for the sources and the noise process, while local approaches are model-free. When the sieve size is small or moderate compared to the number of available sensors n, we may use a Bayesian method to infer latent sources, with the help of computationally intensive algorithms (e.g., Friston et al., 2008). However, when the sieve size is large, these global methods may be ineffective or computationally intractable, and local approaches become more attractive. Sensor covariance-based beamforming represents a popular and simple solution to the above large-p-small-n problem. The basic premise behind beamforming is to scan through a source space with a series of filters, each of which is tailored to a particular area in the source space (called the pass-band) and resistant to confounding effects originating from other areas (called the stop-band) (Robinson and Vrba, 1998). Scalar minimum variance beamforming aims to estimate the theoretical power inline image at the location inline image by minimizing the sample variance of the projected data inline image with respect to the weighting vector w, subject to the constraint inline image. In a scalar minimum variance beamformer, the pass-band is defined by linearly weighting the sensor arrays under the constraint inline image, while the stop-band is realized via minimizing the variance of the projected data.
The estimated power can be used to produce a signal-to-noise ratio (SNR) map over a given temporal window, while the projected data provide time-course information at each location. We rank the candidate sources by their SNRs and filter out noisy ones by thresholding. There are other beamforming methods, such as vector minimum variance beamformers and minimum-norm beamformers (Sekihara and Nagarajan, 2010). Like the scalar version, the weighting vectors in the former are adaptive to the sensor observations, while those in the latter are not.
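The scalar minimum variance filter just described has a well-known closed form: minimizing w'Cw subject to w'l = 1 gives w = C⁻¹l / (l'C⁻¹l), with minimized variance 1 / (l'C⁻¹l). A small sketch (the function name is ours):

```python
import numpy as np

def mv_beamformer(C, l):
    """Scalar minimum variance beamformer for a composite lead field l:
    minimize w' C w subject to w' l = 1.  The closed form is
    w = C^{-1} l / (l' C^{-1} l), with output power 1 / (l' C^{-1} l)."""
    Ci_l = np.linalg.solve(C, l)   # C^{-1} l without forming the inverse
    denom = l @ Ci_l               # l' C^{-1} l
    return Ci_l / denom, 1.0 / denom

# Toy check: with C = I the weights reduce to l / ||l||^2.
C = np.eye(3)
l = np.array([1.0, 2.0, 2.0])
w, power = mv_beamformer(C, l)
```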

Although significant progress has been made in assessing the performance of beamforming through simulations (e.g., Brookes et al., 2008), no rigorous statistical theory is available for examining the scope of a beamformer, that is, what can and cannot be inferred about neuronal activity from beamforming. For example, although the sensor measurements are known to be linearly linked to the underlying neuronal activity via the high-dimensional lead field matrix, subject to some random fluctuations, there is no general and mathematically sound framework for examining the extent to which the spatial dimension (i.e., the lead field matrix) and the temporal dimension (i.e., the temporal correlations of sensor measurements) of a beamformer affect its accuracy in source localization and estimation. In particular, when there are multiple sources, the accuracy of beamforming is compromised by their confounding effects. The closer these sources are, the harder it is for the beamformer to localize and estimate them. It is natural to ask when a beamformer will break down in the presence of locationally nearby multiple sources and how this effect is determined by the spatial and temporal dimensions of the beamformer.

Here, to address these issues, a more flexible beamformer is proposed based on thresholding the sensor covariance estimator. The proposed procedure reduces to the so-called unit-noise-gain minimum-variance beamformer of Sekihara and Nagarajan (2010) when the thresholding level is set to zero. A general framework is provided for the theoretical analysis of the proposed beamformers. An asymptotic theory is developed on how the performance of the beamformer mapping is affected by its spatial and temporal dimensions. Simulation studies and an MEG data analysis are conducted to assess the sensitivity of the proposed procedure to the thresholding level. In particular, the proposed method is applied to a human MEG data set derived from a face-perception experiment. Two clusters of latent sources are predicted, which reacted differently to face and scrambled-face stimuli, as shown in Figure 1.

Figure 1. Time series plots for the locations of the five highest log-contrast values when the thresholding constant inline image was chosen by maximizing the SAM index: Savitzky–Golay smoothed plots of the projected sensor measurements under the two stimuli (faces and scrambled faces) along the estimated signal orientations at the locations with CTF coordinates inline image, inline image, inline image, inline image, and inline image cm, respectively. In each plot, the solid line and the dashed line stand for the time-courses under the face stimulus and the scrambled-face stimulus, respectively.

The rest of the article is organized as follows. The details of the proposed procedure are given in Section 2. The asymptotic properties of the proposed procedure are investigated in Section 3 and in the On-line Supplementary Material. The simulation studies and the real data analysis are presented in Section 4. Conclusions are drawn in Section 5. The proofs of the theorems and lemmas are deferred to Web Appendix A in the On-line Supplementary Material.

2. Methodology

Suppose that the data inline image and inline image, which are obtained from a single-stimulus MEG experiment, follow model (2). Suppose that the latent sources are indexed by an unknown subset I of inline image such that for a positive constant inline image and inline image inline image. We want to estimate I, the source orientations inline image, as well as the source time-series inline image.

We propose a family of beamformers based on sensor covariance thresholding. These beamformers can be implemented in three steps. In Step 1, a thresholded sample sensor covariance matrix is calculated. In Step 2, a brain SNR map is produced by partitioning the brain into a regular three-dimensional grid, called a sieve, and calculating the source SNR at each grid point. These SNRs generate a source distribution overlaid on a structural image of the subject's brain. In Step 3, latent sources are inferred by screening the map. The beamforming method investigated here is termed synthetic aperture magnetometry (SAM) (Robinson and Vrba, 1998). The above procedure can be extended to the setting with two stimuli, where we first apply the procedure to the MEG data for each stimulus and then calculate the log-contrast (the logarithm of the ratio of the SAM indices under the two stimuli) at each grid point. This creates a log-contrast map, whose global peak indicates where the SNR increase for one stimulus relative to the other attains its maximum. The details of the proposed procedure are spelled out below.

2.1. Thresholding the Sensor Covariance Matrix

The inline image sensor covariance C is traditionally estimated by the sample covariance matrix inline image of inline image. Bickel and Levina (2008) showed that the sample covariance is not a good estimator of the population covariance when its dimension n is large or when the sample covariance is degenerate. In MEG neuroimaging, the sensor sample covariance matrix can be nearly singular due to collinearity between nearby sensors, which can seriously affect the estimation of the precision matrix used in source reconstruction.

Although regularizing the sensor sample covariance by shrinkage has already been used in MEG neuroimaging (e.g., Brookes et al., 2008), its spatial block structure has not been explored. Here, applying the idea of thresholding (Bickel and Levina, 2008), we estimate the sensor covariance by inline image where inline image, inline image is an indicator and inline image is a constant changing in n and inline image
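A hard-thresholding sketch in the spirit of Bickel and Levina (2008). The indicator form in the text is read here as zeroing the entries whose magnitude falls below the threshold while keeping the diagonal intact; this reading is an assumption, since the exact rule is hidden in the rendering.

```python
import numpy as np

def threshold_covariance(S, tau):
    """Hard-threshold a sample covariance matrix S: off-diagonal entries
    with |s_ij| <= tau are set to zero; the diagonal is kept intact."""
    T = np.where(np.abs(S) > tau, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))          # J = 200 samples of n = 8 sensors
S = np.cov(X, rowvar=False)
S_tau = threshold_covariance(S, tau=0.15)  # small spurious covariances vanish
```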

2.2. Reconstruction of Latent Sources

We localize the sources which underpin the observed magnetic field as follows. For any location r and orientation inline image, let the n-dimensional vector inline image. Given inline image we calculate the n-dimensional weighting vector inline image by minimizing the sample variance inline image with respect to inline image, subject to inline image. Note that, if the data projection inline image is taken as a filter, then inline image represents the filter output from a unit-magnitude source. Therefore, setting inline image guarantees that the signal from the source at r can fully pass through the filter. In addition to the power at r, the sample variance inline image may contain noise and other unwanted contributions, such as interference from sources at locations other than r. Accordingly, by minimizing the above variance, we obtain the weight inline image, which minimizes such unwanted interference without blocking the signal coming from the source at r (e.g., Chapter 4, Sekihara and Nagarajan, 2010). Substituting inline image into the above sample variance, we obtain the empirical power inline image of the estimated source time-series inline image defined in (3), which is used to estimate the theoretical power at r. Similarly, we obtain the estimated power inline image at r if C is known. As the empirical power of the projected noise along inline image is inline image, the signal-to-noise ratio (SNR) at r is inline image which is proportional to the normalized power

  • display math

where, by convention, inline image is set to 0 (Sekihara and Nagarajan, 2010). The orientation is then estimated by maximizing the SNR or, equivalently, by maximizing the normalized power. The above optimization can be done by solving a generalized eigenvalue problem: the optimal orientation inline image is the eigenvector associated with the minimum non-zero eigenvalue of the inline image matrix inline image relative to the inline image matrix inline image We denote by inline image the reciprocal of this minimum non-zero eigenvalue and call it the SAM index of neural activity at r. As r runs over the sieve, inline image creates a map of the neuronal SNR underlying the measured magnetic fields. The maximum peak of the map provides a location estimate for one of the underlying sources, and projecting the data along the corresponding optimal weighting vector estimates the latent time-course at that peak. We also calculate the local peaks on the transverse slices of the brain, identifying multiple sources by grouping these local peaks.
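A hedged sketch of the generalized eigenvalue step. We assume the unit-noise-gain formulation consistent with Sekihara and Nagarajan (2010), in which the two 3 × 3 matrices are L_r'C⁻²L_r relative to L_r'C⁻¹L_r; the exact matrices in the text are hidden in the rendering, so this pairing is our reading.

```python
import numpy as np
from scipy.linalg import eigh

def sam_index(C, Lr):
    """SAM index and orientation estimate at one grid point.

    Lr is the n x 3 lead field matrix at location r.  The orientation
    solves the generalized eigenproblem
        (Lr' C^-2 Lr) eta = lambda (Lr' C^-1 Lr) eta,
    and the SAM index is the reciprocal of the smallest non-zero lambda."""
    Ci_L = np.linalg.solve(C, Lr)          # C^{-1} Lr
    A = Lr.T @ np.linalg.solve(C, Ci_L)    # Lr' C^{-2} Lr
    B = Lr.T @ Ci_L                        # Lr' C^{-1} Lr
    lam, V = eigh(A, B)                    # eigenvalues in ascending order
    k = int(np.argmax(lam > 1e-12))        # smallest non-zero eigenvalue
    eta = V[:, k] / np.linalg.norm(V[:, k])
    return 1.0 / lam[k], eta

# Toy check: with C = 2 I, every generalized eigenvalue is 1/2, so nu = 2.
C = 2.0 * np.eye(4)
Lr = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 1.0]])
nu, eta = sam_index(C, Lr)
```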

2.3. Choosing the Thresholding Level

In practice, MEG imaging is often run on a subject first without stimulus and then with stimulus. This allows us to calculate the sample covariance inline image for the MEG data with stimulus as well as the sample covariance inline image for the background noise. The latter provides an estimator of the background noise level. To make the thresholded sample covariance convergent, we set inline image with a tuning constant inline image and threshold inline image by inline image, where inline image is the minimum diagonal element of inline image. Note that, when inline image the proposed SAM procedure reduces to the standard SAM implemented in the software FieldTrip. For each value of inline image, we can apply the proposed SAM procedure to the data and obtain the maximum SAM index

  • display math(4)

In both the simulations and a real data analysis, we will show that inline image inline image covers its useful range. We choose the value of inline image at which the maximum SAM index attains its maximum or its minimum, called MA and MI, respectively. By the choice of inline image, MA aims to increase the maximum SNR value, while MI aims to reduce source interference. Note that, via inline image, inline image affects the estimation of C and therefore both the inline image-based source-interference reduction used in calculating the SAM index and the peak value of the SAM index.
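The tuning loop of this subsection can be sketched as follows. Here `run_sam_map` is a hypothetical stand-in for Steps 1–3 of Section 2 returning the maximum SAM index for a given tuning constant, and the grid range is an assumption.

```python
import numpy as np

def choose_threshold(c0_grid, run_sam_map):
    """Scan the tuning constant c0 over a grid; run_sam_map(c0) must
    return the maximum SAM index nu_max(c0) from the full SAM mapping."""
    nu_max = np.array([run_sam_map(c0) for c0 in c0_grid])
    c0_MA = c0_grid[np.argmax(nu_max)]  # MA: maximize the peak SAM index
    c0_MI = c0_grid[np.argmin(nu_max)]  # MI: minimize it (less interference)
    return c0_MA, c0_MI

c0_grid = np.linspace(0.0, 2.0, 21)     # hypothetical range for c0

# Toy stand-in for the full SAM mapping, peaking at c0 = 0.5.
c0_MA, c0_MI = choose_threshold(c0_grid, lambda c0: -(c0 - 0.5) ** 2)
```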

3. Theory

To make model (2) identifiable, we assume the following condition.

(A1): The source processes inline image and the noise process inline image are stationary with inline image. These two processes are uncorrelated with each other. The sources inline image are also uncorrelated with each other.

The assumption of inline image, which holds approximately in many applications (Sekihara and Nagarajan, 2010), is made for simplicity. Under model (2) and condition (A1), if the noises are uncorrelated across the sensors and white, then the sensor covariance matrix can be expressed as

  • display math(5)

where inline image denotes the theoretical power at location inline image, inline image is the background noise level, and inline image is an inline image identity matrix. To simplify the theory, assuming that inline image, we reparametrize model (2) as follows:

  • display math(6)

where inline image and inline image. Here, inline image stands for the Euclidean norm of a vector. For notational simplicity, we let inline image and inline image stand for inline image and inline image, respectively. Note that the SAM index inline image is invariant under the above reparametrization. Although the original time-course and power are not invariant under the reparametrization, they can be recovered by multiplying inline image by the scaling factor inline image. We often find that inline image tends to a limit as n grows.

In this section, we present an asymptotic analysis of the proposed SAM index when both n and J are sufficiently large. In practice, the number of sensors is fixed at a few hundred; allowing n to vary is an analytic device for identifying the spatial factors that affect the performance of a beamformer. We will show that the values of the proposed SAM index are much higher at source locations than at non-source locations. This implies that screening based on the proposed beamformers can eventually identify the latent sources if n and J are sufficiently large. We proceed in two steps. In the first step, we focus on the ideal situation where the sensor covariance matrix is known. In the second step, we investigate the asymptotic behavior of the proposed SAM index when the sensor covariance matrix is unknown and estimated from the sensor measurements at a finite number of time instants.

3.1. Beamforming with Known Sensor Covariance

We assume that there are inline image non-null source locations, say inline image in the model and denote the sensor covariance C by inline image to reflect this assumption. If the sensor processes are ergodic and can be observed over an infinite number of time instants, then the sensor covariance matrix inline image can be fully recovered. Under this ideal situation, we can perform beamforming directly on inline image and reconstruct the unknown sources based on the equation

  • display math

To build a brain map, we consider an arbitrary location r in the brain and orientation inline image with inline image. Let inline image denote inline image, the (scaled) composite lead field vector at r with orientation inline image. For any two locations r and inline image, their lead field spatial coherence is defined by inline image. Note that inline image shows how close inline image is to inline image in terms of the lead field distance inline image.
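If the lead field spatial coherence between two composite lead field vectors is read as their normalized inner product (an assumption on our part, since the displayed definition is hidden in the rendering), it can be computed as:

```python
import numpy as np

def coherence(l1, l2):
    """Lead field spatial coherence between two composite lead field
    vectors, read here as the normalized inner product (cosine)."""
    return (l1 @ l2) / (np.linalg.norm(l1) * np.linalg.norm(l2))

# Nearby grid points typically have nearly parallel lead fields, so their
# coherence is close to 1 and the locations are hard to separate.
l1 = np.array([1.0, 0.0, 0.0])
l2 = np.array([1.0, 0.1, 0.0])
```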

In general, the matrix inline image can be written as inline image, where inline image consists of the first inline image columns of inline image and inline image is a inline image matrix of rank d. For example, inline image in the single spherical head model (Sarvas, 1987). A necessary requirement for the true source inline image to be identifiable is that the lead field vectors differ from each other across locations near inline image. This requirement can be satisfied by assuming the following condition.

(A2): For any inline image inline image and inline image are linearly independent in the sense that for any orientations inline image and inline image and non-zero constant inline image with inline image, we have inline image.

Proposition 1. Under condition (A2), the true source location is identifiable, and the orientation is identifiable if and only if the columns of the lead field matrix at the true source location are linearly independent.

3.1.1. Single-source case

To give insight into the behavior of beamforming, we start with the single-source case where inline image. We assume that there exists a single unknown source located at inline image with orientation inline image and that the other sources are weak enough to be ignored. In this case, the estimated power inline image at inline image is equal to inline image with bias inline image. We show below that if the sensor covariance is known, then the SAM beamformer map can accurately recover the true source location and orientation. When n is large, we can further quantify how sharp the peak is. See Section A.1 of Web Appendix A in the On-line Supplementary Material.

Proposition 2. Under conditions (A1) and (A2), inline image has a single mode and attains its maximum at inline image. The bias of the source power estimator at inline image is equal to inline image

3.1.2. Multiple sources

We now turn to multiple sources, where there exist q unknown sources located at inline image with orientations inline image, respectively. To show the consistency of the beamforming estimation, additional notation and regularity conditions on the composite lead field vectors are introduced as follows.

First, we introduce a notion of source separateness to describe the estimability of multiple sources. Let inline image and inline image denote the (scaled) composite lead field vectors at locations inline image and inline image with orientations inline image and inline image, respectively. For simplicity of notation, let inline image To describe the spatial relationships among the composite lead field vectors, for inline image, we define the partial coherence factor inline image between inline image and inline image given inline image by iteratively performing the so-called sweep operation on the matrix inline image as follows:

  • display math

where the dependence of the above notation on n has been suppressed. Note that inline image shows the partial self-coherence of inline image given the preceding inline image’s (Goodnight, 1979); see Section A2 of Web Appendix A in the On-line Supplementary Material. Let inline image and

  • display math(7)

We impose the condition that inline image as inline image, which is equivalent to requiring that, for any inline image, the maximum partial coherence of inline image is bounded above by inline image for large n. If inline image, then inline image are all positive when n is large. Therefore, inline image are linearly independent for large n, because the inverse of the matrix inline image can be obtained by iteratively performing the sweep operation inline image times on this matrix. We say that the source locations are asymptotically separable if inline image as inline image, and that a non-source location inline image with orientation inline image is asymptotically separable from the sources if inline image. Second, to regularize the lead field matrix, we define inline image by letting inline image, and

  • display math

It follows from the definition that inline image if inline image and inline image (i.e., given inline image, the partial regression coefficients of inline image and inline image with respect to inline image are bounded). In particular, if inline image is bounded away from zero as n tends to infinity, then the above partial regression coefficients are bounded and thus inline image. The following theorem shows that if inline image (i.e., the partial regression coefficients of inline image are bounded), then inline image is a necessary condition for the source powers to be consistently estimated. We state our general mapping theorem as follows.

Theorem 1. Suppose that inline image and that conditions (A1) and (A2) hold. Then, as inline image, the power estimator inline image at inline image is asymptotically larger than inline image, that is, the power estimator is not consistent, where inline image is defined in equation (7). As inline image, we have:

  1. For any non-null source location inline image, inline image the power estimator and the SAM index at location inline image with orientation inline image admit the expressions
    • display math
    where inline image and inline image are constants defined in the on-line supplementary material satisfying inline image and inline image as n is sufficiently large. In particular, for inline image, we have inline image, inline image inline image.
  2. For any null-source location inline image which is asymptotically separable from the sources and satisfies inline image, the power estimator and the SAM index at location inline image with orientation inline image can be respectively expressed as
    • display math

Note that Theorem 1 holds under the reparametrized model (6) and continues to hold under the original model (2) after adjusting the power estimators by the corresponding scaling factors. The theorem can also be extended to the setting with two stimuli. It can be seen from Theorem 1 that, at each separable non-null source location, the SAM index is asymptotically equal to the product of the number of sensors n, the source power, and the source coherence factor. In contrast, at a null-source location inline image which is asymptotically separated from the non-null sources, the SAM power index is equal to the lower bound inline image up to the first asymptotic order. These facts suggest that the greater the number of sensors employed, the larger the contrast between non-null and null sources, and therefore the easier it is for the beamformer to localize the sources. In particular, when n is large, the localization bias can be of order less than inline image in terms of the lead field distance.
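The sweep operation underlying the partial coherence factors of Section 3.1.2 can be sketched as follows. Sign conventions vary across references; under the one below, sweeping every pivot of a symmetric matrix yields the negative of its inverse, and the diagonal after sweeping the preceding pivots gives the partial (residual) variances used to define the coherence factors. The names and the residual-share reading are ours.

```python
import numpy as np

def sweep(A, k):
    """One sweep operation on a symmetric matrix A at pivot k (Goodnight,
    1979).  Under this sign convention, sweeping every pivot in turn
    yields -A^{-1}; other references flip some signs."""
    A = A.copy()
    d = A[k, k]
    row = A[k, :].copy()
    A -= np.outer(row, row) / d    # eliminate pivot k from the other entries
    A[k, :] = row / d
    A[:, k] = row / d
    A[k, k] = -1.0 / d
    return A

# G is the Gram matrix of (scaled) composite lead field vectors.  After
# sweeping pivot 0, the (1, 1) entry is the squared residual of the second
# vector given the first; its share of the original diagonal entry is a
# partial self-coherence in the spirit of Section 3.1.2.
rng = np.random.default_rng(2)
X = rng.standard_normal((50, 3))     # three hypothetical lead field vectors
G = X.T @ X
G1 = sweep(G, 0)
residual_share = G1[1, 1] / G[1, 1]  # in (0, 1]; near 0 means near-coherent

full = sweep(sweep(sweep(G, 0), 1), 2)  # equals -inv(G) under this convention
```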

3.2. Beamforming with Estimated Sensor Covariance

In addition to conditions (A1) and (A2), we need the following two conditions to extend the asymptotic analysis to the case of unknown sensor covariance. The first one is imposed to regularize the tail behavior of the sensor processes. The Gaussian iid processes considered in Bickel and Levina (2008) satisfy the following condition.

(A3): There exist positive constants inline image and inline image such that for any inline image,

  • display math

and inline image. The second additional condition assumes that the sensor processes are strong mixing. Let inline image and inline image denote the σ-algebras generated by inline image and inline image, respectively. Define the mixing coefficient

  • display math

The mixing coefficient inline image quantifies the degree of temporal dependence of the process inline image at lag k. We assume that inline image decreases exponentially fast as the lag k increases, a condition satisfied by the Gaussian iid processes considered in Bickel and Levina (2008).

(A4): There exist positive constants inline image and inline image such that

  • display math

For a constant A, let inline image. Let inline image be the sample mean of the ith sensor as before and

  • display math

where inline image is the indicator. Let inline image. We are now in a position to generalize Proposition 2 and Theorem 1 to the case where the sensor covariance is estimated by the thresholded covariance estimator. The former generalization is straightforward and thus omitted. In the following theorem, we show that estimating the SAM index is harder than estimating the power index, owing to the difficulty of estimating the small value of inline image.

Theorem 2. Suppose that conditions (A1)–(A4) hold and that inline image and inline image, inline image as both n and J tend to infinity. Then, we have:

  1. If inline image and inline image, the power estimator at inline image is asymptotically larger than inline image in probability, that is, the power estimator is not consistent. If inline image, for inline image the power estimator and the SAM index at the source location inline image with orientation inline image admit the expressions
    • display math
    where inline image and inline image as n is sufficiently large.
  2. For null-source location inline image, as inline image and inline image, the power estimator and the SAM index at location inline image with orientation inline image can be expressed as
    • display math
    as n is sufficiently large.

4. Numerical Results

In this section, we assess the proposed procedure through simulation studies and a real data analysis.

4.1. Simulation Studies

Using simulations, we examined the performance of the proposed beamformer procedure in terms of the so-called localization bias. For any estimator inline image of a source location inline image, the localization bias is defined as inline image, where inline image is the inline image distance between inline image and inline image. We attempt to answer the following questions: Does the thresholded covariance estimator improve the SAM procedure? To what extent does source interference degrade the performance of the proposed beamformer procedure? How do the number of sensors and the sampling frequency affect the accuracy of source localization? For simplicity, we focused on a single-shell forward head model, in which a spherical volume conductor of 10 cm radius centered at the origin was created (Oostenveld et al., 2011). We constructed 2222 regular 3D grid points of 1 cm resolution within the head. These candidate source positions were aligned with the axes of the head coordinate system. The gradiometer array with n sensors at a 12 cm distance from the origin and the corresponding (inline image) lead field matrix L between the n sensors and the 2222 grid points were created by using the open-source software FieldTrip. We first studied the single-source case, where the true source inline image was located at inline image and of the form inline image with

  • display math
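The localization bias used throughout this subsection is simply the Euclidean distance between an estimated and the true source location on the candidate grid. A small Python sketch, under our own assumptions: the grid here is an illustrative integer lattice clipped to a 10 cm sphere, whereas the paper's 2222-point grid comes from FieldTrip and need not match this construction.

```python
import numpy as np

# Illustrative candidate grid: 1 cm lattice inside a 10 cm spherical head model.
ax = np.arange(-10, 11)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
grid = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
grid = grid[np.linalg.norm(grid, axis=1) < 10]  # keep points inside the sphere

def localization_bias(est_loc, true_loc):
    """l2 (Euclidean) distance in cm between estimated and true source locations."""
    return float(np.linalg.norm(np.asarray(est_loc) - np.asarray(true_loc)))
```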

By using inline image, we simulated the neural oscillation patterns in the brain motor system. The sensor measurements at t follow the model

  • display math(8)

where inline image is the sensor noise vector. We set the time window width inline image and the number of time instants J equal to the sample rate inline image; that is, the sensors are measured at the time instants inline image. The signal strength (inline image) in the sensor space was defined by

  • display math

For each k, we sampled inline image from an n-dimensional standard Normal inline image and set inline image, where inline image is a noise-level coefficient. Similarly, we can define the noise strength (inline image). We considered two representative noise levels (i.e., low and high) inline image with the SNRs (i.e., inline image) equal to inline image and inline image, respectively. For each combination of inline image, where inline image and inline image inline image, we generated 30 datasets of inline image paired with inline image by (8) for each noise level. Here, we imitated the practical situation in which MEG imaging is run on a subject first without stimulus and then with stimulus; the former provides an estimate of the background noise level. For each dataset, we calculated the sample covariance inline image of inline image and the corresponding sample covariance inline image of the background noises. We then calculated the thresholded sensor covariance estimate inline image, where inline image with the tuning constant inline image and inline image is the minimum diagonal element of inline image. We considered five values for inline image: inline image, 2, and the values (say ma and mi) selected by MA and MI. Recall that, when inline image, the proposed SAM procedure reduces to the standard SAM implemented in FieldTrip. For each value of inline image, we applied the proposed SAM procedure to the 30 datasets, obtaining 30 SAM-based maximum location estimates of inline image. We then displayed the 30 inline image values defined in (4) and the localization biases by box-and-whisker plots. The results, summarized in Figure 2 and Web Figure B1 in the Web Appendix B of the On-line Supplementary Material, demonstrate that in the single-source case, for each combination of inline image and inline image, the proposed procedure can localize the true source with very small bias, and that the performance deteriorates as the background noise level increases.
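The MA rule just described chooses the tuning constant by maximizing the SAM index over the candidate set. The sketch below illustrates that selection for a single scanned location, under our own assumptions: a unit-gain LCMV power estimate power = 1/(l' T^{-1} l), a SAM index defined as power over a fixed noise power, and a threshold proportional to sqrt(log n / J) times the minimum variance. All names are ours, and the exact threshold and index in the paper may differ.

```python
import numpy as np

def select_delta_by_ma(S, lead_field, deltas, J, noise_power=1.0):
    """Pick the tuning constant delta by maximizing the SAM index (MA rule).

    S          : n x n sample sensor covariance
    lead_field : n-vector, lead field at the scanned location/orientation
    deltas     : candidate tuning constants
    J          : number of time instants used to form S
    """
    n = S.shape[0]
    base = np.sqrt(np.log(n) / J) * np.min(np.diag(S))
    best_delta, best_sam = None, -np.inf
    for d in deltas:
        # hard-threshold the covariance at level d * base, keeping the diagonal
        T = np.where(np.abs(S) >= d * base, S, 0.0)
        np.fill_diagonal(T, np.diag(S))
        w = np.linalg.solve(T, lead_field)      # unnormalized LCMV weights
        power = 1.0 / float(lead_field @ w)     # beamformer power estimate
        sam = power / noise_power               # SAM (pseudo-SNR) index
        if sam > best_sam:
            best_delta, best_sam = d, sam
    return best_delta, best_sam
```

In the full procedure this maximization is carried out jointly with the scan over candidate locations; here one location is fixed for clarity.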

Figure 2. Single source: The first two columns of plots are the box-and-whisker plots of the SAM values and the localization biases for inline image sensors and the sample rate inline image. Among them, the upper two respectively show the SAM values and the localization biases against the tuning constant inline image, inline image, and inline image for the noise level inline image, and the bottom two are for the noise level inline image. The remaining two columns of plots are for inline image sensors and the sample rate inline image. From the first and third columns of the plots, we see that most of the time MA selected 0 as inline image attained the maximum at inline image. This figure appears in color in the electronic version of this article.

We now turn to the two-source case. Let the second source be

  • display math

which is located at inline image. Adding inline image into the model in (8), we have

  • display math(9)

The inline image distance between the two sources is 13.

Similar to the single-source case, for each of the 10 combinations of inline image, we generated 30 paired datasets of inline image and inline image. For each dataset, we thresholded the sample covariance inline image by inline image, and then applied the SAM procedure. We obtained 30 inline image values and the SAM-based maximum estimates of the source location, and calculated their minimum inline image distances to inline image and inline image. The results, displayed as box-and-whisker plots in Figure 3 and Web Figures B2 and B3 in the Web Appendix B of the On-line Supplementary Material, suggest the following. (1) The SAM-based maximum estimates are much closer to inline image than to inline image; that is, the interference from the second source has masked the first source. In contrast, without the interference from inline image, as demonstrated in the single-source case, we can localize the source inline image very accurately by the SAM-based maximum estimation. (2) On average, when the noise level in the data was high, at either a low or a high thresholding level, the localization bias was high because the sensor covariance was poorly estimated. Moreover, the SAM value was in general inversely related to the localization bias, suggesting that we should choose the thresholding level by maximizing the SAM index in inline image. These facts explain why the proposed inline image thresholding reduces the source localization bias. However, inline image is not necessarily optimal, and the optimal value depends on n and J. (3) The number of sensors n has an effect on the choice of the sampling rate and inline image. When n is small, say 26, source interference may have a large effect on source localization; in this situation, MI can perform better than MA. On average, the localization bias is higher when inline image than when inline image. (4) In the two-source case, the thresholding is useful when the noise level in the data is high.
The performance of the proposed procedure is not necessarily decreasing in the noise level. In the field of sensor array processing, one often adopts a shrinkage approach, that is, artificially adding noise to the data in order to improve the SNR mapping (Sekihara and Nagarajan, 2010). In light of this, as in the software FieldTrip, our implementation adds a small amount of noise (determined by the smallest eigenvalue of the noise covariance matrix) to the data (i.e., adds inline image to the sample sensor covariance) after thresholding whenever the smallest eigenvalue of the sample sensor covariance is too small.
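The noise-loading step just described can be sketched as follows. This is a simplified stand-in for FieldTrip-style regularization; the tolerance and the use of the noise covariance's smallest eigenvalue as the loading amount follow the description above, but the function name and defaults are ours.

```python
import numpy as np

def regularize_cov(S, noise_cov, tol=1e-8):
    """Diagonal loading of a (possibly singular) sample sensor covariance.

    If the smallest eigenvalue of S is below tol, add eps * I, where eps is
    the smallest eigenvalue of the noise covariance matrix.
    """
    eps = np.min(np.linalg.eigvalsh(noise_cov))
    if np.min(np.linalg.eigvalsh(S)) < tol:
        S = S + eps * np.eye(S.shape[0])
    return S
```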

Figure 3. Two sources: The first two rows display the box-and-whisker plots of the SAM values and the localization biases against the tuning constant inline image, inline image, and inline image for the combinations of inline image sensors, and the sample rates inline image, respectively. The third and fourth rows present the box-and-whisker plots of the local localization bias to the sources inline image and inline image against the transverse slice indices from 0 to 10 when inline image was selected by MA. The first and third rows are for the noise level inline image, while the second and fourth rows are for the noise level inline image. From the first row of the plots, we see that most of the time MA selected 1 as inline image attained the maximum at inline image when inline image, and that most of the time MA selected 0.5 as inline image attained the maximum at inline image when inline image. From the third and fourth rows of the plots, we see that none of the local peaks on the transverse slices is close to the source location inline image, which implies that source 1 has been masked on the SAM-based power map. This figure appears in color in the electronic version of this article.

To see whether the local peaks on the beamforming map can provide any trace of the source inline image, the 2222 grid points were further divided into 10 transverse slices along the z-axis, where the kth transverse slice contains the grid points of the form inline image, inline image. For each combination of inline image and the associated inline image selected by MA, we calculated the local peak of the SAM index and its localization biases to inline image and inline image for each of the ten slices. The results, plotted in Figure 3 and Web Figures B2 and B3 in the Web Appendix B of the On-line Supplementary Material, indicate that the average inline image distances from the locations of these local peaks to inline image are larger than 4 for every combination of inline image. Therefore, the trace of source inline image has been masked on the SAM-based SNR map. We also used the Bayesian principal component decomposition (PCD) of the estimated sensor covariance matrix (Minka, 2000) to determine the effective number of sources. In the above simulation setting, the Bayesian PCD predicted only a single effective source in all 30 datasets.
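Dividing the grid into transverse slices and recording the within-slice peak of the index map can be sketched as below (the function name and array layout are our assumptions; slices here are the distinct z-coordinates rather than the paper's exact slice indexing):

```python
import numpy as np

def local_peaks_by_slice(grid, index_values, n_slices=10):
    """Split the grid into transverse slices by z-coordinate and return the
    location of the within-slice peak of index_values for each slice.

    grid         : m x 3 array of candidate locations
    index_values : length-m array (e.g., the SAM index at each location)
    """
    peaks = []
    zs = np.unique(grid[:, 2])
    for z in zs[:n_slices]:
        idx = np.where(grid[:, 2] == z)[0]
        peaks.append(grid[idx[np.argmax(index_values[idx])]])
    return np.array(peaks)
```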

4.2. Real MEG Data Analysis

We illustrate the proposed method using a human MEG data set provided by Professor Richard Henson from the MRC Cognition and Brain Sciences Unit Volunteer Panel (Henson et al., 2010). The study subject, a healthy young adult, underwent the following face-perception test, which involved two different stimuli (faces and scrambled faces). A central fixation cross (presented for a random duration of 400–600 ms) was followed by a face or a scrambled face (presented for a random duration of 800–1000 ms), followed by a central circle for 1700 ms. As soon as the face or the scrambled face appeared, the subject used either the left or right index finger to report whether the stimulus was symmetrical or asymmetrical about a vertical axis through its center. The MEG data were collected with 102 magnetometers and sampled at a rate of 1100 Hz. The experiment included six sessions. Here, we analyzed the data set generated in the first session, which includes 96 trials labeled as Face and 50 labeled as Scrambled Face.

We first created a grid system of 1 cm resolution using the subject's anatomical magnetic resonance imaging (MRI) data. Then, we applied the neuroimaging software SPM8 to read, preprocess, and epoch the recorded data for the face stimulus and the scrambled face stimulus, respectively. This gives rise to 146 epochs (i.e., trials) of 700 ms (770 time instants) with 200 ms pre-stimulus and 500 ms post-stimulus, where 96 trials were labeled as Face and the remaining 50 as Scrambled Face. To reduce the noise level in the data, we averaged over the Face trials and the Scrambled Face trials, respectively. For each of the two stimuli, we calculated the sample covariance inline image from the averaged post-stimulus data and the noise covariance inline image from the averaged pre-stimulus data. We thresholded inline image by inline image with different values of inline image, inline image, and mi, where inline image inline image, and inline image is the minimum diagonal element of inline image. For each value of inline image, we then applied the proposed SAM procedure to the face and the scrambled face data, respectively, and calculated the log-contrast between the face stimulus and the scrambled face stimulus, creating a log-contrast map. We sliced the map into 13 transverse layers, in each of which we calculated the local maximum peak. The locations of these peaks were described by the CTF coordinates; the definition of the brain CTF coordinate system can be found at http://neuroimage.usc.edu/brainstorm/CoordinateSystems. We projected the MEG data along the directions of the optimal weighting vectors at each of the above 13 local peak locations, followed by the Savitzky–Golay smoothing (Orfanidis, 1996). Finally, we spatially smoothed the log-contrast values by using the interpolation command in FieldTrip and overlaid the interpolated log-contrast values on the MRI scan of the brain.
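The pre-/post-stimulus covariance estimation and the log-contrast between conditions described above can be sketched as follows (a hedged illustration: the trials x sensors x time array layout and all names are our assumptions, not the paper's code):

```python
import numpy as np

def pre_post_covariances(epochs, pre_idx, post_idx):
    """Average the epochs, then estimate the noise covariance from the
    pre-stimulus window and the sensor covariance from the post-stimulus
    window.  epochs: trials x sensors x time array."""
    avg = epochs.mean(axis=0)            # average over trials to reduce noise
    noise_cov = np.cov(avg[:, pre_idx])  # sensors x sensors
    data_cov = np.cov(avg[:, post_idx])
    return data_cov, noise_cov

def log_contrast(power_face, power_scrambled):
    """Log-contrast between the two conditions' beamformer power maps."""
    return np.log(power_face) - np.log(power_scrambled)
```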

For inline image, 2, inline image, and inline image, the SAM procedure resulted in the same set of transverse local peaks and the same global peak with the CTF coordinates inline image cm. Note that inline image and inline image. The transverse slices of the log-contrast maps along the z-axis are shown in Web Figure C1 in the Web Appendix C of the On-line Supplementary Material for inline image and 2. This is in agreement with the observation made in the simulation studies above that the SAM procedure is not sensitive to inline image when there is a single strong source and the number of sensors is 91. In light of this, we focused on inline image in the remaining analysis.

We calculated the log-contrast values at the above 13 local peaks before and after the spatial interpolation, respectively, as shown in Table 1. To group the above log-contrast values, we placed the 13 values in decreasing order. Given inline image, we put the first inline image ordered values in Group 1 (the source group) and the remaining in Group 2 (the non-source group). By the 'elbow' rule in cluster analysis, we chose inline image since the variance percentage inline image attained its maximum when inline image. The resulting Group 1 includes the log-contrast values at the locations inline image, inline image, inline image, inline image, and inline image cm, while Group 2 includes the values at the locations inline image, inline image, inline image, inline image, inline image, inline image, inline image, and inline image cm. Unlike the latter 8 local peaks, the former 5 local peaks had relatively larger powers in the responses to the face stimulus than to the scrambled face stimulus. We plotted the orthogonal slices of the log-contrast maps at the global peak location inline image cm and at the local peak location inline image, as shown in Figure 4. The estimated time-courses at the peaks in Group 1 are displayed in Figure 1. The orthogonal slice plots and the estimated time-courses at the other local peaks can be found in Web Figure C2 in the Web Appendix C of the On-line Supplementary Material. The Bayesian PCD predicted a single effective source.
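One plausible reading of the 'elbow' rule above is a two-group split of the ordered values that maximizes the between-group sum of squares as a fraction of the total, which is sketched below. The criterion and names are our assumptions and may differ in detail from the paper's exact variance percentage.

```python
import numpy as np

def elbow_group(values):
    """Split values (in decreasing order) into a source group of size k and a
    non-source group, choosing k to maximize the between-group variance
    percentage (a two-cluster split of 1-D data)."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]  # decreasing order
    n = len(v)
    total = float(np.sum((v - v.mean()) ** 2))
    best_k, best_pct = 1, -np.inf
    for k in range(1, n):
        g1, g2 = v[:k], v[k:]
        between = (k * (g1.mean() - v.mean()) ** 2
                   + (n - k) * (g2.mean() - v.mean()) ** 2)
        pct = between / total
        if pct > best_pct:
            best_k, best_pct = k, pct
    return best_k, v[:best_k].tolist()
```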

Table 1. The peak locations and log-contrast values for the 13 local peaks

Peak location (cm)   log-contrast   Interpolated log-contrast   Face-selective region
(−5, 5, 5)           0.6218         0.6151                      Close to STS
(−5, 5, 6)           0.5263         0.5084                      Close to STS
(−6, 4, 7)           0.5201         0.4624                      Close to STS
(−4, −4, 8)          0.3293         0.3078                      N/A
(−5, 5, 4)           0.1243         0.1178                      N/A
(−4, −4, 0)          0.1144         0.0640                      N/A
(−5, −3, 9)          0.0877         0.0928                      N/A
(−4, −2, −1)         0.0797         0.0551                      N/A
(0, −1, 3)           0.0338         −0.0136                     N/A
(−6, −1, 1)          0.0036         −0.0243                     N/A
(−5, 4, 2)           −0.0069        −0.0222                     N/A
(2, 3, 10)           −0.0562        −0.3745                     N/A
(0, −1, 11)          −0.5374        −0.5782                     N/A

By N/A, we mean that the location is not close to any face-selective region.
Figure 4. The interpolated log-contrast map when inline image. The first two columns of plots show the orthogonal slices (through the maximum peak inline image cm and along the directions of the three axes, respectively) of the log-contrast map. The remaining two columns of plots show the orthogonal slices (through the local peak inline image cm and along the directions of the three axes, respectively) of the log-contrast map. In each slice, the log-contrast values have been thresholded at half of the maximum. This figure appears in color in the electronic version of this article.

Interestingly, Figure 1 reveals that during the time period 120–300 ms, there were larger evoked responses (i.e., larger M/N170 amplitudes) to the face stimulus than to the scrambled face stimulus at the locations inline image, inline image, inline image, and inline image cm, respectively. Note that the M/N170 is a typical signature of face perception (see Henson et al., 2010). Furthermore, Web Figure C3 in the Web Appendix C of the On-line Supplementary Material highlights two putative clusters around inline image and inline image cm, where the brain gave larger power responses to the face stimulus than to the scrambled face stimulus. Figure 4 indicates that the former is stronger than the latter. In fact, Table 1 shows that the log-contrast at location inline image cm was twice that at location inline image cm. Figure 4 also shows that the location inline image cm is close to the right superior temporal sulcus (STS) region, which is known to be associated with face perception (Davies-Thompson and Andrews, 2011).

5. Conclusion


In the past decade, the performance of beamformers has been examined by extensive simulations (e.g., Van Veen et al., 1997; Brookes et al., 2008; Quraan et al., 2011). In particular, Quraan et al. (2011) demonstrated that beamformer methods can fail to effectively attenuate the interference of strong sources from outside the region of interest, resulting in leakage. Despite these developments, there is a lack of a general and statistically sound theory that allows one to assess: (1) when multiple sources are detectable, (2) how the spatial and temporal dimensions determine the beamforming accuracy in source localization and estimation, (3) how the sampling rate is related to the number of sensors, and (4) whether using the thresholded sensor covariance can improve the performance of a beamformer. Here, we have developed a general asymptotic theory addressing the above questions and illustrated it by simulations. We have proposed a family of beamformers based on covariance thresholding and have shown how to select the thresholding level. The simulations suggest that the proposed thresholding does improve the performance of the standard SAM procedure. The new methodology is further assessed by a real MEG data analysis. The proposed methodology is general and can be used to screen features in a general varying coefficient model; however, the details of these studies are beyond the scope of this article and will be presented elsewhere.

6. Supplementary Materials


Web Supplementary Materials, referenced in Sections 'Methodology' and 'Numerical Results', and the Matlab code are available with this paper at the Biometrics website on Wiley Online Library.

Acknowledgments


We thank Professor Richard Henson from the MRC Cognition and Brain Sciences Unit, Cambridge, for sharing his MEG data with us, and an Associate Editor and two anonymous reviewers for their constructive comments.

References

  • Bickel, P. and Levina, E. (2008). Covariance regularization by thresholding. The Annals of Statistics 36, 2577–2604.
  • Brookes, M. J., Vrba, J., Robinson, S. E., Stevenson, C. M., Peters, A. M., Barnes, G. R., Hillebrand, A., and Morris, P. G. (2008). Optimizing experimental design for MEG beamformer imaging. NeuroImage 39, 1788–1802.
  • Davies-Thompson, J. and Andrews, T. J. (2011). The localization and functional connectivity of face-selective regions in the human brain. Journal of Vision 11, 647.
  • Friston, K., Daunizeau, J., Kiebel, S., Phillips, C., Trujillo-Barreto, N., Henson, R., Flandin, G., and Mattout, J. (2008). Multiple sparse priors for the M/EEG inverse problem. NeuroImage 39, 1104–1120.
  • Goodnight, J. H. (1979). A tutorial on the SWEEP operator. The American Statistician 33, 149–158.
  • Hamalainen, M., Hari, R., Ilmoniemi, R. J., Knuutila, J., and Lounasmaa, O. V. (1993). Magnetoencephalography: Theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics 65, 413–497.
  • Henson, R. N., Flandin, G., Friston, K. J., and Mattout, J. (2010). A parametric empirical Bayesian framework for fMRI-constrained MEG/EEG source reconstruction. Human Brain Mapping 31, 1512–1531.
  • Minka, T. (2000). Automatic choice of dimensionality for PCA. Technical Report 514, MIT Media Lab, Perceptual Computing Section.
  • Oostenveld, R., Fries, P., Maris, E., and Schoffelen, J. M. (2011). FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational Intelligence and Neuroscience 2011, 156869.
  • Orfanidis, S. J. (1996). Introduction to Signal Processing. New York: Prentice-Hall.
  • Quraan, M. A., Moses, S. N., Hung, Y., Mills, T., and Taylor, M. J. (2011). Detection and localization of hippocampal activity using beamformers with MEG: A detailed investigation using simulations and empirical data. Human Brain Mapping 32, 812–827.
  • Ramsay, J. O. and Silverman, B. W. (2005). Functional Data Analysis, 2nd edition. New York: Springer.
  • Robinson, S. and Vrba, J. (1998). Functional neuroimaging by synthetic aperture magnetometry. In: Recent Advances in Biomagnetism, T. Yoshimoto, M. Kotani, S. Kuriki, H. Karibe, and N. Nakasato (eds), 302–305. Sendai, Japan: Tohoku University Press.
  • Sarvas, J. (1987). Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem. Physics in Medicine and Biology 32, 11–22.
  • Sekihara, K. and Nagarajan, S. S. (2010). Adaptive Spatial Filters for Electromagnetic Brain Imaging. Berlin: Springer-Verlag.
  • Van Veen, B. D., Van Drongelen, W., Yuchtman, M., and Suzuki, A. (1997). Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Transactions on Biomedical Engineering 44, 867–880.

Supporting Information


All Supplemental Data may be found in the online version of this article.

Filename                          Format  Size   Description
biom12123-sm-0001-SupData-S1.pdf  PDF     2631K  Supplementary Materials.
biom12123-sm-0002-SupData-S2.txt  TXT     9K     Supplementary Materials Code.

Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.