Multiplexity of human brain oscillations as a personal brain signature

Abstract Human individuality is likely underpinned by the constitution of functional brain networks that ensure consistency of each person's cognitive and behavioral profile. These functional networks should, in principle, be detectable by noninvasive neurophysiology. We use a method that enables the detection of dominant frequencies of the interaction between every pair of brain areas at every temporal segment of the recording period, the dominant coupling modes (DoCM). We apply this method to brain oscillations, measured with magnetoencephalography (MEG) at rest in two independent datasets, and show that the spatiotemporal evolution of DoCMs constitutes an individualized brain fingerprint. Based on this successful fingerprinting we suggest that DoCMs are important targets for the investigation of neural correlates of individual psychological parameters and can provide mechanistic insight into the underlying neurophysiological processes, as well as their disturbance in brain diseases.


Hidden Markov Modeling
The Baum-Welch algorithm finds the values of the HMM parameters that best fit the observed data.
For training we consider: a sequence of observations O₁, O₂, …, Oₙ and a corresponding sequence of hidden states S₁, S₂, …, Sₙ. The algorithm estimates the transition matrix A and the emission matrix B from the observations O. Since the hidden states S are unknown, we assign a random initial guess to A and B.
The algorithm then iterates over the training DoCM time series, re-estimating A and B until convergence.

Expectation maximization
The Baum-Welch algorithm uses an expectation-maximization (EM) approach to find values for A and B:
1. Initialize A and B with some initial values (done only once).
2. Estimate the latent variables ξᵢⱼ(t) and γᵢ(t) using A, B and the observations O. This step estimates how much each transition and emission has been employed, and is called the 'expectation step'.
3. Maximize A and B using the estimates (latent variables) from the previous step. This is called the 'maximization step'.
4. Repeat steps 2 and 3 until convergence.

Initial equations
For the estimation of the A and B matrices, we used the following formulas (source):
1. A = aᵢⱼ (probability of a transition from hidden state i to hidden state j) = expected number of transitions from hidden state i to hidden state j / expected number of transitions from hidden state i.
2. B = bⱼₖ (probability of observing Oₖ in hidden state j) = expected number of times the model is in hidden state j and Oₖ is observed / expected number of times the model is in hidden state j.
The probability aᵢⱼ can be derived from the probability of being in hidden state i at time t and in hidden state j at time t+1, given the observation sequence O and the model (source).
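If the hidden-state sequence were actually known, the two count-ratio definitions above would reduce to simple normalized counting. The toy numpy sketch below (hypothetical function and data, purely illustrative) makes that counting interpretation concrete; Baum-Welch replaces these hard counts with expected counts.

```python
import numpy as np

def count_estimate(states, obs, N, K):
    """Supervised count-ratio estimate of A (N x N) and B (N x K),
    usable only when the hidden-state sequence 'states' is known."""
    A = np.zeros((N, N))  # transition counts
    B = np.zeros((N, K))  # emission counts
    for s, s_next in zip(states[:-1], states[1:]):
        A[s, s_next] += 1
    for s, o in zip(states, obs):
        B[s, o] += 1
    # Normalize each row: counts -> conditional probabilities
    A /= A.sum(axis=1, keepdims=True)
    B /= B.sum(axis=1, keepdims=True)
    return A, B

# Toy 2-state, 2-symbol sequence
states = [0, 0, 1, 1, 0]
obs = [0, 1, 1, 0, 0]
A, B = count_estimate(states, obs, N=2, K=2)
```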
The described procedure can be shown graphically as follows:

S1. Current step probability with forward, backward and emission probability
In the graph we are at time t; we know the probability of being in the current hidden state Sᵢ (the forward probability αᵢ(t)), and we know the probability of completing the rest of the sequence from hidden state Sⱼ onward (the backward probability βⱼ(t+1)). We want the probability of going from Sᵢ to Sⱼ, given that we observe Oₜ₊₁ in Sⱼ at time t+1.
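The forward and backward probabilities themselves are computed by two short recursions. The sketch below is a generic, unscaled numpy illustration with toy parameters (hypothetical names, not the study's code; unscaled arithmetic underflows on long sequences):

```python
import numpy as np

def forward(A, B, pi, obs):
    """alpha[t, i] = P(O_1..O_t, S_t = i)."""
    T, N = len(obs), A.shape[0]
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    """beta[t, i] = P(O_{t+1}..O_T | S_t = i)."""
    T, N = len(obs), A.shape[0]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

# Toy 2-state, 2-symbol model
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
pi = np.array([0.6, 0.4])
obs = np.array([0, 1, 1, 0])
alpha, beta = forward(A, B, pi, obs), backward(A, B, obs)
# P(O|Theta) must be identical from either direction
p_forward = alpha[-1].sum()
p_backward = (pi * B[:, obs[0]] * beta[0]).sum()
```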
Here we make use of the latent variables ξᵢⱼ(t) and γᵢ(t).

ξᵢⱼ(t) is the probability of a transition from hidden state i to hidden state j at time t, given the observations:

ξᵢⱼ(t) = αᵢ(t) aᵢⱼ bⱼ(Oₜ₊₁) βⱼ(t+1) / P(O|Θ)   (1)

Note that the denominator P(O|Θ) is the probability of the observation sequence O along any path, given the model Θ. ξᵢⱼ(t) is defined for time t only; we have to sum over all time steps to get the total expected number of transitions from hidden state i to hidden state j. This sum is the numerator of the equation for aᵢⱼ. For the denominator we can use the marginal probability of being in state i at time t, so the whole equation takes the form (with the sums over t running from 1 to T−1):

aᵢⱼ = Σₜ ξᵢⱼ(t) / Σₜ Σⱼ ξᵢⱼ(t)   (2)

The denominator can be expressed differently, which leads to a new latent variable γᵢ(t):

γᵢ(t) = Σⱼ ξᵢⱼ(t) = αᵢ(t) βᵢ(t) / P(O|Θ)   (3)

γᵢ(t) is the probability of being in state i at time t, given the observations. We can use it to calculate aᵢⱼ (the previous formula for aᵢⱼ remains valid):

aᵢⱼ = Σₜ ξᵢⱼ(t) / Σₜ γᵢ(t)   (4)

We can also use γᵢ(t) to calculate bⱼₖ (the probability of an observation Oₖ given hidden state j), summing over t from 1 to T:

bⱼₖ = Σₜ 1ₒₜ₌ₖ γⱼ(t) / Σₜ γⱼ(t)   (5)

Note that 1ₒₜ₌ₖ is an indicator function whose value is 1 if observation Oₜ belongs to class k and 0 otherwise.
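The latent variables ξᵢⱼ(t) and γᵢ(t) can be sketched compactly in numpy. The example below uses toy parameters for illustration and an unscaled forward-backward pass (function names are hypothetical, not from the study):

```python
import numpy as np

def latent_variables(A, B, pi, obs):
    """Return xi[t, i, j] and gamma[t, i] for an observation sequence.
    xi_ij(t)  = alpha_i(t) * a_ij * b_j(O_{t+1}) * beta_j(t+1) / P(O|Theta)
    gamma_i(t) = alpha_i(t) * beta_i(t) / P(O|Theta)
    Unscaled arithmetic: suitable for short toy sequences only."""
    T, N = len(obs), A.shape[0]
    alpha = np.zeros((T, N)); beta = np.ones((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    p_obs = alpha[-1].sum()                      # P(O | Theta)
    # Broadcast to shape (T-1, N, N): xi[t, i, j]
    xi = (alpha[:-1, :, None] * A[None, :, :] *
          (B[:, obs[1:]].T * beta[1:])[:, None, :]) / p_obs
    gamma = alpha * beta / p_obs
    return xi, gamma

A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
pi = np.array([0.6, 0.4])
obs = np.array([0, 1, 1, 0])
xi, gamma = latent_variables(A, B, pi, obs)
```

The two sanity checks worth keeping in mind: γ sums to 1 over states at every t, and summing ξᵢⱼ(t) over j recovers γᵢ(t).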

Expectations and maximization in HMM
Based on the aforementioned equations, we can calculate the components separately in the form of an EM approach. We follow these two steps:
1. Calculate the expected values of the latent variables ξᵢⱼ(t) and γᵢ(t). One can either initialize A and B randomly or use prior knowledge if available. Here, we used the DoCM time series of every subject's ROI from scan session 1 to initialize A and B.


2. Maximize the values of A and B using the equations for aᵢⱼ and bⱼₖ, then proceed to the next round, using the new A and B values to estimate ξᵢⱼ(t) and γᵢ(t).
We have only the observations, and we start from a random guess (or from prior information, if available). We estimate the latent variables, which are then used to maximize A and B. At each step we obtain a better estimate of A and B, until the improvements become small and the algorithm converges.
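Putting the pieces together, the full iteration can be sketched as below: a minimal, unscaled numpy implementation on toy data with random initialization (the analysis described above instead initialized A and B from the session-1 DoCM time series). This is an illustrative sketch, not the study's implementation.

```python
import numpy as np

def baum_welch(obs, N, K, n_iter=50, seed=0):
    """Minimal Baum-Welch loop. obs: integer symbol sequence,
    N: number of hidden states, K: number of observation symbols.
    Unscaled forward-backward: toy-sized sequences only."""
    rng = np.random.default_rng(seed)
    A = rng.random((N, N)); A /= A.sum(axis=1, keepdims=True)
    B = rng.random((N, K)); B /= B.sum(axis=1, keepdims=True)
    pi = np.full(N, 1.0 / N)
    obs = np.asarray(obs)
    T = len(obs)
    log_likelihoods = []
    for _ in range(n_iter):
        # E-step: forward-backward, then the latent variables xi and gamma
        alpha = np.zeros((T, N)); beta = np.ones((T, N))
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        p_obs = alpha[-1].sum()                  # P(O | Theta)
        log_likelihoods.append(np.log(p_obs))
        xi = (alpha[:-1, :, None] * A[None, :, :] *
              (B[:, obs[1:]].T * beta[1:])[:, None, :]) / p_obs
        gamma = alpha * beta / p_obs
        # M-step: re-estimate pi, A (eq. 4) and B (eq. 5)
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        B = np.stack([gamma[obs == k].sum(axis=0) for k in range(K)], axis=1)
        B /= gamma.sum(axis=0)[:, None]
    return A, B, pi, log_likelihoods

obs = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
A, B, pi, ll = baum_welch(obs, N=2, K=2, n_iter=30)
```

A useful property to monitor is that EM never decreases the log-likelihood, which makes the convergence criterion ("improvements become small") easy to implement.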

Nodes to networks mapping
STable 1 tabulates, for each brain area, its id, full name, name according to the AAL atlas (Rolls et al., 2015), the lobe in which it is located, its abbreviated name, and the corresponding network used in our study. Here, we used the first 90 areas, excluding the cerebellar areas (91-116).