Forget weak statistics, fMRI studies suffer from oversimplified assumptions made during pre-processing

Functional Magnetic Resonance Imaging (fMRI) has been under intense scrutiny this summer, following concerns over replicability and claims that up to 40,000 studies may be invalidated by poor statistics. Unfortunately, despite being a powerful resource for non-invasive exploration of brain function, fMRI signals are an indirect measure of brain activity that suffers from many sources of physiological and experimental noise. These lead to the detection of false-positive signals, a problem exacerbated by the proliferation of studies with small sample sizes and a lack of pre-registration of research hypotheses. This has resulted in a number of unsubstantiated claims.

To address these problems, many groups are pushing to improve the robustness of statistical analysis and to enforce pre-registration. While vital, these efforts do not address a more fundamental problem: population-based comparative studies of brain function make oversimplified assumptions when comparing functional signals across brains.

A key requirement of any cross-population study of functional imaging signals is the mapping of spatial correspondences between data sets. That is, experimenters need to know that they are comparing the same pattern of functional activity in all brains. This is generally achieved using image registration techniques that warp brain images until equivalent structures overlap (for example Conroy et al., 2013; Robinson et al., 2014; Sabuncu et al., 2010), although hyperalignment methods that cluster points in an embedding space are also available (Haxby et al., 2011; Langs et al., 2010).

However, spatially constrained warping approaches assume that, to a coarse approximation, the morphology and functional organisation of all human brains are consistent: that there is an equivalent pattern of folds and an equal number of functionally specialised brain regions, and that these regions appear in the same relative positions in all brains. Unfortunately, there is growing evidence that this is not the case. Studies have shown that the number of folds and branches of anatomical regions such as the cingulate can vary (Van Essen et al., 2005). The recent HCP “Multimodal parcellation of the human cerebral cortex” identifies variations in the location of at least one cortical area (55b, a language region; see figure). Studies have also shown that patterns of functional activity vary across individuals (Gordon et al., in press; Langs et al., in press) and that the variation of these patterns can be linked to different aspects of personality (Adelstein et al., 2011; Wang et al., 2014).

Figure: Topological variance of region 55b (from an HCP .scene file).

All these results suggest that, by enforcing a global average model of brain structure and function, we lose the sensitivity to detect the subtle variations in brain structure that underpin different aspects of healthy and diseased brain function. But what can be done about this?

One option, which I have already mentioned, is to use hyperalignment approaches (Haxby et al., 2011; Langs et al., 2010). These use matrix factorisation to embed all data points in a low-dimensional space where they can be clustered; distances in this space reflect the similarity of the functional profiles of different points in the brain. One problem with this approach is that the brain displays patterns of long-range functional correlations: points that are very far apart in the brain may fire together within a long-range network. Additionally, the impact of noise in the data can be difficult to predict or remove completely. These factors can lead to the assignment of correspondences between very different brain areas. Further, there is no way of deforming the data so that the variation of correspondences over time (due to development or healthy ageing) can be observed.

Another option is to look instead at graph matching of brain networks. These approaches compare macroscale models of human brain structure, which treat the brain as a relatively small number of interconnected, functionally specialised regions. Specifically, individual-subject network models are generated by clustering functional imaging data into regions to form the nodes of the networks; regularisation can be applied to reduce the effect of noise. The strengths of the connections (or edges) of the network are estimated from the similarity of averaged activity profiles. Matching algorithms, such as graph edit distance, can then be used to assess the similarity of two networks, which can serve as biomarkers for the detection of different types of disease. These approaches have promise as they place no constraints on the structure or topology of the brain. However, by downsampling to the scale of a few hundred regions, it is feasible that the resolution may be too low to detect the subtle differences in micro-structure that underpin complex aspects of disease.

One option I have been looking at recently is a ‘best-of-both-worlds’ approach: performing group-wise registration whilst removing the constraints that enforce topological consistency in the data. This approach, which adapts sparse optimisation techniques from the facial-recognition literature, searches for a minimal-rank solution without forcing all feature sets to fit a global average model. Results are still very preliminary, but my goal is to extend the method to alignment of topologically inconsistent functional regions such as area 55b. Long term, this avenue of research will test the theory that, rather than there being a single global pattern of brain organisation, there are subsets. In this way it may be possible to explore whether subjects with common cortical folding patterns also exhibit similar patterns of functional organisation.

Finding correspondences between topologically inconsistent data is a complex and open-ended problem, complicated by the effects of noise and global signal variances in the imaging data, as well as by a lack of ground-truth understanding of how human brains are structured. Nevertheless, it is clearly time to move away from population-average-based analysis and focus on these challenges. Finding a solution could have a vast impact on the sensitivity of neuroimaging experiments.

