Estimating Brain Activity with High Spatial and Temporal Resolution using a Naturalistic MEG-fMRI Encoding Model

Beige Jerry Jin, Leila Wehbe

Carnegie Mellon University

Our work integrates the millisecond-level temporal precision of MEG with the millimeter-scale spatial specificity of fMRI to reconstruct cortical source activity at a high spatiotemporal resolution in naturalistic experiments.

Abstract

Current non-invasive neuroimaging techniques trade off between spatial resolution and temporal resolution. While magnetoencephalography (MEG) can capture rapid neural dynamics and functional magnetic resonance imaging (fMRI) can spatially localize brain activity, a unified picture that preserves both high resolutions remains an unsolved challenge with existing source localization or MEG-fMRI fusion methods, especially for single-trial naturalistic data.

We collected whole-head MEG while subjects listened passively to more than seven hours of narrative stories, using the same stimuli as an open fMRI dataset (LeBel et al., 2023). We developed a transformer-based encoding model that combines MEG and fMRI data from these two naturalistic speech comprehension experiments to estimate latent cortical source responses with high spatiotemporal resolution. Our model is trained to predict MEG and fMRI from multiple subjects simultaneously, with a latent layer that represents our estimates of reconstructed cortical sources.

Our model predicts MEG better than the common standard of single-modality encoding models, and it also yields source estimates with higher spatial and temporal fidelity than classic minimum-norm solutions in simulation experiments. We validated the estimated latent sources by showing their strong generalizability across unseen subjects and modalities. Estimated activity in our source space predicts electrocorticography (ECoG) better than an ECoG-trained encoding model in an entirely new dataset. By integrating the power of large naturalistic experiments, MEG, fMRI, and encoding models, we propose a practical route towards millisecond-and-millimeter brain mapping.
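For context, the classic minimum-norm baseline mentioned above inverts the lead-field matrix with Tikhonov regularization to map sensor measurements back to sources. Below is a minimal NumPy sketch of that standard formulation; the shapes and regularization value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimum-norm estimate (MNE): s_hat = L^T (L L^T + lam * I)^{-1} m
# All dimensions here are illustrative placeholders.
rng = np.random.default_rng(1)

n_sensors, n_sources, n_times = 306, 500, 200
L = rng.standard_normal((n_sensors, n_sources))   # lead-field matrix
meg = rng.standard_normal((n_sensors, n_times))   # MEG sensor measurements

lam = 1.0                                         # regularization strength (assumed)
inverse_operator = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n_sensors))
s_hat = inverse_operator @ meg                    # (n_sources, n_times) source estimates
```

Because the problem is underdetermined (far more sources than sensors), this solution favors low-power, spatially diffuse estimates, which is the limitation the learned encoding model aims to overcome.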

Model Architecture

Feature streams enter the network through the input layer and traverse four transformer layers before being projected into the "fsaverage" source space by the source layer. The source estimates in the "fsaverage" source space are then transformed into subject-specific source estimates by the source morphing matrix. The MEG head predicts sensor signals by multiplying the source estimates with the lead-field matrix. The fMRI head predicts BOLD responses by convolving the downsampled envelope of the source estimates with a learnable hemodynamic response function (HRF) kernel. The MEG and fMRI of multiple subjects (e.g., S1, S2, ...) are predicted simultaneously. Under the joint constraints of MEG and fMRI from multiple subjects, our model recovers source estimates with high spatiotemporal resolution.
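The two prediction heads described above can be sketched in a few lines: the MEG head is a linear forward projection through the lead-field matrix, and the fMRI head downsamples the source envelope to the TR grid and convolves it with an HRF kernel. This NumPy sketch uses assumed shapes, sampling ratios, and a toy Gaussian HRF stand-in for the learnable kernel; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sources, n_times = 500, 1000                       # fsaverage sources, MEG samples (assumed)
n_sensors = 306                                      # whole-head MEG sensor count (assumed)
sources = rng.standard_normal((n_sources, n_times))  # latent source estimates

# MEG head: sensor signals = lead-field matrix @ source estimates
leadfield = rng.standard_normal((n_sensors, n_sources))
meg_pred = leadfield @ sources                       # (n_sensors, n_times)

# fMRI head: downsample the source envelope to the TR grid, then convolve with an HRF
samples_per_tr = 100                                 # MEG samples per fMRI TR (assumed)
envelope = np.abs(sources)
env_tr = envelope.reshape(n_sources, n_times // samples_per_tr, samples_per_tr).mean(-1)

hrf = np.exp(-0.5 * ((np.arange(8) - 3.0) / 1.5) ** 2)  # toy HRF kernel; learnable in the model
hrf /= hrf.sum()
bold_pred = np.apply_along_axis(lambda e: np.convolve(e, hrf)[: e.size], 1, env_tr)
```

Training then backpropagates prediction errors from both heads into the shared latent source layer, so the sources must jointly satisfy the MEG physics (lead-field) and the fMRI hemodynamics (HRF).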

Predictive Performance

Our model is comparable to single-subject, single-modality ridge models, which serve as a performance ceiling for MEG and fMRI prediction.

ECoG Prediction

Our model can produce powerful predictions of ECoG signal across experiments and subjects, outperforming models trained directly on ECoG data.