OxCIN Oxford Neuroscience Experience 30th June - 4th July 2025

MRI analysis bootcamp

We’re using a software package called FSL, developed here at OxCIN@FMRIB. FSL is free to download, and runs on macOS and Linux (and Windows, with a little extra work).


We’re using a program called FEAT for the fMRI analysis. We set up the analysis by configuring the following steps:

  • Data: Input fMRI data, where to save the results
  • Pre-stats: Pre-processing/noise removal
  • Registration: Aligning functional+structural images
  • Stats: Defining our model
  • Post-stats: Searching for activation

Download your data

The download links were removed at the end of the week.


Open a terminal

To use FSL, we need to use a terminal (a.k.a. shell, command-line, command-prompt) - a text/keyboard-based interface to your computer. We’re going to use a terminal to inspect and analyse our MRI images.


Open the Terminal program:

  1. Press ⌘+space
  2. Type terminal
  3. Press enter

A window that looks something like this should appear:

This is a terminal - it is waiting for you to interact with it. The way that you use a terminal is as follows:

  1. Type in the command that you want to run.
  2. Press enter.
  3. The command will run, and may print some information for you.

For example, try running these commands (pressing enter at the end of each line):

pwd
ls

The pwd command prints out your current location in your computer’s file system, and the ls command prints the contents (files and folders) of your current location.

Now type these commands into the terminal window, pressing enter at the end of each line:

Or copy+paste them, using ⌘+c to copy and ⌘+v to paste.

cd ~/Downloads/
unzip winneuroexp2025.zip

The first command changes into the Downloads directory, and the second command unzips your MRI data.
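If you want to double-check that the unzip worked, listing the image files should show the structural and functional images used in the rest of this session (this assumes, as the commands below do, that the archive unpacks directly into your Downloads folder):

ls *.nii.gz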


Look at your data

Open the T1 (structural) image in FSLeyes, the FSL image viewer, by typing this command and pressing enter:

fsleyes structural.nii.gz &

The FSLeyes image viewer will open, displaying the structural MRI image from your experiment.

Click and drag to look at different parts of the image, to make sure that it looks good, and that nothing has gone wrong during the acquisition.

Remove the structural image: Overlay → Remove, and then add the functional image: File → Add overlay from file → functional.nii.gz.

This is a 4D image - a collection of 3D images acquired throughout the course of the experiment. Open the time series plot (View → Time series), and try to find a region/voxel which was responding to the experimental stimulus (hint: if your stimulus was visual, try searching in the occipital lobe).

A brain region which responded to (was activated by) a stimulus should have a time series which roughly looks like the stimulus timing, i.e. high when the stimulus is present, and low when the stimulus is absent.
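If you are curious about the size of the 4D image, the fslinfo command will print its dimensions - dim1, dim2 and dim3 are the spatial dimensions (in voxels), and dim4 is the number of 3D volumes (time points) that were acquired:

fslinfo functional.nii.gz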


Brain extraction

We need to remove non-brain matter from the structural image - this step is an essential part of registration, which is the process of aligning a brain image to a standard/average template.

This is not strictly necessary for our experiment today, but is needed for any experiment that involves more than one participant, and for today will allow us to obtain some more accurate information as to which specific brain regions were activated by our stimulus.

In order to perform brain extraction, we can use two commands, called robustfov and bet:

robustfov -i structural -r structural_fov
bet structural_fov structural_fov_brain

The robustfov command will tighten the field-of-view (FOV), or bounding box, of the whole image, e.g. to remove empty space outside the head. The bet command will automatically identify the brain boundary, and will remove (set to zero) everything outside of it.
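If bet removes too much (or too little) of the head, you can tune it with its -f option - the fractional intensity threshold. Values lower than the default of 0.5 give a larger brain estimate, and higher values a smaller one. For example:

bet structural_fov structural_fov_brain -f 0.4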

View the outputs to make sure that these steps worked. The results don’t need to be perfect, just reasonably good:

fsleyes structural -cm blue structural_fov structural_fov_brain -cm hot &

You are looking at three images here - the blue one in the background (if you can see it) is the raw T1 image, the grey one is the T1 image after the FOV has been tightened with robustfov, and the red-yellow image is the T1 image after non-brain voxels have been zeroed out by bet.


fMRI analysis

We are now ready to perform the time series analysis to find out which regions of the brain were active during our experiment. We will start the analysis running now and then, while we wait for it to finish, we will explore each of the steps that are being performed.

Open the FEAT program by running the easyfeat command in the terminal:

easyfeat &


1. Set up the input files:

  • Input file: functional.nii.gz
  • T1 structural: structural_fov_brain.nii.gz


If you are running out of room, or can't see the whole FEAT window, you can minimise each section by clicking on the little black triangles.



2. Leave the Pre-stats options at their default settings:

  • High-pass filtering to remove low-frequency drifts
  • Spatial smoothing to boost signal-to-noise
  • Motion correction to reduce the effect of head movement
  • Brain extraction of the fMRI data

3. Leave the Registration options at their default settings. The data is aligned (a.k.a. registered) to an average template brain. This isn’t important for a single-subject experiment like we are doing here, but is essential when you want to compare data from more than one individual.



4. Set up your experiment timing:

  • Number of conditions: 2 (or more/fewer?)
  • Give each condition a suitable name
  • Change the Off/On/Phase values to match the timing of your conditions
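For example, if your experiment alternated 30 seconds of rest with 30 seconds of stimulus, you would set Off to 30 and On to 30 for that condition. The Phase value shifts the whole cycle in time, so it can usually be left at 0 if each condition began with a rest period.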



5. Set up your statistical tests:

  • The Mean checkbox instructs the analysis to search for the brain’s response to any of your conditions.
  • The #1, #2, etc, checkboxes will search for the brain’s response to individual conditions.
  • The Compare conditions checkboxes allow you to compare your stimuli against each other. For example, the #1 >/< #2 checkbox instructs the analysis to search for any regions where the brain’s response to your first condition was greater or less than the brain’s response to your second condition.

Note that we are using the terms condition and stimulus interchangeably, to refer to your experimental conditions.


6. Leave the Post-stats options at their default settings. These control the statistical significance thresholding that is used to identify which brain regions were active.

7. Click on the Run button. A web browser should open after a few seconds, showing the analysis progress.

The analysis should take around 20 minutes. While you wait, let’s learn about fMRI preprocessing and analysis, yay!


Searching for activation

We have acquired some fMRI data, and are trying to identify which parts of the brain responded to our experimental stimulus. In order to do this, we first need to describe our stimulus. Let’s imagine that our stimulus is a cat:

During the experiment, we periodically showed our participant this cat. We interspersed this stimulus with rest periods. We can therefore describe our stimulus with something like the following (with time moving along the horizontal axis):

We then need to convolve this time series so that it looks like what we might expect to see in regions of the brain that were responding to our stimulus.

This convolution step is necessary because of a physiological process known as the BOLD (Blood Oxygen Level Dependent) effect. In brief, when a brain region starts firing, it begins to consume oxygen from the local blood supply. This causes, after a slight delay, an influx of freshly oxygenated blood to the region. It is this spike in oxygenation which we are able to observe using fMRI.


Just like how we have described our stimulus timing, we can also describe the BOLD effect - we do this with something called a haemodynamic response function (a.k.a. HRF), which looks something like the plot to the right.


So we need to convolve our stimulus timing with the haemodynamic response function. The result looks something like this:


Voilà - we have our model - this is what we expect to see in regions of the brain that responded to our stimulus. All that’s left now is to search through our data to find voxels with behaviour that matches our model:

Our model - the convolved stimulus timing - is also sometimes referred to as the predicted response, or as an explanatory variable.
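In symbols: if we call the stimulus timing s(t), and the haemodynamic response function h(t), then our model - the predicted response x(t) - is their convolution:

$$x(t) = (s * h)(t) = \int_0^{t} s(\tau)\, h(t - \tau)\, d\tau$$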


Statistical analysis

What follows is a substantially simplified description of the processes that are typically followed in the statistical analysis of fMRI data, but it is hopefully good enough to give you a basic understanding of what is going on.

We now have:

  • a model of what we expect to see in a region of the brain that was responding to our experimental stimulus.
  • time series data from about 20,000 voxels, each of which needs to be assessed to determine how well it fits our model.

Fortunately we’re using software to do this for us. While we wait for the analysis to complete, let’s dig into the process that is used to quantify which voxels/brain regions were responding to our stimulus.

We need to determine how well the time series data from each voxel fits our model. If we find a voxel where it fits well, then there is a good chance that the brain region in which that voxel is located was responding to our stimulus.

The way that we do this is, for each voxel, to take our model and shift and scale it until it fits the data as well as it possibly can. In the plot below, the smooth blue line represents our model, and the noisy red line represents some data we have observed from one voxel:

At this point things need to get a bit mathsy. We now compare the amplitude of the fitted model (𝜷) to the difference between the fitted model and the data (represented by std(𝜷)). We combine these two quantities to calculate a t-statistic which, simply put, summarises how well our model fits our data:
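Using the symbols above, this works out as:

$$t = \frac{\beta}{\mathrm{std}(\beta)}$$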

Our t-statistic is simply a ratio between the amount of signal, and the amount of noise, in our data. A high t-statistic implies a good model fit and low amount of noise, whereas a low t-statistic implies a poor model fit and a large amount of noise. So we can use the t-statistic to decide which we trust more - our model, or the noise?

To do this in a quantitative manner, we take this t-statistic and plug it into a t-distribution to convert it into a probability - a p-value:

This tells us how likely it is that we would observe a particular t-statistic purely by chance. A small t-statistic (poor model fit) is quite likely, whereas a large t-statistic (good model fit) is very unlikely. We ultimately use this p-value to decide whether we think our model can explain our data, or whether our data is just full of noise.
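In symbols, the p-value is the probability of observing a t-statistic at least as large as ours when there is no real activation:

$$p = P(T \geq t)$$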

Remember that we are actually performing this process on the data for every single voxel in our dataset. This means that we will end up with an entire 3D image of t-statistics and another 3D image of p-values. We often present the results of an fMRI analysis with images that look like this:

What we are looking at here are a handful of 2D slices of our 3D dataset, with the fMRI data (averaged across time) shown in the background in grey, and the t-statistic values shown in colours ranging from red (low) to yellow (high). We are only displaying t-statistics that were statistically significant, i.e. all of the voxels where the t-statistic (and the corresponding p-value) was too low are not displayed.

We don’t actually use t-statistics, but rather use z-statistics. But for the purposes of this demonstration, you can think of t and z-statistics as being equivalent.

So we’re now finally able to identify the brain region(s) that were activated by our experimental stimulus!


Summary: We use statistics to find out:

  • How well does our model fit our data?
  • Where in the brain do we get a good model fit?
  • Which parts of the brain really like cats (or whatever)?

Preprocessing

We need to take a couple of steps back, and address the fact that MRI data is very noisy! So before we can apply any of those fancy statistical tricks described above, we need to preprocess, or clean, our data to remove as much noise as possible.

Here we have highlighted just a few of the issues that need to be addressed in a typical fMRI study. There are many more sources of noise that are inherent to fMRI data, and to other MRI modalities.

For example, fMRI data is affected by something known as susceptibility distortion, which causes the images that we read from the scanner to become warped.

This is due to the fact that our “3 Tesla” (3T) MRI scanner produces a magnetic field which is not exactly 3T, but in fact fluctuates by small amounts throughout the space inside the scanner. This distortion can be corrected by estimating the strength of the magnetic field, and then adjusting (a.k.a. unwarping) the images accordingly.

Another very common approach to correcting for this type of distortion is to acquire a field map, which is the scanner’s own estimate of its magnetic field, and to use that to unwarp our data.

Another common source of noise which we often need to handle is scanner drift. Throughout the course of an experiment, the internal temperature of the scanner increases, and this can affect the intensity of the signal that we observe. This results in low frequency drifts in the voxel time series:

We usually correct for scanner drift by passing our data through a high-pass filter, which removes low frequencies. Surely you’ll agree that we would have a much better chance of fitting our model to this filtered time series:
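FEAT applies this high-pass filter for you, but for reference, the same operation could be run by hand with the fslmaths command. A sketch, assuming a 100 second cutoff and a TR (time between volumes) of 2 seconds - FEAT converts the cutoff into a filter sigma of 100 / (2 × 2) = 25 volumes, and the -1 disables low-pass filtering:

fslmaths functional -bptf 25 -1 functional_filtered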

Another common source of noise is that most people, when inserted into a long claustrophobia-inducing tube, insist on moving around and breathing:

Throughout the course of an fMRI experiment, we acquire a succession of 3D images. If our participant moves during the acquisition, the images will not be aligned with each other. In other words, the time series data from one voxel will contain signal from more than one brain region! We can account for this by registering (introduced below) the 3D image acquired at each time point to a reference time point, which will bring all of the images into alignment with each other.
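FEAT performs this motion correction for you, using a tool called MCFLIRT (you will see its output in the analysis report later). For reference, it could also be run by hand like this, where the -plots option saves the estimated motion parameters to a file:

mcflirt -in functional -out functional_mc -plots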

Once we have removed all of the noise from our data, we still need to do a few extra steps to get ready for the analysis. Most fMRI studies involve working with data from more than one participant. In order to combine data from different individuals, we need to align, or register, the data so that the same anatomical locations line up across participants. These are automated processes which use sophisticated computational techniques to find a mapping between two images. We usually do this as follows:

  1. First we register an individual’s fMRI data to their structural image.
  2. Next, we register each individual’s structural image to a standardised average template image.
  3. Now we can combine both of the above steps to transform or project the fMRI data into the average template space.

Before we can perform these registration steps we need to run brain extraction on our structural image (remember that?). This is an automated process which identifies and removes non-brain regions from our structural image.

Once we have a brain-extracted structural image, we can register our functional images to the structural image, and then register the structural image to our standard template image.
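FEAT runs both of these registrations automatically, but for reference, they could be performed by hand with FSL's flirt tool. A sketch, assuming the file names used earlier, a single 3D volume from the fMRI data called example_func (the name FEAT uses), and the MNI152 template image that is shipped with FSL:

flirt -in example_func -ref structural_fov_brain -omat func2struct.mat
flirt -in structural_fov_brain -ref $FSLDIR/data/standard/MNI152_T1_2mm_brain -omat struct2std.mat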


Results

Hopefully by now, the analysis will have completed. The report web page shows a summary of the analysis; it contains five sections - Registration, Pre-stats, Stats, Post-stats, and Log. The Log page is usually only useful for troubleshooting, so we won’t worry about it here.

The Registration page displays a summary of the fMRI → structural, and structural → standard template registration stages. In a real fMRI analysis it is very important to manually check that the registration was successful.


  • Summary registration, FMRI to standard space: This image shows the fMRI image in the background in grey, and an outline of the standard template overlaid on it in red - this allows you to assess the overall results of both registration stages.
  • Registration of example_func to highres: This image has two rows, both showing the fMRI (example_func) to structural (highres) registration. The first row shows the fMRI image in the background in grey and an outline of the structural image overlaid in red. The second row shows the structural image in the background in grey, and the fMRI image outline overlaid in red.
  • Registration of highres to standard: This image shows the same information as above, but for the structural to standard template registration.
  • Registration of example_func to standard: This image shows the same information as the Summary registration, but in more detail.

The Pre-stats page displays a summary of the preprocessing steps. The MCFLIRT plots display a summary of the extent to which the participant moved around during the experiment. These motion estimates are used to correct the data for the effect of motion.


The Stats page displays your experimental stimulus timing, i.e. the model which describes the behaviour that you are trying to find in the data. In the main plot, time goes down the vertical axis, and a column is present for each of your experimental stimuli. The model is also referred to as a design matrix.


There are actually two columns for each of your stimuli, i.e. if you have one stimulus, the model/design matrix will contain two columns. The second column contains the temporal derivative of the stimulus timing, and is automatically added to the model to account for noise that is due to timing differences in the acquisition of the 2D slices in each 3D image.
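In the notation from the Statistics section, this means that for each stimulus the fitted model contains two terms - the predicted response x(t) and its temporal derivative - each with its own amplitude:

$$\mathrm{data}(t) \approx \beta_1\, x(t) + \beta_2\, \frac{dx}{dt}(t) + \mathrm{noise}(t)$$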

The Post-stats page contains the good stuff - the results of the statistical analysis. Each image displays the fMRI data in the background in grey, and t-statistics in red-yellow, for the statistically significant voxels (voxels where there was no significant activation are not coloured).


If your experiment had two stimuli, and you set up the analysis to Compare condition #1 with condition #2, a total of five tests will have been performed:

  • zstat1 - C1 (mean): These results show the activation in response to both of the stimuli combined (e.g. as if they were just a single stimulus).
  • zstat2 - C2 (stimulus 1): These results show the activation in response to your first stimulus.
  • zstat3 - C3 (stimulus 2): These results show the activation in response to your second stimulus.
  • zstat4 - C4 (stimulus 1 > stimulus 2): These results show where the activation in response to your first stimulus was greater than the activation in response to your second stimulus.
  • zstat5 - C5 (stimulus 1 < stimulus 2): These results show where the activation in response to your second stimulus was greater than the activation in response to your first stimulus.

There is a collection of time series plots underneath the test results. These plots show the data and the model fit for the most significant voxel, for each of the tests that were performed.

  • The red line shows the fMRI data (after preprocessing, e.g. high-pass filtering).
  • The blue line shows the best fit of our model to the data.
  • The green line shows the “partial” model fit, which can be useful when you have more than one stimulus.

Exploring the results in more detail

Bring up your terminal window (or open a new one), and type (or copy+paste):

cd ~/Downloads/functional.feat
fsleyes -b -ad -sfeat filtered_func_data.nii.gz

This will bring up a FSLeyes window that looks something like this:

The table on the right (the cluster browser) allows you to explore your results in more detail.

Use the drop-down menu to select the test (a.k.a. COPE) that you are interested in. Then press the Add Z-statistics button to open the Z-statistic image for that test.


Now press the little button in the first row, underneath the Z Max location column. This tells FSLeyes to display the location of the maximum Z-statistic for your test (i.e. the location of the best model fit).



Now we can see exactly where the brain was responding to our stimulus (or, if we are comparing two stimuli, where the brain responded more strongly to one stimulus when compared to the other stimulus).

But you might want some more detailed information as to which anatomical regions were involved in your experiment. We can explore our results a little further by using an anatomical atlas, which is an image that contains anatomical labels for every voxel in the brain.

However, our fMRI data and z-statistics are defined in fMRI space, and the atlases are all defined in a standard template space called MNI152 (remember the Registration pre-processing step?). So we first need to transform our z-statistics into MNI152 space.
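The steps below do this interactively within FSLeyes. For reference, the same transformation could also be applied on the command line with flirt - a sketch, assuming you are inside the functional.feat directory and want to transform the thresholded z-statistics for the first test:

flirt -in thresh_zstat1 -ref $FSLDIR/data/standard/MNI152_T1_2mm -applyxfm -init reg/example_func2standard.mat -out thresh_zstat1_std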

1. Load the MNI152 standard template image (File → Add standard, and select MNI152_T1_2mm.nii.gz). Then move the MNI152_T1_2mm image down to the bottom of the Overlay list (below the image display, on the left), by clicking the ▼ button a few times. You should be able to see that your fMRI data and z-statistics are not aligned with the MNI152 image.


2. Select the functional:zstat image in the overlay list, and then select the Tools → Load affine transformation menu item. Make sure the Reference image is set to MNI152_T1_2mm, then press the Matrix file - Choose button, select the ~/Downloads/functional.feat/reg/example_func2standard.mat file, and then press Ok. Your z-statistic image should now be aligned with the MNI152 image.


3. Repeat step 2 for the functional:filtered_func_data image.

FSLeyes may not immediately refresh the display - if the filtered_func_data image is still not aligned, click on the spanner icon at top-left, and change the Display space option to the functional:zstat image.

Our fMRI and z-statistic images should now be aligned to MNI152 space. So we can now load an anatomical atlas, and explore the brain regions that were involved in our experiment.

4. Open the Atlases panel (Settings → Ortho view → Atlases). Now, as you move the location cursor around to different parts of the brain, information about the anatomical region will be displayed in the atlas panel.

If space is getting tight, you may wish to resize/close some of the FSLeyes panels - you can resize any section of the FSLeyes interface, by clicking+dragging the boundaries between panels, and you can close a panel by clicking on the little x button at the top-right of the panel.

5. We can also add the different atlas regions to the display. Inside the atlas panel, click on the Show/Hide link next to the Harvard-Oxford cortical structural atlas. This will add another image, called harvardoxford-cortical/label/all, to the display.


6. Select harvardoxford-cortical/label/all in the overlay list, and click the circle button on the top toolbar, to toggle “outline” mode.


7. Finally, select the functional:zstat image, then click the gear button at the top left, and set Interpolation to Spline interpolation - this smooths the display, making it look a bit nicer.