FEAT 3 Practical

This is the third FEAT practical. It leads you through some of the more advanced usage and concepts in both single-session and higher-level FEAT analyses. Feel free to do the latter three sections in a different order if you are particularly interested in any of them.

Contents:

Motion & Physiological Noise Correction
Look at the ways in which we can do these corrections within FEAT.
Contrasts in Parametric Designs
Analyse a simulated dataset to look for linear and quadratic trends.
Interactions
Analyse an experiment containing multiple conditions to look for interactions between the stimulus types.
Contrast Masking
Use contrast masking to distinguish between results (in a differential contrast) driven by positive or negative BOLD changes.

Optional extensions

There is far more to FEAT than we have time to cover here! There are a few more sections in the "Extras" practical, but we do not expect you to do these! However, if you think that any of the concepts outlined below are likely to be more relevant to you than what is in this practical, then feel free to substitute sections.

Custom Waveforms
An example of the options for setting up first-level FEAT analyses with simple designs that do not require timing files.
HRF Basis Functions
Create and use basis functions to model more general / flexible HRF shapes.

Motion & Physiological Noise Correction

In this section, we look at the ways we can correct for structured noise within FEAT. By adding specific regressors to the GLM we can mitigate the effects of motion to some extent, and we can pursue a similar strategy using PNM to correct for physiological noise—provided physiological recordings were acquired during the scan!

To demonstrate this we acquired two data sets: two repetitions of the pyramids & palm trees task (as seen in the FEAT 2 practical) in the same subject, but where in one scan the subject deliberately moved and breathed irregularly. These are referred to as the naughty and nice data from here on in.

Another, complementary, approach to addressing excessive motion is to use ICA-based clean-up strategies. These are introduced in the ICA portion of the FSL course.

Data

Take a moment to re-familiarise yourself with the key contrasts and typical responses under normal conditions, and satisfy yourself that the subject was still for the duration of the nice scan. Then look at the data contaminated by motion—the differences should be obvious!

cd ~/fsl_course_data/fmri3/motion/
firefox nice.feat/report.html &
firefox naughty.feat/report.html &

Simple motion correction

The simplest form of motion correction we can apply is to add the estimated motion traces from MCFLIRT to the GLM as nuisance regressors. This ensures that any BOLD signal that correlates with the temporal dynamics of the head motion is explained by these nuisance regressors, rather than by the task EVs. To do this, we simply select Standard Motion Parameters from the drop-down menu in the Stats tab in FEAT. Take a look at how this changes the results below:

firefox naughty_motion.feat/report_poststats.html &
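
If you are curious what these regressors contain, the motion parameters estimated by MCFLIRT are saved inside the FEAT directory (the path below assumes FEAT's default naming). Each row corresponds to one volume, and each of the six columns (three rotations, three translations) becomes one nuisance EV:

head -n 5 naughty_motion.feat/mc/prefiltered_func_data_mcf.par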

Physiological noise correction

We are now going to use PNM to generate a set of EVs that relate to the physiological recordings we collected during the scans.

If you don't see any plots at the top of the web page, right-click on the empty area, and select This Frame -> Reload Frame.

Look at the report PNM generates. You should be able to see several unusual events in the respiratory trace!

The second step of PNM takes the processed physiological data and makes the EVs for FEAT—we will show you how to use these later. To generate these, run the command listed at the bottom of the web page or simply:

./pnm/mypnm_pnm_stage2
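
This should have generated a set of voxelwise confound images, together with a text file listing them, and it is this list that gets loaded into FEAT (via the Voxelwise Confound List option on the Stats tab). The file name below assumes PNM's default naming for our output basename mypnm:

cat pnm/mypnm_evlist.txt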

We ran an analysis for you that included the physiological confound EVs generated by PNM (either only using PNM, or using PNM in combination with the standard motion parameter approach described above). Have a look at how this changes the results:

firefox naughty_pnm.feat/report_poststats.html &
firefox naughty_motion+pnm.feat/report_poststats.html &

Motion outliers

As a last resort, we can completely ignore volumes that have been irreparably corrupted by motion. This is very similar to the concept of 'scrubbing', which simply deletes any particularly bad volumes. However, deleting volumes is problematic, as it disrupts the modelling of temporal autocorrelations. Instead, we can add another set of EVs to the GLM that indicate which volumes we want to ignore. We can generate these EVs with fsl_motion_outliers, using the command below:

fsl_motion_outliers -i naughty.nii.gz -o my_outliers.txt -v

This may take a few minutes to run, as this is multiband data. The -v flag simply prints some extra information, including the volumes that fsl_motion_outliers identifies as noisy. Open naughty.nii.gz in FSLeyes and check a few of these volumes:
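
fsleyes naughty.nii.gz &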

How many fsl_motion_outliers EVs will be added to the design matrix?
Incorrect! This would assume that the effect on the BOLD timeseries is exactly the same each time the subject moves (which we would not typically expect).
Incorrect! There is no point to having EVs that only contain zeros. We only need to include EVs for the volumes that need to be removed.
Correct! Each of these EVs will contain zeros at all timepoints, and a single 1 at the timepoint that should be removed.
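
You can check this structure directly by inspecting the generated confound matrix; each column should be all zeros apart from a single 1 at one of the outlier volumes:

cat my_outliers.txt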

Finally, we are ready to put this all together! Open FEAT and set up an analysis that combines all of the above corrections: the standard motion parameters, the PNM voxelwise confounds, and the fsl_motion_outliers confound matrix.

We have run this analysis for you, so take a look with:

firefox naughty_kitchen+sink.feat/report.html &

Take a look at the design on the Stats page, which should now contain a smorgasbord of additional EVs. Finally, compare the results to both the nice data and the naughty data without any correction. Are the FSL tools for motion and physiological noise correction on Santa's naughty or nice list this year?


Contrasts in Parametric Designs

How can we investigate the way activation changes as a function of, for example, differing stimulus intensities? To demonstrate this, we will use a data set where words were presented at different frequencies. Sentences were presented one word at a time, at frequencies ranging from 50 words per minute (wpm) to 1250 wpm, and the participant just had to read the words as they were presented. This is an example of a parametric experimental design. The hypothesis is that certain brain regions respond more strongly to the optimum reading speed compared to the extremely slow and extremely fast word presentation rates (i.e. you might expect to find an inverted U-shape for the response to the five different levels).
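
In the design, each presentation rate is modelled as a separate EV, typically defined by a three-column timing file (onset in seconds, duration in seconds, weight). As a purely illustrative example (these values are hypothetical, not taken from this dataset), a timing file for one condition might look like:

30   24   1
150  24   1
270  24   1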

cd ~/fsl_course_data/fmri3/parametric/
firefox parametric.feat/report_stats.html &

To begin with, we perform the F-test based analysis described in the lecture. Familiarise yourself with the way this is set up in the design file (ignore contrasts 5 to 8 for now). What is the F-test looking for?

Looking at the Post-stats, the F-test reaches significance in large swathes of the brain. But what shape of response is driving this result? To investigate this, we can inspect the raw parameter estimates (PEs) directly.

fslmerge -t response_shapes.nii.gz parametric.feat/stats/pe[13579].nii.gz

pe1.nii.gz contains the beta values from the GLM for the 50 wpm stimuli. In other words, this is a map of the strength of the BOLD response to words presented at 50 wpm (before any statistical thresholding). The above command concatenates the PEs for all 5 condition EVs (skipping the even-numbered PEs, which correspond to the temporal-derivative EVs). This allows us to explore the specific response shapes in more detail.
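
As a quick sanity check, the merged image should contain five volumes (dim4 = 5):

fslinfo response_shapes.nii.gz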

fsleyes parametric.feat/example_func.nii.gz \
        parametric.feat/thresh_zfstat1.nii.gz \
        response_shapes.nii.gz &

Open the timeseries display and turn on response_shapes only. Turn this off in the main view, and adjust the colour map of thresh_zfstat1 so you have a representative view of the F-stats. As you click around within the brain, the timeseries should now display the responses at that voxel for each of the five word presentation rates. Keep this FSLeyes window open!

Can you find brain regions where the responses exhibit a U-shape? Or an inverted U? How might one interpret these types of responses in light of the experimental paradigm?

Quantifying response shapes

It should be obvious that, in some regions, the parametric responses are very structured. How, then, could we quantify these?

To begin with, reopen the FEAT report and look at the design again. Contrasts 5 to 8 encode two simple models for the response: linear and quadratic trends. Satisfy yourself that you understand how these are encoded as contrast weights (see the example below).
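
For reference, one common choice of orthogonal polynomial weights across five levels looks like this (shown for the five condition EVs only, ignoring the temporal-derivative EVs; the exact values used in this design may be scaled differently):

Linear, increasing:     -2  -1   0   1   2
Linear, decreasing:      2   1   0  -1  -2
Quadratic, U-shape:      2  -1  -2  -1   2
Quadratic, inverted U:  -2   1   2   1  -2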

Which contrast describes the inverted U-shaped trend?
Incorrect! Contrast 5 is a positive linear trend (i.e. it models an increasingly strong response from the lowest word presentation frequency to the highest).
Incorrect! Contrast 6 is a negative linear trend (i.e. it models a gradually decreasing response from the lowest word presentation frequency to the highest).
Incorrect! Contrast 7 is a U-shaped quadratic trend (i.e. it models a strong response at both the low and high word presentation frequencies, and a reduced response at the middle frequencies).
Correct! Contrast 8 is an inverted U-shaped quadratic trend, with a weaker response at the extreme low and high word presentation frequencies, and a stronger response at the middle frequencies.

Next, look at the results in the Post-stats tab. Again, we can explore these further by loading the negative (inverted U-shaped) quadratic z-stats, parametric.feat/thresh_zstat8.nii.gz, as a new overlay into the FSLeyes window we opened earlier. As you click around within the significant regions of this contrast, note the shape of the frequency response in the time series plot. If you have time, take a look at the linear contrasts too. Are different regions displaying different trends?
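
If you have closed that window, you can open a fresh one directly (the colour map is just a suggestion):

fsleyes parametric.feat/example_func.nii.gz \
        parametric.feat/thresh_zstat8.nii.gz -cm red-yellow \
        response_shapes.nii.gz &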

In summary, we have run an exemplar set of parametric analyses. We used an F-test to find regions that responded differently across the presentation frequencies, visualised the shapes these responses took using their time courses, and quantified them in terms of a set of linear and quadratic trends, to give an idea of the more complex analyses that can be run on this type of data.


Interactions

In this section we will look for interaction effects between a visual and a motor task condition. During the visual condition, subjects passively watched a video of colourful abstract shapes. The motor condition involved uncued, sequential tapping of the fingers of the right hand against the thumb. Subjects were scanned for 10 minutes, during which there were twelve 30s task blocks: four visual blocks, four motor blocks, and four blocks containing both conditions.

To begin with, we have run a simple analysis in one subject that models the visual and motor conditions, but not the interaction between them. Take a look at the FEAT report and familiarise yourself with the task, the analysis, and the responses to the two conditions.

cd ~/fsl_course_data/fmri3/interactions/
firefox 001/initial_analysis.feat/report.html &

We will now run an analysis looking for interactions using this subject's data. Open FEAT and set up an analysis that, alongside the visual and motor EVs, includes an interaction EV (FEAT offers Interaction as one of the basic EV shapes in the full model setup once two or more EVs have been defined), together with contrasts on that EV for positive and negative interaction effects. A schematic of the idea is shown below.
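
Conceptually, the interaction EV is the product of the (zero-centred) visual and motor EVs, so it only deviates from baseline during blocks where both conditions are on. A schematic of the three regressors across the block types (hypothetical ordering, ignoring HRF convolution and the centring details):

Block type:        visual-only   motor-only   both
EV visual:              1             0         1
EV motor:               0             1         1
EV interaction:         0             0         1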

What interaction effects do you see in this subject? How do you interpret them?

Group analysis

We have run a straightforward group analysis of this data on a set of nine subjects. Familiarise yourself with the results by looking at the FEAT report:

firefox group/group.gfeat/report.html &

And take a closer look at the results for the interaction contrasts in FSLeyes:

fsleyes -std \
  group/group.gfeat/cope5.feat/thresh_zstat1.nii.gz -cm red-yellow -dr 3.1 6.0 \
  group/group.gfeat/cope6.feat/thresh_zstat1.nii.gz -cm blue-lightblue -dr 3.1 6.0 &

What interaction effects do we observe at the group level? How do you interpret them?

Note that in this case, the interaction contrast gave us a relatively straightforward set of results. However, this is primarily because we were looking at the interaction between two simple, distinct conditions, in a data set kept deliberately small so that it can be analysed within this session. In targeted experiments, interaction-based designs can be very powerful, and the analysis pipeline is exactly as presented here.


Contrast Masking

Differential contrasts and F-tests are sensitive to both positive and negative changes in BOLD. To separate positively driven from negatively driven results, we use contrast masking. For example, in a differential contrast such as A − B, a significant result occurs whenever A − B > 0. This could be driven by A being positive and greater than B, but it could equally be driven by B being negative and larger in magnitude than A (i.e. B deactivating more strongly than A), even if A itself shows no positive response.

We will look at the Shad > Gen contrast (word shadowing greater than word generation) from the fMRI fluency dataset (from the first FEAT practical), to see whether this result is associated with positive or negative shadowing and generation responses. In contrast_masking you will find a copy of the analysis we asked you to run in an earlier practical. Back up these results with the commands:

cd ~/fsl_course_data/fmri3/contrast_masking
cp -r fmri.feat fmri_orig.feat

Quickly review the results of this analysis (and in particular, the Shad > Gen contrast) to refresh your memory.

We can apply contrast masking without re-running the whole analysis: start the FEAT GUI, load the existing analysis, and use the Contrast masking option in the Post-stats tab to mask the Shad > Gen contrast with the positive Generation and Shadowing contrasts.

You should see that the cluster associated with contrast 4 (Shad > Gen) no longer appears. What is the explanation for this?
Correct! The results that were previously found were not in regions that showed a positive effect for both word generation and word shadowing, meaning that the effect was driven by a deactivation.
Incorrect! If this had been the case, then the same regions would have shown up in the contrast-masked results.

Note that it is difficult to determine this directionality in other ways, such as by looking at timeseries plots. However, it can be confirmed by loading the appropriate COPE images into FSLeyes: the stats/cope1 image (the Generation condition) shows negative values in the area associated with the medial posterior cluster.
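
For example (a minimal sketch; the display range is just a suggestion, and the -nc/-un options, which render negative values with a separate colour map, assume a reasonably recent FSLeyes):

fsleyes fmri.feat/example_func.nii.gz \
        fmri.feat/stats/cope1.nii.gz -cm red-yellow -nc blue-lightblue -un -dr 10 100 &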


The End.