Oct 18 2012

During that SfN session last week, Hans Op de Beeck also asked a question about McGugin et al. 2012. At the time, I actually misunderstood and thought that he and Nancy Kanwisher shared the same concern (see previous post). In fact, it turns out that Hans, unlike Nancy, agrees that the effects reported in Rankin’s paper are expertise effects – but he had another concern, which he was nice enough to clarify in an email.

Hi Isabel,

I do not question the existence of expertise effects in the FFA and in many other areas. Indeed the PNAS paper shows expertise effects.

However, I do question the relevance of these effects for explaining face selectivity as there is up to now no good evidence that the expertise effects are specific to face selective voxels. To the contrary, the PNAS paper shows the expertise effects are the same in face selective and non face selective voxels. If expertise would explain the difference in stimulus (face) selectivity between these two sets of voxels, then effects of expertise should also be different between these two sets of voxels.

Your response/blog is not addressing this most critical issue, nor do most of your papers as they are too narrowly focussing upon face selective voxels. The PNAS paper addresses the issue and congratulations to Rankin and all of the lab for this work. However, in this paper you find evidence AGAINST your expertise hypothesis: expertise effects are the same in face selective and non face selective voxels. Thus expertise cannot explain the difference in face selectivity among those voxels.

Maybe you can add this comment to your blog?



You just have to love it when you think you have published a result that means A, and someone else believes it means not-A! Science is never boring.

I believe our results actually support our claims, and let me try to explain why.

Below is Figure 3 from McGugin et al. (2012). Here is the sequence of steps that produces this analysis, which is conducted in flattened 2D cortical space: We used a standard-resolution localizer (faces – objects) to localize the peak of the right FFA in each subject. We then varied the threshold on that localizer to define a very small ROI around that peak (25 mm2) as well as larger ones. We then looked at the high-resolution data for faces, animals, cars and planes within these regions. Half of the high-res dataset was used to classify voxels based on their maximal responses: voxels that respond most to faces, voxels that respond most to animals, and voxels that respond most to cars or planes (we call those non-sensitive). Car and plane voxels are lumped together because this step occurs before we look at expertise, and for the subjects as a group, regardless of expertise, those non-selective voxels do not clearly replicate their maximal response to objects. Note that if we separate them into car and plane voxels, the results are the same.
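The split-half logic above can be sketched in a few lines. This is a toy illustration with simulated numbers, not the paper’s actual pipeline: the array names, voxel count, and random data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: mean response of each voxel to each category,
# computed separately in two independent halves of the high-res dataset.
categories = ["faces", "animals", "cars", "planes"]
n_voxels = 100
half1 = rng.normal(size=(n_voxels, len(categories)))  # used only to classify voxels
half2 = rng.normal(size=(n_voxels, len(categories)))  # used only to measure responses

# Step 1: label each voxel by the category driving its maximal response in
# half 1. Cars and planes are lumped together as "non-sensitive" because
# this step happens before expertise enters the analysis.
max_cat = half1.argmax(axis=1)
labels = np.where(max_cat == 0, "face",
                  np.where(max_cat == 1, "animal", "non-sensitive"))

# Step 2: in the independent half 2, measure the cars - animals contrast
# within each voxel class; per subject, this is the kind of value that is
# then correlated with behavioral car expertise across subjects.
for label in ["face", "animal", "non-sensitive"]:
    contrast = half2[labels == label, 2] - half2[labels == label, 1]
    print(label, round(float(contrast.mean()), 3))
```

The point of the split is that classification and measurement use independent halves of the data, so the contrast in step 2 is not biased by the voxel selection in step 1.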

McGugin et al Fig 3

Using the other half of the high-res data, we then calculate the response to cars vs. animals in each of these 3 types of voxels – and the bar graph represents the magnitude of the correlation of this response with car expertise. For the purpose of this discussion, I am going to ignore “animal voxels”, and focus on the face and non-selective voxels.

Let’s consider possible results:

– If we had found that car expertise effects were obtained ONLY in voxels maximally responsive to faces, Hans suggests that this would support the expertise hypothesis. This is in fact what we obtained in the left FFA, as shown in Supp Figure 4.

McGugin et al Supp Fig 4

– If we had found that car expertise effects were obtained only in non-sensitive voxels, this would suggest that the car expertise response is found outside of the most face-selective voxels – that the two responses are just very close to each other but do not spatially overlap. This is NOT what we found, because voxels that show the strongest reliable response to faces also show a car expertise effect, in both left and right FFA.

– What we did find in the right FFA is this. First, highly face-selective voxels, even in the very middle of the FFA, show a car expertise effect. Non-sensitive voxels also show a car expertise effect. But most importantly, these car expertise effects drop out when you move away from the peak of face selectivity, in a ring ROI between 200 mm2 and 300 mm2 around the FFA. In the paper we discuss how the signal in the larger ring region is as good as it is in the center ROI. Moreover, the mean response to cars over all subjects is the same in the 25 mm2 ROI and in the 200–300 mm2 ring, both in “face” voxels (p=.85) and in non-sensitive voxels (p=.94). So the only difference between the voxels within 200 mm2 and those past 200 mm2 appears to be that the response across subjects within 200 mm2, and not outside of this ring, is predicted by behavioral car expertise.

So interestingly, the results are a bit different in the two hemispheres. I don’t particularly like to think of these voxels as “face” or “car” voxels; I believe things at the voxel (and single-cell) level may be better described in more probabilistic terms. But since this is how many of us have been describing our results, I’ll stick with it here.

We are often encouraged to think of face-selective areas surrounded by non-face-selective areas. That model, however, does not apply here at all. When it comes to predicting behavioral performance, what matters is NOT what the maximal response of a voxel is, but whether it is near or far from the peak of face selectivity. Note that the voxels within the 200 mm2 border that do not respond reliably more to one category (therefore, they respond to faces and objects) respond more like face voxels than like the non-selective voxels outside of that region. I made this summary figure to illustrate that the only model with “two kinds” of voxels (model A) that can explain these data is one where one kind of voxel shows both face selectivity and car expertise effects. Even then, this really only fits the pattern in the left FFA. In this figure, the colors just refer to “kinds” of cells, and the discussion on the right refers to whether a “two kinds of voxels” model or a “three kinds of voxels” model (model B) can explain the results. Note that given our design, “face response” means a stronger response to faces than objects, but “car response” means a correlation between responses to cars and car expertise.

schematic figure

In our view, these results are compatible with a spatial distribution of face and car expertise responses that overlap to a large extent, and likely overlap more as a function of expertise, since it likely takes a lot of expertise with cars to match that which we have with faces and to recruit the same number of cells. If we had a single scale on which to compare degree of expertise for faces and cars, the comparison of the spatial distribution of selectivity for the two categories in people with matched expertise would be interesting.

In sum, are there voxels that show an expertise effect for cars and that are not categorized here as “face voxels”? Sure there are. This of course doesn’t mean that these voxels do not respond to faces; their selectivity relative to other categories, for one reason or another, was simply not as strong and reliable. Also note that the location of these voxels remains constrained by face selectivity: they are found near the peak of face selectivity, defined independently. In a way, faces appear to recruit a subset of an area that responds to expertise more generally. It would be very interesting to test subjects with enough of a range in expertise for faces to actually compare apples to apples here: an expertise effect for cars and an expertise effect (not just mean selectivity) for faces. Perhaps the overlap would be even greater then.

Most importantly, we do not find “face” voxels that do not show a car expertise effect. Consider an extension to humans of the results that Doris Tsao and her colleagues have published, whereby in the center of a face-selective patch, 97% of the cells are face-selective. If this result holds true in human FFA, then we’d have to conclude that much of the car expertise effect found in the heart of the FFA comes from face cells!



Oct 16 2012

One reason to present at conferences is to hear criticisms from your colleagues.
Rankin McGugin and Ana Beth Van Gulick were recently presenting their work at the SfN meeting in New Orleans, in a very interesting Nanosymposium organized by Ido Davidesco on Extrastriate Cortex: Functional Organization, Faces and Objects. Since the work they were presenting argues against the idea that face perception is a “cognitive function with its own piece of real-estate in the brain”, namely the FFA, it is particularly interesting to hear what Nancy Kanwisher has to say about it, since she is known as the strongest advocate of that position.

Rankin recently published part of her dissertation in PNAS, a study scanning 51 subjects at 7T and relating responses to cars in the FFA to car expertise. While Rankin and Ana Beth’s talks were about unpublished follow-up experiments, they both mentioned the study, and Kanwisher’s question was about that study.

The question was, from memory: “I am confused, are you now changing the definition of what a car expertise effect is? I thought that it was supposed to be a stronger response to cars than objects in car experts, and in your PNAS paper you do not get that”. Our answer at the time was “no we aren’t” (changing the definition) and “yes we are” (finding more activity for cars than objects in car experts), but I doubt it came out very clearly.

Here I hope to unpack our answer, for those who care:

1- Are we changing the definition of an expertise effect?

This requires an answer, because everybody knows that a scientific hypothesis that requires conveniently changing one’s definition to match the data isn’t much of a hypothesis at all. If we did this, without making it clear what justifies it, we’d be pretty bad scientists.

However, the definition of an expertise effect has never been “a stronger response to objects of expertise than to other objects” and it’s important to explain why.

Let’s start with the first car and bird expertise study I conducted as a postdoctoral project with Nancy Kanwisher.

Gauthier et al 2000 Fig 7

The plot shows the response in the FFA for Birds – Cars on the Y-axis, and behavioral expertise for Birds – Cars on the X-axis. The definition of an expertise effect was “the correlation between behavioral expertise and the response to objects of expertise”, here r=.75 and r=.82, within samples of car experts and bird experts. This plot illustrates something interesting: bird experts show a larger response advantage for birds over cars than car experts show for cars over birds. This is likely because animals, as reported many times since then, generally produce a large response in the FFA, even in novices. Would we on this basis suggest that the expertise effect is larger for birds than for cars? I would suggest not: it is the correlation that matters. Conceptually, it is easy to imagine comparing the response to cars or birds against other things that lead to smaller responses in the FFA, like shoes or houses; as long as the response to these objects does not depend on car or bird expertise, the correlation would remain the same, and the values on the Y-axis would simply be shifted.
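The point about the baseline can be made concrete with a small simulation (all numbers here are invented for illustration, not data from any of the papers): subtracting a baseline that is unrelated to expertise shifts the Y-axis values but leaves the correlation untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: behavioral car expertise and raw FFA response to cars
# for 40 simulated subjects.
expertise = rng.normal(size=40)
car_resp = 0.8 * expertise + rng.normal(scale=0.5, size=40)

# Two different baselines that do not vary with expertise: a "high" one
# (think faces) and a "low" one (think shoes or houses). Here they are
# constants, so the contrast is simply shifted up or down.
contrast_vs_faces = car_resp - 2.0
contrast_vs_houses = car_resp - 0.5

r_faces = np.corrcoef(expertise, contrast_vs_faces)[0, 1]
r_houses = np.corrcoef(expertise, contrast_vs_houses)[0, 1]

# The two correlations are identical: Pearson's r is invariant to adding
# or subtracting a constant, so the choice of (expertise-unrelated)
# baseline moves the points vertically without changing the relation.
```

This is why most subjects can sit below zero against a face baseline yet the expertise effect, measured as a correlation, is unchanged.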

Perhaps best illustrating that the main effect of objects of expertise relative to the baseline has never been critical to the definition of an expertise effect, we have often used faces as the baseline. Since expertise for faces is likely to be higher than for most other categories, even an expertise account would predict more activity for faces – this is *exactly* what the expertise account says: that faces engage the FFA more than most other things because most people we scan are very good at face recognition.

Gauthier et al. 2005 Fig 4

For instance in a paper in 2005, using another small sample of people varying on car expertise (on the X axis) we found a similar correlation with the response in FFA for cars, relative to faces this time. Clearly, most car experts will still have more activation for faces than cars in FFA and this is shown in the figure. Note that the best car expert’s FFA responded more to cars than faces.

2- Do we find more activity for cars than other objects in the FFA in McGugin’s 2012 PNAS paper?

Yes we do. Here is one of the several figures in the paper that show this.

McGugin et al. 2012 Figure 2

The X-axis is behavioral car expertise. The Y-axis now plots Car da. What is that? It is basically an effect-size measure, calculated like Cohen’s d. In this case, it is the response to cars minus the response to all the other categories we used (faces, animals, planes), divided by the pooled standard deviation. If da is positive, there is more activity for cars than for the average of the other objects. In the present case, this happens for a subset of our subjects, those with more car expertise. But is this a critical feature of our results? It really isn’t, because where the mean falls on the Y-axis completely depends on the choice of baseline, and the baseline is not what varies with expertise.
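As a sketch of how such an effect-size measure can be computed (a hypothetical helper in the spirit of Cohen’s d, not the paper’s exact code):

```python
import numpy as np

def car_da(car_resp, other_resp):
    """Effect size in the spirit of Cohen's d: mean response to cars minus
    the mean response to the baseline categories, divided by the pooled
    standard deviation. Inputs are 1-D arrays of responses (hypothetical)."""
    n1, n2 = len(car_resp), len(other_resp)
    pooled_var = ((n1 - 1) * car_resp.var(ddof=1)
                  + (n2 - 1) * other_resp.var(ddof=1)) / (n1 + n2 - 2)
    return (car_resp.mean() - other_resp.mean()) / np.sqrt(pooled_var)
```

A positive da means more activity for cars than for the baseline categories; a negative da means less. Scaling by the pooled variability makes the measure comparable across subjects with different overall BOLD response levels.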

What matters in the measurement of expertise effects is the relation between performance and the response to objects of expertise relative to a baseline. The baseline needs to be high level (fixation is not good, because in that case we’d be measuring each brain’s general BOLD response to any visual image, which tends to vary quite a bit) but it does not matter whether it is one category of objects, faces or the average of many categories. As long as the baseline does not correlate with expertise for the category of interest, the correlation should be the same.

McGugin et al. (2012) is strong evidence that the kind of expertise effects we have reported for years in the FFA overlap with face-selective responses in the very center of the FFA, even in the most face-selective high-resolution voxels. A large proportion of cognitive neuroscience studies use faces because the strong response in the FFA and its associated network makes many questions easier to study: McGugin et al. (2012) suggests that the results of such studies speak to processes relevant to face recognition *and* to processes relevant to many other visual skills.

Oct 04 2012

2012 OPL Chess Tournament begins Monday, October 8th!

First up –

Group A: David vs. Magen

Group B: William vs. Kaleb

Check back for results of the game!

Oct 04 2012
1) To begin, each player in each group will play against one another once. The top two in each group will advance to the next round. Then the first player from each group will play the second player from the other group. The two that win those games will move on to the final round to see who will be the OPL Chess Champion of 2012!


2) Games should last no more than two weeks each. That means the games have to be quicker than what we normally play, say roughly 4 or 5 moves per day.


3) In case of a tie at the end of the first round, we will use a sort of point system. This will require you to take a picture at the end of each game! The points will work something like this:


4) The player to go first will be determined by a coin toss. The winner of the coin toss will decide if he/she will be white (plays first) or black.
Oct 04 2012

Group A (will play on chess board in Wilson 308):

Isabel Gauthier (“Z” Ninja)

David Ross

Magen Speegle


Group B (will play on chess board in Wilson 220):

Jackie Floyd

William Yue

Kaleb Lowe