Effects of spatial frequency content on classification of face gender and expression.

Several studies have provided evidence that face recognition depends on a critical band of spatial frequencies (SF), with better performance being obtained with images containing information in a medium band of frequencies (e.g., Bachmann, 1991; Costen, Parker & Craw, 1996; Fiorentini, Maffei & Sandini, 1983; Nasanen, 1999). However, faces are multidimensional stimuli that can be classified in terms of several properties, such as identity, gender, age or expression, and there is now evidence showing that the critical band of SF for face classification varies depending on the demands of the specific task at hand (see the reviews by Morrison & Schyns, 2001; Ruiz-Soler & Beltran, 2006).

Compelling evidence for the flexible use of information from different spatial frequencies has been obtained in experiments where participants are presented with hybrid face images that contain two superposed stimuli (say, of a man and of a woman), each filtered at a different spatial scale. For example, Schyns and Oliva (1999), using percentage of correct classifications as the dependent variable, found that participants preferentially used low spatial frequency (LSF) information when asked to categorize briefly presented (50 ms) hybrid faces as to their identity, but that they favored high spatial frequencies (HSF) when asked to categorize the faces as to their expressiveness (expressive or not). On the other hand, subjects showed a bias for LSF when asked to identify the specific expression (happy, angry or neutral) shown by the face. Moreover, a superiority of LSF stimuli in capturing attention when faces are briefly presented, or in driving neural responses under incidental processing conditions, has been observed in the case of fearful expressions (Holmes, Green & Vuilleumier, 2005; Vuilleumier, Armony, Driver & Dolan, 2003). Different SF biases for gender and expression classification were found by Deruelle and Fagot (2005, their Experiment 2) in a study with samples of different ages (5-6 years, 7-8 years and adults), with number of LSF choices in each task as the dependent variable. These authors used a matching procedure in which the sample stimuli were hybrid HSF/LSF images containing superposed faces of a man and a woman showing different expressions. A LSF bias was found when participants had to match the comparison (unfiltered) faces to the sample in terms of gender, and a HSF bias when they were asked to match in terms of expression. Primacy of the LSF band for gender classification has also been reported by Goffaux, Jemel, Rossion and Schyns (2003) in a study where participants had to categorize faces as to their gender or their familiarity. These authors found more efficient performance with LSF than with HSF filtered faces, in terms of both response speed and accuracy, as well as a modulation of the face-sensitive N170 event-related potential for LSF faces that was specific to gender classification.

From a more theoretical point of view, the finding that information from different SF bands is used flexibly depending on task constraints is relevant for general theories of face perception. The well-known model proposed by Bruce and Young (1986) and more recent brain-systems models of face perception (Haxby, Hoffman & Gobbini, 2000) have assumed that expression is processed by specialized systems that can be dissociated from those processing fixed properties of faces, such as gender and identity, though more recent proposals have argued against a strict dissociation between face processing systems (Calder & Young, 2005). The finding of task-dependent biases in the use of spatial scale information is in principle consistent with the hypothesis of segregated or independent processing, as it suggests that each task relies on a different set of diagnostic cues. However, in the case of gender and expression there are also recent data showing that these properties are not processed independently (e.g., Aguado, Garcia-Gutierrez & Serrano-Pedraza, 2009; Atkinson, Tipples, Burt & Young, 2005). For example, Aguado et al. found a symmetrical interaction of gender and expression in face classification tasks with upright, inverted and segmented faces (top or bottom face halves). This interaction suggests that gender and expression identification rely on an overlapping set of features and that these two facial dimensions are not processed independently.

Although a primacy of the LSF band in gender categorization has been found in some studies (Deruelle & Fagot, 2005; Goffaux et al., 2003), Schyns and Oliva (1999) did not find a SF bias in this task. There is also some ambiguity in the case of expression categorization. A primacy of the LSF band has been suggested on the basis of results such as those mentioned above with fearful, non-hybrid faces (Holmes et al., 2005; Vuilleumier et al., 2003). Studies with hybrid faces have yielded contradictory results. While Deruelle and Fagot (2005) reported results consistent with a HSF bias when participants had to discriminate between different expressions, Schyns and Oliva found a bias for the LSF band. However, the latter authors found a HSF bias when participants were asked to make an expressive/non-expressive decision. These discrepancies might be related to the different stimulus durations, task formats and dependent variables used in these experiments. Moreover, it is not clear to what extent the results from studies with hybrid faces can be generalized to categorization of normal, non-hybrid faces. Studies with hybrid faces are perfectly suited to detect biases for a specific SF band when competing information from two different bands is available. However, finding that a particular task shows a bias for a specific SF band does not mean that information of diagnostic value for that task is not contained in other frequency bands. Complementary relevant information can be gleaned from studies exploring how efficiently different classification tasks can be performed with simple stimuli, when information is restricted to specific SF bands. Comparing performance with stimuli of different spectral content allows inferences as to the kinds of information relevant for different categorizations. For example, similar performance with unfiltered and HSF faces in a specific task would indicate that categorization is mainly based on fine detail information from individual features of the face and that coarse, global information has no diagnostic value for that task. On the other hand, a similar decrement in performance with HSF and LSF faces would indicate that fine detail and coarse information are of equivalent relevance. Given the existing contradictions between the results of studies that have explored the role of different SF bands in the identification of the gender and expression of faces, a direct comparison of performance on these tasks with faces of different spectral content seemed worthwhile.

The main objective of the present study was to further explore the relative role of different spatial scales in gender and expression classification. Unfiltered and SF filtered faces were used as stimuli in a simple classification task where participants had to decide whether the face was of a male or of a female (gender task) or whether it showed a happy or an angry expression (expression task). Performance was compared between face images that were unfiltered or filtered to contain only high or low spatial frequencies (HSF and LSF faces, respectively) and that had been equated in contrast energy. Female and male faces, showing happy or angry expressions, were used as stimuli. In Experiment 1, the effects of spatial frequency content were tested on both gender and expression classification. In Experiment 2, a similar comparison was performed in an expression classification task, with a sequential presentation procedure that allowed participants to perceive the change from a neutral to an emotional expression. Finally, in Experiment 3 the effect of spatial frequency content on gender classification was compared between expressive and non-expressive faces. An effort was made to balance participant sex in all experiments, so that the results could be generalized to both sexes. This is important, given some contradictory results on possible sex differences in facial expression processing (Arcuri, Castelli, Boca, Lorenzi-Cioldi, & Dafflon, 2001; Hall, 1978; Penton-Voak, Allen, Morrison, Gralewski, & Campbell, 2007; Rotter & Rotter, 1988).

EXPERIMENT 1

In this experiment, the participants were presented with a set of male and female faces showing either happy or angry expressions. Each stimulus was presented in three different versions, unfiltered or filtered to contain only high or low spatial frequencies (HSF and LSF faces, respectively). The participants were asked to classify each face in terms of expression or gender, depending on the task assigned. With this design we tried to evaluate the relative role of information contained in different SF bands on expression and gender classification.

Method

Participants

Thirty-eight psychology students (36 right-handed; 9 males and 11 females in the expression task, 9 males and 9 females in the gender task), aged 18-47 years (mean = 21), from the Universidad Complutense (Madrid, Spain), participated in the experiment in exchange for course credits.

Apparatus and stimuli

Presentation of stimuli and recording of responses were controlled with the E-Prime 1.1 software. Stimuli were presented on a gamma-corrected 17-inch CRT monitor (vertical frame rate, 60 Hz). Subjects were seated at a distance of 50 cm from the screen. Responses were given by pressing keys 1 and 5 of a five-key response box (PST Serial Response Box, 200A). Sessions were carried out individually in a sound-proof, dimly lit room.

Stimuli were 32 pictures of human male and female faces showing a happy or an angry expression, taken from the KDEF collection (Lundqvist & Litton, 1998). Of this face set, 29 stimuli had been used in previous experiments on the interaction of gender and expression in face classification (see Aguado et al., 2009). There were 16 faces of males and 16 of females. Half the faces of each gender showed a happy expression and the other half an angry expression.

Images were processed in the following way (see Figure 1). First, we cropped the original images of the faces (562 x 762 pixels) to a square format (512 x 512 pixels), subtending an area of 13.5 x 13.5 degrees of visual angle (deg) (see Figure 1a). Second, we applied to the square image an oval window that concealed most of the hair, subtending an area of 7.95 x 12.3 deg. Third, the resulting images were either left unfiltered or were low-pass or high-pass filtered by means of a two-dimensional isotropic filter, using cut-off frequencies of 1 c/deg and 3 c/deg, respectively (see Figure 1b). We used a non-causal Butterworth filter of order 2 (see Gonzalez & Wintz, 1987, pp. 170, 181, and Sierra-Vazquez, Serrano-Pedraza & Luna, 2006, Appendix A, for a formal definition of these filters). All images, filtered and unfiltered, were processed so that they had the same root mean square contrast, c_RMS = .025, where c_RMS = σ_L/L_ave (Stromeyer & Julesz, 1972), with σ_L and L_ave being, respectively, the standard deviation and the average of the luminance values of the image. The procedure used to construct images with a desired c_RMS is described in Aguado et al. (2009, Appendix B).
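As an illustration, the sketch below reimplements this pipeline in Python/NumPy under our own assumptions (the function names, the FFT-based implementation and the square grayscale input are ours, not the authors' code). The low-pass and high-pass gains follow the standard order-n Butterworth definitions, and the contrast step rescales luminance so that c_RMS = σ_L/L_ave reaches the target value of .025.

```python
import numpy as np

def butterworth_filter(image, cutoff_cpd, image_deg=13.5, order=2, mode="lowpass"):
    """Isotropic 2-D Butterworth filter applied in the Fourier domain.

    image      : square 2-D luminance array (e.g., 512 x 512).
    cutoff_cpd : cut-off frequency in cycles/degree (1 c/deg low-pass,
                 3 c/deg high-pass in the experiments described here).
    image_deg  : image width in degrees of visual angle.
    """
    n = image.shape[0]
    # Spatial-frequency axes in cycles/degree (pixel spacing = deg/pixel).
    f = np.fft.fftfreq(n, d=image_deg / n)
    fx, fy = np.meshgrid(f, f)
    radius = np.hypot(fx, fy)
    with np.errstate(divide="ignore"):
        if mode == "lowpass":
            gain = 1.0 / (1.0 + (radius / cutoff_cpd) ** (2 * order))
        else:  # high-pass; gain goes to 0 at the DC term (radius = 0)
            gain = 1.0 / (1.0 + (cutoff_cpd / radius) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * gain))

def set_rms_contrast(image, target=0.025, background=None):
    """Rescale luminance so that c_RMS = sigma_L / L_ave equals `target`.

    For high-pass images (mean near zero) pass the display background
    luminance explicitly, since their own mean is no longer meaningful.
    """
    l_ave = image.mean() if background is None else background
    modulation = image - image.mean()      # zero-mean luminance modulation
    return l_ave + modulation * (target * l_ave / modulation.std())

# Example: build the three stimulus versions from one face image `img`.
# lsf = set_rms_contrast(butterworth_filter(img, 1.0, mode="lowpass"))
# hsf = set_rms_contrast(butterworth_filter(img, 3.0, mode="highpass"),
#                        background=img.mean())
# unf = set_rms_contrast(img)
```

Because the high-pass filter removes the DC (mean luminance) term, the contrast-equalization step reintroduces the background luminance before setting the contrast.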

[FIGURE 1 OMITTED]

Procedure

The instructions were presented on the computer screen and described the task to perform, stressing that responses should be fast. Depending on the task condition, the participant was instructed that she had to identify the expression, anger or happiness, shown by the face (expression classification task) or identify the face as male or female (gender classification task). The task was performed by pressing keys 1 and 5 of the response box. Assignment of keys to each alternative response was counterbalanced in each of the task conditions. Before the classification phase proper, twelve practice trials were given. The faces presented during these trials were not presented again during the experimental phase.

During the experimental (classification) phase, the 32 faces were each presented once in each of the three filtering conditions, that is, unfiltered, low-pass and high-pass filtered. Order of presentation was randomized independently for each subject. Faces appeared on the screen preceded by a fixation point (a white asterisk) presented at the center of the screen for 500 ms. The face stayed on the screen until the participant gave a response or until a maximum of 2000 ms had elapsed. The interval between the response and the next trial was 2000 ms, during which only a blank screen was presented. The experimental session was divided into three blocks of 32 trials each. Each particular face appeared only once per block. A rest period was given at the end of the first and second blocks of trials.
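For concreteness, the sketch below implements this trial timeline in PsychoPy. This is a hypothetical reimplementation for illustration only (the original study used E-Prime), and the window parameters, key names and file paths are our assumptions.

```python
from psychopy import visual, core, event

win = visual.Window(size=(1024, 768), color="grey", units="pix")
fixation = visual.TextStim(win, text="*", color="white")
rt_clock = core.Clock()

def run_trial(image_path):
    """One trial: 500-ms fixation, face until keypress or 2000 ms, 2000-ms ITI."""
    face = visual.ImageStim(win, image=image_path)
    fixation.draw()
    win.flip()
    core.wait(0.5)                       # fixation point, 500 ms
    face.draw()
    win.flip()
    rt_clock.reset()
    # Keys "1" and "5" stand in for the two response-box buttons.
    keys = event.waitKeys(maxWait=2.0, keyList=["1", "5"],
                          timeStamped=rt_clock)
    win.flip()                           # blank screen until the next trial
    core.wait(2.0)                       # inter-trial interval, 2000 ms
    return keys[0] if keys else (None, None)   # (key, RT in seconds)
```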

Results

In this and the following experiments, error proportions and reaction times (RT) of correct responses (within a 200-2000 ms range) are reported. In the present experiment, a 2 x 2 x 3 ANOVA with participant Sex and Task (gender and expression) as between-subjects factors and Stimulus (unfiltered, high-pass and low-pass) as the repeated-measures factor was performed separately on the error proportion and RT measures.
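As an illustration of this screening and aggregation step, a short pandas sketch is given below; the file name and column names are hypothetical, and trial-level data are assumed.

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial.
trials = pd.read_csv("exp1_trials.csv")  # subject, sex, task, stimulus, correct, rt_ms

# RT measure: correct responses within the 200-2000 ms window only.
valid_rt = trials[(trials["correct"] == 1) &
                  trials["rt_ms"].between(200, 2000)]
rt_cells = (valid_rt
            .groupby(["subject", "sex", "task", "stimulus"])["rt_ms"]
            .mean()
            .reset_index())

# Error proportion per subject and condition (1 - proportion correct).
err_cells = (trials
             .groupby(["subject", "sex", "task", "stimulus"])["correct"]
             .apply(lambda c: 1 - c.mean())
             .reset_index(name="error_prop"))
```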

[FIGURE 2 OMITTED]

Error proportion

As can be seen in Figure 2, error proportions were higher in the gender than in the expression task. While performance on the expression task was highly accurate in all stimulus conditions, spatial filtering led to an increase in error proportion in the gender task, with more errors for HSF faces, followed by LSF and unfiltered faces. These impressions were confirmed by statistical analysis. Significant main effects were obtained of Task, F(1, 34) = 214, p < .001, ηp² = .86, Stimulus, F(2, 68) = 16, p < .001, ηp² = .32, and participant Sex, F(1, 34) = 5.5, p = .026, ηp² = .14, with a higher error proportion in females (M = .13, SE = .008) than in males (M = .10, SE = .008). The Task x Stimulus interaction was also significant, F(2, 72) = 22.9, p < .001, ηp² = .65. In the gender task, the highest error proportions corresponded to HSF faces, followed by LSF and unfiltered faces. All paired comparisons (Bonferroni corrected) were significant (ps < .005). Neither the Task x participant Sex nor the Task x Stimulus x participant Sex interaction reached statistical significance (ps > .05).

RT

No response fell outside the specified range. Figure 3 shows that participants responded more slowly in the gender than in the expression classification task. Moreover, slower responses were given to LSF faces in both tasks. Statistical analyses showed significant main effects of Task, F(1, 34) = 10, p = .003, ηp² = .22, and of Stimulus, F(2, 68) = 27, p < .001, ηp² = .43. Paired comparisons between stimulus conditions showed significant differences between unfiltered and LSF faces and between HSF and LSF faces (p < .001), but not between unfiltered and HSF faces. The Task x Stimulus interaction reached only marginal significance (p = .065, Greenhouse-Geisser corrected, ηp² = .08). A significant participant Sex x Task interaction was found, F(1, 34) = 7.5, p = .009, ηp² = .18. Analysis of this interaction showed that male participants gave significantly faster responses than female participants in the gender task (M = 775, SE = 39.6 and M = 915, SE = 39.6, for male and female participants, respectively).

[FIGURE 3 OMITTED]

Discussion

The results of Experiment 1 showed that discriminating the gender of faces is a more difficult task than discriminating their expression, at least when happy and angry faces of both genders are compared. More errors and slower responses were observed in the gender than in the expression task in all stimulus conditions (unfiltered, HSF and LSF faces). This result differs from previous reports where faster identification of sex than of expression has been found (Atkinson et al., 2005; Le Gal & Bruce, 2002). However, the stimuli used by those authors differ from the ones used in our experiments. First, the expressions were different (happy and fearful in Atkinson et al.'s experiments, happy and surprised in Le Gal and Bruce's). Variations in the discriminability of different pairs of expressions and in the interaction between gender and expression (e.g., Aguado et al., 2009) might well explain the discrepancy between our results and those of previous studies. This discrepancy cannot be attributed to our use of filtered stimuli, as slower classification of gender was found in all stimulus conditions, unfiltered faces included. Possibly more important was the fact that, due to the need to equate the contrast of all faces, our stimuli were of relatively low contrast, and this might have had a different impact on classification of gender and of expression.

A finding of more importance to the objectives of the present study was the different effect of spatial filtering on the accuracy of gender and expression classification. In the expression classification task, happy and angry expressions were recognized with high accuracy from unfiltered, HSF and LSF filtered faces. However, SF content did have important effects on gender classification. In this task, a significant increase in error proportion was observed with both HSF and LSF faces. The highest error proportion was obtained with HSF faces, from which information in the low frequency band had been removed. This finding is consistent with previous studies that have shown a primacy of the LSF band in gender classification (Deruelle & Fagot, 2005; Goffaux et al., 2003). Similar effects of SF filtering were found on both tasks with the RT measure. In this case, filtering out high frequencies (that is, LSF faces) produced an impairment of performance in both tasks. However, removal of low spatial frequencies (HSF faces) had no significant effect on this measure.

Speed and accuracy measures showed differential sensitivity to spatial filtering depending on the task. While in the expression task SF content only affected response speed, in the gender task both accuracy and speed were impaired. More specifically, the effects of SF content on expression classification were only manifested as slower responses to LSF faces. In the case of gender classification these effects were manifested as a decrease in accuracy for both HSF and LSF faces and as an increase of RT to LSF faces. This difference suggests that information of diagnostic value for expression classification is contained mainly in the HSF band, but that gender classification relies on combined information from both frequency bands, with a bias for the LSF band. However, the discrepancy between speed and accuracy data observed in the case of LSF faces, with an increase in RT and a decrease in error proportion, is also suggestive of a speed-accuracy trade-off. We will defer discussion of this alternative interpretation until the general discussion section.

The results obtained with the RT measure in the gender task are only partially consistent with those of Goffaux et al. (2003). These authors reported both less accurate and slower responses for HSF than for LSF faces. Though we also found less accurate performance with HSF than with LSF faces, we observed the opposite pattern for the speed of correct responses, that is, faster responses with HSF faces. One possible reason for this discrepancy is that we used faces showing happy or angry expressions, instead of non-expressive faces. Previous studies that have shown an interaction between expression and gender in face classification tasks indicate that the expression shown by the face can influence the speed and accuracy of gender identification (Aguado et al., 2009). It might well be that the effects of SF content on gender classification are also different for expressive and neutral faces. For example, the distortion of facial features by the emotional expression might have increased the importance of fine detail information for gender discrimination in our study. To see if the pattern of results found in Experiment 1 is representative of gender classification per se or is instead limited to faces showing emotional expressions, the effects of spatial filtering should be compared between expressive and neutral faces of both sexes. The results of this comparison will be reported in Experiment 3.

EXPERIMENT 2

In daily social interaction, emotional expressions are perceived in moving faces as dynamic changes in the shape and distances between facial features. There are some results indicating that movement is a psychologically relevant property of facial expressions and that dynamic information has an influence on how expressions are processed. For example, judged intensity of expressions increases with speed of face movement (Yoshikawa & Sato, 2008) and there are specific brain areas that show more activation to dynamic than to static faces (Sato, Kochiyama, Yoshikawa, Naito & Matsumura, 2004). Moreover, dynamic information seems to facilitate face identification especially under poor viewing conditions (O'Toole, Roark & Abdi, 2002).

Experiment 2 was a replication of the expression condition of Experiment 1, with the only difference that each expressive face was immediately preceded by an expressively neutral face of the same identity. This procedure was intended to give the participant the subjective impression that the face was moving and changing its expression. We wanted to see if the effects of SF content shown in Experiment 1 were maintained under conditions that are more similar to our daily experience, when dynamic cues derived from displacement of facial features are available. Although our sequential presentation is not a continuous dynamic display, it involves a rapid change in the appearance of the face and so provides additional relevant information, compared to the static expressive face presented in isolation. Under the sequential condition, the neutral face provides a basis against which the immediately following expression can be compared and the availability of these cues might facilitate identification of the emotional expression.

One specific prediction is that performance with LSF faces, which was especially impaired in Experiment 1, should be facilitated, and that differences between this condition and the unfiltered and HSF conditions should be reduced.

Method

Participants

Eighteen psychology students (9 females, 9 males; 15 right-handed), aged 18-30 (mean = 20), from the Universidad Complutense (Madrid, Spain), participated in the experiment in exchange for course credits.

Apparatus and stimuli

Materials, stimuli and procedure were similar to those of Experiment 1. The only difference was that each expressive face was preceded by an unexpressive face of the same individual. This "neutral" face lasted 250 ms and was followed immediately by the corresponding expressive face. Each neutral-expressive face pair was presented once in each of the filtering conditions (unfiltered, HSF and LSF). The participants were instructed to respond to the second, expressive face and classify it as either angry or happy.

Results

Error proportion and RT results were submitted to a 3 x 2 mixed ANOVA with Stimulus as the repeated-measures factor and participant Sex as the between-subjects factor.
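A minimal sketch of this kind of mixed ANOVA using the pingouin package is shown below. The data frame layout, file name and column names are our assumptions; pingouin reports partial eta-squared (np2) alongside the F tests.

```python
import pandas as pd
import pingouin as pg

# Long format: one row per participant x filtering condition, with the
# mean RT of correct responses (200-2000 ms window) in each cell.
df = pd.read_csv("exp2_cells.csv")  # columns: subject, sex, stimulus, rt

# 3 (Stimulus: unfiltered/HSF/LSF, within) x 2 (Sex, between) mixed ANOVA.
aov = pg.mixed_anova(data=df, dv="rt", within="stimulus",
                     subject="subject", between="sex")
print(aov)

# Paired comparisons between filtering conditions, Bonferroni corrected
# (pingouin >= 0.5 names this pairwise_tests; older versions, pairwise_ttests).
posthoc = pg.pairwise_tests(data=df, dv="rt", within="stimulus",
                            subject="subject", padjust="bonf")
print(posthoc)
```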

Error proportion

As can be seen in Figure 4, participants made more errors with LSF faces. A significant main effect of Stimulus was obtained, F(2, 32) = 11.6, p = .001, ηp² = .42. Paired comparisons showed that performance for LSF faces differed from both unfiltered and HSF faces (ps = .03 and .002, respectively). Neither participant Sex nor the participant Sex x Stimulus interaction was significant (ps > .05).

[FIGURE 4 OMITTED]

[FIGURE 5 OMITTED]

RT

In the present experiment, 2.66% of the data were eliminated for falling outside the specified range. Figure 5 shows that, compared to unfiltered faces, RTs of correct responses increased for LSF faces but were similar for HSF faces. Consistent with these impressions, statistical analyses showed a significant effect of the Stimulus factor, F(2, 32) = 25.6, p < .001, ηp² = .61, with significant differences between LSF and both unfiltered and HSF faces (p < .001). A marginally significant effect of participant Sex was found, F(1, 16) = 4.4, p = .052, ηp² = .21, with faster RTs for female than for male participants (M = 730 and 936 ms, respectively, SE = 69.5).

Discussion

The results of Experiment 2 replicated almost exactly those found in the expression task of Experiment 1. Contrary to our expectations, sequential presentation of the neutral and expressive version of the faces did not alter the effects of SF content. Indeed, sequential presentation did not eliminate the disadvantage of LSF faces. If anything, a stronger impairment of performance was observed with these faces. While in Experiment 1 only RT was affected, in the present experiment both accuracy and RT were impaired in this filtering condition.

It can be concluded from the results of Experiment 2 that the dynamic information provided by sequential presentation, with a rapid transition from the neutral to the expressive face, does not alter the effects of SF content. The effects of SF filtering on expression categorization observed in the present experiments might then be generalizable to both static and dynamic displays, though this conclusion should await direct confirmation from studies with faces showing real motion. If the present results are compared to those of the expression task of Experiment 1, it can be said that, contrary to the prediction that the sequential procedure should facilitate expression recognition from LSF faces, more errors and slower responses were now observed in this condition. This result shows again that fine scale information from the HSF band plays a determinant role in the identification of facial expressions of emotion.

EXPERIMENT 3

In Experiment 1, evidence was found that supported the role of both LSF and HSF information in gender discrimination. The finding that removal of LSF information (that is, HSF faces) produced a greater increase in error proportion is consistent with the suggestion that diagnostic information for gender discrimination is mainly contained in the low frequency band (Deruelle & Fagot, 2005; Goffaux et al., 2003). However, response speed data showed slower RTs precisely with LSF faces and no differences between HSF and unfiltered faces, suggesting an additional role for fine scale information in this task. We reasoned that this might reflect a stronger reliance on fine detail information present in the HSF band, due to the difficulty involved in discriminating the gender of expressive faces. As more efficient performance with HSF than with LSF faces in gender categorization has not been previously reported, this possibility seemed worth exploring. To test the possibility that the results obtained in the gender task of Experiment 1 were specific to expressive faces, in the present experiment expressive and neutral faces of both sexes were used and presented under the same filtering conditions as in Experiment 1. In this way, the specific role of information from different SF bands in sex classification, and possible variations between expressive and non-expressive faces, can be clearly assessed.

Method

Participants

Twenty-two psychology students (14 females, 8 males; 20 right-handed), aged 18-41 (mean = 23), from the Universidad Complutense (Madrid, Spain), participated in the experiment in exchange for course credits.

Apparatus and stimuli

Materials and procedure were similar to those of the previous experiments. A new set of 32 faces was added to the set used in Experiment 1. The new stimuli were 16 female and 16 male faces from the KDEF collection, showing a neutral expression. The participants were instructed to classify each face as to its gender. The experimental session was divided into three blocks of 64 trials each. Each face appeared once in each of the filtering conditions (unfiltered, HSF and LSF filtered).

Results

Error proportion

Error proportion and RT results were submitted to separate 3 x 2 x 2 ANOVAs with Stimulus and Expression (expressive or neutral) as repeated-measures factors and participant Sex as the between-subjects factor.

Figure 6 shows the results for the error proportion measure of Experiment 3. Significant main effects were found of Stimulus, F(2, 40) = 51, p < .001, ηp² = .72, and of participant Sex, F(1, 20) = 4.7, p < .05, ηp² = .19. Significant interactions were found of Stimulus x Expression, F(2, 40) = 4, p < .05 (Greenhouse-Geisser corrected), ηp² = .16, and of participant Sex x Stimulus, F(2, 40) = 3.45, p < .05, ηp² = .15. Analysis of the Stimulus x Expression interaction showed significant differences for all paired comparisons in the case of neutral faces (ps < .001). For expressive faces, paired comparisons showed that error proportion for unfiltered faces differed from both HSF and LSF faces (ps < .001), but that HSF and LSF faces did not differ. As to the Stimulus x participant Sex interaction, male participants showed worse performance than female participants with HSF faces (M = .34 and .25, SE = .02 and .01, for male and female participants, respectively).

[FIGURE 6 OMITTED]

[FIGURE 7 OMITTED]

RT

No response fell outside the specified range. The results corresponding to the RT measure are shown in Figure 7. The only significant effect for this measure was the main effect of Stimulus, F(2, 40) = 16.7, p < .001, ηp² = .45. Faster responses were given to unfiltered and HSF faces than to LSF faces (p < .005).

Discussion

The results of Experiment 3 showed that, compared to unfiltered faces, gender was recognized less accurately from faces where HSF or LSF information had been filtered out. This was observed with both expressive and non-expressive (neutral) faces. However, there was a difference between expressive and neutral faces, in that significantly higher error proportions for HSF than for LSF faces were only observed in the case of neutral faces. This was somewhat unexpected, as precisely this difference between HSF and LSF faces was found in the gender task of Experiment 1, where only expressive faces were used. Response speed data also showed an impairment of performance for both expressive and neutral LSF faces. This result does not support our interpretation that the slower RTs observed with LSF faces in the gender task of Experiment 1 reflected a stronger role of information from the HSF band in discriminating the gender of expressive faces. In fact, the results of the present experiment basically replicate those of the gender task of Experiment 1, as they show, first, a decrease in accuracy produced by the removal of LSF information (HSF faces) and, second, an increase in response times following the removal of HSF information (LSF faces). From these results it can be concluded that efficient gender categorization depends on the availability of information from both the HSF and LSF bands, and that this is so regardless of whether the face is emotionally expressive or not.

General Discussion

The experiments reported in this paper explored the effects of SF filtering on classification of the gender and expression of faces. Two main results were obtained. First, a result common to both tasks was that, compared to unfiltered faces, a substantial impairment of performance was always found with LSF faces, from which information in the high spatial frequency band had been removed. This effect was observed on gender classification of expressive and neutral faces (Experiments 1 and 3, respectively) with both speed and accuracy (error proportion) measures. In the expression task, impaired performance with LSF faces was manifested only in response speed in Experiment 1, and in both accuracy and speed in Experiment 2. A second result of the present experiments is that removing low spatial frequency information from the images (HSF faces) had different effects on each task. In the gender task, HSF faces tended to produce the highest proportion of errors. However, no drop in accuracy or increase in RT for HSF faces was observed in the expression task. This was so when expressive faces were presented in isolation (Experiment 1) and also when the neutral and expressive versions of each face were presented in rapid succession (Experiment 2).

The results from the experiments reported here show both common and specific effects of SF content on gender and expression classification of faces. A common role for information contained in the high SF band is suggested by the decrement in speed and accuracy observed in both tasks with low-pass filtered faces. The effects of SF content on face classification have been interpreted in terms of the different diagnostic value that information contained in each SF band has for different face classification tasks (Morrison & Schyns, 2001). In this sense, our results indicate that the high SF band contains information of diagnostic value for both gender and expression discrimination. However, our results also suggest that this value might be higher for expression discrimination. This conclusion is consistent with the results of previous studies with hybrid stimuli, which have reported a bias for the HSF band in expression tasks (Deruelle & Fagot, 2005; Deruelle, Rondan, Salle-Collemiche, Bastard-Rosset & Da Fonseca, 2008). However, our results contrast with those obtained by Schyns and Oliva (1999) with hybrid faces. Though these authors found a HSF bias when participants had to discriminate between expressive and non-expressive faces, they found a LSF bias when participants were asked to discriminate between specific emotional expressions (happy, angry or neutral). One possible reason for this discrepancy is that while the faces were exposed for a very short time (50 ms) in the study of Schyns and Oliva (1999), longer durations were used both in our experiments and in those of Deruelle and Fagot (2005) and Deruelle et al. (2008). It is possible that with short exposure times specific expressions can be more easily perceived on the basis of configural cues (LSF band) than of fine detail information (HSF band), and that the opposite is true for longer exposure times.

The results from the gender task of Experiments 1 and 3 showed that accuracy was most impaired with high-pass filtered stimuli. More errors to HSF than to unfiltered faces were observed with both expressive and neutral faces (Experiment 3). Moreover, error proportion was higher for HSF than for LSF faces, with the exception of the expressive face condition of Experiment 3, where similar accuracy was observed for both stimulus conditions. Response speed, however, did not decrease significantly for HSF faces. With this measure, a significant impairment of performance was only found for LSF faces. This discrepancy between speed and accuracy results is, of course, problematic for the interpretation of our results in terms of the different diagnostic value of LSF and HSF information for gender and expression classification. An alternative interpretation can be proposed in terms of a speed-accuracy trade-off. This explanation seems plausible in the case of the gender task of Experiment 1 and in the expressive condition of Experiment 3, where the increase in response times to LSF faces was accompanied by a decrease in error proportion, compared to HSF faces. However, in the non-expressive condition of Experiment 3, an increase in response times to LSF faces was observed without a significant variation in accuracy. Thus, a speed-accuracy trade-off does not seem totally consistent with the results of our gender task. In any case, we have to recognize that the strength of our interpretation in terms of variations in the use of information from different frequency bands is diminished by the discrepancies between speed and accuracy measures observed in our experiments. The safest conclusion that can be drawn from our results and those of prior studies is that efficient gender classification probably relies on a combination of information from the HSF and LSF bands, with a possible bias for the LSF band.

The results from the expression classification task suggest that information from the LSF band does not have a determinant role in explicit classification of emotional expression. This does not mean, of course, that expression cannot be identified from LSF stimuli. In fact, in the expression task of Experiment 1 participants classified happy and angry faces with high accuracy in all filtering conditions. But while low frequencies can be filtered out without appreciable effects on the accuracy or speed of expression classification, removal of high frequencies always results in slower recognition. Again, caution is required when drawing conclusions regarding the role of the LSF band in expression classification, given the inconsistency between speed and accuracy data observed in the expression condition of Experiment 1. In this case, though a significant increase in response time with LSF faces was observed, accuracy was similar in all three stimulus conditions. However, in Experiment 2, with sequentially presented faces, the slowing of responses to LSF faces was accompanied by a decrease in accuracy. This last finding is an important observation, because it shows that even when the participant is allowed to perceive the change from neutral to expressive, this does not compensate for the absence of fine detail information in LSF faces.

It is interesting to consider the finding that removal of LSF information did not influence the speed or accuracy of expression classification in relation to previous results showing a dissociable role for the LSF and HSF bands in implicit and explicit responses to emotional expressions. Vuilleumier and colleagues (e.g., Vuilleumier, Armony, Driver & Dolan, 2003) have shown that the LSF band drives neural responses to fearful faces in the amygdala under incidental processing conditions. However, these authors also found that responses in areas of the visual cortex specifically involved in face processing, such as the fusiform gyrus or fusiform face area, on which detailed and conscious analysis of faces is thought to depend, were predominantly influenced by HSF information. In fact, behavioral results reported by Vuilleumier et al. (2003) showed that explicit ratings of emotional intensity were higher for HSF faces. These results are thus consistent with our conclusion that information from the HSF band is critical for expression discrimination.

The impairment of performance with LSF faces and the equivalent efficiency observed with HSF and unfiltered faces in the expression task suggest that explicit expression categorization requires processing of local features. This conclusion is consistent with the role that changes in individual facial features, corresponding to different action units controlled by specific facial muscles, have in expression recognition (Ekman & Friesen, 1978; Ellison & Massaro, 1997). Our results also suggest that fine detail information provided by the HSF band has an additional role in gender classification, as an increase in response times for LSF faces was also observed in this task. This is consistent with previous evidence showing the diagnostic value that some local features and spatial relations, especially in the eye region of the face, have for gender discrimination (e.g., Brown & Perrett, 1993).

The results of the present experiments are partially consistent with previous evidence showing flexible use of spatial scale information in different face classification tasks. Specifically, our results are consistent with the different spatial frequency biases reported in previous studies for gender and expression classification. These results support some form of independence between gender and expression, insofar as they are compatible with the idea that there are cues of different diagnostic value for the discrimination of these facial dimensions. On this basis, it might be concluded that gender and expression categorization rely on a set of diagnostic cues that are partially overlapping and partially independent. In this sense, a strict segregation between the processing of face gender and expression, such as was proposed in the original model of Bruce and Young (1986), cannot be maintained.

References

Aguado, L., Garcia-Gutierrez, A., & Serrano-Pedraza, I. (2009). Symmetrical interaction of sex and expression in face classification tasks. Attention, Perception & Psychophysics, 71, 9-25.

Arcuri, L., Castelli, S., Boca, F., Lorenzi-Cioldi, F., & Dafflon, A. (2001). Fuzzy gender categories: How emotional expression influences typicality. Swiss Journal of Psychology, 60, 179-191.

Atkinson, A. P., Tipples, J., Burt, D. M., & Young, A. W. (2005). Asymmetric interference between sex and emotion in face perception. Perception & Psychophysics, 67, 1199-1213.

Bachmann, T. (1991). Identification of spatially quantised tachistoscopic images of faces: How many pixels does it take to carry identity? European Journal of Cognitive Psychology, 3, 85-103.

Brown, E., & Perrett, D. I. (1993). What gives a face its gender? Perception, 22, 829-840.

Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305-327.

Calder, A. J., & Young, A. W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience, 6, 641-651.

Costen, N. P., Parker, D. M., & Craw, I. (1996). Effects of high-pass and low-pass spatial filtering on face identification. Perception & Psychophysics, 58, 602-612.

Deruelle, C., & Fagot, J. (2005). Categorizing facial identities, emotions, and genders: Attention to high- and low-spatial frequencies by children and adults. Journal of Experimental Child Psychology, 90, 172-184.

Deruelle, C., Rondan, C., Salle-Collemiche, X., Bastard-Rosset, D., & Da Fonseca, D. (2008). Attention to low- and high-spatial frequencies in categorizing facial identities, emotions and gender in children with autism. Brain and Cognition, 66, 115-123.

Ekman, P., & Friesen, W. V. (1978). Facial action coding system. Palo Alto: Consulting Psychologists Press.

Ellison, J. W., & Massaro, D. W. (1997). Featural evaluation, integration, and judgment of facial affect. Journal of Experimental Psychology: Human Perception and Performance, 23, 213- 226.

Fiorentini, A., Maffei, L., & Sandini, G. (1983). The role of high spatial frequencies in face perception. Perception, 12, 195-201.

Goffaux, V., Jemel, B., Rossion, B., & Schyns, P. (2003). ERP evidence for task modulations on face perceptual processing at different spatial scales. Cognitive Science, 27, 313-325.

Gonzalez, R. C., & Wintz, P. (1987). Digital image processing (2nd ed.). Reading, MA: Addison-Wesley.

Hall, J. A. (1978). Gender effects in decoding nonverbal cues. Psychological Bulletin, 85, 845-857.

Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223-233.

Holmes, A., Green, S., & Vuilleumier, P. (2005). The involvement of distinct visual channels in rapid attention towards fearful facial expressions. Cognition & Emotion, 19, 899-922.

Le Gal, P. M., & Bruce, V. (2002). Evaluating the independence of sex and expression in judgements of faces. Perception & Psychophysics, 64, 230-243.

Lundqvist, D., & Litton, J. E. (1998). The Averaged Karolinska Directed Emotional Faces - AKDEF [CD-ROM]. Stockholm: Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet. ISBN 91-630-7164-9.

Morrison, D. J., & Schyns, P. G. (2001). Usage of spatial scales for the categorization of faces, objects, and scenes. Psychonomic Bulletin & Review, 8, 454-469.

Nasanen, R. (1999). Spatial frequency bandwidth used in the recognition of facial images. Vision Research, 39, 3824-3833.

O'Toole, A. J., Roark, D. A., & Abdi, H. (2002). Recognizing moving faces: A psychological and neural synthesis. Trends in Cognitive Sciences, 6, 261-266.

Penton-Voak, I., Allen, T., Morrison, E., Gralewski, L., & Campbell, N. (2007). Performance on a face perception task is associated with empathy quotient scores, but not systemizing scores or participant sex. Personality and Individual Differences, 43, 2229-2236.

Rotter, N. G., & Rotter, G. S. (1988). Sex differences in the encoding and decoding of negative facial emotions. Journal of Nonverbal Behavior, 12, 139-148.

Ruiz-Soler, M., & Beltran, F. (2006). Face perception: An integrative review of the role of spatial frequencies. Psychological Research, 70, 273-292.

Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., & Matsumura, M. (2004). Enhanced neural activity in response to dynamic facial expressions of emotion: An fMRI study. Cognitive Brain Research, 20, 81-91.

Schyns, P., & Oliva, A. (1999). Dr. Angry and Mr. Smile: When categorization flexibly modifies the perception of faces in rapid visual presentations. Cognition, 69, 243-265.

Sierra-Vazquez, V., Serrano-Pedraza, I., & Luna, D. (2006). The effect of spatial-frequency filtering on the visual processing of global structure. Perception, 35, 1583-1609.

Stromeyer, C. F., & Julesz, B. (1972). Spatial-frequency masking in vision: critical bands and spread of masking. Journal of the Optical Society of America, 62, 1221-1232.

Vuilleumier, P., Armony, J., Driver, J., & Dolan, R. (2003). Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nature Neuroscience, 6, 624-631.

Yoshikawa, S., & Sato, W. (2008). Dynamic facial expressions of emotion induce representational momentum. Cognitive, Affective, & Behavioral Neuroscience, 8, 25-31.

Received January 21, 2009

Revision received October 21, 2009

Accepted November 10, 2009

Luis Aguado (1), Ignacio Serrano-Pedraza (2), Sonia Rodriguez (1), and Francisco J. Roman (1)

(1) Universidad Complutense (Spain)

(2) Newcastle University (UK)

This work was supported by Project SEJ2006-01576/PSIC from the Spanish Ministerio de Ciencia y Tecnologia. Correspondence concerning this article should be addressed to Luis Aguado, Facultad de Psicologia, Campus de Somosaguas, 28223 Madrid (Spain). Phone: +34-913943161. E-mail: laguado@psi.ucm.es