Naming faces: a multidisciplinary and integrated review.

The ability to associate faces and names is of unquestionable social relevance to humans. Knowing a person's name makes it possible to refer to that person in order to start an interaction, or to refer to people who are not present. For this reason, it is not surprising that face identification and naming have been a common area of study from a wide range of perspectives, including cognitive psychology and several branches of cognitive neuroscience. The aim of this review is to present an integrated view of cognitive models and neuroscientific results in order to explain how, where and when the different processes involved in face naming take place.

In the first section of this paper, we describe the assumptions of the more recent models developed to explain face naming, and provide a detailed explanation of the distinct features of each process involved. From our perspective, any theoretical model aimed at explaining such complex cognitive processes must take into account how the brain works. Therefore, the second section of this article reviews the data from neuroscientific studies that support the assumptions of the models explained in the first section, as well as neuroanatomical models of face processing. The third section presents an integrated view of the cognitive models and the data provided by electromagnetic studies (when) and functional neuroimaging studies (where), in which the different processes involved in naming faces are related to brain activity. Finally, we review the aspects that call for further research.

The face naming process: Cognitive models

Naming faces is a complex cognitive experience that involves several processes, including perception, memory retrieval and linguistic processing. In this sense, existing models have drawn on two perspectives in cognitive psychology: face processing and psycholinguistics.

In 1986, Bruce and Young proposed the first functional model of face recognition and naming, which considered a series of stages in face processing. Bruce and Young proposed four stages following presentation of the face: (a) structural encoding, i.e. the construction of a visual percept of the face; (b) comparison of this visual percept with representations of faces stored in the face recognition units (FRUs), from which it would be possible to (c) access the person identity nodes (PINs) containing semantic information about that person, which would make it possible to (d) access the name code, i.e. the lexical unit corresponding to that person's name.
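
These four stages can be read as a simple serial pipeline. The following sketch is purely illustrative (the store names, data structures and entries are hypothetical, not Bruce and Young's own notation); it only makes the order of the stages explicit.

```python
from typing import Optional

def structural_encoding(face_image: str) -> dict:
    """(a) Construct a visual percept (structural code) from the face image."""
    return {"percept": face_image}

def match_fru(percept: dict, known_faces: dict) -> Optional[str]:
    """(b) Compare the percept with stored face representations (FRUs)."""
    return known_faces.get(percept["percept"])  # person identifier, or None if unfamiliar

def access_pin(person_id: str, identities: dict) -> dict:
    """(c) Activate the person identity node (PIN), giving access to person knowledge."""
    return identities[person_id]

def access_name(pin: dict) -> str:
    """(d) Retrieve the name code (lexical unit) for that person."""
    return pin["name"]

# Hypothetical toy entries:
known_faces = {"photo_42": "person_1"}
identities = {"person_1": {"occupation": "physicist", "name": "Marie Curie"}}

person_id = match_fru(structural_encoding("photo_42"), known_faces)
print(access_name(access_pin(person_id, identities)) if person_id else "unfamiliar face")
```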

This cognitive model has had a major influence on the scientific literature regarding face identification, and continues to be the main reference in face processing studies. However, behavioural studies revealed that it does not adequately explain access to person-specific semantic information or face naming. For example, Bruce and Young's model does not explain why a stimulus must be presented in the same sensory modality in order to produce repetition priming (an advantage in processing a stimulus when it has been presented recently), whereas semantic priming (an advantage in processing a stimulus when a semantically related stimulus has been presented recently) may be produced by using semantically related stimuli in different sensory modalities (for a review of these data, see Valentine, Brennen, & Bredart, 1996).

Therefore, some modifications have been made, by adapting the original model to interactive activation and competition (IAC) models, and by focusing more specifically on the linguistic processes involved in face naming (Burton, Bruce, & Johnston, 1990; Valentine et al., 1996). As shown in Figure 1, the more recent cognitive models of face naming (Ellis & Lewis, 2001; Valentine et al., 1996) assume that: a) the PINs themselves do not contain semantic information, but are token markers that must be activated in order to access the person-specific information, and b) the different units in each store (PINs, semantic units, lexical units ...) send excitatory activation to the units in other stores that they are connected to, but inhibit the units in the same store. In the IAC models, the biographical semantic information is divided into substores (occupation, nationality, etc.), preventing the inhibitory connections within the store from affecting the different features of the same person. According to these models, once the PINs are activated, it is possible to simultaneously access the different person-specific information stores, including the semantic and linguistic (lexical and phonological) information.
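
The two assumptions above (excitatory between-store connections, inhibitory within-store connections) can be made concrete with a minimal update rule. The following sketch illustrates IAC-style dynamics under assumed parameters; it is not the published implementation of Burton et al. (1990), and all unit names, pool names and parameter values are hypothetical.

```python
def iac_step(acts, pools, links, alpha=0.1, gamma=0.05, decay=0.1,
             rest=0.0, amax=1.0, amin=-0.2):
    """One update of unit activations in an IAC-style network.

    acts  : dict unit -> current activation
    pools : dict unit -> store the unit belongs to (e.g. 'FRU', 'PIN', 'semantic', 'lexical')
    links : set of frozenset({u, v}) excitatory between-store connections
    """
    new = {}
    for u in acts:
        # Excitation arrives only from active, connected units in *other* stores.
        exc = sum(acts[v] for v in acts if frozenset({u, v}) in links and acts[v] > 0)
        # Inhibition arrives from active units in the *same* store.
        inh = sum(acts[v] for v in acts if v != u and pools[v] == pools[u] and acts[v] > 0)
        net = alpha * exc - gamma * inh
        # Push activation toward its maximum (positive net input) or minimum (negative),
        # and let it decay toward the resting level.
        delta = net * (amax - acts[u]) if net > 0 else net * (acts[u] - amin)
        new[u] = acts[u] + delta - decay * (acts[u] - rest)
    return new

# Toy network: one FRU connected to one PIN, which connects to a semantic and a lexical unit.
pools = {"fru_A": "FRU", "pin_A": "PIN", "occupation_A": "semantic", "name_A": "lexical"}
links = {frozenset(p) for p in [("fru_A", "pin_A"), ("pin_A", "occupation_A"), ("pin_A", "name_A")]}
acts = {u: 0.0 for u in pools}
acts["fru_A"] = 1.0  # seeing the face activates its face recognition unit

for _ in range(20):
    acts = iac_step(acts, pools, links)
print({u: round(a, 3) for u, a in acts.items()})  # activation spreads FRU -> PIN -> semantic/lexical
```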

As regards access to linguistic information (known as 'formulation' in language production models), the face naming models draw on the main psycholinguistic models, which posit two stages. The first stage, lexical selection, would consist of the selection of lemmas (Levelt, 2001), which are abstract, multimodal representations of words (applicable to both written and spoken language) that contain the syntactic information about the words, but not the phonological information. The second stage, lexical form encoding, would consist of the activation of lexemes, which are phonological representations of those lemmas.

However, the processes involved in formulation appear to differ between common and proper names, with access to the latter being more difficult. Some authors explain this effect in terms of the uniqueness of proper names with respect to common names, which is related to the fact that common names have more semantic connections (see Valentine et al., 1996). In addition, some models (Burke, MacKay, Worthley, & Wade, 1991; Valentine et al., 1996) explain this greater difficulty as being caused by differences in the lexical storage of common and proper names. In the store of common names, the lexical nodes (lemmas) directly activate the phonological nodes (lexemes). However, in the store of proper names, a lexical node, the proper name phrase (a representation of the complete name), must be activated beforehand; this node then sends activation to the lexical nodes of the first name and surname separately, and finally, these lemmas send activation to the phonological nodes.
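
The extra step for proper names can be illustrated with a minimal data representation. This is a sketch only, with hypothetical entries and simplified phonological codes; it is not taken from Burke et al. (1991) or Valentine et al. (1996), but simply makes the two retrieval paths explicit.

```python
# Common name: the lemma activates its phonological nodes directly.
common_name = {
    "lemma": "teacher",
    "lexemes": ["teach", "er"],          # simplified phonological segments
}

# Proper name: a whole-name node (proper name phrase) must be activated first,
# which then activates the first-name and surname lemmas, and only then the lexemes.
proper_name = {
    "proper_name_phrase": "Marie Curie",
    "lemmas": ["Marie", "Curie"],
    "lexemes": {"Marie": ["Ma", "rie"], "Curie": ["Cu", "rie"]},
}

def retrieval_path(entry: dict) -> list:
    """Ordered node types traversed during formulation for the given entry."""
    if "proper_name_phrase" in entry:
        return ["proper name phrase", "lemmas (first name + surname)", "lexemes"]
    return ["lemma", "lexemes"]

print(retrieval_path(common_name))   # ['lemma', 'lexemes']
print(retrieval_path(proper_name))   # ['proper name phrase', 'lemmas (first name + surname)', 'lexemes']
```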

[FIGURE 1 OMITTED]

As mentioned before, the activation of the lemmas would be followed by access to the phonological nodes, that is, the lexemes. The activation of the lexemes would bring about the creation of a phonetic plan and the consequent articulation of the name of the person (Levelt, 2001; Valentine et al., 1996), thereby completing the face-naming process.

Multidisciplinary data supporting the cognitive models

The different branches of cognitive neuroscience have provided evidence to confirm the existence and independence of the different stores and processes involved in face naming, according to the cognitive models explained above. As regards the first stages, studies exploring the disorder known as prosopagnosia have provided major evidence supporting the models. Patients suffering from prosopagnosia are not able to recognise famous or familiar people (or even themselves, in extreme cases) when they see their faces, even though they are able to identify that they are looking at a face, and can recognise the person through the voice, or by the facial expression (indicating that there are two paths involved in face processing, one which is identity-related, and another related to facial expression).

The literature on this disorder describes patients with difficulties in analyzing unknown faces, who also do not achieve a sensation of familiarity from a known face. This disorder has been referred to as apperceptive prosopagnosia (De Renzi, Faglioni, Grossi, & Nichelli, 1991; Lopera, 2000), and is thought to be the result of a failure in the structural encoding of the face. However, other patients are capable of performing a correct structural analysis of the face (able to identify race, age or sex from facial features), and have unimpaired memory of people they know (i.e., the information about these people is preserved), but are incapable of recognizing them by seeing their faces. This disorder has been called associative prosopagnosia, or prosopamnesia (De Renzi et al., 1991; Lopera, 2000), and is probably a consequence of a failure in the comparison performed at the FRU level (i.e., a failure in comparing the visual percept with the stored representations of faces).

In the same way as studies of prosopagnosic patients have provided evidence supporting the independence of the face recognition processes, other disorders have revealed that face identification is an isolated stage. Patients with face semantic amnesia (Lopera, 2000) are capable of creating a visual percept of a familiar face, describing the features of the face, and even reporting a feeling of familiarity; however, they are not capable of accessing the identity of the face, even with contextual cues or when listening to the voice.

As already mentioned, according to the more recent face naming models, person-specific semantic and lexical information are stored separately. In line with this hypothesis, Seidenberg et al. (2002) found that patients with unilateral epilepsy in the right temporal pole had deficits in face recognition, semantic memory and naming, while patients with unilateral epilepsy in the left temporal pole only had deficits in face naming. Tsukiura et al. (2002) also found that patients with language-dominant temporal lobectomy showed impaired ability to retrieve people's names, whereas patients with language-nondominant temporal lobectomy had difficulty in associating newly-learned faces and names. These studies indicate that the right temporal pole is indispensable for facial identification and accessing semantic information, while access to the name requires the participation of both anterior temporal lobes.

Face processing studies have also shown that phonological information appears to have its own neural substrates. Huddy, Schweinberger, Jentzsch and Burton (2003) consecutively presented two photographs of faces to participants in a task that involved deciding whether the people shown had the same occupation (semantic comparison) or whether their names had the same number of syllables (phonological comparison). The waveforms of the event-related potentials (ERPs) revealed topographical differences in the N400 amplitude, a component that appears to be associated with the processing of semantic incongruity (Kutas & Federmeier, 2000; Kutas & Hillyard, 1980) and the retrieval of semantic memory (Herzmann & Sommer, 2007; Kutas & Federmeier, 2000). The authors reported a symmetrical posterior topography in the semantic comparison, and a left anterior topography in the phonological comparison, which supports the hypothesis that semantic and phonological retrieval have distinct neural bases.

On the basis of results of cognitive neuroscience studies, some authors have proposed neuroanatomical models of face processing (Damasio, Tranel, Grabowski, Adolphs, & Damasio, 2004; Gobbini & Haxby, 2007; Ishai, 2008). In general terms, these authors all agree in considering that face processing (including face identification and naming, as well as the detection of facial expression and emotion, which is outside the scope of this review) is performed by a brain network. This network includes a visual processing core system formed by the inferior occipital areas, the fusiform face area (for invariant face features) and the posterior superior temporal sulcus (for variable face features, such as eye gaze). According to Gobbini and Haxby (2007), there is an extended network involved in the retrieval of information about the person, including the posterior superior temporal sulcus and the temporo-parietal junction (personal traits, intentions ...), the precuneus (episodic memory retrieval) and anterior temporal areas (biographical information, including the name).

Face-naming cognitive processes: When and where in the brain

In this section, we propose an integrated view of the results of neuropsychological, neuroimaging and ERP studies, and the face naming models discussed in the previous sections (see Fig. 2). Few reports have been published that include all the cognitive processes involved in face naming, although there are more studies that evaluate each isolated process. It is important to stress that describing isolated processes is not the same as affirming the existence of a serial sequence; as previously mentioned, some of the processes appear to follow a parallel progression, such as access to semantic and lexical information (as reported in electrophysiological studies, see e.g., Abdel Rahman, van Turennout, & Levelt, 2003).

Face structural encoding

At about 100 ms after presentation of the face, the perception of domain-general pictorial codes is related to the arrival of the visual information at the striate and peristriate visual cortices (Allison, Puce, Spencer, & McCarthy, 1999); the electrophysiological correlate is the ERP P1 (or P100) component (Di Russo, Martinez, Sereno, Pitzalis, & Hillyard, 2001), a positive wave with its maximum amplitude at occipital electrodes.

The first ERP component to be specifically related to face visual processing was the N170 (Bentin, Allison, Puce, Perez, & McCarthy, 1996), a negative wave with maximum amplitude at occipito-temporal electrodes and a mean latency of about 170 ms. Some studies have reported that face stimuli produce a larger N170 amplitude than object stimuli (Bentin et al., 1996); furthermore, configurational changes, such as inversion, affect the N170 amplitude for faces but not for objects (Bentin et al., 1996). However, the N170 amplitude is not affected by facial features related to gender (Mouchetant-Rostaing, Giard, Bentin, Aguera, & Pernier, 2000) or race (Caldara et al., 2003), or by non-perceptual characteristics such as the familiarity of the face (Bentin & Deouell, 2000). For this reason, the N170 component has been related to the structural encoding of faces, i.e., the extraction of the visual features of the face and the construction of a representation of it (Bentin et al., 1996).

[FIGURE 2 OMITTED]

The main neural source of the N170 appears to be the fusiform gyrus (McCarthy, Puce, Belger, & Allison, 1999; Allison et al., 1999; Itier & Taylor, 2004). In fact, neuroimaging studies in healthy participants, as well as intracranial recordings in epileptic patients, have shown greater activation in the bilateral fusiform gyrus (Allison et al., 1999; Barbeau et al., 2008; Gorno-Tempini et al., 1998; McCarthy et al., 1999) in relation to face perception; as a result, some authors have named this area the fusiform face area (Allison et al., 1999; McCarthy et al., 1999), although other authors have indicated that this area also becomes activated, to a lesser extent, in response to other stimuli such as animals or objects (Haxby et al., 2001). Gauthier, Behrmann and Tarr (1999) consider that the fusiform face area is activated by objects that the participant perceives as distinctive, and therefore by faces.

Face Recognition

Face repetition induces modulations in the ERP waveforms between 200 and 300 ms, with maximum amplitudes at anterior electrode sites, such as the N240 (Smith & Halgren, 1987) or the 'early repetition effect' (Pfutze, Sommer, & Schweinberger, 2002; Schweinberger, Pfutze, & Sommer, 1995), or at posterior electrode sites, such as the N250r (e.g., Bindemann, Burton, Leuthold, & Schweinberger, 2008; Herzmann, Schweinberger, Sommer, & Jentzsch, 2004; Herzmann & Sommer, 2007; Schweinberger, Pickering, Jentzsch, Burton, & Kaufmann, 2002) or the visual memory potential (Begleiter, Porjesz, & Wang, 1995). Such modulations, which are probably different names for the same ERP component, show a smaller amplitude in response to unfamiliar than to famous faces, and a smaller amplitude in response to the latter than to the faces of people from the participant's environment (Herzmann et al., 2004); for this reason, they have been related to access to stored face representations, and therefore to the activation of FRUs.

Another component in this time range, P250, which is a positive wave with maximum amplitude at parietal-occipital sites, has been related to face recognition, as it has a larger amplitude with normal faces than with thatcherized faces (faces in which the eyes and mouth are inverted, which look relatively normal when inverted, but grotesque when the face is shown upright), in contrast to previous components such as N170 (Milivojevic, Clapp, Johnson, & Corballis, 2003).

Face recognition therefore occurs at about 250 ms after presentation of the stimulus. As regards the neural substrates of this process, several studies have shown that these modulations of the ERP waveforms may originate in the ventral temporal cortex, particularly in the fusiform gyrus (Eger, Schweinberger, Dolan, & Henson, 2005; Schweinberger et al., 2002), which is consistent with the role attributed to this region by other authors (Barbeau et al., 2008; Palermo & Rhodes, 2007). In addition, the medial temporal lobe appears to play a major role in this process, as shown by the greater activation in response to familiar than to unfamiliar faces revealed by intracranial recordings in this time interval (Barbeau et al., 2008).

Access to Person-Specific Semantic Information

According to the model of Valentine et al. (1996), once the identity of the person (that is, the PIN) has been accessed, it is possible to access the different person-specific information stores. From an empirical point of view, however, access to the PIN may only be assessed by measuring the access to semantic and lexical information.

Access to person-specific information occurs after recognition of the face; accordingly, the semantic search (as well as the lexical search) would start at about 250-300 ms.

Comparison of the ERPs between familiar and unfamiliar faces has also revealed differences in the N400 component interval. Smith and Halgren (1987) found a negative deflection, N445, with smaller amplitude in response to familiar faces than to unfamiliar faces; the authors suggested that this effect was related to a greater semantic processing with familiar rather than unfamiliar faces.

Face repetition and face semantic priming have been analyzed in some studies. The ERP waveforms showed a reduced amplitude (named the late repetition effect, which is thought to be a modulation of the N400 component) between 300 and 600 ms when the face was repeated; however, in contrast to the early repetition effect, the ERP waveforms also had a smaller amplitude when the face followed a semantically related face, e.g. the face of a person with the same profession as the target face (Pfutze et al., 2002; Schweinberger, 1996; Schweinberger et al., 1995). This is why the late repetition effect (and therefore the N400) has been taken as an index of the activation of knowledge about a person (Herzmann & Sommer, 2007; Neumann & Schweinberger, 2008).

Furthermore, in a face-naming task, Diaz, Lindin, Galdo-Alvarez, Facal and Juncos-Rabadan (2007) identified a positive wave between 450 and 550 ms, with maximum amplitude at posterior electrode sites, which was related to access to person-specific information, and consequently to access to the PINs, as this component did not differ between a successful naming condition and the tip-of-the-tongue state, a phenomenon characterized by a failure in name access while other person-specific information remains available (Burke et al., 1991).

In brief, these data indicate that person-specific semantic information is available from 300 ms to 600 ms after the face is presented, and consequently, in line with Bentin and Deouell's (2000) interpretation, the PINs have already been accessed in the N400 interval.

Several neuroimaging studies (e.g., Gorno-Tempini et al., 1998; Palermo & Rhodes, 2007) have reported the activation of bilateral anterior temporal areas during access to person-specific semantic information, although other studies attribute particular importance to the right hemisphere in semantic retrieval from faces (Tsukiura et al., 2002). Other brain regions that have been associated with the retrieval of person-specific semantic information are the posterior cingulate cortex and the angular gyrus (Gorno-Tempini et al., 1998), whereas dorsolateral prefrontal areas are thought to be involved in retrieving and maintaining the retrieved information in memory (Simons & Spiers, 2003; Tsukiura et al., 2002).

Access to the Lexical Information (Lemmas)

Lexical selection also takes place, in parallel with access to semantic information, between 300 and 600 ms. As expected, given the linguistic nature of this process, most studies refer to a preponderance of the left hemisphere at this stage. Neuroimaging studies have reported activation of the left supramarginal gyrus (Campanella et al., 2001) and the posterior cingulate cortex (Shah et al., 2001) in face-name association tasks. Also, as previously mentioned, neuropsychological and neuroimaging studies (Gorno-Tempini et al., 1998; Tsukiura et al., 2002; Tsukiura et al., 2006) have related the proper name stores to left anterior temporal regions. Other studies also propose that the Broca area (the posterior inferior frontal gyrus) may play a role in this process (Kemeny et al., 2006).

Access to Phonological Information (Lexemes)

Few studies have assessed the access to the lexemes store (to the phonological information) with faces as stimuli. In a comparison of semantic and phonological retrieval conditions, Huddy et al. (2003) found different topographies in an N400-like component, between 450 and 650 ms. Recently, Diaz et al. (2007) found differences between the ERP traces of a successful name access condition and the tip-of-the-tongue state (characterized by insufficient phonological activation) between 550 and 750 ms, an interval in which a positive parietal component was observed. Therefore, according to these results, phonological access may occur between 450 and 750 ms after the face is presented.

Retrieval of phonological information has been associated with activation of the left posterior superior temporal region, the Wernicke area (Indefrey & Levelt, 2004; Keller, Carpenter, & Just, 2001; Kemeny et al., 2006), as well as left inferior parietal areas (Keller et al., 2001), although some authors believe that both hemispheres are involved (Soros et al., 2006).

The Broca area appears to be involved in the retrieval of phonological information, probably in selecting the phonological features (Keller et al., 2001; Kemeny et al., 2006). Paulesu, Frith and Frackowiak (1996) described a possible verbal working memory circuit that includes a phonological store, in which the insula and the supramarginal gyrus are involved.

Creation of Phonetic Plan and Articulation

Once the phonological nodes (lexemes) are activated, a phonetic plan can be created and the motor response elicited to produce the name associated with the presented face (name articulation). A number of studies have indicated that several areas are involved in this process, including the primary motor (Blank, Scott, Murphy, Warburton, & Wise, 2002; Indefrey & Levelt, 2004) and somatosensory cortices (Damasio et al., 2004; Indefrey & Levelt, 2004). Other areas possibly involved in articulatory processing are the supplementary motor area (Blank et al., 2002; Indefrey & Levelt, 2004; Kemeny et al., 2006) and the pre-supplementary motor area (Blank et al., 2002), which are likely neural sources of the Bereitschaftspotential (readiness potential) recorded in articulatory ERP studies (Bujan, Lindin, & Diaz, 2009; Tarkka, 2001). The insula and the Broca area may also be involved in articulation (Kemeny et al., 2006; Soros et al., 2006).

Summary

In summary, after presentation of a face, the visual information is transmitted to the visual cortices in the occipital lobe. At about 100 ms, the perception of domain-general pictorial codes is related to the arrival of this information at the striate and peristriate visual cortices, reflected by the P1 component in the ERP waveforms. At about 170 ms, a structural representation of the face is constructed, a process that has been related to the ERP N170 component, for which the bilateral fusiform face area is the most likely neural source. Access to the FRUs, and consequently facial recognition, may occur about 250 ms after seeing a face (as reflected by several ERP modulations: N240, the early repetition effect, N250r, the visual memory potential), with involvement of ventral temporal areas and the medial temporal lobe in the process. Access to person-specific semantic information may take place between 300 ms and 600 ms, as reflected by N400 modulations and the late repetition effect, and has been related to activation of anterior temporal areas, the posterior cingulate cortex and the angular gyrus, while the dorsolateral prefrontal cortex may participate in the retrieval and maintenance of semantic information in memory. Lexical selection would take place in parallel and involve different regions in the left hemisphere, such as the supramarginal gyrus, the posterior cingulate cortex, and, especially, anterior temporal regions. The search for and retrieval of phonological information may take place between 450 and 750 ms after the presentation of a face, with involvement of the Wernicke area and a left inferior parietal region. The insula and the supramarginal gyrus may also be part of the phonological store, and the Broca area may play a role in the selection of lexemes. Finally, the creation of a phonetic plan and the name articulation may involve the primary motor and somatosensory cortices, the supplementary motor area and the pre-supplementary motor area, as well as the insula and the Broca area.
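
For quick reference, the time course summarized above can also be written as a compact data structure. The windows, components and areas below simply restate what is reported in this review (the final entry has no explicit window in the text); this is a reading aid, not new data.

```python
# Each entry: approximate time window, cognitive process, ERP correlate(s),
# and candidate brain areas, as summarized in the text above.
FACE_NAMING_TIMELINE = [
    ("~100 ms",    "domain-general pictorial perception",    "P1",
     ["striate/peristriate visual cortex"]),
    ("~170 ms",    "structural encoding of the face",        "N170",
     ["bilateral fusiform face area"]),
    ("200-300 ms", "face recognition (access to FRUs)",      "N240 / early repetition effect / N250r",
     ["ventral temporal cortex", "medial temporal lobe"]),
    ("300-600 ms", "person-specific semantic access (PINs)", "N400 / late repetition effect",
     ["anterior temporal areas", "posterior cingulate cortex", "angular gyrus",
      "dorsolateral prefrontal cortex"]),
    ("300-600 ms", "lexical selection (lemmas)",             None,
     ["left supramarginal gyrus", "posterior cingulate cortex",
      "left anterior temporal regions", "Broca area"]),
    ("450-750 ms", "phonological retrieval (lexemes)",       "N400-like / parietal positivity",
     ["Wernicke area", "left inferior parietal region", "insula",
      "supramarginal gyrus", "Broca area"]),
    (None,         "phonetic plan and articulation",         "readiness potential",
     ["primary motor cortex", "somatosensory cortex", "SMA", "pre-SMA",
      "insula", "Broca area"]),
]

for window, process, erp, areas in FACE_NAMING_TIMELINE:
    print(window, "|", process, "|", erp, "|", ", ".join(areas))
```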

Future lines of research and conclusions

Despite the behavioural and neurophysiological data supporting the main hypotheses of the cognitive models that attempt to explain the face naming process, there are still a number of controversial points that call for further research.

Firstly, it is still a matter of debate whether there are inhibitory connections between the different nodes within the same store. The IAC models consider that the connections between the different elements of the same store are inhibitory, whereas the main psycholinguistic language production models (Burke et al., 1991; Levelt, 2001) maintain that all connections between and within stores are excitatory. Computer simulations have not helped settle this matter, as simulations of both the IAC models (Burton et al., 1990; Valentine et al., 1996) and language production models (Levelt, 2001) have produced reaction times equivalent to those obtained in behavioural studies. As Pulvermuller (1999) pointed out, it is difficult to reconcile a model that does not contemplate inhibitory processes with what is known about brain functioning. Nevertheless, to our knowledge, there are no studies in which this question has been explored with neuroscientific methods.

Another area of debate is the existence of bidirectional connections between the lexical and phonological stores. Serial models (Levelt, 2001) consider that lemmas activate lexemes, but that lexemes cannot activate lemmas; however, the interactive activation models of language production (Dell & O'Seaghdha, 1992) consider that the stores have bidirectional connections. Unfortunately, it is difficult to obtain results that support either hypothesis. For example, there are two ways of explaining the phonological priming effect in tip-of-the-tongue states (it is possible to induce tip-of-the-tongue resolutions by using phonological cues; Burke et al., 1991; Diaz et al., 2007): according to the interactive models, the effect would be caused by an interaction between lemmas and lexemes, whereas according to the serial stage models, the effect would be caused by the activation of the hypoactivated lexemes, completing the activation that was initiated unidirectionally from the lemmas. Further research should be carried out in this field to clarify this issue.
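
The ambiguity can be made concrete with a toy calculation. The sketch below is purely illustrative, with arbitrary activation values and a hypothetical threshold; it is not a fitted model of either account, but shows why a phonological cue can resolve a tip-of-the-tongue state under both the serial and the interactive explanation, so that the behavioural effect alone does not decide between them.

```python
THRESHOLD = 1.0  # hypothetical activation needed for the lexeme to be retrieved

def serial_account(lemma_to_lexeme=0.6, cue_to_lexeme=0.5):
    """Serial view: the lemma sent only weak (hypoactivated) input to the lexeme;
    the phonological cue adds activation directly to the lexeme, completing the
    activation initiated unidirectionally from the lemma."""
    lexeme = lemma_to_lexeme + cue_to_lexeme
    return lexeme >= THRESHOLD

def interactive_account(lemma_to_lexeme=0.6, cue_to_lexeme=0.3, feedback_gain=0.5):
    """Interactive view: the cue excites the lexeme, the lexeme feeds activation
    back to the lemma, and the re-excited lemma boosts the lexeme again."""
    lexeme = lemma_to_lexeme + cue_to_lexeme
    lemma_boost = feedback_gain * lexeme        # lexeme -> lemma feedback
    lexeme += feedback_gain * lemma_boost       # lemma -> lexeme again
    return lexeme >= THRESHOLD

# Both accounts can push the lexeme over threshold after a phonological cue.
print(serial_account(), interactive_account())
```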

Acknowledgements

This work was financially supported by the Spanish Ministries of Educacion y Ciencia and of Ciencia e Innovacion (SEF2007-67964-C02-02), and by the Galician Conselleria de Innovacion e Industria (PGIDIT07PXIB211018PR).

Received: 18-8-08 * Accepted: 8-4-09

References

Abdel Rahman, R., van Turennout, M., & Levelt, W.J. (2003). Phonological encoding is not contingent on semantic feature retrieval: An electrophysiological study on object naming. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 850-860.

Allison, T., Puce, A., Spencer, D.D., & McCarthy, G. (1999). Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cerebral Cortex, 9, 415-430.

Barbeau, E.J., Taylor, M.J., Regis, J., Marquis, P., Chauvel, P., & Liegeois-Chauvel, C. (2008). Spatio temporal dynamics of face recognition. Cerebral Cortex, 18, 997-1009.

Begleiter, H., Porjesz, B., & Wang, W.Y. (1995). Event-related brain potentials differentiate priming and recognition to familiar and unfamiliar faces. Electroencephalography and Clinical Neurophysiology, 94, 41-49.

Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551-565.

Bentin, S., & Deouell, L.Y. (2000). Structural encoding and identification in face processing: ERP evidence for separate mechanisms. Cognitive Neuropsychology, 17, 35-54.

Bindemann, M., Burton, M., Leuthold, H., & Schweinberger, S.R. (2008). Brain potential correlates of face recognition: Geometric distortions and the N250r brain response to stimulus repetitions. Psychophysiology, 45, 535-544.

Blank, S., Scott, S., Murphy, K., Warburton, E., & Wise, R. (2002). Speech production: Wernicke, Broca and beyond. Brain, 125, 1829-1838.

Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305-327.

Bujan, A., Lindin, M., & Diaz, F. (2009). Movement-related potentials in a face naming task: Influence of the tip-of-the-tongue state. International Journal of Psychophysiology, 72, 235-245.

Burke, D.M., MacKay, D.G., Worthley, J.S., & Wade, E. (1991). On the tip of the tongue: What causes word finding failures in young and older adults? Journal of Memory and Language, 30, 542-579.

Burton, A.M., Bruce, V., & Johnston, R.A. (1990). Understanding face recognition with an interactive activation model. British Journal of Psychology, 81, 361-380.

Caldara, R., Thut, G., Servoir, P., Michel, C.M., Bovet, P., & Renault, B. (2003). Face versus non-face object perception and the 'other-race' effect: A spatio-temporal event-related potential study. Clinical Neurophysiology, 114, 515-528.

Campanella, S., Joassin, F., Rossion, B., De Volder, A., Bruyer, R., & Crommelinck, M. (2001). Association of the distinct visual representations of faces and names: A PET activation study. Neuroimage, 14, 873-882.

Damasio, H., Tranel, D., Grabowski, T., Adolphs, R., & Damasio, A.R. (2004). Neural systems behind word and concept retrieval. Cognition, 92, 179-229.

Dell, G.S., & O'Seaghdha, P.G. (1992). Stages of lexical access in language production. Cognition, 42, 287-314.

De Renzi, E., Faglioni, P., Grossi, D., & Nichelli, P. (1991). Apperceptive and associative forms of prosopagnosia. Cortex, 27, 213-221.

Diaz, F., Lindin, M., Galdo-Alvarez, S., Facal, D., & Juncos-Rabadan, O. (2007). An event-related potentials study of face identification and naming: The tip-of-the-tongue state. Psychophysiology, 44, 50-68.

Di Russo, F., Martinez, A., Sereno, M.I., Pitzalis, S., & Hillyard, S. (2001). Cortical sources of the early components of the visual evoked potential. Human Brain Mapping, 15, 95-111.

Eger, E., Schweinberger, S.R., Dolan, R.J., & Henson, R.N. (2005). Familiarity enhances invariance of face representations in human ventral visual cortex: fMRI evidence. Neuroimage, 26, 1128-1139.

Ellis, H.D., & Lewis, J.B. (2001). Capgras delusion: A window on face recognition. Trends in Cognitive Sciences, 5, 149-156.

Gauthier, I., Behrmann, M., & Tarr, M.J. (1999). Can face recognition really be dissociated from object recognition? Journal of Cognitive Neuroscience, 11, 349-370.

Gobbini, M.I., & Haxby, J.V. (2007). Neural systems for recognition of familiar faces. Neuropsychologia, 45, 32-41.

Gorno-Tempini, M.L., Price, C.J., Josephs, O., Vandenberghe, R., Cappa, S.F., Kapur, N., et al. (1998). The neural systems sustaining face and proper-name processing. Brain, 121, 2103-2118.

Haxby, J.V., Gobbini, M.I., Furey, M.L., Ishai, A., Schouten, J.L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425-2430.

Herzmann, G., Schweinberger, S.R., Sommer, W., & Jentzsch, I. (2004). What's special about personally familiar faces? A multimodal approach. Psychophysiology, 41, 688-701.

Herzmann, G., & Sommer, W. (2007). Memory-related ERP components for experimentally learned faces and names: Characteristics and parallel-test reliabilities. Psychophysiology, 44, 262-276.

Huddy, V., Schweinberger, S.R., Jentzsch, I., & Burton, A.M. (2003). Matching faces for semantic information and names: An event-related brain potentials study. Cognitive Brain Research, 17, 314-326.

Indefrey, P., & Levelt, W.J. (2004). The spatial and temporal signatures of word production components. Cognition, 92, 101-144.

Ishai, A. (2008). Let's face it: It's a cortical network. Neuroimage, 40, 415-419.

Itier, R.J., & Taylor, M.J. (2004). N170 or N1? Spatiotemporal differences between object and face processing using ERPs. Cerebral Cortex, 14, 132-142.

Keller, T., Carpenter, P.A., & Just, M.A. (2001). The neural bases of sentence comprehension: An fMRI examination of syntactic and lexical processing. Cerebral Cortex, 11, 223-237.

Kemeny, S., Xu, J., Park, G.H., Hosey, L.A., Wettig, C.M., & Braun, A.R. (2006). Temporal dissociation of early lexical access and articulation using a delayed naming task: An fMRI study. Cerebral Cortex, 16, 587-595.

Kutas, M., & Federmeier, K.D. (2000). Electrophysiology reveals semantic memory use in language comprehension. Trends in Cognitive Sciences, 4, 463-470.

Kutas, M., & Hillyard, S.A. (1980). Reading senseless sentences: Brain potentials reflect semantic anomaly. Science, 207, 203-205.

Levelt, W.J. (2001). Spoken word production: A theory of lexical access. Proceedings of the National Academy of Sciences of the United States of America, 98, 13464-13471.

Lopera, F. (2000). Procesamiento de caras: bases neurologicas, trastornos y evaluacion. Revista de Neurologia, 30, 486-490.

McCarthy, G., Puce, A., Belger, A., & Allison, T. (1999). Electrophysiological studies of human face perception. II: Response properties of face-specific potentials generated in occipitotemporal cortex. Cerebral Cortex, 9, 431-444.

Milivojevic, B., Clapp, W.C., Johnson, B.W., & Corballis, M.C. (2003). Turn that frown upside down: ERP effects of thatcherization of misorientated faces. Psychophysiology, 40, 967-978.

Mouchetant-Rostaing, Y., Giard, M.H., Bentin, S., Aguera, P.E., & Pernier, J. (2000). Neurophysiological correlates of face gender processing in humans. European Journal of Neuroscience, 12, 303-310.

Neumann, M.F., & Schweinberger, S.R. (2008). N250r and N400 correlates of immediate famous face repetition are independent of perceptual load. Brain Research, 1239, 181-190.

Palermo, R., & Rhodes, G. (2007). Are you always on my mind? A review of how face perception and attention interact. Neuropsychologia, 45, 75-92.

Paulesu, E., Frith, C.D., & Frackowiak, R.S. (1996). The neural correlates of the verbal component of working memory. Nature, 362, 342-345.

Pfutze, E.M., Sommer, W., & Schweinberger, S.R. (2002). Age-related slowing in face and name recognition: Evidence from event-related brain potentials. Psychology and Aging, 17, 140-160.

Pulvermuller, F. (1999). Lexical access as a brain mechanism. Behavioral and Brain Sciences, 22, 52-54.

Schweinberger, S.R. (1996). How Gorbachev primed Yeltsin: Analyses of associative priming in person recognition by means of reaction times and event-related brain potentials. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 1383-1407.

Schweinberger, S.R., Pfutze, E.M., & Sommer, W. (1995). Repetition priming and associative priming of face recognition: Evidence from event-related potentials. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 722-736.

Schweinberger, S.R., Pickering, E.C., Jentzsch, I., Burton, A.M., & Kaufmann, J.M. (2002). Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions. Cognitive Brain Research, 14, 398-409.

Seidenberg, M., Griffith, R., Sabsevitz, D., Moran, M., Haltiner, A., Bell, B., et al. (2002). Recognition and identification of famous faces in patients with unilateral temporal lobe epilepsy. Neuropsychologia, 40, 446-456.

Shah, N.J., Marshall, J.C., Zafiris, O., Schwab, A., Zilles, K., Markowitsch, H.J., et al. (2001). The neural correlates of person familiarity. A functional magnetic resonance imaging study with clinical implications. Brain, 124, 804-815.

Simons, J.S., & Spiers, H.J. (2003). Prefrontal and medial temporal lobe interactions in long-term memory. Nature Reviews Neuroscience, 4, 637-648.

Smith, M.E., & Halgren, E. (1987). Event-related potentials elicited by familiar and unfamiliar faces. In R. Johnson Jr., J.W. Rohrbaugh & R. Parasuraman (Eds.), Current trends in event-related potential research (EEG Suppl. 40) (pp. 422-426). Amsterdam: Elsevier.

Soros, P., Guttman Sokoloff, L., Bose, A., McIntosh, A.R., Graham, S.L., & Stuss, D.T. (2006). Clustered functional MRI of overt speech production. Neuroimage, 32, 376-387.

Tarkka, I.M. (2001). Cerebral sources of electrical potentials related to human vocalization and mouth movement. Neuroscience Letters, 298, 203-206.

Tsukiura, T., Fujii, T., Fukatsu, R., Otsuki, T., Okuda, J., Umetsu, A., et al. (2002). Neural basis of the retrieval of people's names: Evidence from brain-damaged patients and fMRI. Journal of Cognitive Neuroscience, 14, 922-937.

Tsukiura, T., Mochizuki-Kawai, H., & Fujii, T. (2006). Dissociable roles of the bilateral anterior temporal lobe in face-name associations: An event-related fMRI study. Neuroimage, 30, 617-626.

Valentine, T., Brennen, T., & Bredart, S. (1996). The cognitive psychology of proper names. On the importance of being Ernest. London: Routledge.

Santiago Galdo Alvarez, Monica Lindin Novo and Fernando Diaz Fernandez

Universidade de Santiago de Compostela

Correspondence: Santiago Galdo Alvarez, Facultade de Psicoloxia, Universidade de Santiago de Compostela, 15782 Santiago de Compostela (Spain). E-mail: santiago.galdo@usc.es