Feature integration theory
Feature integration theory is a theory of attention developed in 1980 by Anne Treisman and Garry Gelade that suggests that when perceiving a stimulus, features are "registered early, automatically, and in parallel, while objects are identified separately" and at a later stage in processing. The theory has been one of the most influential psychological models of human visual attention.
Stages
According to Treisman, the first stage of the feature integration theory is the preattentive stage. During this stage, different parts of the brain automatically gather information about basic features (color, shape, movement) that are found in the visual field. The idea that features are automatically registered separately appears counterintuitive; however, we are not aware of this process because it occurs early in perceptual processing, before we become conscious of the object.
The second stage of feature integration theory is the focused attention stage, in which a subject combines individual features of an object to perceive the whole object. Combining individual features of an object requires attention, and selecting that object occurs within a "master map" of locations. The master map of locations contains all the locations in which features have been detected, and each location in the master map has access to multiple feature maps. These feature maps, or sub-maps, store features such as color, shape, orientation, sound, and movement.[1][2] When attention is focused at a particular location on the master map, the features currently in that position are attended to and are stored in "object files". If the object is familiar, associations are made between the object and prior knowledge, which results in identification of that object. This top-down process, in which prior knowledge informs a current situation or decision, is central to identifying and recognizing objects.[3][4] In support of this stage, researchers often point to patients with Bálint's syndrome. Because of damage to the parietal lobe, these patients are unable to focus attention on individual objects. Given a stimulus that requires combining features, people with Bálint's syndrome cannot focus attention long enough to combine the features, providing support for this stage of the theory.[5]
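The two stages can be pictured as a simple data structure. The following is a minimal sketch, not part of the theory's formal statement: the names (FeatureMaps, ObjectFile, register, attend) and the choice of feature dimensions are illustrative assumptions chosen to mirror the description above, in which parallel feature maps are read out at one attended location and bound into an object file.

```python
# Minimal sketch of the two-stage idea (illustrative, not the authors' model):
# features are registered in parallel into separate maps (preattentive stage),
# and focused attention at one location binds them into an "object file".
from dataclasses import dataclass, field

@dataclass
class ObjectFile:
    location: tuple                                   # position on the master map
    features: dict = field(default_factory=dict)      # e.g. {"color": "red", "shape": "T"}

class FeatureMaps:
    """One map per feature dimension, each indexed by location on the master map."""
    def __init__(self):
        self.maps = {"color": {}, "shape": {}, "orientation": {}, "movement": {}}

    def register(self, location, dimension, value):
        # Preattentive stage: each feature is stored in its own map,
        # with no binding between dimensions yet.
        self.maps[dimension][location] = value

    def attend(self, location):
        # Focused attention stage: reading out every map at the attended
        # location binds the features found there into one object file.
        bound = {dim: m[location] for dim, m in self.maps.items() if location in m}
        return ObjectFile(location=location, features=bound)

# Usage: register the features of a "red T" at (2, 3), then attend to that spot.
scene = FeatureMaps()
scene.register((2, 3), "color", "red")
scene.register((2, 3), "shape", "T")
print(scene.attend((2, 3)).features)   # {'color': 'red', 'shape': 'T'}
```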
Treisman distinguishes between two kinds of visual search tasks, "feature search" and "conjunction search". Feature searches can be performed quickly and preattentively for targets defined by a single feature, such as color, shape, perceived direction of lighting, movement, or orientation. Such features should "pop out" during search and should be able to form illusory conjunctions. Conjunction searches, by contrast, involve targets defined by a combination of two or more features and proceed serially. Conjunction search is much slower than feature search and requires conscious attention and effort. In multiple experiments, some referenced in this article, Treisman concluded that color, orientation, and intensity are features for which feature searches may be performed.
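The behavioural signature of this distinction is how reaction time changes with display size. The toy model below is a sketch only; the baseline and per-item times are arbitrary placeholders, not measured values, and the linear "serial self-terminating" form is one conventional simplification of conjunction search.

```python
# Toy prediction of search times under the theory's parallel/serial distinction.
def predicted_rt(set_size, search_type, base_ms=400.0, serial_ms_per_item=50.0):
    if search_type == "feature":
        # Parallel "pop-out": the unique feature is detected across the whole
        # display at once, so set size contributes essentially nothing.
        return base_ms
    if search_type == "conjunction":
        # Serial self-terminating search: on average about half the items are
        # inspected before the target is found.
        return base_ms + serial_ms_per_item * (set_size + 1) / 2
    raise ValueError("search_type must be 'feature' or 'conjunction'")

for n in (4, 8, 16, 32):
    print(n, predicted_rt(n, "feature"), predicted_rt(n, "conjunction"))
```

With these placeholder numbers, feature-search time stays flat at 400 ms while conjunction-search time grows roughly linearly with display size, the pattern Treisman took as evidence for serial binding.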
As a reaction to the feature integration theory, Wolfe (1994) proposed the Guided Search Model 2.0. According to this model, attention is directed to an object or location through a preattentive process. The preattentive process, as Wolfe explains, directs attention in both a bottom-up and a top-down way. Information acquired through both bottom-up and top-down processing is ranked according to priority, and this priority ranking guides visual search and makes it more efficient. Whether the Guided Search Model 2.0 or the feature integration theory is the "correct" theory of visual search is still a hotly debated topic.
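The core idea of guidance can be sketched as a weighted sum of bottom-up salience and top-down target match. This is an assumption-laden illustration of the priority-ranking idea described above, not Wolfe's actual model or its parameters; the weighting scheme, the salience values, and the function names are invented for the example.

```python
# Sketch of priority-guided search: combine bottom-up salience with top-down
# match to the target description, then visit items in descending priority.
def priority_ranking(items, target, w_bottom_up=1.0, w_top_down=1.0):
    """items: dicts with a 'salience' value in [0, 1] plus feature values."""
    def top_down(item):
        # Fraction of the target's features this item shares.
        shared = sum(item.get(k) == v for k, v in target.items())
        return shared / len(target)

    scored = [(w_bottom_up * it["salience"] + w_top_down * top_down(it), it)
              for it in items]
    return [it for _, it in sorted(scored, key=lambda pair: pair[0], reverse=True)]

display = [
    {"salience": 0.2, "color": "red",   "shape": "T"},
    {"salience": 0.9, "color": "green", "shape": "O"},   # salient but irrelevant distractor
    {"salience": 0.3, "color": "red",   "shape": "O"},
]
# Searching for a red T: top-down guidance promotes the red items.
print(priority_ranking(display, target={"color": "red", "shape": "T"}))
```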
Experiments
To test the notion that attention plays a vital role in visual perception, Treisman and Schmidt (1982) designed an experiment to show that features may exist independently of one another early in processing. Participants were shown a display of four objects flanked by two black numbers. The display was flashed for one-fifth of a second and followed by a random-dot masking field to eliminate "any residual perception that might remain after the stimuli were turned off".[6] Participants reported the black numbers and then the shapes they had seen at each location. The results verified Treisman and Schmidt's hypothesis: in 18% of trials, participants reported seeing shapes "made up of a combination of features from two different stimuli",[7] even when the stimuli differed greatly; this is referred to as an illusory conjunction. Illusory conjunctions also occur outside the laboratory. For example, you may identify a passing person wearing a red shirt and a yellow hat and, moments later, remember him or her as wearing a yellow shirt and a red hat. The feature integration theory explains illusory conjunctions: because features exist independently of one another during early processing and are not yet bound to a specific object, they can easily be incorrectly combined, both in laboratory settings and in real-life situations.[8]
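The theory's account of such errors can be illustrated with a small simulation. This is a sketch under stated assumptions: the probability of a binding failure and the random-recombination rule are arbitrary choices made for illustration, not parameters from Treisman and Schmidt's study.

```python
# Toy simulation of illusory conjunctions: when attention cannot bind features
# to locations, colors and shapes are treated as free-floating and may be
# recombined across objects at report.
import random

def report(objects, p_binding_failure=0.2, rng=random):
    colors = [o["color"] for o in objects]
    shapes = [o["shape"] for o in objects]
    reported = []
    for obj in objects:
        if rng.random() < p_binding_failure:
            # Binding failed: draw a color and a shape independently from the
            # pools of registered features, possibly from different objects.
            reported.append({"color": rng.choice(colors), "shape": rng.choice(shapes)})
        else:
            # Binding succeeded: the object is reported correctly.
            reported.append(dict(obj))
    return reported

stimuli = [{"color": "blue", "shape": "O"}, {"color": "red", "shape": "T"}]
print(report(stimuli))   # may occasionally yield a "red O" or a "blue T"
```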
As previously mentioned, patients with Bálint's syndrome have provided support for the feature integration theory. In particular, research participant R.M., who had Bálint's syndrome and was unable to focus attention on individual objects, experienced illusory conjunctions when presented with simple stimuli such as a "blue O" or a "red T". In 23% of trials, even when able to view the stimulus for as long as 10 seconds, R.M. reported seeing a "red O" or a "blue T".[9] This finding accords with feature integration theory's prediction that, without focused attention, features are erroneously combined.
If people use their prior knowledge or experience to perceive an object, they are less likely to make mistakes such as illusory conjunctions. To demonstrate this, Treisman and Souther (1986) conducted an experiment in which they presented participants with three shapes under conditions in which illusory conjunctions could occur. Surprisingly, when participants were told that they were being shown a carrot, a lake, and a tire (in place of the orange triangle, blue oval, and black circle, respectively), illusory conjunctions did not occur.[10] Treisman maintained that prior knowledge plays an important role in proper perception. Normally, bottom-up processing is used to identify novel objects, but once prior knowledge is recalled, top-down processing takes over. This helps explain why people are better at identifying familiar objects than unfamiliar ones.
Reading
When identifying letters while reading, not only are their shapes picked up but also other features, such as their colors and surrounding elements. Individual letters are processed serially when spatially conjoined with other letters. The locations of each feature of a letter are not known in advance, even while the letter is in front of the reader. Because the locations of the letter's features, or of the letter itself, are unknown, feature interchanges can occur if attention is not focused. This is known as lateral masking, which in this case refers to the difficulty of separating a letter from its background.[11]
Notes
- ^ Kristjánsson, Árni; Egeth, Howard (2020-01-01). "How feature integration theory integrated cognitive psychology, neurophysiology, and psychophysics". Attention, Perception, & Psychophysics. 82 (1): 7–23. doi:10.3758/s13414-019-01803-7. ISSN 1943-393X. PMID 31290134.
- ^ Chan, Louis K. H.; Hayward, William G. (2009). "Feature integration theory revisited: Dissociating feature detection and attentional guidance in visual search". Journal of Experimental Psychology: Human Perception and Performance. 35 (1): 119–132. doi:10.1037/0096-1523.35.1.119. ISSN 1939-1277. PMID 19170475.
- ^ Nobre, Kia; Kastner, Sabine (2014). The Oxford Handbook of Attention. OUP Oxford. ISBN 978-0-19-967511-1.
- ^ Chan, Louis K. H.; Hayward, William G. (2009). "Feature integration theory revisited: Dissociating feature detection and attentional guidance in visual search". Journal of Experimental Psychology: Human Perception and Performance. 35 (1): 119–132. doi:10.1037/0096-1523.35.1.119. ISSN 1939-1277. PMID 19170475.
- ^ Cohen, Asher; Rafal, Robert D. (1991). "Attention and Feature Integration: Illusory Conjunctions in a Patient with a Parietal Lobe Lesion". Psychological Science. 2 (2): 106–110. doi:10.1111/j.1467-9280.1991.tb00109.x. ISSN 0956-7976. JSTOR 40062648. S2CID 145171384.
- ^ Goldstein, E. Bruce. Cognitive Psychology, p. 105.
- ^ Goldstein, E. Bruce. Cognitive Psychology, p. 105.
- ^ Treisman, A. (1980). Cognitive Psychology, 12, 97–136.
- ^ Friedman-Hill et al., 1995; Robertson et al., 1997.
- ^ Treisman, Anne; Souther, Janet (1986). "Illusory words: The roles of attention and of top–down constraints in conjoining letters to form words". Journal of Experimental Psychology: Human Perception and Performance, 12 (1), 3–17.
- ^ Treisman, Anne; Gelade, Garry (1980). "A feature-integration theory of attention". Cognitive Psychology, 12 (1), 97–136.
References
- Anne Treisman and Garry Gelade (1980). "A feature-integration theory of attention." Cognitive Psychology, 12 (1), pp. 97–136.
- Anne Treisman and Hilary Schmidt (1982). "Illusory conjunctions in the perception of objects." Cognitive Psychology, 14, pp. 107–141.
- Anne Treisman and Janet Souther (1986). "Illusory words: The roles of attention and of top–down constraints in conjoining letters to form words." Journal of Experimental Psychology: Human Perception and Performance, 12 (1), pp. 3–17
- Anne Treisman (1988). "Features and objects: the fourteenth Bartlett Memorial Lecture." Quarterly Journal of Experimental Psychology, 40A, pp. 201–236.
- Anne Treisman and Nancy Kanwisher (1998). "Perceiving visually presented objects: recognition, awareness, and modularity." Current Opinion in Neurobiology, 8, pp. 218–226.
- J. M. Wolfe (1994). "Guided Search 2.0: A revised model of visual search." Psychonomic Bulletin & Review, 1, pp. 202–238