In psychology and cognitive neuroscience, pattern recognition describes a cognitive process that matches information from a stimulus with information retrieved from memory.[1]
Pattern recognition occurs when information from the environment is received and entered into short-term memory, causing automatic activation of specific content in long-term memory. An early example of this is learning the alphabet in order. When a carer repeats ‘A, B, C’ multiple times to a child, the child, using pattern recognition, says ‘C’ after hearing ‘A, B’ in order. Recognizing patterns allows us to predict and expect what is coming. The process of pattern recognition involves matching the information received with the information already stored in the brain. Making the connection between memories and perceived information is a step of pattern recognition called identification. Pattern recognition requires repetition of experience. Semantic memory, which is used implicitly and subconsciously, is the main type of memory involved in recognition.[2]
Pattern recognition is not only crucial to humans, but to other animals as well. Even koalas, which possess less-developed thinking abilities, use pattern recognition to find and consume eucalyptus leaves. The human brain is more developed, but holds similarities to the brains of birds and lower mammals. The development of neural networks in the outer layer of the human brain has allowed for better processing of visual and auditory patterns. Spatial positioning in the environment, remembering findings, and detecting hazards and resources to increase chances of survival are examples of the application of pattern recognition for humans and animals.[3]
There are six main theories of pattern recognition: template matching, prototype-matching, feature analysis, recognition-by-components theory, bottom-up and top-down processing, and Fourier analysis. The application of these theories in everyday life is not mutually exclusive. Pattern recognition allows us to read words, understand language, recognize friends, and even appreciate music. Each of the theories applies to various activities and domains where pattern recognition is observed. Facial, music and language recognition, and seriation are a few of such domains. Facial recognition and seriation occur through encoding visual patterns, while music and language recognition use the encoding of auditory patterns.
Theories
Template matching
Template matching theory describes the most basic approach to human pattern recognition. It assumes that every perceived object is stored as a "template" in long-term memory.[4] Incoming information is compared to these templates to find an exact match.[5] In other words, all sensory input is compared to multiple stored representations of an object to form one single conceptual understanding. The theory defines perception as a fundamentally recognition-based process. It assumes that everything we see, we understand only through past exposure, which then informs our future perception of the external world.[6] For example, the letter A written in different typefaces or handwriting styles is still recognized as the letter A, but never as B. This viewpoint is limited, however, in explaining how new experiences can be understood without being compared to an internal memory template.[citation needed]
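The exact-match requirement at the heart of the theory can be sketched in a few lines of code. This is only an illustrative toy: the 3×3 letter "bitmaps" and the `recognize` helper are invented, not part of any psychological model, but they show why the theory handles familiar input well yet fails on any novel variation.

```python
# A minimal sketch of template matching: every stored "template" is an exact
# representation in memory, and recognition succeeds only on a perfect match.
# The 3x3 letter bitmaps below are hypothetical toy data, not a real model.

TEMPLATES = {
    "A": (" X ", "X X", "XXX"),
    "B": ("XX ", "X X", "XX "),
}

def recognize(stimulus):
    """Return the label of the template that exactly matches the stimulus."""
    for label, template in TEMPLATES.items():
        if stimulus == template:
            return label
    return None  # any deviation from a stored template goes unrecognized

print(recognize((" X ", "X X", "XXX")))  # exact match, recognized as "A"
print(recognize((" X ", "X.X", "XXX")))  # slight variation, not recognized
```

The second call returns `None`, mirroring the theory's difficulty with stimuli that do not exactly match any stored template.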
Prototype matching
Unlike the exact, one-to-one, template matching theory, prototype matching instead compares incoming sensory input to one averaged prototype.[citation needed] This theory proposes that exposure to a series of related stimuli leads to the creation of a "typical" prototype based on their shared features.[6] It reduces the number of stored templates by standardizing them into a single representation.[4] The prototype supports perceptual flexibility, because unlike template matching, it allows for variability in the recognition of novel stimuli.[citation needed] For instance, a child who had never seen a lawn chair before would still be able to recognize it as a chair because of their understanding of its essential characteristics: four legs and a seat. This idea, however, limits the conceptualization of objects that cannot readily be "averaged" into one, such as types of canines. Even though dogs, wolves, and foxes are all typically furry, four-legged, moderately sized animals with ears and a tail, they are not all the same, and thus cannot be strictly distinguished by prototype matching alone.
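The averaging idea can be illustrated with a small sketch. The feature vectors (number of legs, seat height, presence of a back) and the chair exemplars are hypothetical, chosen only to show how a never-before-seen lawn chair can still fall closer to the "chair" prototype than an unrelated object does.

```python
# A sketch of prototype matching: exemplars of a category are averaged into
# one prototype, and a novel stimulus is recognized by its closeness to it.
# Feature vectors (legs, seat height in cm, has_back) are invented toy data.

def make_prototype(exemplars):
    """Average a list of feature vectors into a single prototype."""
    n = len(exemplars)
    return [sum(values) / n for values in zip(*exemplars)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

chairs_seen = [[4, 45, 1], [4, 50, 1], [4, 40, 1]]
chair_prototype = make_prototype(chairs_seen)  # [4.0, 45.0, 1.0]

lawn_chair = [4, 35, 1]  # never seen before, yet near the prototype
dog = [4, 0, 0]          # four-legged, but far from the chair prototype

print(distance(chair_prototype, lawn_chair))  # small distance: recognized
print(distance(chair_prototype, dog))         # large distance: not a chair
```

A single averaged prototype replaces many stored templates, which is exactly the economy (and the limitation) the theory describes.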
Multiple discrimination scaling
Template and feature analysis approaches to the recognition of objects (and situations) have been merged and, arguably, superseded by multiple discrimination theory. It states that, in any perceptual judgment, the amount of each salient feature of a template present in a test stimulus is recognized as lying at some distance, measured in the universal unit of 50% discrimination (the objective-performance just-noticeable difference, or JND[7]), from the amount of that feature in the template.[8]
Recognition by components theory
Similar to feature detection theory, recognition by components (RBC) focuses on the bottom-up features of the stimuli being processed. First proposed by Irving Biederman (1987), this theory states that humans recognize objects by breaking them down into their basic 3D geometric shapes, called geons (e.g. cylinders, cubes, and cones). An example is how we break down a common item like a coffee cup: we recognize the hollow cylinder that holds the liquid and the curved handle off the side that allows us to hold it. Even though not every coffee cup is exactly the same, these basic components help us to recognize the consistency across examples (or pattern). RBC suggests that there are fewer than 36 unique geons that, when combined, can form a virtually unlimited number of objects. To parse and dissect an object, RBC proposes we attend to two specific features: edges and concavities. Edges enable the observer to maintain a consistent representation of the object regardless of the viewing angle and lighting conditions. Concavities are where two edges meet and enable the observer to perceive where one geon ends and another begins.
The RBC principles of visual object recognition can be applied to auditory language recognition as well. In place of geons, language researchers propose that spoken language can be broken down into basic components called phonemes. For example, there are 44 phonemes in the English language.
Top-down and bottom-up processing
Top-down processing
Top-down processing refers to the use of background information in pattern recognition.[9] It always begins with a person's previous knowledge and makes predictions based on that already acquired knowledge.[10] Psychologist Richard Gregory estimated that about 90% of visual information is lost on its way from the eye to the brain, which is why the brain must infer what the person sees from past experience. In other words, we construct our perception of reality, and these perceptions are hypotheses or propositions based on past experiences and stored information. The formation of incorrect propositions leads to errors of perception such as visual illusions.[9] Given a paragraph written in difficult handwriting, it is easier to understand what the writer wants to convey by reading the whole paragraph rather than reading the words in isolation. The brain may be able to perceive and understand the gist of the paragraph due to the context supplied by the surrounding words.[11]
Bottom-up processing
Bottom-up processing is also known as data-driven processing, because it originates with the stimulation of the sensory receptors.[10] Psychologist James Gibson opposed the top-down model and argued that perception is direct, and not subject to the hypothesis testing Gregory proposed. He stated that sensation is perception and there is no need for extra interpretation, as there is enough information in our environment to make sense of the world in a direct way. His theory is sometimes known as the "ecological theory" because of the claim that perception can be explained solely in terms of the environment. An example of bottom-up processing involves presenting a flower at the center of a person's visual field. The sight of the flower and all the information about the stimulus are carried from the retina to the visual cortex in the brain. The signal travels in one direction.[11]
Seriation
In psychologist Jean Piaget's theory of cognitive development, the third stage is called the concrete operational stage. It is during this stage that the abstract principle of thinking called "seriation" is naturally developed in a child.[12] Seriation is the ability to arrange items in a logical order along a quantitative dimension such as length, weight, or age.[13] It is a general cognitive skill which is not fully mastered until after the nursery years.[14] To seriate means to understand that objects can be ordered along a dimension,[12] and to do so effectively, the child needs to be able to answer the question "What comes next?"[14] Seriation skills also help to develop problem-solving skills, which are useful in recognizing and completing patterning tasks.
Piaget's work on seriation
Piaget studied the development of seriation along with Szeminska in an experiment in which they used rods of varying lengths to test children's skills.[15] They found three distinct stages in the development of the skill. In the first stage, children around the age of 4 could not arrange the first ten rods in order. They could make smaller groups of 2–4, but could not put all the elements together. In the second stage, at 5–6 years of age, children could succeed in the seriation task with the first ten rods through trial and error, and could insert an additional set of rods into the series in the same way. In the third stage, 7–8-year-old children could arrange all the rods in order with little trial and error. These children used the systematic method of first looking for the smallest rod, then the smallest among those remaining.[15]
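The systematic strategy of the third stage, repeatedly selecting the smallest remaining rod, is in effect a selection sort. A minimal sketch (the rod lengths are invented toy data, not Piaget's materials):

```python
# In Piaget's third stage, children pick the smallest rod, then the smallest
# of those remaining, until all rods are placed: effectively a selection
# sort. The rod lengths (in cm) below are invented toy data.

def seriate(rods):
    """Order rods by repeatedly selecting the smallest one remaining."""
    remaining = list(rods)
    ordered = []
    while remaining:
        smallest = min(remaining)
        remaining.remove(smallest)
        ordered.append(smallest)
    return ordered

print(seriate([9.0, 16.2, 10.6, 14.8, 12.0]))
# -> [9.0, 10.6, 12.0, 14.8, 16.2]
```

Unlike the trial-and-error of the second stage, this method places each rod correctly on the first attempt, which is why the older children needed so few corrections.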
Development of problem-solving skills
To develop the skill of seriation, which then helps advance problem-solving skills, children should be provided with opportunities to arrange things in order using the appropriate language, such as "big" and "bigger" when working with size relationships. They should also be given the chance to arrange objects in order based on the texture, sound, flavor and color.[14] Along with specific tasks of seriation, children should be given the chance to compare the different materials and toys they use during play. Through activities like these, the true understanding of characteristics of objects will develop. To aid them at a young age, the differences between the objects should be obvious.[14] Lastly, a more complicated task of arranging two different sets of objects and seeing the relationship between the two different sets should also be provided. A common example of this is having children attempt to fit saucepan lids to saucepans of different sizes, or fitting together different sizes of nuts and bolts.[14]
Application of seriation in schools
To help build up math skills in children, teachers and parents can help them learn seriation and patterning. Young children who understand seriation can put numbers in order from lowest to highest. Eventually, they will come to understand that 6 is higher than 5, and 20 is higher than 10.[16] Similarly, having children copy patterns or create patterns of their own, like ABAB patterns, is a great way to help them recognize order and prepare for later math skills, such as multiplication. Child care providers can begin exposing children to patterns at a very young age by having them make groups and count the total number of objects.[16]
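The "what comes next?" question behind an ABAB patterning task can be sketched as follows. The `next_item` helper and its assumption of a known, fixed cycle length are illustrative simplifications, not a model of how children actually solve the task.

```python
# A toy sketch of an ABAB patterning task: predict "what comes next"
# by assuming the sequence repeats with a fixed cycle length.

def next_item(pattern, cycle_length):
    """Predict the next item of a sequence that repeats every cycle_length."""
    return pattern[len(pattern) % cycle_length]

print(next_item(["A", "B", "A", "B"], 2))    # -> A
print(next_item(["red", "blue", "red"], 2))  # -> blue
```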
Facial pattern recognition
Recognizing faces is one of the most common forms of pattern recognition. Humans are extremely effective at remembering faces, but this ease and automaticity belies a very challenging problem.[17][18] All faces are physically similar. Faces have two eyes, one mouth, and one nose all in predictable locations, yet humans can recognize a face from several different angles and in various lighting conditions.[18]
Neuroscientists posit that recognizing faces takes place in three phases. The first phase starts with visually focusing on the physical features. The facial recognition system then needs to reconstruct the identity of the person from previous experiences. This provides us with the signal that this might be a person we know. The final phase of recognition is complete when the face elicits the name of the person.[19]
Although humans are great at recognizing faces under normal viewing angles, upside-down faces are tremendously difficult to recognize. This demonstrates not only the challenges of facial recognition but also how humans have specialized procedures and capacities for recognizing faces under normal upright viewing conditions.[18]
Neural mechanisms
Scientists agree that there is a certain area in the brain specifically devoted to processing faces. This structure is called the fusiform gyrus, and brain imaging studies have shown that it becomes highly active when a subject is viewing a face.[20]
Several case studies have reported that patients with lesions or tissue damage localized to this area have tremendous difficulty recognizing faces, even their own. Although most of this research is circumstantial, a study at Stanford University provided conclusive evidence for the fusiform gyrus' role in facial recognition. In a unique case study, researchers were able to send direct signals to a patient's fusiform gyrus. The patient reported that the faces of the doctors and nurses changed and morphed in front of him during this electrical stimulation. Researchers agree this demonstrates a convincing causal link between this neural structure and the human ability to recognize faces.[20]
Facial recognition development
Although in adults, facial recognition is fast and automatic, children do not reach adult levels of performance (in laboratory tasks) until adolescence.[21] Two general theories have been put forth to explain how facial recognition normally develops. The first, general cognitive development theory, proposes that the perceptual ability to encode faces is fully developed early in childhood, and that the continued improvement of facial recognition into adulthood is attributed to other general factors. These general factors include improved attentional focus, deliberate task strategies, and metacognition. Research supports the argument that these other general factors improve dramatically into adulthood.[21] Face-specific perceptual development theory argues that the improved facial recognition between children and adults is due to a precise development of facial perception. The cause for this continuing development is proposed to be an ongoing experience with faces.
Developmental issues
Several developmental issues manifest as a decreased capacity for facial recognition. Using what is known about the role of the fusiform gyrus, research has shown that impaired social development along the autism spectrum is accompanied by a behavioral marker, in which these individuals tend to look away from faces, and a neurological marker characterized by decreased neural activity in the fusiform gyrus. Similarly, those with developmental prosopagnosia (DP) struggle with facial recognition to the extent that they are often unable to identify even their own faces. Many studies report that around 2% of the world's population have developmental prosopagnosia, and that individuals with DP have a family history of the trait.[18] Individuals with DP are behaviorally indistinguishable from those with physical damage or lesions on the fusiform gyrus, again implicating its importance to facial recognition. Even setting aside those with DP or neurological damage, there remains large variability in facial recognition ability across the general population.[18] It is unknown whether these differences reflect a biological or an environmental disposition. Recent research comparing identical and fraternal twins showed that facial recognition ability was significantly more strongly correlated in identical twins, suggesting a strong genetic component to individual differences in facial recognition ability.[18]
Language development
Pattern recognition in language acquisition
Recent research reveals that infant language acquisition is linked to cognitive pattern recognition.[22] Unlike classical nativist and behavioral theories of language development,[23] scientists now believe that language is a learned skill.[22] Studies at the Hebrew University and the University of Sydney both show a strong correlation between the ability to identify visual patterns and the ability to learn a new language.[22][24] Children with high shape recognition showed better grammar knowledge, even when controlling for the effects of intelligence and memory capacity.[24] This supports the theory that language learning is based on statistical learning,[22] the process by which infants perceive common combinations of sounds and words in language and use them to inform future speech production.
Phonological development
The first step in infant language acquisition is to distinguish between the most basic sound units of the native language. These include every consonant, every short and long vowel sound, and additional sounds represented by letter combinations such as "th" and "ph" in English. These units, called phonemes, are detected through exposure and pattern recognition. Infants use their "innate feature detector" capabilities to distinguish between the sounds of words.[23] They split them into phonemes through a mechanism of categorical perception. They then extract statistical information by recognizing which combinations of sounds are most likely to occur together,[23] such as "qu" or "h" plus a vowel. In this way, their ability to learn words is based directly on the accuracy of their earlier phonetic patterning.
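This extraction of statistical information can be sketched as computing transitional probabilities between adjacent syllables: pairs that reliably follow one another suggest a word, while unreliable pairs suggest a word boundary. The syllable stream below is invented toy data (loosely inspired by infant segmentation experiments), and the `transitional_probabilities` helper is an illustrative simplification.

```python
# A sketch of statistical learning: tally how often each syllable follows
# another. A high transitional probability suggests the pair belongs to one
# word; a low one suggests a word boundary. The stream is invented toy data.
from collections import Counter

def transitional_probabilities(stream):
    """P(next | current) for each adjacent pair of syllables in the stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {pair: count / first_counts[pair[0]]
            for pair, count in pair_counts.items()}

# Two recurring "words" (bi-da-ku, pa-do-ti) in a continuous syllable stream
stream = "bi da ku pa do ti bi da ku bi da ku pa do ti".split()
probs = transitional_probabilities(stream)

print(probs[("bi", "da")])  # within-word transition: 1.0
print(probs[("ku", "pa")])  # across a word boundary: lower
```

On this toy stream, "da" always follows "bi" (probability 1.0), while "pa" follows "ku" only some of the time, which is the kind of contrast a statistical learner could use to segment words.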
Grammar development
The transition from phonemic differentiation into higher-order word production[23] is only the first step in the hierarchical acquisition of language. Pattern recognition is furthermore utilized in the detection of prosody cues, the stress and intonation patterns among words.[23] Then it is applied to sentence structure and the understanding of typical clause boundaries.[23] This entire process is reflected in reading as well. First, a child recognizes patterns of individual letters, then words, then groups of words together, then paragraphs, and finally entire chapters in books.[25] Learning to read and learning to speak a language are based on the "stepwise refinement of patterns"[25] in perceptual pattern recognition.
Music pattern recognition
Music provides deep and emotional experiences for the listener.[26] These experiences become contents of long-term memory, and every time we hear the same tunes, those contents are activated. Recognizing the content through the pattern of the music affects our emotions. The mechanisms that form the pattern recognition of music and the resulting experience have been studied by multiple researchers. The sensation felt when listening to our favorite music is evident in the dilation of the pupils, the increase in pulse and blood pressure, the streaming of blood to the leg muscles, and the activation of the cerebellum, the brain region associated with physical movement.[26] While retrieving the memory of a tune demonstrates general recognition of a musical pattern, pattern recognition also occurs while listening to a tune for the first time. The recurring nature of the metre allows the listener to follow the tune, recognize the metre, expect its upcoming occurrence, and follow the rhythm. The excitement of following a familiar musical pattern arises when the pattern breaks and becomes unpredictable. This following and breaking of a pattern creates a problem-solving opportunity for the mind that forms the experience.[26] Psychologist Daniel Levitin argues that the repetition, melodic nature, and organization of music create meaning for the brain.[27] The brain stores information in arrangements of neurons, which retrieve the same information when activated by the environment. By constantly referencing stored information and additional stimulation from the environment, the brain constructs musical features into a perceptual whole.[27]
The medial prefrontal cortex – one of the last areas affected by Alzheimer’s disease – is the region activated by music.
Cognitive mechanisms
To understand music pattern recognition, we need to understand the underlying cognitive systems that each handle a part of this process. Various processes are at work in the recognition of a piece of music and its patterns. Researchers have begun to unveil the reasons behind the stimulated reactions to music. Montreal-based researchers asked ten volunteers who got "chills" listening to music to listen to their favorite songs while their brain activity was being monitored.[26] The results show the significant role of the nucleus accumbens (NAcc) region – involved in cognitive processes such as motivation, reward, and addiction – in creating the neural arrangements that make up the experience.[26] A sense of reward prediction is created by anticipation before the climax of the tune, which resolves when the climax is reached. The longer the listener is denied the expected pattern, the greater the emotional arousal when the pattern returns. Musicologist Leonard Meyer used fifty measures of the fifth movement of Beethoven's String Quartet in C-sharp minor, Op. 131, to examine this notion.[26] The stronger this experience is, the more vivid the memory it creates and stores. This strength affects the speed and accuracy of retrieval and recognition of the musical pattern. The brain not only recognizes specific tunes; it also distinguishes standard acoustic features, speech, and music.
MIT researchers conducted a study to examine this notion.[28] The results showed six neural clusters in the auditory cortex responding to the sounds. Four were triggered by standard acoustic features, one responded specifically to speech, and the last responded exclusively to music. Researchers who studied the correlation between the temporal evolution of timbral, tonal, and rhythmic features of music concluded that music engages the brain regions connected to motor actions, emotions, and creativity. The research indicates that the whole brain "lights up" when listening to music.[29] This amount of activity boosts memory preservation, and hence pattern recognition.
Recognizing patterns of music is different for a musician and a listener. Although a musician may play the same notes every time, the details of the frequency will always be different. The listener will recognize the musical pattern and its type despite the variations. These musical types are conceptual and learned, meaning they may vary culturally.[30] While listeners are involved in recognizing (implicit) musical material, musicians are involved in recalling it (explicit).[2]
A UCLA study found that when watching or hearing music being played, neurons associated with the muscles needed for playing the instrument fire. Mirror neurons light up when musicians and non-musicians listen to a piece.[31]
Developmental issues
Pattern recognition of music can build and strengthen other skills, such as musical synchrony, attentional performance, and engagement with musical notation. Even a few years of musical training enhances memory and attention levels. Scientists at the University of Newcastle conducted a study on patients with severe acquired brain injuries (ABIs) and healthy participants, using popular music to examine music-evoked autobiographical memories (MEAMs).[29] The participants were asked to record their familiarity with the songs, whether they liked them, and what memories they evoked. The results showed that the ABI patients had the highest number of MEAMs, and that all participants had MEAMs of a person, people, or life period that were generally positive.[29] The participants completed the task by utilizing pattern recognition skills. Memory evocation caused the songs to sound more familiar and better liked. This research may benefit the rehabilitation of patients with autobiographical amnesia who have no fundamental deficit in autobiographical recall memory and have intact pitch perception.[29]
A study at the University of California, Davis mapped the brains of participants while they listened to music.[32] The results showed links between brain regions, autobiographical memories, and emotions activated by familiar music. This may explain the strong response of patients with Alzheimer's disease to music. Such research can help these patients through pattern recognition-enhancing tasks.
False pattern recognition
The human tendency to see patterns that do not actually exist is called apophenia. Examples include the Man in the Moon; faces or figures in shadows, clouds, and patterns with no deliberate design, such as the swirls on a baked confection; and the perception of causal relationships between events which are, in fact, unrelated. Apophenia figures prominently in conspiracy theories, gambling, misinterpretation of statistics and scientific data, and some kinds of religious and paranormal experiences. Misperception of patterns in random data is called pareidolia. Recent research in neuroscience and cognitive science suggests understanding 'false pattern recognition' within the paradigm of predictive coding.
See also
Notes
References
- ↑ Eysenck, Michael W.; Keane, Mark T. (2003). Cognitive Psychology: A Student's Handbook (4th ed.). Hove; Philadelphia; New York: Taylor & Francis. ISBN 9780863775512. OCLC 894210185. Retrieved 27 November 2014.
- ↑ 2.0 2.1 Snyder, B. (2000). Music and memory: An introduction. MIT press.
- ↑ Mattson, M. P. (2014). Superior pattern processing is the essence of the evolved human brain. Frontiers in neuroscience, 8.
- ↑ 4.0 4.1 Shugen, Wang (2002). "Framework of pattern recognition model based on the cognitive psychology". Geo-spatial Information Science. 5 (2): 74–78. doi:10.1007/BF02833890. ISSN 1009-5020. S2CID 124159004.
- ↑ "Perception and Perceptual Illusions | Psychology Today". www.psychologytoday.com. Retrieved 2023-08-16.
- ↑ 6.0 6.1 "Top-down and bottom-up theories of perception - Cognitive Psychology". cognitivepsychology.wikidot.com. Retrieved 2023-08-16.
- ↑ Torgerson, 1958
- ↑ Booth & Freeman, 1993, Acta Psychologica
- ↑ 9.0 9.1 "Visual Perception Theory In Psychology". 2022-11-03. Retrieved 2023-08-16.
- ↑ 10.0 10.1 "Bottom-up and Top-down Processing: A Collaborative Duality | Psych 256: Introduction to Cognitive Psychology". sites.psu.edu. Retrieved 2023-08-16.
- ↑ 11.0 11.1 "Top-Down VS Bottom-Up Processing". explorable.com. Retrieved 2023-08-16.
- ↑ 12.0 12.1 Kidd, Julie K.; Curby, Timothy W.; Boyer, Caroline E.; Gadzichowski, K. Marinka; Gallington, Deborah A.; Machado, Jessica A.; Pasnak, Robert (2012). "Benefits of an Intervention Focused on Oddity and Seriation". Early Education & Development. 23 (6): 900–918. doi:10.1080/10409289.2011.621877. ISSN 1040-9289. S2CID 143509212.
- ↑ Berk, L. E. (2013). Development through the lifespan (6th ed.). Pearson. ISBN 9780205957606
- ↑ 14.0 14.1 14.2 14.3 14.4 Curtis, A. (2002). Curriculum for the pre-school child. Routledge. ISBN 9781134770458
- ↑ 15.0 15.1 Inhelder, B., & Piaget, J. (1964). The early growth of logic in the child: Classification and seriation. New York: Routledge and Paul.
- ↑ 16.0 16.1 Basic Math Skills in Child Care: Creating Patterns and Arranging Objects in Order. Retrieved from Extension Articles on 2017-10-20 http://articles.extension.org/pages/25597/basic-math-skills-in-child-care:-creating-patterns-and-arranging-objects-in-order
- ↑ Sheikh, Knvul. "How We Save Face--Researchers Crack the Brain's Facial-Recognition Code". Scientific American. Retrieved 2023-08-16.
- ↑ 18.0 18.1 18.2 18.3 18.4 18.5 Duchaine, B. (2015). Individual differences in face recognition ability: Impacts on law enforcement, criminal justice and national security. APA: Psychological Science Agenda. Retrieved from: http://www.apa.org/science/about/psa/2015/06/face-recognition.aspx
- ↑ Wlassoff, V. (2015). How the Brain Recognizes Faces. Brain Blogger. Retrieved from: http://brainblogger.com/2015/10/17/how-the-brain-recognizes-faces/
- ↑ 20.0 20.1 "Identifying the Brain's Own Facial Recognition System". Retrieved 2023-08-16.
- ↑ 21.0 21.1 McKone, E., et al. (2012). A critical review of the development of face recognition: Experience is less important than previously believed. Cognitive Neuropsychology. doi:10.1080/02643294.2012.660138
- ↑ 22.0 22.1 22.2 22.3 "Language Ability Linked to Pattern Recognition". VOA. 2013-05-29. Retrieved 2023-08-16.
- ↑ 23.0 23.1 23.2 23.3 23.4 23.5 Kuhl, Patricia K. (2000-10-24). "A new view of language acquisition". Proceedings of the National Academy of Sciences. 97 (22): 11850–11857. Bibcode:2000PNAS...9711850K. doi:10.1073/pnas.97.22.11850. ISSN 0027-8424. PMC 34178. PMID 11050219.
- ↑ 24.0 24.1 University of Sydney. (2016, May 5). Pattern learning key to children's language development. ScienceDaily. Retrieved October 25, 2017 from http://www.sciencedaily.com/releases/2016/05/160505222938.htm
- ↑ 25.0 25.1 Basulto, D. (2013, July 24). Humans are the world’s best pattern-recognition machines, but for how long? Retrieved October 25, 2017 from http://bigthink.com/endless-innovation/humans-are-the-worlds-best-pattern-recognition-machines-but-for-how-long
- ↑ 26.0 26.1 26.2 26.3 26.4 26.5 Lehrer, Jonah. “The Neuroscience Of Music.” Wired, Conde Nast, 3 June 2017, www.wired.com/2011/01/the-neuroscience-of-music/.
- ↑ 27.0 27.1 Levitin, D. J. (2006). This is your brain on music: The science of a human obsession. Penguin.
- ↑ "This Is Your Brain On Music: How Our Brains Process Melodies That Pull On Our Heartstrings". Medical Daily. 2014-03-11. Retrieved 2023-08-16.
- ↑ 29.0 29.1 29.2 29.3 "Why Do the Songs from Your Past Evoke Such Vivid Memories? | Psychology Today". www.psychologytoday.com. Retrieved 2023-08-16.
- ↑ Agus, T. R., Thorpe, S. J., & Pressnitzer, D. (2010). Rapid formation of robust auditory memories: insights from noise. Neuron, 66(4), 610-618.
- ↑ "How Do Our Brains Process Music?". Smithsonian Magazine. Retrieved 2023-08-16.
- ↑ Greensfelder, Liese (2009-02-23). "Study Finds Brain Hub That Links Music, Memory and Emotion". UC Davis. Retrieved 2023-08-16.
External links