
Surface and Deep Processing: Cognitive Behaviors of Aha! Moments (Part II)

This article is a continuation of a research entry from the May 24, 2021 edition:

The spectrum of early research on insight ranges from observing changes in behavior and understanding the psychological patterns that influence learning (Bühler, 1907; Duncker & Lee, 1945; Wallas, 1926) to present-day work on insight as a unique form of learning. There are a number of theories of insight; at present, no one theory dominates interpretation (Kounios & Beeman, 2015; Sternberg, 1996). Despite their differences, these theories share two principles: (a) a sudden, conscious change in a person’s representation of a stimulus, situation, event, or problem (Davidson, 1995; Kaplan & Simon, 1990), and (b) the change occurs unexpectedly (Jung-Beeman et al., 2004; Kounios & Beeman, 2014; Metcalfe, 1986). Further, a strong correlation has been demonstrated between moments of insight and increased engagement in learning, a positive boost in mood, and a greater likelihood of further moments of insight (Kizilirmak, Da Silva, Imamoglu, & Richardson-Klavehn, 2016; Kounios & Beeman, 2014). Aha! moments have been shown to enhance memory performance (Ash, Jee, & Wiley, 2012; Auble, Franks, & Soraci, 1979; Danek, Fraps, von Müller, Grothe, & Öllinger, 2013; Dominowski & Buyer, 2000; Kizilirmak et al., 2016), grounded in insight’s demonstrated ability to “comprise associative novelty, schema, congruency, and intrinsic reward” (Kizilirmak et al., 2016, p. 1).

The observation and categorization of these moments can also be a source of valuable information for theorists and educators. Crocker and Algina (1986) demonstrate this operationalization, which works to “establish some rule of correspondence between the theoretical construct and observable behaviors that are legitimate indicators” (p. 4). The suddenness of Aha! moments makes the resulting behavioral changes (and subsequent changes in understanding) more dramatic and pronounced than the outcomes of gradual, deductively reasoned learning. Baker, Goldstein, and Heffernan (2010) studied this distinction by graphing the precise moment when a student’s understanding changes. Baker et al. (2010) diagram the shift from surface to deep processing by showing the “differences between gradual learning (such as strengthening of a memory association) and learning given to ‘eureka’ moments, where a knowledge component is understood suddenly” (p. 13).


Figure 5. A Single Student’s Performance on a Specific Knowledge Concept (Baker et al., 2010, p. 13)

Baker et al. explain that “entering a common multiple” (left, Figure 5) results in a “spiky” graph, indicating eureka learning, while “identifying the converted value in the problem statement of a scaling problem” (right, Figure 5) results in a relatively smooth graph, indicating more gradual learning (p. 14).
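Curves like those in Figure 5 track, at each practice opportunity, the estimated probability that the student knows the skill. One common way to produce such an estimate is a Bayesian Knowledge Tracing-style update, sketched below. This is an illustrative simplification, not Baker et al.'s actual model, and the parameter values are invented for demonstration: a run of failures followed by sustained success produces the sharp "eureka" jump, while the estimate barely moves beforehand.

```python
def bkt_trajectory(responses, p_init=0.1, p_learn=0.1, p_slip=0.1, p_guess=0.2):
    """Return the estimated P(skill known) after each observed response.

    responses: sequence of 1 (correct) or 0 (incorrect).
    Parameters are illustrative, not fitted to real student data.
    """
    p_known = p_init
    trajectory = []
    for correct in responses:
        if correct:
            # Correct answers are strong evidence of knowing (unless a guess).
            evidence = p_known * (1 - p_slip)
            posterior = evidence / (evidence + (1 - p_known) * p_guess)
        else:
            # Incorrect answers are evidence of not knowing (unless a slip).
            evidence = p_known * p_slip
            posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
        # The student may also learn on this opportunity, regardless of outcome.
        p_known = posterior + (1 - posterior) * p_learn
        trajectory.append(p_known)
    return trajectory

# "Eureka" pattern: repeated failure, then sudden sustained success.
eureka = bkt_trajectory([0, 0, 0, 1, 1, 1])
```

Plotting `eureka` against opportunity number yields a flat segment followed by a steep rise, the "spiky" shape Baker et al. associate with sudden understanding.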

Another important implication is that deep processing seems to create greater investment in learning, along with more positive outcomes for students. Dolmans, Loyens, Marcq, and Gijbels (2016) reviewed 21 studies of surface and deep processing strategies in problem-based learning and concluded that students using deep processing strategies exercise “the freedom to select their own resources to answer the learning issues, which gives them ownership over their learning” (p. 1097). This ownership suggests a strong link between intrinsic and autonomous motivation, resulting in stronger and longer-lasting outcomes. Dolmans et al. also report that surface learning strategies in problem-based learning carried a corresponding negative effect, stating:

a high perceived workload will more likely result in surface approaches to studying and might be detrimental for deep learning. Students who perceive the workload as high in their learning environment are more likely to display a lack of interest in their studies as well as exhaustion. This is particularly true for beginning [problem-based learning] students. (p. 1097)

“If we get the deep processing, we almost always get the surface, but with much richer and rewarding outcomes!”
— J. Littlejohn, Elementary School Math Instructor

The meta-analysis concluded by affirming that these positive deep processing outcomes do not come at the cost of the various surface processing benefits (p. 1097). Deep processing strategies have also been shown to boost long-term recall of information and broaden conceptual understanding. Jensen, McDaniel, Woodard, and Kummer (2014) report that learners who used deep processing strategies while preparing for high-level assessments (i.e., problem solving, analysis, and evaluation) performed better than students who did not, and that these students retained a “deep conceptual understanding of the material and better memory for the course information” (p. 307). Jensen et al. (2014) found that this higher level of cognitive processing and understanding also made transfer-appropriate processing more likely. This conclusion is supported by similar research on learners using deep processing strategies and motivated by deeper conceptual understanding (Carpenter, 2012; Fisher & Craik, 1977; McDaniel, Friedman, & Bourne, 1978; McDaniel, Thomas, Agarwal, McDermott, & Roediger, 2013). Students engaging in transfer-appropriate processing showed greater mastery and conceptual development than those relying on surface strategies, and the gains extended beyond the at-hand assessment: they appeared in current work and again in future assessments that drew on deep processing strategies. This developed processing strategy offers learners the greatest advantage in future outcomes. Studying Aha! moments in learning makes shifts from surface into deep processing easier to recognize and the transfer-appropriate advantages more common, offering teachers a tremendous perspective on how best to develop pedagogy.

Now You See Me, Now You Don't: The Hidden Truth In Our Faces!

Facial Expression and Emotion (and the hidden truth of our faces)

Paul Ekman (1993) examines cross-cultural research on facial expression, seeking to elucidate four key questions: (1) “What information does an expression typically convey? (2) Can there be emotion without facial expression? (3) Can there be a facial expression of emotion without emotion? (4) How do individuals differ in their facial expressions of emotion?” (p. 384). Ekman reaffirms the cross-cultural agreement on six universal categories of facial expression: anger, fear, disgust, sadness, surprise, and enjoyment. Ekman also makes clear that further research is necessary to address “the question of what the face can signal, not what information it typically does signal” (p. 387). Important to this research is Ekman’s assertion that “facial expressions are more likely to occur when someone sees or hears a dynamic (moving) event and the beginning of the event is marked rather than very slow and gradual” (p. 388). Ekman claims that sometimes the only expression of emotion a person exhibits may come from an area of the body other than the face, such as “the voice, posture, or other bodily action” (p. 388). Ekman goes further, claiming that an emotion can transpire without any facial or otherwise observable change in expression (p. 389). In situations where someone shows little or no observable change in expression, it may be that the emotional connection is weak, not present at all, or not entirely transferable to the person being observed; this could stem from cultural differences or any variety of other factors, and the intensity of the emotional reception appears somewhat correlated with the fidelity of the expression. It is important to note that change may indeed be occurring, but these changes may be sub-visible, taking place at the micro-muscular level and indicating autonomic nervous system activity detectable only through sophisticated measurement with electromyography (EMG) sensors. Tomkins (1963) reports that facial activity is always part of an emotion, even when its appearance is inhibited.


Ekman (1985/2009, 1992, 1993) reports that individuals can experience emotion without observable changes in facial expression. Sometimes a person will respond to a stimulus with a head nod, a clenched fist, a change in posture, or by walking toward or away from a situation. Even more intriguing is the change in expression that can be communicated through spoken words and audible vocalizations (i.e., moans, screams, or sighs) without any visible change in the face. Ekman (1993) shows that it is equally true that a person can fabricate an expression of emotion without actually feeling it (p. 390). Ekman states that “although false expressions are intended to mislead another person into thinking an emotion is felt when it is not, referential expressions are not intended to deceive” (p. 390). Referential expressions are most commonly used when referring to previous emotional experiences, not to experiences being felt in the moment. False emotional expressions other than referential expressions are generally understood to be examples of deception. Efforts to deceive can be harmful or beneficial. A lie can conceal an important truth that harms a person in some manner. However, a lie can also allow a comedian to deliver a punchline at the right moment to maximize its comic effect, or give someone the courage to push past their fears when facing the daunting task of asking someone else to be their Valentine. The key is that these expressions are fabricated without a specific emotional impetus.

Facial Action Coding System

Ekman and Friesen (1978/2002) published the Facial Action Coding System (FACS) manual, with a robust revision in 2002. This publication is a comprehensive guide for measuring facial expressions and behaviors. The manual includes the complete 527-page guide to facial expressions, a 197-page investigator’s guide, a score-checker protocol (included for the FACS test, published and sold separately), and a variety of example photos and videos. The manual is a comprehensive system for describing all observable facial movements; it breaks facial expressions down into individual components of muscle movement that represent changes in behavior and emotional response to a given stimulus. Subsequent publications have featured subtle expressions and microexpressions. Whether you can see them or not, there are a great many truths hidden in the expressions of our faces. Are you looking closely enough to find them?
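FACS describes an expression as a combination of numbered action units (AUs), each tied to a specific muscle movement, which coders can then compare against emotion prototypes. The sketch below illustrates that idea with a handful of well-documented AUs and deliberately simplified prototypes (e.g., the Duchenne smile of enjoyment, AU 6 + AU 12); the real FACS and EMFACS tables are far richer, and this is not the manual's actual scoring procedure.

```python
# A few well-known FACS action units (AU number -> muscle movement).
ACTION_UNITS = {
    1: "Inner Brow Raiser",
    2: "Outer Brow Raiser",
    4: "Brow Lowerer",
    5: "Upper Lid Raiser",
    6: "Cheek Raiser",
    9: "Nose Wrinkler",
    12: "Lip Corner Puller",
    15: "Lip Corner Depressor",
    26: "Jaw Drop",
}

# Simplified, illustrative emotion prototypes (real coding tables are richer).
PROTOTYPES = {
    "enjoyment": {6, 12},       # Duchenne smile: cheek raiser + lip corner puller
    "surprise": {1, 2, 5, 26},  # raised brows, widened eyes, dropped jaw
    "sadness": {1, 4, 15},      # inner brow raise, brow lower, lip corners down
}

def match_emotions(observed_aus):
    """Return the emotions whose full AU prototype appears in the observed set."""
    observed = set(observed_aus)
    return [name for name, proto in PROTOTYPES.items() if proto <= observed]
```

Note that `match_emotions({12})` returns nothing: a lip-corner pull without the cheek raiser (AU 6) is the classic non-Duchenne social smile, which is one way a coder might flag a fabricated rather than felt expression.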


Historical Instances of Measurement and Intervention in U.S. Schools (Part II)

This article is a continuation of a research entry from the July 30, 2019 edition:

The last two decades of the twentieth century brought greater influence from the federal government, along with greater potential for teachers to become more involved in decisions that might positively affect student outcomes. The Coleman Report, A Nation at Risk, and subsequent federal interventions in schools led to further reform and legislation, but not until Public Law 107-110, commonly referred to as No Child Left Behind (NCLB), did the federal government establish such a dominant presence and focused concern with measurable outcomes. In 2001, the law was introduced to Congress as “an act to close the achievement gap with accountability, flexibility, and choice, so that no child is left behind” (NCLB, 2002, p. 1). The legislation remained in effect until a bipartisan Congress stripped away its federal requirements in 2015. The law focused on standards-based reform, grounded in the belief that by setting high standards, making outcomes for students ambitious and clear, and creating and monitoring measurable goals, schools and the students within them would experience greater, more consistent achievement (NCLB, 2002). All of these improvements rested on the understanding that teachers would be a primary driver of positive change. In fact, the bill requires schools to attract, retain, and develop “highly qualified” teachers, a phrase used more than 60 times throughout the document (NCLB, 2002).

What was most promising about this legislation was the intent to open pathways for creative, innovative, and inspired teacher practices to promote learning outcomes. Thoughtful critics of the law such as Darling-Hammond (2007) acknowledge the potential in NCLB:

While recent studies have found that teacher quality is a critical influence on student achievement, teachers are the most inequitably distributed school resource. This first-time-ever recognition of students’ right to qualified teachers is historically significant. (p. 2)

Highly qualified teachers were the intended change-agents of the hoped-for successes in NCLB, with districts being charged with,

teacher mentoring from exemplary teachers, [...] induction and support for teachers, [...] incentives, including financial incentives, [...] innovative professional development programs, [...] tenure reform, merit-based pay programs, and testing of elementary school and secondary school teachers in the academic subjects that the teachers teach. (p. 1632)

However, where federal measures aimed to reverse negative trends and improve student outcomes, the emphasis on quality teachers and quality teaching still did not receive the attention necessary to dramatically increase student achievement and narrow the achievement gap in American schools. Generally speaking, critics have pointed out that the implementation of the law was in many respects counterproductive because it (a) did not adequately account for the accumulated effects of mismanaged or underfunded schools, (b) narrowed the curriculum, precisely the opposite of what sensitive and nimble teaching practices ought to do when adjusting to students in their particular situations, and (c) brought too much focus upon testing and other measurement mechanisms. The most explicit feature of the law was the unpopular standardized testing, along with tactics like “drill and kill” test preparation, which displaced creative attempts to nurture student learning and cognitive potential (Darling-Hammond, 2007; Dee & Jacob, 2011; Hanushek & Raymond, 2005; Ladd & Lauen, 2010; Rustique-Forrester, 2005; Sunderman, Tracey, Kim, & Orfield, 2004).

Instead of placing teachers at the center of the processes that inform learning outcomes, and placing greater emphasis on surface, deep, and transfer-appropriate thinking strategies, schools and the teachers within them succumbed to the symptoms of surface processing: the prioritization of short-term memorization and the hostile environment of overtaxing students with tests (Darling-Hammond, 2007). Rather than removing the barriers that obstruct learning potential in schools and opening more opportunities for creative thinking, more frequent Aha! experiences, and more holistic means of supporting the development of a child’s full potential, the American education system remained bound to its former industrial model of generalized goals accompanied by generalized processes.