Abstract

The characterization of students' cognitive biases is of central importance in the development of curriculum and teaching strategies that better support student learning in science. In particular, the identification of shortcut reasoning procedures (heuristics) used by students to reduce cognitive load can help us devise strategies to foster the development of more analytical ways of thinking. The central goal of this study was thus to investigate the reasoning heuristics used by undergraduate chemistry students when solving a traditional academic task (ranking chemical substances based on the relative value of a physical or chemical property). For this purpose, a mixed-methods research study was completed based on quantitative results collected using a ranking-task questionnaire and qualitative data gathered through semistructured interviews. Our results revealed that many study participants relied frequently on one or more of the following heuristics to make their decisions: recognition, representativeness, one-reason decision making, and arbitrary trend. These heuristics allowed students to generate answers in the absence of requisite knowledge; unfortunately, they often led students astray. Our results suggest the need to create more opportunities for college chemistry students to monitor their thinking, develop and apply analytical ways of reasoning, and evaluate the effectiveness of shortcut reasoning procedures in different contexts. © 2010 Wiley Periodicals, Inc. Sci Ed 94:963–984, 2010


INTRODUCTION

Research on students' ideas about natural phenomena often asks participants to build explanations or make predictions and decisions under uncertainty (White & Gunstone, 1992). Students' knowledge and reasoning skills in the area of interest tend to be limited or underdeveloped (Driver, Leach, Millar, & Scott, 1996), and they are forced to make inferences or generate hypotheses without certitude, in a short time and with restricted resources. These conditions are similar to those in which people have to make many decisions in everyday life, or in which students are asked to answer questions on academic tests. Under such circumstances, people often rely on whatever cognitive resources they have available to come up with a plausible answer (Gigerenzer & Selten, 2001). One can thus suspect that their thinking may be strongly influenced, guided, and constrained by both their intuitive knowledge about the system of interest and shortcut reasoning procedures that reduce information-processing load. The nature of the latter type of cognitive constraint is the central focus of this paper.

Recent work in developmental psychology and cognitive science seems to support the view that the human mind operates on the basis of a variety of cognitive constraints that guide learning and reasoning in a given area (Hatano & Inagaki, 2000; Keil, 1990; Pozo & Gómez-Crespo, 1998; Wellman & Gelman, 1998). Solving problems, generating explanations, or building inferences seems to involve the activation or instantiation of a spectrum of constraints, from domain-general to domain-specific, from implicit to explicit, from highly restrictive to somewhat skeletal, which may act in complementary or competitive ways (Gelman, 1990; Gelman & Williams, 1998; Sebastià, 1989). The goal is not necessarily to achieve conceptual coherence, but rather local explanatory coherence during a specific task in a determined context (Sloman, 1996). Constrained knowledge systems allow us to make reasonable, adaptive inferences about the world given limited time and knowledge (Gigerenzer & Selten, 2001). They often generate acceptable answers with little effort, but sometimes lead to severe and systematic biases and errors.

Guided by these results on human development, learning, and reasoning, we have argued that many of the alternative conceptions and systematic errors that students exhibit in science domains seem to be the result of constrained reasoning under uncertainty (Talanquer, 2006, 2008, 2009). In particular, we have proposed that, for purposes of analysis, it is convenient to think of the implicit constraints that guide and support, but also restrict, student thinking as belonging to two major types: (a) implicit assumptions about the properties and behavior of the relevant entities in the domain (e.g., physical objects move on continuous paths; animal parts have purposes) and (b) shortcut reasoning procedures (heuristics) used to build explanations, generate inferences, and make predictions and decisions with limited time and knowledge (e.g., when faced with two options, if only one of them is recognized or familiar, select the recognized option). The identification and characterization of these assumptions and heuristics is critical both for devising educational strategies that can help students develop scientific ways of thinking and for designing assessment tools that provide reliable evidence of student understanding (Talanquer, 2006).

Several educational researchers have highlighted the central importance of different types of cognitive constraints on student thinking in academic domains, referring to them in diverse ways: tacit or implicit presuppositions (Vosniadou, 1994), core hypothesis (Chi, 2008), background assumptions (Vosniadou, 2007), phenomenological primitives (diSessa, 1993), cognitive resources (Redish, 2004), core intuitions (Brown, 1993; Fischbein, 1987), or intuitive rules (Stavy & Tirosh, 2000). Some of them have argued that such constraints are organized in systems of knowledge (explanatory frameworks) that have some, but not necessarily all, of the characteristics of a theory (Vosniadou, 1994, 2007). Other authors support the view that our intuitive knowledge about the world is more fragmented, consisting of a large, diverse, and moderately organized collection of phenomenological ideas, commonly referred to as p-prims (diSessa, 1993). Despite differences in claims about coherence and level of integration of our intuitive knowledge, these different authors highlight the importance of clearly characterizing the basic cognitive elements or resources, such as implicit assumptions and heuristics, that guide and constrain reasoning and the construction of mental models in a given domain.

Research also suggests that different individuals can be expected to hold similar assumptions and reasoning heuristics in a given domain, although they may use them in different ways depending on prior knowledge and experiences, or the framing of a task (Siegler & Crowley, 1994; Talanquer, 2009). These factors introduce variability in the explanation and decision-making patterns of a given person or among individuals, who may select different cues to guide their reasoning about a phenomenon and generate different explanations, even though they may be guided by similar cognitive constraints. From this perspective, expertise in a certain field may not imply the substitution of heuristic-based reasoning by analytical reasoning, but rather an increased ability to select proper cues to make quick judgments and decisions in specific contexts (Evans, 2006, 2008). For example, central findings in the novice–expert research literature indicate that although novices and experts recognize similar surface features, experts can use their prior knowledge to build meaningful understandings regardless of the form of representation (Glaser, 1989; Tanaka & Taylor, 1991). Expertise may also imply more developed metacognitive skills to monitor and control when and how to make some assumptions or apply certain types of reasoning (Klaczynski, 2004).

Different authors have detected a variety of implicit assumptions that children and students seem to have about the properties and behavior of entities and processes in the physical and the biological worlds (Baillargeon, 2008; diSessa, 1993; Inagaki & Hatano, 2006; Taber, 1998, 2003; Talanquer, 2006, 2009). In particular, Taber and coworkers (Taber, 1998, 2003, 2008, 2009; Taber & Tan, 2007) have identified several primitive knowledge structures used by chemistry students to build explanations and make predictions about the stability and reactivity of chemical substances. However, only a few studies have paid attention to the more procedural cognitive constraints (heuristics) that help chemistry students reduce cognitive load when building inferences or making decisions in academic tasks (Taber & Bricheno, 2009). Thus, the central goal of this study was to expand our knowledge in this area by investigating the implicit shortcut reasoning procedures used by undergraduate general chemistry students when ranking chemical substances based on the relative value of a physical (solubility in water; melting and boiling points) or chemical (acidity, basicity) property. These are complex tasks for a novice chemist given that they require the identification and coordination of multiple cues for their successful completion. Better understanding of the implicit reasoning strategies used by students to make decisions when working on these types of tasks is central for the development of teaching strategies that can more effectively scaffold and assess student thinking.

HEURISTICS AND DUAL-PROCESS THEORIES

Research on human reasoning has shown that people make inferences and decisions in their daily lives by using processes that are relatively simple to apply (Gilovich, Griffin, & Kahneman, 2002). These shortcut reasoning procedures, also called heuristics, reduce the information-processing load (Roberts, 2004; Shah & Oppenheimer, 2008). They tend to be fast and frugal procedures in that they take little time to apply and use only a small amount of the available information (Todd & Gigerenzer, 2000). In general, heuristics simplify reasoning by reducing the amount of information to be processed or by providing implicit rules of thumb for how and where to look for information, when to stop the search, and what to do with the results. They are said to be ecologically rational because they exploit structures of information in the task environment. Although heuristics do not always lead to the correct solution, they usually provide reasonable, satisfactory answers. However, they also seem to be responsible for many systematic biases and errors in situations that require more elaborate, analytical processing.

Many heuristics are task-specific reasoning procedures because they can only be applied to certain types of tasks, but they are domain-general in the sense that they can be employed in a variety of domains (Roberts, 2004). For example, shortcut procedures such as the representativeness heuristic (i.e., judge things as being similar based on how closely they resemble each other on first appearances) are useful for categorization purposes in many domains, but not applicable in tasks where one has to select between two or more options. In the latter type of situation, a recognition heuristic (i.e., if one of the objects is recognized and the others are not, then infer that the recognized object has the higher value) may be more useful (Goldstein & Gigerenzer, 2002). However, some heuristics could be domain-specific as prior knowledge in a certain area may provide the basis for valuable shortcuts in reasoning. For example, the heuristic “when having stomach problems, first think of what you ate” frequently allows us to make successful diagnoses without a full medical analysis.

The pervasive use of heuristics in reasoning and decision making has been explained using dual-processing accounts of human cognition (Evans, 2006, 2008; Sloman, 1996). Dual-process theories propose that there are two distinct modes of thinking or processing, commonly labeled System 1 and System 2, which may run in series or in parallel in our mind. The first of these systems includes processes that are preconscious, implicit, automatic, fast, and effortless, whereas processes in System 2 are conscious, explicit, controlled, slow, and high effort. Processes in System 2 require access to a single, capacity-limited central working memory, whereas System 1 processes do not. Thus, System 2 processes may be expected to correlate with individual differences in cognitive capacity and to be disrupted by working memory load, whereas System 1 processes are independent of general intelligence and working memory capacity.

System 1 and System 2 modes of reasoning correspond, respectively, to our commonsense notions of intuitive and analytical thinking. The implicit cognitive constraints described in the preceding paragraphs may be ascribed to System 1, which produces quick, automatic, nonnormative responses, whereas System 2 processes are responsive to rational norms. In general, it is believed that heuristic-based responses control behavior unless analytical reasoning intervenes with more effortful reasoning (Evans, 2006). However, System 2's intervention to alter or inhibit the default responses generated by System 1 will depend on the extent to which the answer is judged unsatisfactory (i.e., there is a good reason to reject it). These types of interventions are more likely to occur when individuals have strong background knowledge in the domain, high cognitive ability, or a disposition to be critical or reflective (Evans, 2008).

Although most of the research on heuristic reasoning has been completed in nonacademic contexts, there is evidence, particularly from investigations in mathematics education (Fischbein, 1987; Gillard, Van Dooren, Schaeken, & Verschaffel, 2009; Leron & Hazzan, 2006; Stavy & Tirosh, 2000), that this mode of thinking is also common in the classroom setting. In particular, the intuitive rules theory of Stavy and Tirosh (2000) is relevant to our work as it describes systematic human errors and biases in comparison tasks (e.g., Which is larger … ? What is stronger … ?) in which heuristics of the form more A–more B or same A–same B are commonly used to make predictions or generate explanations. The central conclusion of this theory is that students' responses are often determined by irrelevant external features of the tasks, rather than by related concepts and ideas. Recently, Taber and coworkers (Taber, 2009; Taber & Bricheno, 2009; Taber & Tan, 2007) have explored the shortcut reasoning strategies used by chemistry students while working on different types of tasks. These authors suggest that students' reliance on heuristics may be linked to the complexity of many chemistry tasks that require the coordination of knowledge at many levels. However, our understanding in this area is limited.

METHODOLOGY

Goals and Research Questions

The central goal of this study was to investigate the intuitive heuristics used by general chemistry undergraduate students when solving ranking tasks. Our investigation was guided by the following research question:

  • What heuristics do undergraduate chemistry students use when asked to rank chemical substances based on the expected relative values of different physical (solubility in water; melting and boiling points) and chemical (acidity, basicity) properties?

Setting and Participants

This study was conducted in a public university in the southwestern United States. The student body, consisting of more than 29,000 undergraduate students, is approximately 53% female and 47% male, with close to 32% of students from Hispanic and other minority groups. The chemistry department at this institution offers a two-semester general chemistry sequence for science and engineering majors with an average enrollment of 1,500 students each semester, divided into several groups of up to 300 students each. Most of these students are freshmen or sophomores with an ethnic and gender makeup similar to that of the entire university.

All of the study participants were enrolled in the second semester of the general chemistry sequence (GCII) at the college level. Thus, they could be expected to recognize the submicroscopic factors that affect and determine the nature of chemical bonding and intermolecular forces in different types of substances and materials (e.g., ionic, covalent, metallic, aqueous solutions), as well as the impact of those factors on macroscopic physical properties. Two main phases of data collection, a ranking-task questionnaire and individual interviews, were completed. The questionnaire was answered by students enrolled in three GCII groups (n1 = 215, n2 = 114, n3 = 85: N = 414) offered in two different spring semesters with similar student populations. The interviews were conducted with 34 volunteers (25 females and 9 males) from other equivalent GCII groups that were not asked to complete the questionnaire; thus, these participants had not seen the questions prior to the interview.

Research Instruments

To answer the research question, a mixed-methods research study was completed based on quantitative results collected using a ranking-task questionnaire and qualitative data gathered through semistructured interviews (Greene, Caracelli, & Graham, 1989). The expectation was that this research design would help us increase the interpretability and meaningfulness of the findings, as well as the validity of the results. Both of the research instruments included six questions that asked students to arrange three or four chemical compounds, represented in symbolic form, in order of increasing acid or base strength, melting or boiling points, or solubility in water (see Table 1).

Table 1. Properties and Substances Included in the Questionnaire and the Interview

Question(a)   Property              Chemical Compounds       Questionnaire (% correct)   Interview (% correct)
1             Acid strength         H2S, HCl, HI             24.9% (N = 413)             20.6%
2             Boiling point         HCl, HI, NaI, NaCl        9.6% (N = 384)              5.9%
3             Base strength         Mg(OH)2, Ca(OH)2, KOH    36.0% (N = 408)             20.6%
4             Melting point         HCl, HBr, NaI, NaBr      12.3% (N = 391)              8.8%
5             Boiling point         PH3, H2S, H2Se           23.1% (N = 402)             20.6%
6             Solubility in water   MgO, BaO, NaCl, NaBr      5.4% (N = 405)             11.8%

Note: In each case, the substances have been arranged in the correct order. The percentages of students who proposed the correct rankings in both the questionnaire and the interviews (N = 34) are also included.
(a) Common prompt: Arrange the following chemical compounds in order of increasing…

Our selection of the sets of substances in Table 1 was not arbitrary; we purposely included common or familiar substances, such as NaCl and HCl, mixed with less familiar compounds exhibiting both explicit and implicit similarities and differences. For example, NaBr and NaCl have Na in their formulas (explicit) and are ionic compounds (implicit); the molecule of H2S has more hydrogen atoms than that of HCl (explicit), but both are covalent compounds (implicit). Our goal was to include cues that might trigger the application of heuristic reasoning based on our knowledge of common shortcut reasoning strategies in other areas (e.g., recognition, representativeness, more A–more B) and typical misconceptions in the field of chemistry. All of the selected substances have submicroscopic structures and macroscopic properties that a GCII student could derive based on concepts and ideas discussed in traditional general chemistry courses.

Data Collection

All of the interviews and the questionnaire were completed near the end of the academic semester, when the concepts and ideas needed for successful completion of the ranking tasks had already been introduced. For the questionnaire, data were collected in the actual classrooms according to the following procedure. Students were told that they were expected to individually answer six questions requiring them to arrange substances represented by their chemical formulas in order of increasing value of a given property. The common prompt for all of the questions (see note in Table 1) was presented on a PowerPoint slide and explained in general terms. Students were told they would have 60 seconds to answer each question and asked to record their responses on the answer sheet. The different questions were presented one by one on individual slides that included the specific question prompt and the chemical formulas of the substances to arrange. For each question, students had to decide where to place each substance in a row of empty boxes included in the answer sheet. Given that research shows that formal reasoning and mathematics skills have a strong impact on students' understanding and success in solving chemistry problems (Lewis & Lewis, 2007; Tai, Sadler, & Loehr, 2005), a visual aid was included in the answer sheet to indicate where to place those substances with the lowest and highest values. All of the students had visual access to a large periodic table present in the classroom.

Since the goal of our investigation was to explore students' heuristic reasoning, we decided to constrain the time allotted to answer each question. Research on people's cognitive biases, first interpretations, and intuitive reasoning often relies on results from tasks completed under speeded conditions (Kelemen & Rosset, 2009; Rosset, 2008); it is expected that time limits will impair or constrain the action of control and monitoring mechanisms associated with analytical reasoning. The 60-second limit used in our study was selected based on our analysis of the results from 12 semistructured interviews completed before the application of the questionnaire in the large classes. Our analysis of these interviews revealed that none of the participants took less than 30 seconds to generate an answer, even in those cases in which the interviewees explicitly stated that they were guessing. Most students who provided some explanation for their answers took between 15 and 30 seconds to identify a relevant feature on which to base their decision. Most interviewees who used this initially identified feature to build an answer completed the task in 30–60 seconds. Students who reconsidered their answers while working on the task took more than 90 seconds to finish. Thus, we chose 60 seconds as a reasonable time limit given the stated goals of our investigation.

All of the interviews in our study were conducted using a format similar to that of the questionnaire, although in this case participants were instructed to think out loud as they answered each of the questions. In addition, no explicit time limit was imposed on students' responses and probing questions were asked by the interviewer when needed to clarify an answer or better elicit student reasoning. Interviewees had access to a periodic table and recorded their actual rankings on an answer sheet. Each of the interviews was audio-recorded and later transcribed and summarized. For reference and privacy purposes, a label was assigned to each of the participants based on their position on an alphabetical list. For example, the third student on this list was assigned the label S3. This labeling system has been used throughout the presentation of our results.

Data Analysis

Questionnaire answers were analyzed to quantify the frequency of different substance arrangements (A, B, C; A, C, B; and so forth) proposed by the students in each of the six questions. This allowed us to determine the percentage of correct responses (see Table 1) and the most preferred rankings for each set of substances. Interview tapes and transcripts were reviewed and summarized on an individual question basis. The interviews were then reviewed a second time, and the summaries were edited and supplemented as necessary. The summaries were arranged according to the answer given for each question (specific ranking order), and the frequencies of specific answers were compared to those for the class questionnaires. All of the individual answer summaries were then analyzed using an iterative, nonlinear constant comparison method of analysis in which common ideas and reasoning strategies were identified within each answer type (Charmaz, 2006). During this analysis, particular attention was paid to things that students said or did to make their decisions when ranking substances in each set. Different patterns of reasoning were identified and coded within each answer group and then compared to the patterns of reasoning used by students who generated different answers for a given question.
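
As a concrete illustration of the frequency count described at the start of this paragraph, the short Python sketch below tallies proposed arrangements for one question and reports the percentage matching the correct order. It is a hypothetical sketch: the study does not report the software used for this step, and the response list shown is invented for demonstration.

    # Hypothetical sketch of the frequency count described above.
    from collections import Counter

    correct_order = ("MgO", "BaO", "NaCl", "NaBr")   # Question 6, increasing solubility

    # Each tuple is one student's proposed arrangement (invented examples).
    responses = [
        ("BaO", "MgO", "NaBr", "NaCl"),
        ("MgO", "BaO", "NaCl", "NaBr"),
        ("BaO", "MgO", "NaCl", "NaBr"),
        ("BaO", "MgO", "NaBr", "NaCl"),
    ]

    counts = Counter(responses)
    for ranking, n in counts.most_common():
        print(ranking, f"{100 * n / len(responses):.1f}%")
    print("Correct:", f"{100 * counts[correct_order] / len(responses):.1f}%")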

To ensure interrater reliability, eight student answers for each of the six questions were selected for a total of 48 question transcripts (24% of the total), with each of the interviewees represented at least once in the subsample. First, eight of the transcripts were individually coded by both authors and the results compared and discussed; the coding system was revised during this process. The authors then separately coded another set of transcripts, compared and discussed their results, and repeated the process until achieving more than 90% agreement in two consecutive sets of eight answers. The resulting coding system and methodology were then applied by the first author to analyze the totality of the responses.
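
The agreement criterion mentioned above can be read as simple percent agreement computed over each set of eight coded answers; a minimal sketch under that assumption (the study does not state the exact formula, and the code labels below are invented) is:

    # Hypothetical sketch: percent agreement between two coders over one set of answers.
    def percent_agreement(codes_a, codes_b):
        matches = sum(a == b for a, b in zip(codes_a, codes_b))
        return 100 * matches / len(codes_a)

    coder1 = ["recognition", "ORDM", "ORDM", "trend", "representativeness", "ORDM", "recognition", "ORDM"]
    coder2 = ["recognition", "ORDM", "ORDM", "ORDM", "representativeness", "ORDM", "recognition", "ORDM"]
    print(percent_agreement(coder1, coder2))   # 87.5 -> below the 90% threshold, so coding continues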

FINDINGS

To facilitate the presentation, interpretation, and discussion of our results, we will use students' answers to one of the six questions in our research instrument (solubility comparison; Question 6 in Table 1) as a primary reference to introduce our major claims, using the answers to the other ranking tasks as supporting evidence. Students' reasoning patterns during this particular task are illustrative of the major findings across the study. However, before analyzing the nature of the heuristics that we uncovered, it is useful to recognize the type of analytical reasoning that would be needed to provide a satisfactory answer to this reference question.

The Optimal Response

Research on judgment and decision making suggests that the search for optimal decisions may be modeled using the weighted additive rule (Shah & Oppenheimer, 2008). According to this model, analytical decision makers consider all of the available alternatives and cues for each alternative. For example, in completing the task “Arrange the following chemical compounds in order of increasing solubility in water: NaCl, NaBr, MgO, BaO,” the model suggests that finding the correct answer requires investing effort on five basic tasks:

  1. Identifying all relevant cues (e.g., recognizing that all of these compounds are ionic and that their physical properties are thus largely determined by the charge and size of their ions);
  2. Recalling and storing cue values (e.g., Na+, Cl−, and Br− are univalent ions; Mg2+, Ba2+, and O2− are divalent ions; ion size increases as we move down a family in the periodic table);
  3. Assessing the weight of each cue (e.g., for most ionic compounds consisting of monoatomic ions, solubility in water is larger when interactions among ions are weaker; interaction strength among ions is determined by Coulomb's law; the smaller the ion charge, the weaker the interaction; the larger the ion size, the weaker the interaction; ion charge differences have a larger impact than ion size differences on the relative values of physical properties);
  4. Integrating information for all alternatives (e.g., interactions among ions in NaCl and NaBr are weaker than those in MgO and BaO; interactions among ions in NaBr are weaker than those in NaCl; interactions among ions in BaO are weaker than those in MgO. Although O2− ions will react with water and form OH−, this fact is not likely to alter the rankings);
  5. Comparing different alternatives and making a decision (e.g., solubility in water likely increases from MgO to BaO to NaCl to NaBr).
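
The five steps above can be condensed into a worked sketch of the weighted additive rule. The numerical cue values and weights below are illustrative assumptions chosen only to mirror the qualitative argument (lower ion charge and larger ions imply weaker interactions and higher solubility, with charge weighted more heavily than size); they are not measured quantities.

    # Illustrative sketch of the weighted additive rule (Shah & Oppenheimer, 2008)
    # applied to Question 6. Cue values and weights are assumptions for demonstration.
    compounds = {
        "MgO":  {"charge": 2, "size": 1},   # divalent ions, smaller cation
        "BaO":  {"charge": 2, "size": 2},   # divalent ions, larger cation
        "NaCl": {"charge": 1, "size": 1},   # univalent ions, smaller anion
        "NaBr": {"charge": 1, "size": 2},   # univalent ions, larger anion
    }

    # Higher score = higher expected solubility: charge works against solubility and is
    # weighted more heavily than size (step 3 above); larger ions favor solubility.
    weights = {"charge": -2.0, "size": +0.5}

    def wadd_score(cues):
        return sum(weights[name] * value for name, value in cues.items())

    ranking = sorted(compounds, key=lambda c: wadd_score(compounds[c]))
    print(ranking)   # ['MgO', 'BaO', 'NaCl', 'NaBr'] -> increasing solubility in water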

This type of reasoning is likely to demand great cognitive effort from novice chemistry students. This may explain why only 5.4% of the surveyed students and 11.8% of the interviewees were able to generate a satisfactory answer (see Question 6 in Table 1), despite their course training in the application of this type of reasoning. Our results indicate that many interview participants relied on heuristics that facilitated the completion of the task, but frequently led them in the wrong direction. As suggested by the weighted additive model for optimal decision making (Shah & Oppenheimer, 2008), the types of reasoning strategies used by the students tended to involve one or several of the following effort-reduction methods: examining fewer cues, reducing the difficulty associated with retrieving cue values, simplifying the weighting principles for cues, integrating less information, or examining fewer alternatives.

Four Main Heuristics

Our analysis of the questionnaire and interview data revealed that many study participants relied frequently on one or more of the following heuristics to make their decisions during the ranking tasks: recognition, representativeness, one-reason decision making (ORDM), and arbitrary trend. The first three of these strategies can be considered domain-general as they have been identified in many other areas of human decision making (Gilovich et al., 2002), whereas the fourth heuristic is particular to chemistry. We will describe and discuss each of these strategies in detail in the following subsections. However, for reference purposes, we present in Table 2 a summary of the percentage of interviewees who used these different heuristics in completing each of the six tasks described in Table 1.

Table 2. Percentage of Interviewees (N = 34) Using a Given Heuristic at Least Once When Solving Each of the Six Ranking Tasks Described in Table 1

                                     Ranking Task
Heuristic                      1       2       3       4       5       6
Recognition                  79.4%   52.9%   35.3%   26.5%   14.7%   73.5%
Representativeness            8.8%   38.2%    5.9%   23.5%   17.6%   52.9%
One-reason decision making   50.0%   70.6%   67.6%   82.4%   73.5%   52.9%
Arbitrary trend              11.8%   14.7%   41.2%   11.8%   44.1%   29.4%

Recognition Heuristic

Consider the students' responses to the “solubility in water” ranking task as summarized in Figure 1, for both the questionnaire and the interview data. In this case, the two most frequently proposed arrangements are characterized by the selection of NaCl as the most soluble substance; in fact, more than 73% of the students who answered the questionnaire or completed the interview made this choice. The following interview excerpts illustrate the type of reasoning used by interviewees to justify their selection of NaCl as the most soluble compound:

Ok, um, NaCl is soluble, um, NaBr… hmm. NaCl and NaBr. I think NaCl is more soluble just because of experience with it. (S20)

… I'm going to put NaCl first, just because it's salt. (S28)

Um, I know NaCl is soluble, it is just from knowing … . So, I would think NaCl is the most soluble. (S31)

Figure 1. Most commonly proposed ranking orders for the solubility of chemical compounds in water and percentage of students who proposed them. The figure includes results from both the questionnaire (N = 405) and the interviews (N = 34).

Students' recognition of, or familiarity with, NaCl as a soluble substance played a central role in their decision making in this particular task, and, as we will illustrate later, it served as an anchor from which many of the subsequent judgments and decisions were made. Our results suggest that students' selection of NaCl as the most soluble compound was likely based on a recognition heuristic of the form: “If one of several objects is recognized and the others are not, then infer that the recognized object has the higher value with respect to the criterion” (Goldstein & Gigerenzer, 2002). In general, this heuristic tends to be applied using recognition as the single decision cue, particularly when there is a perceived strong correlation between recognizing an object or event and having higher values on a given criterion (e.g., solubility in water). This correlation is established and reinforced by prior knowledge and experience. The recognition heuristic is closely related to other heuristics, such as availability and familiarity (Gilovich et al., 2002), that rely on information that is recognized, familiar, or easily processed to make decisions or build inferences. These types of effort-reduction strategies help diminish the number of cues that are considered when making a decision and lessen the difficulty associated with retrieving, storing, and weighting cue values.
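
Written as a decision rule, the recognition heuristic amounts to something like the following sketch, in which recognition is treated as a simple yes/no judgment the student brings to the task (an assumption made only for illustration):

    # Minimal sketch of the recognition heuristic (Goldstein & Gigerenzer, 2002):
    # if exactly one option is recognized, infer that it has the higher criterion value.
    def recognition_choice(options, recognized):
        known = [o for o in options if o in recognized]
        if len(known) == 1:
            return known[0]      # single recognized object -> assign it the highest value
        return None              # heuristic does not apply; fall back on other strategies

    # Hypothetical use for Question 6, where NaCl is the familiar "table salt".
    print(recognition_choice(["MgO", "BaO", "NaCl", "NaBr"], recognized={"NaCl"}))   # NaCl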

The use of the recognition heuristic helps explain the large frequency of incorrect responses in Questions 1, 2, and 6 in which common compounds (NaCl and HCl) were assigned the highest value in the ranking sequences. In fact, our results indicate that in questions in which there was a perceived strong correlation between the recognized substance and the ranked property (e.g., NaCl and solubility in Question 6, HCl and acid strength in Question 1), the recognition heuristic was the primary strategy used by most interviewees to start building their rankings. So, in Question 1, for example, 23 of the 34 interview participants (67.6%) recognized HCl as a strong acid and assumed that it was the compound with the highest acid strength. Consider these typical justifications:

So I know from experience, I guess from lab, that HCl is potent. So I guess that is the most… I recognize that as one of the strong ones. (S24)

Ok, I know hydrochloric acid is a strong acid. It's a very strong acid, so I will put that there. (S33)

Given these results, one may suspect that a large fraction of the students who placed HCl as the strongest acid in the questionnaire (59.6% of N = 413) relied on the recognition heuristic to generate their answer. Moreover, results from the interviews, in which more than 79% of all the participants used the recognition heuristic in one form or another to answer Question 1, suggest that even some students who chose to rank H2S (62.2% questionnaire; 52.9% interviews) or HCl (6.3% questionnaire; 5.9% interviews) as the weakest acid in the series may have used recognition and familiarity (or lack thereof) as a reasoning strategy:

S34: Well, just because I know that HCl is a strong acid. I recognize that as a strong acid, and I guess I've never even used H2S before.

I: Ok, so H2S is more of a default lowest.

S34: Yeah, not like knowing.

S4: I'd start with what I know is not really a strong acid, so I'd start with HCl because I don't think it is a strong acid.

I: So you don't think HCl is a strong acid. What makes you think that?

S4: Um, from lab, I know I've dropped that, and I know it didn't burn.

Although less frequently, the recognition heuristic was also used by a significant proportion of the interviewees in those questions in which familiar compounds were included but the correlation between recognition and criterion was not strong. For example, in Question 2 both HCl and NaCl have to be ranked according to their boiling points (in this case, NaCl is in fact the compound with the highest boiling temperature). During the interviews, recognition of the substance, or of the substance's known properties (e.g., high solubility, high acid strength), was commonly mentioned as a justification for the proposed ranking:

Ok, I've seen sodium chloride before, so I don't know what that [NaI] would be at room temperature. I would say the sodium chloride would have the highest boiling point, just because I've seen it before … And HCl I said was the strongest acid [in Question 1], so I would put that as the lowest boiling point. (S24)

That's just salt (pause). I'm gonna do the sodium chloride first [lowest] … cause I guess you are supposed to put it in water when you are cooking to make it boil faster. So it decreases the point at which, like you also put it on ice. (S8)

It is difficult to establish to what extent this type of reasoning influenced students' answers in Task 2 of the questionnaire, given that NaCl was actually the substance with the highest boiling point and that additional results from our study suggest that other heuristics may have also played a role in students' decisions. However, we should point out that NaCl (40.4% of N = 384) and HCl (21.9%) were the two substances most commonly selected as having the highest boiling point in the questionnaire answers, far above the 9.6% of correct rankings. Similar results were found for Question 4, in which HCl was included in the task. For this question, we found 20.0% (of N = 391) of the questionnaire participants and 32.4% of the interviewees selecting HCl as the substance with the highest melting point, based on justifications, in this latter case, of the type “… I always hear more about HCl and I do the bromide, so I will put the HCl at the top” (S27).

Research has shown that the extent to which the recognition heuristic is used depends heavily on task features (Newell & Shanks, 2004); the results of our study support this finding. As shown in Table 2 for Questions 1 and 6, the stronger the correlation between recognition and criterion, the heavier the reliance on this heuristic to make decisions. However, our study suggests that the use of this heuristic in our ranking tasks extended beyond the recognition of a particular substance; some students also used this strategy based on the recognition of the individual elements present in a given compound. Consider these justifications for selecting NaCl as the most soluble compound in Task 6 and choosing Ca(OH)2 as a stronger base in Task 3:

I know that salt is soluble in water, so I am going to go with the sodium as the last two, the ones that are the most soluble in water … . Um, I know that chlorine readily dissolves in our [pool] water as well because I've watched it, so I am going to say that NaCl is most soluble. (S11)

I think that calcium is going to be the strongest base now, and I only think that because it is in milk, and I know that milk is very basic. (S11)

Recognition was often used by our interviewees as the first step in generating their rankings and as a fallback strategy when other attempts to differentiate substances failed. In most cases, it was used to place substances at the top, and occasionally at the bottom, of the rankings, which created an anchor for subsequent decisions.

Representativeness Heuristic

If we go back to the analysis of students' answers to Question 6 as summarized in Figure 1, we can notice that the two most frequent rankings place NaBr as the next most soluble substance after NaCl. Let us now analyze common student justifications for this placement:

I guess I would just put, I mean, without knowing any of the rules, I would put NaBr lower than NaCl because it is related to sodium. (S28)

Um, NaBr? Well because I kind of just thought since those are very similar, NaCl just from experience I think that that would be more soluble so NaBr would be the next thing that is soluble. (S20)

… so I am going to say that NaCl is most soluble and then since I stuck with the sodium definitely be really soluble, I'm going to say that bromine goes next. (S11)

These excerpts illustrate the application of another common reasoning strategy used by our study participants that can be identified as the representativeness heuristic. This effort-reduction strategy is based on assuming commonalities between objects of similar appearance (Gilovich et al., 2002), and it helps people lower the number of cues to consider in making decisions (e.g., ignore ion size and ion charge), reduce the difficulty associated with retrieving cue values (e.g., avoid comparing ion sizes for Cl− and Br−), and integrate less information (e.g., avoid comparing NaBr to MgO and BaO).

As was the case for the recognition heuristic, the use of the representativeness heuristic was task dependent. This strategy was most commonly used in tasks involving substances with common composition or surface structural features as revealed by their chemical formula (Questions 2, 4, and 6 in Table 1). The heuristic helped interviewees arrange chemical compounds in subgroups (e.g., NaCl, and NaI, or NaCl and HCl, depending on whether the presence of Na or Cl was considered most relevant) and make decisions based on comparisons between members within a group or across groups. However, the strategy was always used in conjunction with other methods, frequently recognition, that allowed students to first rank one of the substances and then use it as an anchor from which to base other decisions.

To illustrate these ideas, let us analyze the type of heuristic reasoning that led some interviewees to propose the arrangement: HI, HCl, NaI, NaCl for the substances in Question 2 (increasing boiling point), which was the most frequent ranking for this task (22.4% of N = 384 questionnaires; 26.5% interview). To start the task, the intuitive student could use recognition, or other strategies, as the basis for selecting NaCl as the substance with the highest boiling point. Then, representativeness would be applied to explicitly or implicitly group compounds into two sets, NaCl and NaI, and HI and HCl. Comparisons within groups would help place NaI next to NaCl, whereas comparison across groups would lead students to assume that given that NaCl had a higher boiling point than NaI, the boiling temperature of the compound most similar to NaCl, HCl, would be higher than that of HI.
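
A rough sketch of this anchor-then-group procedure is given below. The similarity measure (the number of element symbols shared between two formulas) is our assumption about how students judged surface resemblance; it is not taken from the study data.

    # Illustrative sketch of the anchor-then-group reasoning for Question 2
    # (increasing boiling point). Similarity = shared element symbols in the formulas.
    import re

    def elements(formula):
        return set(re.findall(r"[A-Z][a-z]?", formula))

    def similarity(a, b):
        return len(elements(a) & elements(b))

    options = ["HCl", "HI", "NaI", "NaCl"]
    anchor = "NaCl"                                                 # placed at the top by recognition

    sodium_group = [o for o in options if "Na" in elements(o)]      # NaI, NaCl
    other_group  = [o for o in options if "Na" not in elements(o)]  # HCl, HI

    # Within each group, the compound most similar to the anchor is ranked higher.
    sodium_group.sort(key=lambda o: similarity(o, anchor))          # ['NaI', 'NaCl']
    other_group.sort(key=lambda o: similarity(o, anchor))           # ['HI', 'HCl']

    print(other_group + sodium_group)                               # ['HI', 'HCl', 'NaI', 'NaCl']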

Similar types of reasoning were displayed by interview participants in answering Question 4. Consider, for example, this justification for proposing the ranking NaI, NaBr, HBr, HCl as the arrangement of these substances in order of increasing melting points:

Um, I wanted to put HCl first and then I just thought I would keep it the same by putting the hydrogens before the sodiums. (S6)

Given that the final outcome of this line of reasoning depends both on which substance is used as an anchor and on which groups are formed, and considering the arbitrariness of many of our interviewees' choices, it is not surprising that questionnaire responses to Questions 2 and 4 were spread among many options (more than 10 different major rankings, none of them proposed by more than 30% of the questionnaire or interview participants).

One-Reason Decision Making Heuristic

The analysis of the students' rankings for the solubility of the compounds in Question 6 (see Figure 1) reveals that the arrangement: BaO, MgO, NaBr, NaCl was clearly preferred by a majority of our study participants. After our previous discussion, we may have a better understanding of why NaCl and NaBr were so frequently placed in the top positions. However, what type of reasoning could lead students to think that MgO is more soluble than BaO? Consider the following interview excerpts:

… After that, magnesium oxide and barium oxide. Hum. They are over here [on the periodic table]. For solubility in water, I'll say that magnesium will be the next because it is lighter and I think that it is closest to oxygen and hydrogen, which is what water's made of so it might be more soluble, it might fit in better with water… so, it has properties more similar to water and is more likely to dissolve.. (S11)

… Then, MgO and BaO. Um, I think MgO would be more soluble because it's smaller atoms and I don't know if that has anything to do with solubility but, um … . It would just be able to bond with hydrogen … like with the H2O molecules more. And I think BaO would just, I don't know, be more OK if it's by itself. Like, um, I don't know. I can't think of a different reason really besides Mg and Ba are pretty far apart on the group, so that's a big difference in size, and size would affect whether a water molecule could dissociate it. (S20)

In these two cases, students made their decision based on the identification of a single differentiating characteristic between substances that allowed them to somehow predict different behaviors with respect to the relevant criterion (in this case, solubility in water). For example, in the first excerpt the student seemed to pay attention to various features (e.g., mass, position in the periodic table) during the analysis. However, the final decision was ultimately based on the perceived similarity between magnesium and water, rather than on the analysis of the effect of each individual cue. In the second example, the student used atomic size as the sole factor that differentiates how the compared substances will interact with water. In general, size, weight, or electronegativity of the Mg and Ba atoms were frequently cited by the interviewees as single differentiating characteristics during this task, with a strong tendency to associate low solubility with large atomic size and weight, as well as low electronegativity.

These results illustrate the application of the heuristic that we found to be the most pervasive (see Table 2) in students' reasoning across tasks: ORDM, an effort-reduction strategy that helps people select between options based on the first cue found that favors one alternative over the others (Todd & Gigerenzer, 2000). This heuristic reduces the number of cues and alternatives that need to be considered in making a decision. The ORDM heuristic is often used in conjunction with simple stopping rules that help determine when to stop the search and how to make a final decision (Todd & Gigerenzer, 2000). In our study, most of the interviewees who used this heuristic stopped the search when they identified a cue that they could plausibly associate with the relevant ranking property (e.g., solubility, boiling point), based on either prior knowledge, experience, or intuition. If the cue that they selected only helped them differentiate between two substances in the task, then they would start the search again, find a different cue, and use it to rank the remaining compounds. In some cases, students referred to different factors during their analysis but did not consider their individual effects to make a decision. Instead, they thought of these factors as indicators of the existence of a single differentiating feature that could be used to make the decision (e.g., stability, reactivity). These cases were also coded as ORDM in our study.
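
Schematically, the ORDM procedure and its stopping rule look something like the sketch below. The two cues (formula length, then the mass of the metal atom) are hypothetical stand-ins for whatever feature a student happens to retrieve first; they echo the excerpts above and are not claims about correct chemistry.

    # Minimal sketch of one-reason decision making with a stopping rule
    # (Todd & Gigerenzer, 2000): cues are tried in the order they come to mind and
    # the search stops at the first cue that discriminates between the options.
    ATOMIC_MASS = {"Na": 23, "Mg": 24, "Ba": 137}

    def one_reason_choice(option_a, option_b, cues):
        for cue in cues:
            a_val, b_val = cue(option_a), cue(option_b)
            if a_val != b_val:                     # first discriminating cue decides
                return option_a if a_val > b_val else option_b
        return None                                # nothing discriminates; search for new cues

    cues = [
        lambda f: -len(f),                                                # assumed cue: simpler formula
        lambda f: -max(m for sym, m in ATOMIC_MASS.items() if sym in f),  # lighter metal -> more soluble
    ]
    # "Which is more soluble, MgO or BaO?": formula lengths tie, so the mass cue decides.
    print(one_reason_choice("MgO", "BaO", cues))   # MgO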

Interview participants used a variety of explicit and implicit features as single cues to make their decisions. Common explicit features were the number and types of atoms in the chemical formula, whereas atomic size, mass, and electronegativity were the most frequently referenced implicit cues. Consider the following transcript excerpt from an interviewee's justification for his proposed ranking PH3, H2S, H2Se for increasing boiling points in Question 5:

I: What is your first thought about this question?

S7: Um, whether the amount of hydrogens have an effect on boiling point.

I: Ok

S7: I am going to say that they do, so that would make the PH3 boil first, and then I am going to use the weight of the S and the Se to determine which one will boil first. So, the Se is heavier, so I am going to say that boils last. So then, PH3, H2S, and H2Se.

I: Ok, so you listed PH3 first.

S7: Yeah, it will boil first.

I: Ok, why?

S7: Just because it has the most hydrogens so I figured, I don't know. I am just going with the more hydrogen will boil easier.

This excerpt illustrates how some students selected and used different explicit and implicit cues that were applied independently of each other, and frequently in idiosyncratic ways, to rank substances in a given task. It also exemplifies what we found to be a common pattern for most interviewees: their reliance on atomic properties, either explicit or implicit, to make judgments about a compound's properties, rather than considering molecular properties such as geometry, polarity, or polarizability (e.g., only 3 of 34 interviewees used these latter types of cues in answering Question 5). Although this bias toward atomic versus molecular properties may be a consequence of the use of chemical rather than structural formulas in our questionnaire, this result implies that most students failed to consider structural features in the absence of explicit prompts.

The frequency of use of different types of cues was task dependent. For example, Questions 1, 3, and 5, which included compounds that differed in the number of common types of atoms or ions, triggered ORDM heuristic responses based on this explicit feature:

… . I think that [H2S] has more H's in it so it would be a little bit stronger than the HI. (S4 in Question 1)

… there's two OH's in the Mg and Ca, so that just makes me think it is a stronger base. (S5 in Question 3)

On the other hand, the application of the ORDM heuristic in Questions 2, 4, and 6 tended to rely mostly on implicit features (e.g., atomic size, mass, electronegativity, acid strength):

Because I just assumed that the strength of the acids … I would just say that the acids have a higher boiling point than a base or a non-acid. (S24 in Question 2)

… and then they are both attached to sodium, this is a lighter element, so it is probably going to take less to melt it. (S9 in Question 4)

As can be seen in all of these examples, interview participants who used the ORDM heuristic to make decisions often applied other heuristics, such as more A–more B or same A–same B (Stavy & Tirosh, 2000), to build associations between the selected cue and the ranked property. Students tended to associate large mass and size with higher resistance to change, high electronegativity with strong polarity, and high acid or base strength with extreme physical properties (e.g., very high or very low boiling points and melting points).

It should be noted that expert chemists actually rely on a variety of associative rules to make plausible predictions; these associations link the structural features of substances to their physical and chemical properties. For example, the more polar or polarizable molecules are, the higher the boiling and melting points of the substance; the larger the ion charge in an ionic compound, the higher its expected melting point. What our study revealed is that although interviewed students also used associative rules as a basic strategy to rank chemical substances, they often either built wrong associations or used them incorrectly. This claim is based on our quantitative analysis of the number of correct (the selected cue was actually related to the ranked property and it was used in a proper way), appropriate (the selected cue was actually related to the ranked property but was used incorrectly), or incorrect (the selected cue was not related to the ranked property) associations made by the interviewees. The results of this analysis are depicted in Figure 2, where we show the total number of correct, appropriate, and incorrect associations built by the interviewees in each of the ranking tasks.

Figure 2. Number of correct, appropriate, and incorrect associations made by the interviewed students in each of the ranking tasks.

Overall, of the 194 associations built by the 34 interviewed students across all ranking tasks, only 61 (31.4%) were correct, with 16 (8.2%) appropriate identifications and 117 (60.3%) incorrect uses. Even more troubling is the fact that of the 61 correct usages, 42 (67.7%) were attributed to only five students; 16 of the 34 interviewees (47.0%) built associations that were incorrect on all occasions. Associations were more commonly used in Questions 2–5, and it is also in these instances that more incorrect relationships were made. Question 5 was particularly problematic in this respect, likely due to students' lack of familiarity with the substances that they were asked to rank.

Arbitrary Trend Heuristic

Let us consider again the most common ranking arrangement for the solubility in water of the substances in Question 6: BaO, MgO, NaBr, NaCl, and analyze these alternative justifications for the placement of BaO at the bottom of the sequence:

So, I think it is BaO because it is lower on the periodic table than Mg because it's higher than NaBr because it's a little bit less stronger than NaCl. (S2)

Um, chlorine is above bromine on the Periodic Table, and I think that sodium chloride is really soluble and I think it is more soluble than NaBr, so … Cl is above Br… so I think that because magnesium is above barium it will also be more soluble. (S29)

These particular students based their ranking decisions on the relative position on the periodic table of the different atoms that comprised the relevant substances, together with the assumption of some sort of implicit periodic trend. The use of the relative location of individual atoms on the periodic table, implying the existence of one or more arbitrary “periodic trends” that were neither stated nor justified, was at the base of the ranking decisions of a few interview participants in our study. This strategy, labeled as the arbitrary trend heuristic, allowed students to reduce the number of cues they needed to examine and the effort of retrieving and weighting cue values.

Periodic trends are commonly used in chemistry to make predictions about the properties of chemical substances, but some interview participants seemed to overgeneralize their existence. Although the arbitrary trend heuristic was the least used of the major effort-reduction strategies identified in our study (see Table 2), some interviewees used it extensively. Of a total of 52 occurrences across all of the ranking tasks, 35 (67.3%) were associated with only 9 participants (26.5%). As was the case for the representativeness heuristic, the arbitrary trend heuristic was most often used in conjunction with other strategies, such as recognition and ORDM. Its use was also task dependent and occurred most frequently in Questions 3 and 5 in which the main explicit factor that differentiated all of the substances from each other was the presence of a single type of atom (e.g., PH3, H2S, H2Se). This question format seemed to reinforce the idea that differences could be explained based on the properties of individual atoms.
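
The following sketch illustrates how this heuristic operates and why it spreads answers out: the same compounds, ranked by the periodic-table position of the one differentiating atom, yield different orders depending on the direction in which a student assumes the unstated trend to run. The period and group numbers are standard; treating a single atom per compound as "the" differentiating feature is the assumption the heuristic rests on.

    # Illustrative sketch of the arbitrary trend heuristic for Question 5 (PH3, H2S, H2Se):
    # rank compounds by the periodic-table position of the differentiating atom, in
    # whichever direction the student assumes the unstated "trend" to run.
    PERIOD = {"P": 3, "S": 3, "Se": 4}
    GROUP  = {"P": 15, "S": 16, "Se": 16}
    differentiating_atom = {"PH3": "P", "H2S": "S", "H2Se": "Se"}

    def arbitrary_trend_ranking(compounds, direction, reverse=False):
        return sorted(compounds, key=lambda c: direction(differentiating_atom[c]), reverse=reverse)

    options = ["PH3", "H2S", "H2Se"]
    # Two assumed directions, two different "trend-based" answers:
    print(arbitrary_trend_ranking(options, direction=lambda a: (GROUP[a], PERIOD[a])))
    # ['PH3', 'H2S', 'H2Se']  (property assumed to increase to the right and down)
    print(arbitrary_trend_ranking(options, direction=lambda a: (PERIOD[a], GROUP[a]), reverse=True))
    # ['H2Se', 'H2S', 'PH3']  (property assumed to decrease going down the table)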

Students who used this heuristic often recognized the arbitrariness of their decision and acknowledged the possibility of alternative rankings, but without doubting the existence of an underlying trend that was never justified:

Um, I have a feeling it makes sense only in my head [pauses, looks at periodic table] Ok, if it goes in the direction of the periodic table, sulfur … if it goes in this direction, sulfur, phosphorus, and then selenium. So, I would say, it is either one or the other. (S24 justifying the sequence H2Se, PH3, H2S for increasing boiling points)

It's a guess … because if it is increasing in the left direction and increasing as you go up then it would be that [answer], but if it increases as you go down … it would be a different order. (S3 justifying the sequence H2S, H2Se, PH3 for increasing boiling points)

Because of the arbitrary nature of the type and directionality of the “trend” that was invoked when using this reasoning strategy (e.g., up and to the left or to the right and down, as in the preceding excerpts), its application introduced a large variability in different students' answers, even though they were using the same underlying effort-reduction method. For example, 15 of the 34 students interviewed (44.1%) used this heuristic in some form or another to answer Question 5. However, none of the five different types of rankings proposed by these students was selected by more than 30% of them. This wide distribution of responses was very similar to that corresponding to the larger sample of students (N = 402) who completed this task in the questionnaire. Our results suggest that the use of the arbitrary trend heuristic might have been partly responsible for this broad spectrum of answers.

CONCLUSIONS AND IMPLICATIONS

Research on students' commonsense or intuitive reasoning in the sciences in the past 30 years has been strongly influenced by the “implicit theories” (Vosniadou, 1994) and the “knowledge-in-pieces” (diSessa, 1993) frameworks. Despite differences in claims about coherence, transience, and stability of students' ideas, both research programs assume that student intuitive thinking is guided and constrained by preconscious cognitive structures identified as ontological and epistemological presuppositions at the base of “implicit theories” or as p-prims or cognitive resources in the “knowledge-in-pieces” approach. In either case, intuitive thinking is assumed to draw from preconscious conceptual structures and cognitive biases such as those characteristic of System 1 in dual-process theories (Taber, 2008). The heuristics identified in our study are typical examples of cognitive constraints or resources that can be expected to guide but also constrain student thinking when dealing with conventional academic tasks under conditions of limited time and knowledge.

A large proportion of the participants interviewed in the present study relied on heuristic strategies, rather than analytical thinking based on sound chemical principles, to make ranking decisions throughout the different tasks included in our questionnaire. These heuristics allowed them to reduce cognitive effort and generate answers in the absence of requisite knowledge; unfortunately, these cognitive constraints often led students astray. The overall ability of our interviewees to generate correct answers was low, and the rankings that they proposed resembled those of the larger group of general chemistry students who answered our questionnaire, despite the fact that interviewees were not constrained by time when generating their responses. These results suggest that heuristic-based reasoning may be the default strategy for many students in the absence of requisite knowledge or when confronted with low-stakes assessments such as the ones used in our study. Although it is possible that poor mathematics or formal thinking skills caused some students to misinterpret the task and thus provide incorrect answers (e.g., ranking substances from high to low values instead of from low to high), results from the interviews suggest that the proportion of students making these types of mistakes was small.

The major shortcut reasoning strategies used by our participants included the following (a schematic illustration of how such strategies operate is sketched after the list):

  • Recognition: The decision is based on the recognition of an object, which is assumed to have the highest value with respect to the relevant criterion (Goldstein & Gigerenzer, 2002).

  • Representativeness: The decision is made assuming commonalities between objects of similar appearance (Gilovich et al., 2002).

  • One-reason decision making: The decision is based on the first cue that favors one alternative over the others (Todd & Gigerenzer, 2000).

  • Arbitrary trend: The decision is based on an assumed trend tied to the relative positions of the constituent atoms in the periodic table, without further justification (a domain-specific heuristic).
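To make the operation of these effort-reduction strategies more concrete, the short Python sketch below (an added illustration, not part of the study's instruments or analysis) models recognition and one-reason decision making as minimal decision procedures; the substances, cues, and cue values used are hypothetical placeholders.

# Hypothetical illustration of two of the heuristics listed above (not study data).
def recognition_pick(substances, recognized):
    """Recognition: pick the single recognized substance as the one with the highest value."""
    hits = [s for s in substances if s in recognized]
    return hits[0] if len(hits) == 1 else None  # applies only when exactly one option is recognized

def one_reason_pick(a, b, cues):
    """One-reason decision making: decide on the first cue that discriminates between a and b."""
    for cue in cues:
        if cue(a) != cue(b):
            return a if cue(a) > cue(b) else b
    return None  # no cue discriminates; fall back to another strategy

# Example use: which of two binary hydrides "should" have the higher boiling point?
molar_mass = {"H2S": 34.1, "H2Se": 81.0}
cues = [lambda s: molar_mass[s]]  # a single surface cue (heavier molecule = assumed higher value)
print(one_reason_pick("H2S", "H2Se", cues))                # -> H2Se
print(recognition_pick(["NaCl", "KBr", "LiI"], {"NaCl"}))  # -> NaCl, the only familiar substance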

Of these heuristics, the domain-general strategies of recognition and ORDM were the most commonly used by our interviewees (see Table 2), although their application depended on specific task features. For example, the use of recognition was triggered by questions that included common substances, such as NaCl, with known high values in the ranking criterion (e.g., solubility). ORDM was pervasively used across tasks, although frequently based on incorrect associations. In general, all of the heuristics provided shortcuts for students to make decisions by examining fewer cues, reducing the effort of retrieving cue values or remembering weighting principles, integrating less information, and simplifying comparisons.

Some of the heuristics identified in our work resemble reasoning strategies described by other researchers working with students in different fields. For example, the tendency of novice science students to analyze problems or phenomena in a reductionist manner, based on a single factor (i.e., ORDM) without taking into account the individual effect of all relevant variables that may influence properties or behavior, has been identified by many authors as characteristic of commonsense reasoning in a variety of topics in both physics (Driver et al., 1996; Viennot, 2001) and chemistry (Stains & Talanquer, 2007; Taber & Tan, 2007; Talanquer, 2006). Similarly, students' reliance on surface features to make decisions, together with their tendency to interpret chemical representations literally, has been well documented in the research literature (Gilbert & Treagust, 2009). Stavy and Tirosh (2000) have suggested that the application of associative heuristic rules of the type more A–more B or same A–same B is at the base of many of the wrong answers and alternative conceptions generated by students in the sciences, and the use of heuristics has been highlighted recently as a core feature of intuitive thinking in mathematics (Gillard et al., 2009; Leron & Hazzan, 2006).

In this context, our results not only highlight the central role that heuristic-based reasoning seems to play in student thinking even in college science classrooms but also indicate that many of the heuristics that these students use to solve typical academic tasks at this level are similar to those identified in adult reasoning in nonacademic contexts (Shah & Oppenheimer, 2008). From this perspective, our investigation suggests that science educators and instructors would benefit from paying closer attention to the results of the extensive research on the psychology of judgment and decision making (Gigerenzer & Selten, 2001; Gilovich et al., 2002; Todd & Gigerenzer, 2000), as well as to the theoretical frameworks and methodological approaches that currently guide this type of research (Evans, 2008). Heuristic, or System 1, modes of reasoning are known to be preconscious, implicit, automatic, and fast, which introduces additional challenges in their identification and characterization (Taber & Bricheno, 2009).

Given that scientific thinking is mainly based on the application of analytical reasoning to make inferences and generate explanations in specific contexts, one may claim that the analysis of students' heuristic reasoning tells us little about how students learn science or how to better teach science. However, this argument disregards the enormous difference that still exists between science as a practice and the teaching of science, particularly at the college level. For example, the current goal of most introductory college chemistry courses is to help students to master a set of concepts, ideas, and skills that are judged to be necessary for more advanced science courses (Hawkes, 2005). Many of the tasks that students face in these classrooms are similar to those used in our study: They require students to apply basic concepts and ideas to solve simple questions or problems without much contextualization or personal or social relevance. Although we may disagree with these educational goals and strategies, the investigation of how students think in these environments is crucial if we want to help college chemistry instructors to foster more meaningful learning in their classrooms. These instructors need to recognize and address what our study suggests is a pervasive use of heuristics in making judgments and decisions in conventional academic tasks at this educational level.

In general, we would claim that the identification of students' reasoning strategies, from heuristic based to analytical, from domain-general to domain-specific, is of central importance in science education if we want to develop curriculum and teaching strategies that better support their learning, as well as assessment tools to gather valid and reliable evidence of student understanding. For example, our study suggests that current curricular decisions and instructional strategies used in chemistry teaching are not creating effective learning opportunities for students to develop the types of analytical reasoning skills needed to use submicroscopic models of matter to make inferences or predictions about the physical and chemical properties of chemical substances. Students' overreliance on recognition, representativeness, and single associations in making ranking decisions reveals a superficial understanding of the relationship between submicroscopic structure and macroscopic properties. Their overuse and overgeneralization of trends and rules without much scientific basis exposes their strong tendency to rely on memory and algorithms to answer questions and solve problems (Bodner & Herron, 2002).

From the assessment design and interpretation perspectives, our results highlight the importance of paying attention to task content and structure, as these features influence the types of reasoning heuristics that may be triggered. The inclusion of common or familiar substances, objects, or processes in comparison or ranking tasks may be highly distracting for many novice students. Systematic differences in surface features (e.g., the number of hydrogen atoms in a binary compound) are likely to attract the attention of intuitive thinkers. Comparison and ranking tasks that require students to attend to two or more implicit features simultaneously to make a decision are likely to be challenging for a majority of them. Ranking tasks in which the correct answers match intuitive expectations (e.g., familiar substances have high values in the relevant criterion, substances containing heavy atoms are more “resistant” to change, strong acids or bases have extreme properties, and so forth) may provide unreliable evidence of student understanding.

Helping students develop the types of understandings and reasoning skills that are required to generate more thoughtful responses in academic tasks may require work on different fronts. On the one hand, we should better train students to make judgments and decisions based on analytical models for optimal decision making, helping them recognize the relevant cues and weighting principles that are appropriate in different situations. On the other hand, given that heuristic reasoning is unconscious, automatic, fast, and cognitively economical, instructors may want to spend more time helping students develop skills for metacognitive intercession, monitoring, and control. Research has shown that metacognitive skills function to regulate conflict between intuitive and analytical responses (Klaczynski, 2004). The products of intuitive thinking are temporarily available in working memory, which provides opportunities to reflect on and evaluate the adequacy of these responses. If this type of reasoning cannot be easily avoided, we may want students to acknowledge it, be able to distinguish responses generated by analytical and heuristic processes, and compare them to determine which is most appropriate in different contexts. Expertise in chemistry, or in any field in general, does not imply the absence of heuristic reasoning in making judgments and decisions, but rather the appropriate use of heuristics in proper contexts (Evans, 2006, 2008).

At a more global level, the results of our study call into question the educational effectiveness of the traditional curriculum and teaching strategies that characterize introductory college chemistry courses. In general, the first-year chemistry curriculum at most universities is still largely fact based and encyclopedic, built upon a collection of isolated topics, focused too much on abstract concepts and algorithmic problem solving, and detached from the practices, ways of thinking, and applications of both chemistry research and chemistry education research in the 21st century (Hawkes, 2005). In these courses, didactic teaching tends to be the norm, and teaching efforts are focused on describing and explaining a large collection of facts and principles and on helping students master a collection of isolated skills and procedures. In this type of environment, reliance on heuristic reasoning may be enough for many students to successfully complete traditional academic tasks and earn acceptable grades. Changing these reasoning patterns may thus require drastic educational reforms that foster the creation of learning environments in which students have more opportunities to connect core concepts and ideas and to develop analytical ways of thinking while solving realistic problems from a chemical perspective, using the powerful and productive models, techniques, and ways of thinking developed in the field.

REFERENCES

  • Baillargeon, R. (2008). Innate ideas revisited: For a principle of persistence in infants' physical reasoning. Perspectives on Psychological Science, 3(1), 2–13.
  • Bodner, G. M., & Herron, J. D. (2002). Problem-solving in chemistry. In J. K. Gilbert, O. de Jong, R. Justi, D. F. Treagust, & J. H. van Driel (Eds.), Chemical education: Towards research-based practice (pp. 235–266). Dordrecht, The Netherlands: Kluwer.
  • Brown, D. (1993). Refocusing core intuitions: A concretizing role for analogy in conceptual change. Journal of Research in Science Teaching, 30(10), 1273–1290.
  • Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Thousand Oaks, CA: Sage.
  • Chi, M. T. H. (2008). Three kinds of conceptual change: Belief revision, mental model transformation, and ontological shift. In S. Vosniadou (Ed.), International handbook of research on conceptual change (pp. 61–82). New York: Routledge.
  • diSessa, A. A. (1993). Toward an epistemology of physics. Cognition and Instruction, 10, 165–255.
  • Driver, R., Leach, J., Millar, R., & Scott, P. (1996). Young people's images of science. Buckingham, England: Open University Press.
  • Evans, J. S. B. T. (2006). The heuristic-analytic theory of reasoning: Extension and evaluation. Psychonomic Bulletin & Review, 13(3), 378–395.
  • Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278.
  • Fischbein, E. (1987). Intuition in science and mathematics: An educational approach. Dordrecht, The Netherlands: Reidel.
  • Gelman, R. (1990). First principles organize attention to and learning about relevant data: Number and the animate-inanimate distinction as examples. Cognitive Science, 14(1), 79–106.
  • Gelman, R., & Williams, E. (1998). Enabling constraints for cognitive development and learning: Domain specificity and epigenesis. In D. Kuhn & R. Siegler (Eds.), Cognition, perception and language. Handbook of child psychology (5th ed., Vol. 2, pp. 575–630). New York: Wiley.
  • Gigerenzer, G., & Selten, R. (2001). Bounded rationality: The adaptive toolbox. Cambridge, MA: MIT Press.
  • Gilbert, J. K., & Treagust, D. (Eds.). (2009). Multiple representations in chemical education. Dordrecht, The Netherlands: Springer.
  • Gillard, E., Van Dooren, W., Schaeken, W., & Verschaffel, L. (2009). Dual processes in the psychology of mathematics and cognitive psychology. Human Development, 52, 95–108.
  • Gilovich, T., Griffin, D., & Kahneman, D. (Eds.). (2002). Heuristics and biases: The psychology of intuitive judgment. Cambridge, England: Cambridge University Press.
  • Glaser, R. (1989). Expertise and learning: How do we think about instructional processes now that we have discovered knowledge structures? In D. Klahr & K. Kotovsky (Eds.), Complex information processing: The impact of Herbert A. Simon (pp. 269–282). Hillsdale, NJ: Erlbaum.
  • Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109(1), 75–90.
  • Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11(3), 255–274.
  • Hatano, G., & Inagaki, K. (2000). Domain-specific constraints on conceptual development. International Journal of Behavioral Development, 24(3), 267–275.
  • Hawkes, S. J. (2005). Introductory chemistry needs a revolution. Journal of Chemical Education, 82, 1615–1616.
  • Inagaki, K., & Hatano, G. (2006). Young children's conceptions of the biological world. Current Directions in Psychological Science, 15, 177–181.
  • Keil, F. C. (1990). Constraints on constraints: Surveying the epigenetic landscape. Cognitive Science, 14(1), 135–168.
  • Kelemen, D., & Rosset, E. (2009). The human function compunction: Teleological explanations in adults. Cognition, 111, 138–143.
  • Klaczynski, P. A. (2004). A dual-process model of adolescent development: Implications for decision making, reasoning, and identity. In R. V. Kail (Ed.), Advances in child development and behavior (pp. 73–123). San Diego, CA: Academic Press.
  • Leron, U., & Hazzan, O. (2006). The rationality debate: Application of cognitive psychology to mathematics education. Educational Studies in Mathematics, 62, 105–126.
  • Lewis, S. E., & Lewis, J. E. (2007). Predicting at-risk students in general chemistry: Comparing formal thought to a general achievement measure. Chemistry Education Research and Practice, 8(1), 32–51.
  • Newell, B. R., & Shanks, D. R. (2004). On the role of recognition in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(4), 923–935.
  • Osman, M., & Stavy, R. (2006). Development of intuitive rules: Evaluating the application of the dual system framework in understanding children's intuitive reasoning. Psychonomic Bulletin & Review, 13(6), 935–953.
  • Pozo, J. I., & Gómez Crespo, M. A. (1998). Aprender y enseñar ciencia [Teaching and learning science]. Madrid, Spain: Morata.
  • Redish, E. F. (2004). A theoretical framework for physics education research: Modeling student thinking. In E. F. Redish & M. Vicentini (Eds.), Proceedings of the International School of Physics, “Enrico Fermi” course CLVI. Amsterdam: IOS Press.
  • Roberts, M. J. (2004). Heuristics and reasoning I: Making deduction simple. In J. P. Leighton & R. J. Sternberg (Eds.), The nature of reasoning. Cambridge, England: Cambridge University Press.
  • Rosset, E. (2008). It's no accident: Our bias for intentional explanations. Cognition, 108, 771–780.
  • Sebastià, J. M. (1989). Cognitive constraints and spontaneous interpretations in physics. International Journal of Science Education, 11(4), 363–369.
  • Shah, A. K., & Oppenheimer, D. M. (2008). Heuristics made easy: An effort-reduction framework. Psychological Bulletin, 134(2), 207–222.
  • Siegler, R. S., & Crowley, K. (1994). Constraints on learning in non-privileged domains. Cognitive Psychology, 27, 194–226.
  • Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119(1), 3–22.
  • Stains, M., & Talanquer, V. (2007). Classification of chemical substances using particulate representations of matter: An analysis of student thinking. International Journal of Science Education, 29(5), 643–661.
  • Stavy, R., & Tirosh, D. (2000). How students (mis-)understand science and mathematics: Intuitive rules. New York: Teachers College Press.
  • Taber, K. S. (1998). An alternative conceptual framework from chemistry education. International Journal of Science Education, 20(5), 597–608.
  • Taber, K. S. (2003). Understanding ionisation energy: Physical, chemical and alternative conceptions. Chemistry Education: Research and Practice, 4(2), 149–169.
  • Taber, K. S. (2008). Conceptual resources for learning science: Issues of transience and grain-size in cognition and cognitive structure. International Journal of Science Education, 30, 1027–1053.
  • Taber, K. S. (2009). College students' conceptions of chemical stability: The widespread adoption of a heuristic rule out of context and beyond its range of application. International Journal of Science Education, 31, 1333–1358.
  • Taber, K. S., & Bricheno, P. A. (2009). Coordinating procedural and conceptual knowledge to make sense of word equations: Understanding the complexity of a “simple” completion task at the learner's resolution. International Journal of Science Education, 31, 2021–2055.
  • Taber, K. S., & Tan, K.-C. D. (2007). Exploring learners' conceptual resources: Singapore A level students' explanations in the topic of ionisation energy. International Journal of Science and Mathematics Education, 5, 375–392.
  • Tai, R. H., Sadler, P. M., & Loehr, J. F. (2005). Factors influencing success in introductory college chemistry. Journal of Research in Science Teaching, 42, 987–1012.
  • Talanquer, V. (2006). Common sense chemistry: A model for understanding students' alternative conceptions. Journal of Chemical Education, 83(5), 811–816.
  • Talanquer, V. (2008). Students' predictions about the sensory properties of chemical compounds: Additive versus emergent frameworks. Science Education, 92(1), 96–114.
  • Talanquer, V. (2009). On cognitive constraints and learning progressions: The case of structure of matter. International Journal of Science Education, 31(15), 2123–2136.
  • Tanaka, J. W., & Taylor, M. (1991). Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology, 23(3), 457–482.
  • Todd, P. M., & Gigerenzer, G. (2000). Précis of Simple heuristics that make us smart. Behavioral and Brain Sciences, 23, 727–780.
  • Viennot, L. (2001). Reasoning in physics: The part of common sense. Dordrecht, The Netherlands: Kluwer.
  • Vosniadou, S. (1994). Capturing and modeling the process of conceptual change. Learning and Instruction, 4, 45–69.
  • Vosniadou, S. (2007). The conceptual change approach and its re-framing. In S. Vosniadou, A. Baltas, & X. Vamvakoussi (Eds.), Re-framing the conceptual change approach in learning and instruction (pp. 1–15). Amsterdam: Earli/Elsevier.
  • Wellman, H. M., & Gelman, S. (1998). Knowledge acquisition in foundational domains. In D. Kuhn & R. Siegler (Eds.), Cognition, perception and language. Handbook of child psychology (5th ed., Vol. 2, pp. 523–573). New York: Wiley.
  • White, R., & Gunstone, R. (1992). Probing understanding. London: The Falmer Press.